AI Could Destroy Humans, Stephen Hawking Fears. Should You Worry?


Machines turning on their creators has been a popular theme in books and movies for decades, but very serious people are starting to take the subject very seriously. Physicist Stephen Hawking says, "the development of full artificial intelligence could spell the end of the human race." Tesla Motors and SpaceX founder Elon Musk suggests that AI is probably "our biggest existential threat."

Artificial intelligence experts say there are good reasons to pay attention to the fears expressed by big minds like Hawking and Musk — and to do something about it while there is still time.

Hawking made his most recent comments at the beginning of December, in response to a question about an upgrade to the technology he uses to communicate. He relies on the device because he has amyotrophic lateral sclerosis, a degenerative disease that affects his ability to move and speak.

Musk raised the alarm about artificial intelligence during the MIT Aeronautics and Astronautics department's Centennial Symposium in October, likening AI to "summoning the demon."

He had previously tweeted that AI was "potentially more dangerous than nukes."

The event that Hawking and Musk fear is the "singularity" – the point at which machines surpass humans in general intelligence, not just in beating us at tasks like playing chess or Jeopardy, as they already have.

Popular works of science fiction – from the latest Terminator trailer, to the Matrix trilogy, to Star Trek's Borg – envision that beyond that irreversible historic event, machines will destroy, enslave or assimilate us, says Canadian science fiction writer Robert J. Sawyer.

Sawyer has written about a different vision of life beyond singularity — one in which machines and humans work together for their mutual benefit. But he also sits on a couple of committees at the Lifeboat Foundation, a non-profit group that looks at future threats to the existence of humanity, including those posed by the "possible misuse of powerful technologies" such as AI. He said Hawking and Musk have good reason to be concerned.

"Once we no longer have the intellectual upper hand, then we quite literally, by definition, cannot outwit our successors. So unless we are absolutely sure that the machines we are building right now are not going to eventually become our new robot overlords, prudence is called for."

Alan Mackworth, who holds a Canada Research Chair in Artificial Intelligence at the University of British Columbia, thinks Hawking and Musk are being "a bit overdramatic," but are right to sound the alarm and spur public discussion.

He says AI is just coming out of science fiction and into the real world, in the form of technologies such as Google's self-driving cars, IBM's Jeopardy-winning computer Watson, and the increasing number of computers successfully posing as humans in the Turing test (which examines a machine's ability to exhibit intelligent behaviour that can't be distinguished from that of a human, such as holding an open-ended conversation with a person).

Mackworth invented the first soccer-playing robots. He is now developing AI technology for motorized wheelchairs to help people with dementia get around. He says machines are still far from being able to take off on their own: "If you look at what you can currently do in robot and computer learning, it's classifying YouTube videos to see which one has a cat in it and which one doesn't have a cat in it."

Military at forefront of AI development

But he is worried about the current use of AI to develop military technology, such as autonomous weapons and semi-autonomous drones.

"This technology is very, very powerful, and we have to build safeguards into it," he said.

Mackworth suggests that regulation of artificial intelligence may require international treaties and codes of ethics for robot designers, similar to those engineers must abide by.

Enforcement, however, may not be that easy. It requires technology to verify what a robot can and cannot do, when compared to its specifications – something that is under development but doesn't yet exist.

Sawyer thinks that in order to keep humans safe from the potential threats posed by AI, the technology's development needs to be out in the open in places like publicly funded universities, rather than inside military agencies.

"There should be nothing classified about this research," he said. "By the point when you sit down in front of your computer and your computer says, 'Good morning, I'm in charge now,' it's too late."

While that moment may be decades or even centuries away, Sandra Zilles, who holds a Canada Research Chair in Computational Learning Theory at the University of Regina, says machines are already able to learn some things much faster than humans, and can reprogram themselves to perform certain tasks more efficiently.

She notes that besides the military, big tech companies like Google and Apple are also at the forefront of AI research, and that too has implications.

"They can steer the development of technology in a direction that is most useful to them," she said, "but maybe not the most useful to mankind."

Collaborative machines?

Despite the dark future envisioned by science fiction, both Mackworth and Sawyer see brighter possibilities.

Mackworth says he's not really worried about machines turning on us, because humans typically design machines to be tools and extensions of our own minds and brains.

"We should make sure that these machines are built to collaborate with us and not be totally autonomous."

Sawyer envisions newly conscious, superintelligent machines cooperating with humans in his fictional Wake, Watch and Wonder trilogy. He argues that machines are developing in an environment that is very different than the scarcity and natural selection that led to the evolution of humans.

"All the things that made us basically nasty, rapacious, competitive as a species are not necessarily hard-coded into whatever passes for the DNA of artificial intelligence," Sawyer says. "There's every reason to think that they would be fundamentally different psychologically from us, and that psychology may very much predispose them to being altruistic rather than being competitive and violent the way we are."

That said, he's not ready to put all his money on his own vision.

"I don't want to say, 'Don't worry,' because one of us is right – me or Stephen Hawking. Even I – even I would probably bet on Hawking."

