Life with A.I.

Elon Musk: ‘Mark my words — A.I. is far more dangerous than nukes’

Elon Musk speaks onstage during SXSW
Photo by Chris Saucedo

Tesla and SpaceX boss Elon Musk has doubled down on his dire warnings about the danger of artificial intelligence.

Speaking at the South by Southwest tech conference in Austin, Texas, on Sunday, the billionaire tech entrepreneur called AI more dangerous than nuclear warheads and said there needs to be a regulatory body overseeing the development of super intelligence.

It is not the first time Musk has made frightening predictions about the potential dangers of artificial intelligence — he has, for example, called AI vastly more dangerous than North Korea — and he has previously called for regulatory oversight.

Some have called his tough talk fear-mongering. Facebook founder Mark Zuckerberg said Musk's doomsday AI scenarios are unnecessary and "pretty irresponsible." Harvard professor Steven Pinker also recently criticized Musk's tactics.

Musk, however, is resolute, calling those who push back against his warnings "fools" at SXSW.

"The biggest issue I see with so-called AI experts is that they think they know more than they do, and they think they are smarter than they actually are," said Musk. "This tends to plague smart people. They define themselves by their intelligence and they don't like the idea that a machine could be way smarter than them, so they discount the idea — which is fundamentally flawed."

Based on his knowledge of machine intelligence and its developments, Musk believes there is reason to be worried.

Video: This CEO wants to put a computer chip in your brain

"I am really quite close, I am very close, to the cutting edge in AI and it scares the hell out of me," said Musk. "It's capable of vastly more than almost anyone knows and the rate of improvement is exponential."

Musk pointed to machine intelligence playing the ancient Chinese strategy game Go to demonstrate rapid growth in AI's capabilities. For example, London-based DeepMind, which Google acquired in 2014, developed AlphaGo Zero, an artificial intelligence system that learned to play Go without any human intervention, improving purely through play against itself that began with random moves. The Alphabet-owned company announced the development in a paper published in October.
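To make the self-play idea concrete, here is a minimal sketch of the same principle on a much smaller game. It is a toy illustration only, not DeepMind's method: AlphaGo Zero pairs self-play with deep neural networks and Monte Carlo tree search, whereas this sketch uses a simple lookup table to learn the game of Nim, and every name and parameter below is chosen purely for illustration.

import random
from collections import defaultdict

# Toy self-play learner for Nim (illustrative only; not DeepMind's method).
# Rules: 21 stones in a pile, players alternate taking 1-3 stones, and
# whoever takes the last stone wins.

ACTIONS = (1, 2, 3)
Q = defaultdict(float)              # Q[(stones_left, action)] -> value estimate
ALPHA, EPSILON, EPISODES = 0.1, 0.1, 50_000

def choose(stones):
    # Epsilon-greedy selection: early games are essentially random play.
    legal = [a for a in ACTIONS if a <= stones]
    if random.random() < EPSILON:
        return random.choice(legal)
    return max(legal, key=lambda a: Q[(stones, a)])

for _ in range(EPISODES):
    stones, history = 21, []        # record (state, action) for every move
    while stones > 0:
        action = choose(stones)
        history.append((stones, action))
        stones -= action
    # The player who took the last stone won. Walk backward through the
    # game, nudging the winner's moves toward +1 and the loser's toward -1
    # (a Monte Carlo-style update toward the final outcome).
    reward = 1.0
    for state, action in reversed(history):
        Q[(state, action)] += ALPHA * (reward - Q[(state, action)])
        reward = -reward            # alternate perspective each ply

# Optimal play leaves the opponent a multiple of 4 stones, so from 21 the
# learned policy should take 1.
print(max(ACTIONS, key=lambda a: Q[(21, a)]))

The point of the sketch is the one Musk highlighted: no human examples are involved anywhere; the table starts empty, and competent play emerges purely from the program playing both sides of the game.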

Musk worries AI's development will outpace our ability to manage it in a safe way.

"So the rate of improvement is really dramatic. We have to figure out some way to ensure that the advent of digital super intelligence is one which is symbiotic with humanity. I think that is the single biggest existential crisis that we face and the most pressing one."

To that end, Musk recommended that the development of artificial intelligence be regulated.

"I am not normally an advocate of regulation and oversight — I think one should generally err on the side of minimizing those things — but this is a case where you have a very serious danger to the public," said Musk.

"It needs to be a public body that has insight and then oversight to confirm that everyone is developing AI safely. This is extremely important. I think the danger of AI is much greater than the danger of nuclear warheads by a lot and nobody would suggest that we allow anyone to build nuclear warheads if they want. That would be insane," he said at SXSW.

"And mark my words, AI is far more dangerous than nukes. Far. So why do we have no regulatory oversight? This is insane."

Musk also called for regulatory oversight of artificial intelligence in July, speaking to the National Governors Association. "AI is a rare case where I think we need to be proactive in regulation instead of reactive," he said.

Video: Elon Musk issues yet another warning against runaway artificial intelligence

In his analysis of the dangers of AI, Musk differentiates between case-specific applications of machine intelligence, like self-driving cars, and general machine intelligence, which he has previously described as having "an open-ended utility function" and a "million times more compute power" than case-specific AI.

"I am not really all that worried about the short term stuff. Narrow AI is not a species-level risk. It will result in dislocation, in lost jobs,and better weaponry and that kind of thing, but it is not a fundamental species level risk, whereas digital super intelligence is," explained Musk.

"So it is really all about laying the groundwork to make sure that if humanity collectively decides that creating digital super intelligence is the right move, then we should do so very very carefully — very very carefully. This is the most important thing that we could possibly do."

Still, Musk is in the business of artificial intelligence with his venture Neuralink, a company working to create a way to connect the brain with machine intelligence.

Musk hopes "that we are able to achieve a symbiosis" with artificial intelligence: "We do want a close coupling between collective human intelligence and digital intelligence, and Neuralink is trying to help in that regard by trying to create a high-bandwidth interface between AI and the human brain," he said.

See also:

Facebook CEO Mark Zuckerberg: Elon Musk's doomsday AI predictions are 'pretty irresponsible'

Elon Musk responds to Harvard professor Steven Pinker's comments on A.I.: 'Humanity is in deep trouble'

Elon Musk: 'Robots will be able to do everything better than us'

Video: Elon Musk responds to Harvard professor Steven Pinker’s comments on A.I.: ‘Humanity is in deep trouble’
