If there’s one thing you can definitely say about Elon Musk, it’s that he’s tech-savvy. That’s how he became a multi-millionaire by the age of 28, and the co-founder, owner or CEO of companies like Tesla, SpaceX, The Boring Company, SolarCity and Neuralink.
The latter is exploring how to establish a connection between human brains and computers in order to cure diseases and augment our intelligence, while Tesla pioneered the use of semi-autonomous technology in its vehicles in the form of the Autopilot system.
However, as CNBC reports, Musk went on record at a tech conference in Austin, Texas, last week, stating that Artificial Intelligence (AI) is potentially more dangerous to the human species than nuclear weapons.
“I am really quite close, I am very close, to the cutting edge in AI and it scares the hell out of me,” he admitted. “It’s capable of vastly more than almost anyone knows, and the rate of improvement is exponential.”
Even though many, such as Facebook founder Mark Zuckerberg and Harvard professor Steven Pinker, claim that Musk is fear-mongering and that his AI predictions are “pretty irresponsible”, the South African-born entrepreneur has responded that those who don’t heed his warnings are “fools”.
“The biggest issue I see with so-called AI experts is that they think they know more than they do, and they think they are smarter than they actually are,” Musk replied.
“This tends to plague smart people. They define themselves by their intelligence and they don’t like the idea that a machine could be way smarter than them, so they discount the idea — which is fundamentally flawed.”
Musk believes the problem can be addressed through regulation: “It needs to be a public body that has insight and then oversight to confirm that everyone is developing AI safely. This is extremely important. I think the danger of AI is much greater than the danger of nuclear warheads by a lot, and nobody would suggest that we allow anyone to build nuclear warheads if they want.
“And mark my words, AI is far more dangerous than nukes. Far. So why do we have no regulatory oversight? This is insane.”
This isn’t the first time Musk has claimed AI will endanger humanity, though he did distinguish between specific technologies, like autonomous cars, and AI with “open-ended utility functions”.
After Uber’s fatal accident, which experts say could have been avoided, it seems that even self-driving cars could, for the moment, be a liability. Following the accident, both Uber and Toyota immediately suspended testing of their autonomous vehicles.
You can’t fight the future, of course. But could it be that Musk is right and a regulatory body is needed, or should we dismiss his warnings and trust the industry to develop AI-controlled applications that pose no threat to humans?