Here’s something fun to consider the next time you use an AI tool. Many of the people most closely associated with artificial intelligence believe it could destroy humanity. That’s the bad news. The good news is that the odds of this happening vary dramatically depending on who you listen to.
p(doom) is the “doomsday probability”: the chance that an AI will take over the planet or do something that destroys us, such as building a biological weapon or starting a nuclear war. At the happiest end of the p(doom) scale sits Yann LeCun, one of the “three godfathers of AI,” who currently works at Meta. He puts the odds at less than 0.01%, roughly the same as an asteroid wiping us out. Chances are slim.
Sadly, no one else is anywhere near as optimistic. Geoffrey Hinton, another of the three godfathers of AI, says there is a 10% chance that AI will wipe us out within the next 20 years, and Yoshua Bengio, the third of the trio, raises that figure to 20%.
99.999999% chance
At the most pessimistic end of the scale is Roman Yampolskiy, an AI safety researcher and director of the Cyber Security Laboratory at the University of Louisville. He believes catastrophe is all but guaranteed, putting the odds of AI wiping out humanity at 99.999999%.
Elon Musk, speaking at the “Great AI Debate” seminar during the four-day Abundance Summit earlier this month, said, “I think there’s some chance that it will wipe out humanity. I probably agree with Geoff Hinton that it’s about 10 or 20 percent, or something like that,” before adding, “I think that the probable positive scenario outweighs the negative scenario.”
In response, Yampolskiy told Business Insider that he believed Musk was “a bit too conservative” in his assessment, and that we should abandon development of the technology now, because once AI becomes more advanced it will be nearly impossible to control.
“Not sure why he thinks it is a good idea to pursue this technology anyway,” Yampolskiy said. “If he [Musk] is concerned about competitors getting there first, it doesn’t matter, because uncontrolled superintelligence is equally bad no matter who creates it.”
At the summit, Musk offered a solution for keeping AI from wiping out humanity: “Don’t force it to lie, even when the truth is unpleasant,” Musk said. “This is very important. Don’t make the AI lie.”
If you’re wondering where other AI researchers and forecasters currently sit on the p(doom) scale, you can see the list here.