The researcher estimates a 99.9 percent chance that AI will destroy mankind.

“The only way to win this game is not to play it.”

Foregone conclusion


In a survey earlier this year, just over half of the 2,778 researchers polled said there was at least a five percent chance that AI could drive humanity to extinction or cause other "extremely bad outcomes."

At the same time, 68.3 percent of respondents said that “good outcomes from superhuman AI” are more likely than bad outcomes, indicating little consensus among experts on the topic.

Some are exceptionally negative. Take AI researcher and University of Louisville computer science lecturer Roman Yampolskiy, who is firmly in the doomer camp. In a recent episode of Lex Fridman's podcast, he predicted that — get this — there's a 99.9 percent chance that AI could wipe out humanity within the next 100 years.

“If we create general superintelligence, I don't see a good long-term outcome for humanity,” Yampolskiy told Fridman. “The only way to win this game is not to play it.”

AI doomism

It's a far more dire forecast than most assessments of the dangers of developing AI, with Yampolskiy pointing to the chaos that large language models have already caused.

Mistakes, he said, have already been made. “We've had accidents, they've been jailbroken. I don't think there's a single large language model today that no one has succeeded in making do something its developers didn't intend.”

For Yampolskiy, the real danger is a risk we can't yet imagine.

“Superintelligence will bring something completely new, completely super,” he told Fridman. “We don't even recognize it as a possible way to achieve” the goal of eliminating everyone.

Yampolskiy argued that our odds of keeping such a system safely under control could get pretty close to 100 percent, but will never reach it.

“We can quickly put in more resources and get closer, but we'll never get to 100 percent,” he said. “If a system makes a billion decisions a second and you use it for 100 years, you're still going to run into a problem.”

The topic has already drawn several high-profile members of the AI community into making predictions of their own. Figures ranging from Meta's chief AI scientist Yann LeCun to the tech's so-called “godfathers,” along with Google DeepMind CEO Demis Hassabis and former Google CEO Eric Schmidt, have weighed in on whether AI poses an existential threat.

But as the survey from earlier this year shows, top minds are far from unanimous. And the science of predicting where our current obsession with AI will lead us in the distant future is still in its infancy.

In short, we should take Yampolskiy's comments with a healthy grain of salt. Our days as a species on this planet likely aren't numbered quite yet. Besides, Earth faces plenty of other existential threats to worry about.

More on AI: State Department Report Warns of AI Apocalypse, Recommends Limiting Compute Power for Training

