By next year, AI models may be able to “replicate and survive in the wild,” says Anthropic’s CEO.

“I could be wrong. But I think it could be close.”



As AI seems to grow more powerful by the day, one of the main people investing in improving it says that soon, it may become self-sustaining and even self-replicating.

In a podcast interview with The New York Times’ Ezra Klein, Anthropic CEO Dario Amodei discusses the “responsible scaling” of the technology — and how, without governance, things could quickly get out of hand.

As Amodei explained to Klein, Anthropic uses virology lab biosafety levels as an analogy for AI. Currently, he says, the world is at ASL 2 — and ASL 4, which would include “autonomy” and “persuasion,” is probably just around the corner.

“ASL 4, on the misuse side, is going to be more about enabling state-level actors to greatly increase their capacity, rather than enabling random people,” Amodei said. “So where we would worry is that North Korea or China or Russia could greatly enhance their offensive capabilities in various military areas with AI in a way that would give them substantial leverage at a geopolitical level.”

Autonomous AI

When it comes to the “independence side” of things, however, his predictions get even wilder.

“Various measures of these models,” he continued, “are pretty close to being able to replicate and survive in the wild.”

When Klein asked how long it would take to reach these various levels of risk, Amodei — who said he wouldn’t “go into specifics” — said he thinks the “replicate and survive in the wild” level could arrive “anywhere from 2025 to 2028.”

“I’m talking about the really near future here. I’m not talking about 50 years away,” said Anthropic’s CEO. “God grant me chastity, but not now. But ‘not now’ doesn’t mean when I’m old and gray. I think it could be near-term.”

Amodei is a serious figure in the space. Back in 2021, he and his sister Daniela left OpenAI over directional differences following the creation of GPT-3 — which Amodei helped build — and the company’s partnership with Microsoft. Soon after, the siblings co-founded Anthropic with other OpenAI expats to pursue their responsible scaling efforts.

“I don’t know,” he continued during the Klein interview. “I could be wrong. But I think it could be close.”

While AI doomsday talk is fairly common these days, Amodei’s insider perspective lends considerable weight to his reasoning — and makes Anthropic’s mission to “ensure transformative AI helps people and society thrive” seem all the more worthy.

More on AI predictions: Mistral CEO says AI companies are trying to make God.

