The report warns that AI could pose an ‘extinction-level’ threat to humans and that the US should intervene.


New York (CNN) — A new report commissioned by the US State Department paints an alarming picture of the “catastrophic” national security risks posed by the rapid development of artificial intelligence, warning that time is running out for the federal government to avert disaster.

The findings were based on interviews conducted over more than a year with more than 200 people, including top executives at leading AI companies, cybersecurity researchers, experts on weapons of mass destruction and national security officials inside the government.

A report released this week by Gladstone AI flatly states that advanced AI systems, in the worst case scenario, “could pose an extinction-level threat to the human species.”

A US State Department official confirmed to CNN that the agency commissioned the report as it continually evaluates how AI aligns with its mission of protecting US interests at home and abroad. However, the official emphasized that the report does not represent the views of the US government.

The warning in the report is another reminder that while the potential of AI is enticing investors and the public, there are also real risks.

“AI is already an economically transformative technology. It could allow us to cure diseases, make scientific discoveries, and overcome challenges we once thought were insurmountable,” Jeremie Harris, CEO and co-founder of Gladstone AI, told CNN on Tuesday.

“But it can also carry serious risks, including catastrophic risks, that we need to be aware of,” Harris said. “And a growing body of evidence — including empirical research and analysis published at the world’s top AI conferences — suggests that above a certain threshold of competence, AIs can potentially become uncontrollable.”

White House spokesperson Robyn Patterson said President Joe Biden’s executive order on AI was “the most important step taken by any government in the world to deliver on the promise and address the dangers of artificial intelligence.”

“The President and Vice President will continue to work with our international partners and urge Congress to enact bipartisan legislation to manage the risks associated with these emerging technologies,” Patterson said.

The Gladstone AI report was first reported by Time.

‘Clear and urgent need’ for intervention

Researchers warn of two central risks broadly posed by AI.

First, Gladstone AI said, highly advanced AI systems can potentially be weaponized to cause irreversible damage. Second, the report says there are private concerns within AI labs that at some point they may “lose control” over the systems they develop, with “potentially catastrophic consequences for global security.”

“The rise of AI and AGI [artificial general intelligence] has the potential to destabilize global security in ways reminiscent of the introduction of nuclear weapons,” the report said, adding that AI poses the risk of an “arms race,” conflict and “fatal accidents on a WMD scale.”

The Gladstone AI report calls for dramatic new measures aimed at countering the threat, including launching a new AI agency, imposing “emergency” regulatory safeguards and enforcing limits on how much computing power can be used to train AI models.

“There is a clear and urgent need for US government intervention,” the authors wrote in the report.

Harris, the Gladstone AI executive, said his team had “unusual access” to government and private-sector officials in arriving at the startling conclusions. Gladstone AI said it spoke with technical and leadership teams at ChatGPT owner OpenAI, Google DeepMind, Facebook parent Meta and Anthropic.

“Along the way, we learned some serious things,” Harris said in a video posted on Gladstone AI’s website announcing the report. “Behind the scenes, the safety and security situation in advanced AI seems woefully inadequate compared to the national security risks that AI could introduce very soon.”

The Gladstone AI report said competitive pressures are pushing companies to accelerate AI development “at the expense of safety and security,” raising the prospect that the most advanced AI systems could be “stolen” and “weaponized” against the United States.

The findings add to a growing list of warnings about the existential risks posed by AI — including from some of the industry’s most powerful figures.

About a year ago, Geoffrey Hinton, known as the “Godfather of AI,” quit his job at Google and blew the whistle on the technology he helped create. Hinton has said there is a 10% chance that AI will lead to human extinction within the next three decades.

Hinton and dozens of other AI industry leaders, academics and others signed a statement last June saying that “mitigating the risk of extinction from AI should be a global priority.”

Business leaders are increasingly concerned about these risks – even as they pour billions of dollars into investments in AI. 42% of CEOs surveyed at the Yale CEO Summit last year said AI has the potential to destroy humanity five to ten years from now.

In its report, Gladstone AI noted some of the prominent figures who have warned of the existential risks posed by AI, including Elon Musk, Federal Trade Commission Chair Lina Khan and a former top executive at OpenAI.

According to Gladstone AI, some employees of AI companies are privately sharing similar concerns.

“One person at a leading AI lab opined that, if a next-generation AI model were ever released as open access, it would be ‘terrifyingly bad,’” the report said, “because the model’s potential persuasive capabilities could ‘subvert democracy’ if they were used in areas such as electoral interference or voter manipulation.”

Gladstone said it asked AI experts at frontier labs to privately share their personal estimates of the chance that an AI incident could lead to “global and irreversible effects” in 2024. According to the report, those estimates were informal and potentially subject to significant bias.

One of the biggest wild cards is how fast AI evolves — especially AGI, which is a hypothetical form of AI with human-like or even superhuman learning capabilities.

The report states that AGI is viewed as the “primary driver of catastrophic risk from loss of control” and notes that OpenAI, Google DeepMind, Anthropic and Nvidia have all publicly stated that AGI could be reached by 2028, although others think it is much further off.

Gladstone AI notes that disagreement over AGI timelines makes it difficult to develop policies and safeguards, and that there is a risk that regulation “could prove detrimental” if the technology develops more slowly than expected.

A related document published by Gladstone AI warns that the development of AGI, and the capabilities leading up to it, would “introduce catastrophic threats unlike any the United States has ever faced,” akin to “WMD-like threats” if and when they are weaponized.

For example, the report states that AI systems can be used to design and implement “high-impact cyberattacks capable of destroying critical infrastructure.”

“A simple verbal or typed command, such as ‘execute an undetected cyberattack to crash the North American electric grid,’ could elicit a response that proves devastatingly effective,” the report said.

Other examples the authors worry about include AI-powered “massive” disinformation campaigns that destabilize society and erode trust in institutions; weaponized robotic applications such as drone swarm attacks; psychological manipulation; weaponized biological and materials sciences; and power-seeking AI systems that are impossible to control and are adversarial to humans.

“Researchers expect that sufficiently advanced AI systems will act to prevent themselves from being shut down,” the report said, “because if an AI system is shut off, it cannot work to accomplish its goal.”
