Tech companies want to create artificial general intelligence. But who decides when AGI is achieved?



Dr. P. Wang teaches an Artificial General Intelligence class at Temple University in Philadelphia on Thursday, February 1, 2024. Mainstream AI research “turned away from the original vision of artificial intelligence, which was quite exciting at the beginning,” Wang said. Credit: AP Photo/Matt Rourke


The race is on to create artificial general intelligence, a future vision of machines that are as broadly intelligent as humans, or that at least can do many things as well as people can.

Achieving such a concept — commonly referred to as AGI — is the driving mission of ChatGPT maker OpenAI and a priority for the elite research wings of tech companies Amazon, Google, Meta and Microsoft.

It is also a cause of concern for world governments. Leading AI scientists published research Thursday in the journal Science warning that unchecked AI agents with "long-term planning" skills could pose a threat to humanity.

But what exactly is AGI, and how will we know when it has been achieved? Once on the fringes of computer science, it's now a buzzword that's constantly being redefined by those trying to make it happen.

What is AGI?

Not to be confused with the similar-sounding generative AI — which describes the AI systems behind a crop of tools that "generate" new documents, images and sounds — artificial general intelligence is a more nebulous idea.

It’s not a technical term but “a serious, albeit ill-defined concept,” said Geoffrey Hinton, a pioneering AI scientist known as the “Godfather of AI.”

“I don’t think there’s a consensus on what that term means,” Hinton said via email this week. “I use it to mean an AI that is at least as good as humans at all the cognitive things that humans do.”

Hinton prefers a different term—superintelligence—for “AGIs that are superior to humans.”

A small group of early proponents of the term AGI were trying to evoke how computer scientists of the mid-20th century envisioned an intelligent machine. That was before AI research branched out into subfields that produced specialized and commercially viable versions of the technology—from facial recognition to speech-recognizing voice assistants like Siri and Alexa.

Pei Wang, a professor who teaches an AGI course at Temple University and helped organize the first AGI conference in 2008, said that mainstream AI research has “turned away from the original vision of artificial intelligence, which was very exciting at the beginning.”

Putting the ‘G’ in AGI was a nod to those who “still want to do something big. We don’t want to build tools. We want to build a thinking machine,” Wang said.

Are we at AGI yet?

Without a clear definition, it’s hard to know when a company or group of researchers will have achieved artificial general intelligence — or whether they already have.

“Twenty years ago, I think people would have happily agreed that systems of the caliber of GPT-4 or (Google’s) Gemini had achieved general intelligence comparable to that of humans,” Hinton said. “Being able to answer more or less any question intelligently would have passed the test. But now that AI can do that, people want to change the test.”

More information:
Regulating advanced artificial agents, Science (2024). www.science.org/doi/10.1126/science.adl0625

Journal information:
Science
