Nvidia’s Jensen Huang Says AGI Is 5 Years Away, and AI Hallucinations Are Solvable


Image credit: Haje Jan Kamps

Artificial General Intelligence (AGI) – often referred to as “strong AI,” “full AI,” “human-level AI,” or “general intelligent action” – represents a significant future leap in the field of artificial intelligence. Unlike narrow AI, which is designed for specific tasks (such as detecting product defects, summarizing the news, or building a website for you), AGI will be able to perform a broad range of cognitive tasks at or above human level. Speaking to the press at Nvidia’s annual GTC developer conference this week, CEO Jensen Huang seemed genuinely tired of discussing the topic – not least because he finds himself misquoted a lot, he says.

The frequency of the question is understandable: the concept raises existential questions about humanity’s role and control in a future where machines can think, learn, and outperform humans in virtually every domain. At the heart of this concern lies the unpredictability of AGI’s decision-making processes and goals, which may not align with human values or preferences (a concept explored in depth in science fiction since at least the 1940s). There’s a worry that once an AGI reaches a certain level of autonomy and capability, it might become impossible to contain or control, leading to scenarios where its actions cannot be predicted or reversed.

When the sensationalist press asks for a time frame, it is often baiting AI professionals into putting a timeline on the end of humanity, or at least the current status quo. Needless to say, AI CEOs aren’t always eager to tackle the subject.

Huang, however, spent some time explaining to the press what he does think about the topic. Predicting when we will see a viable AGI depends on how you define AGI, Huang argues: despite the complexities of time zones, you know when the New Year arrives and 2025 begins. If you’re driving to the San Jose Convention Center (where this year’s GTC conference is being held), you generally know you’ve arrived when you can see the big GTC banners. The crucial point is that we can agree on how to measure that you’ve arrived, whether temporally or geographically, wherever you were hoping to go.

“If we specified AGI to be something very specific, a set of tests where a software program can do very well – or maybe 8% better than most people – I believe we will get there within 5 years,” Huang explains. He suggests that those tests could be the ability to pass a legal bar exam, logic tests, economics tests, or perhaps a pre-med exam. Unless the questioner can be very specific about what AGI means in the context of the question, he’s not willing to make a prediction.

AI hallucinations are solvable

In Tuesday’s question-and-answer session, Huang was asked what to do about AI hallucinations – the tendency of some AIs to make up answers that sound plausible but aren’t based in fact. He appeared visibly frustrated by the question, and suggested that hallucinations are easily solvable – simply by making sure that answers are well-researched.

“Add a rule: For every single answer, you have to look up the answer,” Huang says, referring to this practice as ‘retrieval-augmented generation,’ describing an approach very similar to basic media literacy: examine the source and its context. Compare the facts contained in the source to known truths, and if the answer is factually inaccurate – even partially – discard the whole source and move on to the next one. “The AI shouldn’t just answer; it should do research first to determine which of the answers are the best.”

For mission-critical answers, such as health advice or similar, Nvidia’s CEO suggests that checking multiple resources and known sources of truth may be the way forward. Of course, this means that the answer generator needs to have the option to say, ‘I don’t know the answer to your question,’ or ‘I can’t come to a consensus on what the right answer to this question is,’ or even something like, ‘Hey, the Super Bowl hasn’t happened yet, so I don’t know who won.’
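The multi-source cross-check for mission-critical answers can likewise be sketched as a quorum rule: report an answer only when enough independent sources agree, and otherwise fall back to one of the honest refusals above. The dictionary "sources" below are hypothetical stand-ins for real reference services.

```python
# Toy sketch of cross-checking multiple sources of truth: answer only
# when at least `quorum` independent sources give the same answer.
from collections import Counter

def cross_check(question: str, sources: list[dict[str, str]], quorum: int = 2) -> str:
    """Return the majority answer if enough sources agree; otherwise refuse."""
    answers = [src[question] for src in sources if question in src]
    if not answers:
        return "I don't know the answer to your question."
    best, count = Counter(answers).most_common(1)[0]
    if count < quorum:
        return "I can't come to a consensus on what the right answer is."
    return best

sources = [
    {"boiling point of water at sea level": "100 °C"},
    {"boiling point of water at sea level": "100 °C"},
    {"boiling point of water at sea level": "212 °F"},  # disagreeing source
]

print(cross_check("boiling point of water at sea level", sources))  # 100 °C
print(cross_check("who won the next Super Bowl", sources))
```

Two of the three sources agree, so the quorum is met for the first question; the second question matches no source at all, so the function admits it doesn’t know.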



