Meta AI chief says large language models won’t reach human intelligence


Meta’s head of artificial intelligence said the large language models that power generative AI products such as ChatGPT will never achieve the ability to reason and plan like humans, and that he is instead focused on a radically different approach to creating “superintelligence” in machines.

Yann LeCun, chief AI scientist at the social media company that owns Facebook and Instagram, said LLMs have a “very limited understanding of logic . . . they do not understand the physical world, do not have persistent memory, cannot reason in any reasonable definition of the term and cannot plan.”

In an interview with the Financial Times, he argued against relying on advancing LLMs in the quest to create human-level intelligence, as these models can only answer correctly if they have been fed the right training data and are therefore “intrinsically unsafe.”

Instead, he is working to develop an entirely new generation of AI systems that he hopes will power machines with human-level intelligence, though he said achieving that vision could take 10 years.

Meta is pouring billions of dollars into developing its LLMs as generative AI explodes, aiming to catch up to rival tech groups including Microsoft-backed OpenAI and Alphabet’s Google.

LeCun runs a team of about 500 staff at Meta’s Fundamental AI Research (FAIR) lab. They are working toward building AI that can develop common sense and learn how the world works in the way humans do, an approach known as “world modeling.”

The Meta AI chief’s experimental vision is a potentially risky and expensive gamble for the social media group at a time when investors are itching to see quick returns on AI investments.

Last month, Meta shed nearly $200 billion in market value after chief executive Mark Zuckerberg vowed to ramp up spending and transform the social media group into “the world’s leading AI company,” as Wall Street investors worried about rising costs with little immediate revenue potential.

“We’re at a point where we think we’re probably on the cusp of next-generation AI systems,” LeCun said.

LeCun’s comments come as Meta and its competitors are pushing ahead with ever-better LLMs. Figures like Sam Altman, head of OpenAI, believe they provide an important step toward creating artificial general intelligence (AGI) — the point at which machines have greater cognitive abilities than humans.

OpenAI released its new, faster GPT-4o model last week, and Google unveiled Project Astra, a new “multimodal” artificial intelligence agent that can answer real-time queries across video, audio and text, powered by an upgraded version of its Gemini model.

Meta also launched its new Llama 3 model last month. Sir Nick Clegg, the company’s head of global affairs, said its latest LLM had “much better capabilities like reasoning” – the ability to apply logic to queries. For example, the system would surmise that a person with a headache, sore throat and runny nose has a cold, but could also recognize that allergies might be causing the symptoms.

However, LeCun said this evolution of LLMs was superficial and limited, with the models learning only when human engineers intervened to train them on that information, rather than the AI reaching conclusions organically as people do.

“It certainly appears to most people as reasoning, but mostly it’s exploiting accumulated knowledge gathered from a lot of training data,” LeCun said, adding: “[LLMs] are very useful despite their limitations.”

Google DeepMind has also spent years pursuing alternative methods of building AGI, including reinforcement learning methods, where AI agents learn from their surroundings in game-like virtual environments.

At an event in London on Tuesday, Sir Demis Hassabis, head of DeepMind, said what language models were missing was that “they don’t understand the spatial context you’re in . . . so that limits their usefulness in the end.”

Meta established its FAIR lab in 2013 to advance AI research, hiring leading academics in the space.

However, in early 2023, Meta created a new GenAI team, led by chief product officer Chris Cox. It drew many AI researchers and engineers from FAIR, led the work on Llama 3 and integrated it into products such as its new AI assistants and image-generation tools.

The creation of the GenAI team came after some insiders argued that an academic culture within FAIR was partly to blame for Meta’s late arrival to the AI boom. Zuckerberg has pushed for more commercial applications of AI under pressure from investors.

“We’ve focused FAIR on the longer-term goal of human-level AI, essentially because GenAI is now focused on the things we have a clear path toward,” LeCun said.

“[Achieving AGI] is not a product design problem, it’s not even a technology development problem, it’s very much a scientific problem,” he added.

LeCun first published a paper on his world modeling vision in 2022, and Meta has since released two research models based on the approach.

Today, he said, FAIR is testing different ideas for achieving human-level intelligence because “there’s a lot of uncertainty and exploration involved, [so] we can’t tell which one will succeed or end up being picked up.”

Among other things, LeCun’s team has been feeding systems hours of video while deliberately leaving out frames, then getting the AI to predict what will happen next. This is meant to mimic how children learn by passively observing the world around them.
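To make the idea concrete, here is a toy sketch of that masked-frame prediction objective: hide some frames of a short “video,” have a predictor reconstruct them from visible neighbors, and score the reconstruction. This is my own minimal illustration, not Meta’s code; the function name `predict_masked` and the neighbor-averaging baseline are assumptions for demonstration only.

```python
import numpy as np

rng = np.random.default_rng(0)
T, H, W = 16, 8, 8
# Smoothly varying synthetic "video": each frame is the previous one plus noise.
video = np.cumsum(rng.normal(size=(T, H, W)), axis=0)

# Deliberately hide every fourth frame, starting at index 2.
mask = np.zeros(T, dtype=bool)
mask[2::4] = True

def predict_masked(frames, mask):
    """Baseline predictor: estimate each hidden frame from visible neighbors."""
    preds = {}
    for t in np.flatnonzero(mask):
        prev_t = t - 1  # with this mask pattern, adjacent frames are visible
        next_t = t + 1 if t + 1 < len(frames) else t - 1
        preds[t] = 0.5 * (frames[prev_t] + frames[next_t])
    return preds

preds = predict_masked(video, mask)
mse = float(np.mean([(preds[t] - video[t]) ** 2 for t in preds]))
print(f"masked frames: {int(mask.sum())}, reconstruction MSE: {mse:.3f}")
```

A learned world model would replace the hand-written neighbor average with a trained network, and the prediction error would serve as its self-supervised training signal — no human labels needed.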

He also said that FAIR is exploring “a universal text encoding system” that would allow a system to process abstract representations of knowledge in text, which could then be applied to video and audio.

Some experts doubt whether LeCun’s vision is viable.

Aron Culotta, associate professor of computer science at Tulane University, said this kind of intelligence had long been “a thorn in AI’s side,” and that it was difficult to teach models such capabilities, leaving them falling victim to “these unexpected failures.”

A former Meta AI employee called the world modeling push “vague fluff,” adding: “It feels a lot like flag-planting.”

Another current employee said FAIR has yet to prove itself a real competitor to research groups such as DeepMind.

In the long term, LeCun believes the technology will power AI agents that users can interact with through wearable technology, including augmented reality or “smart” glasses and electromyography (EMG) “bracelets.”

“[For AI agents] to be truly useful, they must have something akin to human-level intelligence.”
