With Meta’s Llama 3 and OpenAI’s GPT-5 on the horizon, we’re about to enter a whole new realm of AI large language models and chatbots, as both companies push hard to make these bots more human.
At an event earlier this week, Meta reiterated that Llama 3 will be released to the public in the coming weeks, with Meta’s president of global affairs, Nick Clegg, saying that we should expect a larger language model. “Within the next month, actually less, hopefully in a very short period of time, we hope to start introducing our new suite of next-generation foundation models, Llama 3.”
Meta’s large language models are publicly available, giving developers and researchers free and open access to the technology to build their own bots or to research various aspects of artificial intelligence. The models are trained on a wealth of text-based information, and Llama 3 promises far more impressive capabilities than the current model.
No official release date has yet been announced for Meta’s Llama 3 or OpenAI’s GPT-5, but we can reasonably expect both models to arrive in the coming weeks.
Smarten up
Joelle Pineau, vice president of AI research at Meta, noted that “we’re working hard to figure out how to get these models to not only talk, but to actually think, to plan . . . to keep memory.” Meanwhile, in an interview with the Financial Times, OpenAI’s chief operating officer Brad Lightcap said the next version of GPT will show progress in solving difficult questions with reasoning.
So, it looks like the next big push with these AI bots will be introducing the human element of reasoning and, for lack of a better term, ‘thinking’. “We’re going to start seeing AI that can perform more complex tasks in a more sophisticated way,” Lightcap also said, adding that “we’re just starting to scratch the surface on that ability. It’s because of these models.”
As tech companies like OpenAI and Meta continue to work on more sophisticated and ‘lifelike’ human interfaces, the thought of a chatbot that can ‘think’ with intelligence and memory is both exciting and somewhat disturbing. Tools like Midjourney and Sora have showcased just how good AI can be in terms of output quality, and Google Gemini and ChatGPT are great examples of how helpful text-based bots can be in everyday life.
With so many moral and ethical concerns still surrounding the tools we already have, I dread to think what kind of nefarious things could be done with more human-like AI models. Plus, you have to admit, it’s all starting to feel a little like the beginning of a sci-fi horror story.