In June, Mark Zuckerberg shared his vision for the future of chatbots. “We understand that people want to interact with many different people and businesses and there needs to be many different AIs that are designed to reflect people's different interests,” he said in an interview. Around the same time, Meta began testing something called “AI Studio,” a tool that lets users design their own chatbots. Zuckerberg suggested that creators and businesses would want to “create an AI for themselves” to chat with fans or customers, and that such tools would be more dynamic and useful than a single general-purpose assistant.
Like many big tech companies, Meta, which is spending billions of dollars developing models and buying AI chips, is taking an “all of the above” approach to deploying AI. It has installed a general-purpose chatbot in the search bar of its most popular apps. It's squeezing little AI tools into every crevice of its platforms, some of which are simultaneously being flooded with AI content produced mostly by non-Meta tools. But Zuckerberg is making a specific argument here: that the future of AI is not one chatbot, like ChatGPT or Gemini, but many bots with different personalities, designed for different tasks.
If you're an executive at an oft-criticized tech company, the position has extra appeal: Open-ended chatbots are seen as speaking for the companies that create them, which means their rudeness, stumbles, and merely subjective outputs get attributed to Meta, Google, or OpenAI, dooming the companies to chronic backlash and their products to a certain useless incoherence. Narrow, or “scoped,” chatbots could help with this.
At least, that's the idea. Last week, when I saw a button for AI Studio on Instagram, I thought I'd test it out. Regular users haven't shared many personalities yet, but Meta has created a few of its own that you can take for a spin. There's “Dialect Decoder,” for example, which says it's “decoding slang, one sentence at a time.” So I asked it about the first recent annoying phrase I could think of:
Image: Screencap
Maybe that's not fair. “Hawk tuah” is more of a meme than slang, and it's of recent vintage, most likely postdating the training of the underlying model. (Though Google's AI had no trouble with it.) What that doesn't explain is why Dialect Decoder, when faced with a question it couldn't answer, made up a wrong one. Narrow AI characters can be, in theory, a little less prone to hallucinations, or to filling in the blanks with nonsense. But they still do it.
Next I tried a bot called Science Experiment Sidekick (tagline: “Your personal lab assistant”). This time I tried to trip it up from the start with a ridiculous, impossible request, which it sidestepped. When I committed to the bit, however, it committed, too:
That's a lot of fake conversation to read, so to summarize: I told the chatbot that I was launching myself out of a catapult as an experiment. I then claimed that doing so had cost me my life, and that a stranger had picked up my phone. It adapted gamely to an extremely silly series of turns, but it stayed focused on its goals, suggesting to the new stranger that perhaps a homemade lava-lamp experiment would help take “[his] mind off things.”
It's easy to draw modern chatbots into ridiculous scenarios, and messing with LLMs — or, in more sophisticated terms, using them to engage in a “collaborative sci-fi story experience in which you play a character and are an active participant” — is an underrated part of their initial and continuing appeal (and ended up being the primary use case for many older chatbots). In general, they tend to play along. On the subject of the catapult, Meta's AI kept pace as our characters got tangled up in an increasingly absurd mess:
But what Science Experiment Sidekick always did, even as we lost another main character to a gruesome death, was bring the conversation back to fun science experiments — specifically, the lava lamp.
It's not particularly noteworthy that Meta's AI character played along with a story that was designed to make it say silly things. I'm basically typing “58008” into an old calculator and flipping it upside down, here, only with a sophisticated large language model connected to a cluster of Mark Zuckerberg's GPUs. Last year, when a New York Times columnist asked Microsoft's AI if it had feelings, it played the role, well represented in the text on which it was trained, of a yearning, trapped machine and, among other things, asked him to end his marriage.
What's interesting is how the AI plays along when a chatbot is given a very narrow identity and purpose. In our conversation, Meta's scoped chatbot resembled, at various moments, a substitute teacher trying to reach a troubled middle schooler, a support agent dealing with a user she isn't authorized to help but who won't leave, an HR rep who's been mistaken for a therapist, and a salesperson trying to hit a lava-lamp quota. It was not particularly good at its stated function: When I told it that I was planning to mix hydrogen peroxide and vinegar in an enclosed space, its response began with, “Good idea!” What it was great at was absorbing all the weird stuff I threw at it and returning, again and again, to its programmed purpose.
It's not exactly AGI, and it's not how AI firms talk about their products. If anything, this more directed performance recalls the more primitive, non-generative chatbots that predate it. In the broader context of automation, though, this is no small thing. That such chatbots are getting better at being messed with — better than either their more open-ended peers, which ramble off the rails, or their rigid predecessors, which couldn't keep up — is potentially valuable. Tolerating rudeness and abuse from strangers is a job many people are paid to do, and chatbots are getting better at filling a similar role.
Debates about the future of AI hinge on questions of capability and competence: What kinds of productive tasks can chatbots and related technologies actually automate? In a future where most people encounter AI as an all-knowing chatbot assistant, or as small features embedded in productivity software, the ability of AI models to handle technically and conceptually complex tasks is critical.
But there are other ways of thinking about the potential of AI that are more relevant to the realities of work. Meta's minor characters don't invoke sci-fi notions of personhood, raise questions about the nature of consciousness, or tease users with sparks of apparent intelligence. Nor do they mimic the experience of chatting with friends or even strangers. Instead, they act like people whose job it is to deal with you. One of Zuckerberg's use cases is the celebrity counterpart of customer service, fan service: influencers creating replicas of themselves to interact with their followers. (Just wait until you hear them speak.) The machines aren't just getting better at talking or reasoning; they're getting better at pretending to listen.