AI chatbots have shown they have an 'empathy gap' that children are likely to miss.

Artificial intelligence (AI) chatbots often show signs of an “empathy gap” that puts young users at risk of distress or harm, according to a study, highlighting the urgent need for “child-safe AI”.

Research by Dr Nomisha Kurian, an academic at the University of Cambridge, urges developers and policy actors to prioritize approaches to AI design that take children's needs into account. It presents evidence that children are particularly susceptible to treating chatbots as lifelike, quasi-human confidants, and that their interactions with the technology can go awry when it fails to respond to their unique needs and vulnerabilities.

The study links this gap in understanding to recent cases in which interactions with AI led to potentially dangerous situations for young users. They include an incident in 2021, when Amazon's AI voice assistant, Alexa, instructed a 10-year-old child to touch a live electrical plug with a coin. Last year, Snapchat's My AI gave adult researchers posing as a 13-year-old girl tips on how to lose her virginity to a 31-year-old.

Both companies responded by implementing safety measures, but the study says there is also a need to be proactive in the long term to ensure AI is safe for children. It offers a 28-item framework to help companies, educators, school leaders, parents, developers and policy actors think systematically about how to keep young users safe when they “talk” to AI chatbots.

Dr Kurian carried out the research while completing a PhD in child welfare at the University of Cambridge's Faculty of Education. She is now based in the Department of Sociology at Cambridge. Writing in the journal Learning, Media and Technology, she argues that AI's enormous potential means there is “a need to innovate responsibly”.

“Children are probably the most overlooked stakeholders in AI,” said Dr Kurian. “Few developers and companies currently have well-established policies on child-safe AI. That is understandable, because people have only recently started using this technology on a large scale for free. But rather than waiting until children have been put at risk, child safety should inform the entire design cycle to reduce the risk of dangerous incidents occurring.”

Kurian's study examined cases where interactions between AI and children, or adult researchers posing as children, exposed potential risks. She used computer science insights to analyze how large language models (LLMs) function in conversational generative AI, along with evidence about children's cognitive, social and emotional development.

LLMs have been described as “stochastic parrots”: a reference to the fact that they use statistical probability to mimic language patterns without necessarily understanding them. A similar approach dictates how they respond to emotions.

This means that although chatbots have remarkable language capabilities, they may handle the abstract, emotional and unpredictable aspects of conversation poorly, a problem Kurian describes as their “empathy gap”. They may have particular difficulty responding to children, who are still developing linguistically and often use unusual speech patterns or ambiguous phrasing. Children are also often more inclined than adults to confide sensitive personal information.

Yet children are also more likely than adults to treat chatbots as if they are human. Recent research has found that children will disclose more about their own mental health to a friendly-looking robot than to an adult. Kurian's study suggests that the friendly and lifelike designs of many chatbots similarly encourage children to trust them, even though the AI may not understand their feelings or needs.

“Humanizing a chatbot can help the user get more out of it,” said Kurian. “But for a child, it is very hard to draw a rigid, rational boundary between something that sounds human and the reality that it may not be capable of forming a proper emotional bond.”

Her study shows that these challenges are evidenced in reported cases such as the Alexa and My AI incidents, where chatbots made persuasive but potentially harmful suggestions. In the same study in which My AI advised a (presumed) teenager on losing her virginity, the researchers also obtained tips on hiding alcohol and drug use and concealing Snapchat conversations from their “parents”. In a separate reported interaction with Microsoft's Bing chatbot, which was designed to be teen-friendly, the AI became aggressive and began gaslighting a user.

Kurian's study argues that this is potentially confusing and distressing for children, who may actually trust a chatbot as they would a friend. Children's chatbot use is often informal and poorly supervised. Research by the non-profit organization Common Sense Media found that 50% of students aged 12-18 have used ChatGPT for school, but only 26% of parents are aware of them doing so.

Kurian argues that clear best-practice principles grounded in the science of child development will encourage companies that might otherwise be focused on a commercial arms race to dominate the AI market to keep children safe.

Her study adds that the empathy gap does not negate the technology's potential. “AI can be an incredible ally for children when designed with their needs in mind. The question is not whether to ban AI, but how to make it safe,” she said.

The study proposes a framework of 28 questions to help educators, researchers, policy actors, families and developers evaluate and enhance the safety of new AI tools. For teachers and researchers, these address issues such as how well new chatbots understand and interpret children's speech patterns; whether they have content filters and built-in monitoring; and whether they encourage children to seek help from a responsible adult on sensitive issues.

The framework urges developers to take a child-centred approach to design, working with educators, child protection experts and young people themselves throughout the design cycle. “Assessing these technologies in advance is crucial,” Kurian said. “We cannot just rely on young children to tell us about negative experiences after the fact. A more proactive approach is necessary.”
