AI “deadbots” can digitally “haunt” loved ones from beyond the grave

Cambridge researchers warn of the psychological dangers of 'deadbots', AI chatbots that mimic dead people, and call for ethical standards and consent protocols to prevent abuse and ensure respectful interactions.

According to researchers at the University of Cambridge, artificial intelligence that allows users to hold text and voice conversations with lost loved ones risks causing psychological harm, and even digitally "haunting" those left behind, in the absence of design safety standards.

'Deadbots' or 'griefbots' are AI chatbots that mimic the language patterns and personality traits of the dead using the digital footprints they leave behind. Some companies are already offering these services, providing an entirely new kind of "postmortem presence."

AI ethicists at Cambridge's Leverhulme Centre for the Future of Intelligence outline three design scenarios for platforms that could emerge as part of the developing "digital afterlife industry", to show the potential consequences of careless design in an area of AI they describe as "high risk".

Abuse of AI chatbots

The research, published in the journal Philosophy and Technology, highlights the potential for companies to use deadbots to surreptitiously advertise products to users in the manner of a departed loved one, or to distress children by insisting that a dead parent is still "with you".

When the living sign up to be virtually recreated after they die, the resulting chatbots could be used by companies to spam surviving family and friends with unsolicited notifications, reminders and updates about the services they provide – akin to being digitally "stalked by the dead."

Even those who take initial comfort from a deadbot may become drained by daily interactions that turn into an "overwhelming emotional weight," the researchers argue, yet they may also be powerless to have the AI simulation suspended if their now-deceased loved one signed a lengthy contract with a digital afterlife service.

One of the design scenarios used in the paper is the concept of a fictional company called MaNana, illustrating potential ethical issues in the emerging digital afterlife industry. Credit: Dr. Tomasz Hollanek

"Rapid advances in generative AI mean that nearly anyone with Internet access and some basic know-how can revive a deceased loved one," said study co-author Dr. Katarzyna Nowaczyk-Basińska, a researcher at Cambridge's Leverhulme Centre for the Future of Intelligence (LCFI). "This area of AI is an ethical minefield. It is important to prioritize the dignity of the deceased, and to ensure that it is not overridden by the financial objectives of digital afterlife services, for example. At the same time, a person may leave an AI simulation as a farewell gift for loved ones who are not ready to process their grief in this way. The rights of both data donors and those who interact with AI afterlife services should be protected equally."

Current services and hypothetical scenarios

There are already platforms offering to recreate the dead with AI for a small fee, such as 'Project December', which started out using GPT models before developing its own systems, and apps including 'HereAfter'. Similar services have also begun to emerge in China. One of the scenarios in the new paper is "MaNana": a conversational AI service that allows people to create a deadbot imitating their deceased grandmother without the consent of the "data donor" (the dead grandparent).

In the hypothetical scenario, an adult grandchild who is initially impressed and comforted by the technology begins receiving advertisements once a "premium trial" ends: the chatbot starts suggesting orders from food delivery services in the voice and style of the deceased. The relative feels they have dishonored their grandmother's memory, and wants the deadbot switched off, but in a meaningful way – something the service providers have not considered.

The concept of a fictional company called Paren't. Credit: Dr. Tomasz Hollanek

"People can form strong emotional bonds with such simulations, which will make them particularly vulnerable to manipulation," said co-author Dr. Tomasz Hollanek, also of Cambridge's LCFI. "Methods and even rituals for retiring deadbots in a dignified manner should be considered. This could mean a form of digital funeral, for example, or other types of ceremony depending on the social context. We recommend design protocols that prevent deadbots from being used in disrespectful ways, such as for advertising or maintaining an active presence on social media."

While Hollanek and Nowaczyk-Basińska argue that designers of recreation services should actively seek consent from data donors before they pass away, they also argue that a ban on deadbots based on non-consenting donors would be unworkable.

They suggest that design processes should include a series of prompts for those who want to "resurrect" their loved ones – such as 'Have you ever spoken with X about how they would like to be remembered?' – so that the dignity of the departed is respected in deadbot development.

Age restrictions and transparency

In another scenario featured in the paper, that of a fictional company called "Paren't", a terminally ill woman leaves a deadbot to help her eight-year-old son through the grieving process.

Although the deadbot initially helps as a therapeutic tool, the AI begins to generate confusing responses as it adapts to the child's needs, such as depicting an impending in-person encounter.

The concept of a fictional company called Stay. Credit: Dr. Tomasz Hollanek

The researchers recommend age restrictions for deadbots, and also call for "meaningful transparency" to ensure users are consistently aware that they are interacting with an AI. This could be similar to existing warnings on content that may cause seizures, for example.

The final scenario explored by the study – a fictional company called "Stay" – shows an older person secretly committing to a deadbot of themselves and paying for a twenty-year subscription, in the hope that it will comfort their adult children and allow their grandchildren to know them.

After the death, the service kicks in. One adult child does not engage, and receives a barrage of emails in the voice of their dead parent. Another does, but ends up emotionally exhausted and guilt-ridden over the deadbot's fate. Yet suspending the deadbot would violate the terms of the contract their parent signed with the service company.

"It is vital that digital afterlife services consider the rights and consent not just of those they recreate, but of those who will have to interact with the simulations," Hollanek said.

"These services run the risk of causing huge distress to people if they are subjected to unwanted digital hauntings from alarmingly accurate AI recreations of those they have lost. The potential psychological effect, particularly at an already difficult time, could be devastating."

The researchers urge design teams to prioritize opt-out protocols that allow potential users to end their relationship with deadbots in ways that provide emotional closure.

Nowaczyk-Basińska added: “We need to start thinking now about how we reduce the social and psychological risks of digital immortality, because the technology already exists.”

Citation: "Griefbots, Deadbots, Postmortem Avatars: On Responsible Applications of Generative AI in the Digital Afterlife Industry" by Tomasz Hollanek and Katarzyna Nowaczyk-Basińska, 9 May 2024, Philosophy and Technology.
DOI: 10.1007/s13347-024-00744-w
