Researchers Caution AI Chatbot Developers About Mimicking the Dead
A University of Cambridge study warns of the emotional and ethical pitfalls of using conversational AI chatbots to speak with the dead
Researchers at the University of Cambridge are urging companies that develop AI chatbots designed to simulate conversations with deceased relatives to exercise caution to avoid distressing users.
Deadbots, also known as griefbots, are conversational AI chatbots that let users engage with simulations of deceased loved ones through natural language. They work much like general-purpose chatbots such as ChatGPT but are designed specifically to mimic conversations with a deceased loved one.
Though deadbots are an emerging niche in the growing AI chatbot market, the researchers warn that developers could exploit consumers if they fail to design these systems responsibly.
The Cambridge academics, whose research was published in the journal Philosophy & Technology, found that deadbot users may become emotionally drained by the chatbots, which have the potential to distress users in a way akin to being “haunted” by a ghost.
“Rapid advancements in generative AI mean that nearly anyone with Internet access and some basic know-how can revive a deceased loved one,” said Katarzyna Nowaczyk-Basińska, study co-author and a researcher in the university’s Leverhulme Centre for the Future of Intelligence. “This area of AI is an ethical minefield. It’s important to prioritize the dignity of the deceased and ensure that this isn’t encroached on by financial motives of digital afterlife services, for example.”
Companies building deadbot-style applications include HereAfter AI, which enables users to record stories and memories that their families can access after they have passed away. The mobile app gathers these recordings and uses generative AI to formulate responses to questions from loved ones, drawing on the recorded inputs.
Another example is StoryFile, which creates interactive chatbots based on hours of recordings of an individual. The company famously immortalized Captain Kirk actor William Shatner in 2021, though it filed for bankruptcy last week.
Project December previously used OpenAI models to recreate loved ones before developing its own technology.
The Cambridge researchers want deadbot developers to design their systems to respond to prompts with dignity.
The research suggests age restrictions for deadbots, as well as transparency measures that remind users they’re talking with a chatbot and not their loved one, similar to the content warnings used for material that may trigger seizures.
Also suggested were opt-out measures, allowing users to turn off their deadbots.
The Cambridge researchers also argue that AI “postmortem presence” services could be used to spam family members with unsolicited notifications, akin to “being digitally ‘stalked by the dead.’”
“People might develop strong emotional bonds with such simulations, which will make them particularly vulnerable to manipulation,” said co-author Dr Tomasz Hollanek.
“Methods and even rituals for retiring deadbots in a dignified way should be considered. This may mean a form of digital funeral, for example, or other types of ceremony depending on the social context.”