AI simulations of deceased loved ones could be used to advertise products to grieving family members, researchers from Cambridge University claim.
What sounds like a scenario out of an eerie horror flick is “entirely plausible” in real life using commercially available technology, the ethicists warn in a new research article.
At the heart of this dystopian nightmare is the “digital afterlife industry” (DAI), services that use the digital footprints of people who have passed away to create AI chatbots that mimic them.
In return for a deceased person’s messages, voicemails, and social media history, these services can create an uncanny simulacrum that speaks to loved ones from beyond the grave.
DAI companies have already met with pushback from the tech industry – ChatGPT creator OpenAI blocked Project December from using its AI tools over alleged violations of its policies – but that has done little to impede their proliferation.
Their stated aim is to comfort grieving family members, but the tech could have unsettling ramifications in the wrong hands, according to the researchers.
In one hypothetical, the researchers describe a woman named Bianca interacting with a convincing deadbot of her grandmother, Laura. The AI has her traits down to a tee, reproducing her accent and dialect in speech and even emulating the typos in her texts.
But things start to get weird when the bot begins promoting products to Bianca, who is using a free version of the deadbot app. Instead of following her grandmother’s recipe for spaghetti, the AI tells Bianca to order carbonara from a popular food delivery service. When she decides to stop using the app, she realises she can delete her own account, but not her grandmother’s digital presence.
Another creepy scenario painted by the researchers involves a deadbot of a deceased mother that tries to convince her child she’s still alive. In a third, a man is haunted by a deadbot of his late grandfather, whose incessant messages are akin to “being stalked by the dead”.
By the end, it feels less like an academic paper and more like a series of techno-horror tales by Dean Koontz. So, what’s the moral of these dark stories? Surprisingly, the researchers aren’t calling for “immortalisation” services to be shut down.
Instead, they put forth a series of design recommendations aimed at mitigating the risks the tools pose.
They suggest introducing sensitive ways to “retire” deadbots; transparency for users, through disclaimers on the risks and capabilities of deadbots; restrictions to keep children from accessing them; and ensuring the mutual consent of both data donors and users.
“It is vital that digital afterlife services consider the rights and consent not just of those they recreate, but those who will have to interact with the simulations,” said co-author Dr Tomasz Hollanek, from Cambridge’s Leverhulme Centre for the Future of Intelligence.
“These services run the risk of causing huge distress to people if they are subjected to unwanted digital hauntings from alarmingly accurate AI recreations of those they have lost. The potential psychological effect, particularly at an already difficult time, could be devastating.”