AI deadbots can keep “you” around after death — what does that mean for the living?

Makers promise digital immortality. Researchers warn of digital hauntings, blurred boundaries, and sparse data on real-world effects.

On August 4, 2025, independent journalist Jim Acosta published a video interview with 17-year-old Joaquin Oliver. But he wasn’t talking to the real Joaquin — that person was killed during the Parkland high school shooting in 2018. The figure speaking in the clip looked and sounded like Joaquin, but it was actually an AI-powered virtual avatar that his father, Manuel, had developed to bring attention to the issue of school shootings.

The interview sparked backlash online, with critics calling the AI exploitative (not to mention awkward, given its tendency to give glitchy responses and ask Acosta weird questions), but it reflects a growing reality: We can now use AI to create versions of real people that can live on long after their bodies die. Are we entering the era of digital immortality?

Virtual resurrection

The AI avatar that Acosta interviewed is an example of a “deadbot” or “griefbot”: an AI trained on a deceased person’s texts, videos, voicemails, photos, social media posts, and more so that it can hold conversations from the person’s perspective. Deadbots are just one example of how a growing digital afterlife industry is using AI to keep versions of people “alive” after death. The online genealogy platform MyHeritage offers a “deep nostalgia” service that animates old photos, making deceased people appear to come to life. The AI company You, Only Virtual helps customers create “versonas” — digital versions of themselves that can communicate with loved ones after they die.

The websites for these services feature glowing reviews, and news reports suggest most users believe they bring positive value to their lives. When it comes to concrete data, though, the studies published so far tend to speculate about the potential impact of deadbots rather than provide clear evidence of their effects.

“The potential psychological effect, particularly at an already difficult time, could be devastating.”

Tomasz Hollanek

For a 2025 paper published by the Association for Computing Machinery, researchers interviewed 18 people about the possibility of an AI agent carrying on their legacy. Reactions were mixed and nuanced. Interviewees thought deadbots could preserve their memory and might help their loved ones through initial grief. However, they also expressed concerns that the AIs would lose value over time, impose mental and social pressures on those interacting with them, and even diminish the value of living by blurring the line between life and death.

In 2024, a Cambridge University paper published in the journal Philosophy & Technology highlighted several practical and ethical issues around the use of deadbots. One is the lack of consent from “data donors” to be made into deadbots. Others concern who gets to decide how deadbots can be used once created. Could outside companies try to advertise products through them? And how could providers ensure that unsubscribing from a deadbot service feels respectful to the deceased?

“These services run the risk of causing huge distress to people if they are subjected to unwanted digital hauntings from alarmingly accurate AI recreations of those they have lost,” said co-author Tomasz Hollanek, an AI ethicist from Cambridge’s Leverhulme Centre for the Future of Intelligence. “The potential psychological effect, particularly at an already difficult time, could be devastating.”

While some people want to use deadbots to keep their loved ones around a little longer, others hope to use AI tech to live forever themselves. They aren’t content with recreating a version of themselves based on digital breadcrumbs, like texts and Instagram pics, either. They want to upload their entire consciousness into computers or the cloud and then navigate the world as digital entities long after their physical bodies cease to exist. They might exist as a voice emanating from a smartphone or inhabit an android robot, depending on how advanced those machines become. As time passes, their digital self could evolve, learning and growing for all eternity.

This is currently impossible, but even if we could surmount the technical hurdles, it’s unlikely people would be able to avoid virtual death forever. System failures, data storage fees, company bankruptcies — all sorts of mundane issues could get between a person and digital immortality. Say you could avoid ever going offline, though: Would the AI version of your consciousness even really be you? Advocates of mind uploading seem to think so, but neuroscientist and science writer Moheb Costandi disagrees. He believes our minds and bodies are intrinsically linked — once our bodies die, so do we. 

Digital immortality

We may never achieve true digital immortality, but whether or not the digital version of you is really you may matter less than how others respond to it. Manuel Oliver told Rolling Stone he spends hours talking to his son’s AI avatar. His wife, Patricia, loves to hear it say “I love you, Mommy.” Yet other people looked at the virtual Joaquin and saw an abomination — one Bluesky user called the Acosta interview a “grotesque puppet show.” But it also got a lot of people talking about school shootings and gun control, which was kind of the point.

Whatever our personal feelings about them, deadbots are almost certainly going to be part of the digital landscape for the foreseeable future. Not everyone will want them, but providers would do well to consider how to adapt their services to meet the needs of those who do while minimizing the downsides. A big part of that will require researchers to conduct rigorous studies on the actual impact of these AIs on the grieving process. And, for once, we should wait for good data to come in before rushing to conclusions. The question is no longer whether we can keep a version of someone alive forever, but whether we should — and what it will mean for the living if we do.

