With “thanabots,” ChatGPT is making it possible to talk to the dead

AI programs like ChatGPT can create "thanabots" based on deceased loved ones' digital communications, allowing us to talk with the departed.

Since its public launch last year, the artificially intelligent chatbot ChatGPT has simultaneously wowed and frightened the world with its deep knowledge, its surprising empathy, and its undeniable potential to change the world in unforeseen, possibly miraculous or calamitous, ways. Now, it’s making it possible to digitally resurrect the dead in the form of “thanabots”: chatbots trained on data of the deceased.

Developed by OpenAI, ChatGPT is an AI program called a large language model. Trained on more than 300 billion words from sources across the Internet, ChatGPT responds to human prompts by predicting, one word at a time, what it should say next based on both its training and the prompt. The result is a stream of communication that’s both informative and human-like. ChatGPT has passed difficult exams, written scientific papers, and convinced many Microsoft scientists that it can genuinely understand language and reason.
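That word-by-word prediction can be illustrated with a toy model. The sketch below is not OpenAI's actual method; it uses a tiny table of word-pair counts where ChatGPT uses billions of learned parameters, but the loop is the same idea: look at what came before, pick the likeliest next word, repeat.

```python
from collections import Counter, defaultdict

# Toy stand-in for a language model: a bigram table built from a
# tiny "training corpus" (real models learn from billions of words).
corpus = "the dead live on in data and the dead speak through data".split()

# Count which word follows which in the training text.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def predict_next(word):
    """Return the most frequent follower of `word` in the training data."""
    followers = bigrams[word]
    return followers.most_common(1)[0][0] if followers else None

def generate(prompt, length=4):
    """Greedily extend the prompt one predicted word at a time."""
    words = prompt.split()
    for _ in range(length):
        nxt = predict_next(words[-1])
        if nxt is None:
            break
        words.append(nxt)
    return " ".join(words)

print(generate("the"))  # continues the prompt word by word
```

A real large language model replaces the bigram table with a neural network that conditions on the entire prompt, not just the last word, which is what makes its continuations coherent over whole paragraphs.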

Thanabot Spock

ChatGPT and other large language models can also receive more specific training to shape their responses. Programmer Jason Rohrer realized that he could create chatbots that emulate specific people by feeding ChatGPT examples of how they communicate and details of their lives. He started off with Star Trek’s Mr. Spock, as any good nerd would. He then launched a website called Project December, which allows paying customers to input personal details and writing samples to build their own personalized chatbots, even ones based upon deceased friends and family.
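The persona-shaping Rohrer describes can work without retraining the model at all: the user's message is wrapped in a prompt that describes the person and shows samples of how they write, and the model continues in that voice. The sketch below shows how such a prompt might be assembled; the field names and format are illustrative assumptions, not Project December's actual design.

```python
def build_persona_prompt(name, facts, sample_lines, user_message):
    """Assemble a conditioning prompt that asks a language model to
    reply in the voice of `name`, given biographical facts and
    examples of how that person writes."""
    facts_text = "\n".join(f"- {fact}" for fact in facts)
    samples_text = "\n".join(f"{name}: {line}" for line in sample_lines)
    return (
        f"You are {name}. Stay in character.\n"
        f"Facts about {name}:\n{facts_text}\n"
        f"Examples of how {name} writes:\n{samples_text}\n"
        f"User: {user_message}\n"
        f"{name}:"
    )

# Example: a Spock persona built from a few facts and sample lines.
prompt = build_persona_prompt(
    "Spock",
    ["First officer of the USS Enterprise", "Half-Vulcan, values logic"],
    ["Fascinating.", "That would be highly illogical, Captain."],
    "Should we trust our feelings here?",
)
print(prompt)
```

The prompt ends with the persona's name so the model's continuation naturally becomes that persona's reply; a thanabot would fill the facts and samples from a deceased person's texts, emails, and posts.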

As San Francisco Chronicle writer Jason Fagone detailed in a long piece published in July 2021, the result can be striking. Fagone described the emotional experience of 33-year-old Joshua Barbeau, who used Project December to make a thanabot with the personality of his fiancée, who had passed away eight years prior.

The term thanabot derives from thanatology, the scientific study of death. Leah Henrickson, a lecturer in digital media and cultures at The University of Queensland, thinks that thanabots could become more prevalent in the coming decades as more and more people with extensive digital records of texts, emails, and social media posts pass away.

“These systems may be created without prior consent from the deceased, or may constitute part of ‘digital estate planning’ wherein someone plans or consents to the creation of their own thanabot,” she wrote in a paper published earlier this year in the journal Media, Culture & Society.

As Facebook, Google, Apple, and Microsoft all store heaps of our digital communications, it’s conceivable that they all could create and sell thanabots in the coming years. Considering that communing with the dead has been a consistent fixation across human cultures, it’s likely there will be plenty of demand.

Digital resurrection

Henrickson sees potential benefits to thanabots. “We may be able to provide more suitable support for those grieving, allow for alternative forms of estate management, and contribute to meaningful cultural understandings of death,” she wrote.

But there could also be downsides. After all, thanabots will be based only on digital data, at least at first. People’s online personas can differ sharply from their offline selves, so a thanabot may not accurately represent the person it was made to mimic. Moreover, thanabots may not provide the catharsis users hope for, and could instead intensify feelings of grief and despair.

We are entering a fascinating new era, one in which death may not be as final as it once was.

This article was reprinted with permission of Big Think, where it was originally published.
