End-of-life AI

Pilot programs are using AI to help prompt clinicians and patients to have difficult discussions.

The question of how and when to prepare for death is among the most difficult and human of conversations — one which centers around our (perhaps unique) ability to grasp, turn, and examine each facet of our mortality, like a diamond under a loupe.

Yet, surprisingly, these important conversations are increasingly being guided by very non-human advice: artificial intelligence.

Knowing When

For doctors and patients, crucial but difficult decisions about end-of-life care cannot be made until a conversation about dying begins. But the taboo around death and the fear of discouraging patients often delay such conversations until it is too late.

Writing in STAT, Rebecca Robbins interviewed over a dozen clinicians, researchers, and AI developers and experts on the role of machine learning in addressing patients’ mortal concerns.

“A lot of times, we think about it too late — and we think about it when the patient is decompensating, or they’re really, really struggling, or they need some kind of urgent intervention to turn them around,” Stanford inpatient medical physician Samantha Wang told Robbins.

The nudge provided by AI may help doctors and patients have the difficult talk before it’s too late.

Death and Data

Multiple artificial intelligence models are being applied to palliative care; STAT examined AIs in use at UPenn, Stanford, and Northwest Medical Specialties, a group of oncology clinics.

The models use various machine learning techniques to analyze patients’ medical records, availing themselves of vast troves of data — like, Scrooge-McDuck’s-vault troves — to generate mortality probabilities. These AI actuaries are trained on, and then tested against, data from patients who have already been treated, including diagnoses, treatments, and outcomes (discharge or death); some also incorporate socioeconomic data and insurance information, Robbins writes.
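As an illustration only (this is a toy sketch, not any hospital's actual system; the features and outcomes below are simulated stand-ins for real clinical records), a model of this general kind might be trained in Python with scikit-learn like so:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5_000

# Entirely hypothetical structured features drawn from a patient record:
# age, active diagnoses, recent admissions, encoded insurance category.
X = np.column_stack([
    rng.integers(18, 95, n),   # age
    rng.integers(0, 20, n),    # number of active diagnoses
    rng.integers(0, 6, n),     # hospital admissions in the past year
    rng.integers(0, 3, n),     # insurance category (encoded)
])
# Simulated label: death within six months, loosely tied to age here.
y = rng.random(n) < (X[:, 0] - 18) / 400

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier().fit(X_train, y_train)

# The output is a per-patient mortality probability: the "AI actuary"
# figure described above.
probs = model.predict_proba(X_test)[:, 1]
```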

Outputs take different forms for different models: the AI at UPenn flags the 10% of patients it predicts are most likely to die within half a year, then winnows that pool down, while the one used at Northwest Medical Specialties makes a comparative prediction, estimating a patient’s mortality risk against that of their peers.
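A self-contained sketch of those two output styles, with random numbers standing in for a real model's predicted probabilities:

```python
import numpy as np

rng = np.random.default_rng(1)
probs = rng.random(200)  # stand-ins for predicted six-month mortality risk

# UPenn-style triage: flag the 10% of patients with the highest predicted
# risk, a pool that clinicians then winnow further.
cutoff = np.quantile(probs, 0.90)
flagged = np.where(probs >= cutoff)[0]

# Comparative-style output: express each patient's risk as a percentile
# rank against their peers rather than as an absolute probability.
percentiles = probs.argsort().argsort() / (len(probs) - 1) * 100
```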

From there, clinicians receive notifications about the patients the algorithm deems at highest risk of death — along with a prompt to start that difficult discussion. Those messages have to be considered and curated carefully; at UPenn, clinicians never receive more than six at a time, to avoid overwhelming doctors and generating alarm fatigue.
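A toy version of that throttling rule (hypothetical, not UPenn's actual pipeline) might cap the alert queue like this:

```python
MAX_ALERTS = 6  # cap mirroring the "never more than six at a time" rule

def alerts_for_clinician(patient_risk: dict[str, float]) -> list[str]:
    """Return at most six patient IDs, highest predicted risk first."""
    ranked = sorted(patient_risk, key=patient_risk.get, reverse=True)
    return ranked[:MAX_ALERTS]

# Example: eight flagged patients, but only the top six are surfaced.
queue = {f"patient_{i}": risk for i, risk in
         enumerate([0.62, 0.48, 0.91, 0.33, 0.77, 0.55, 0.70, 0.41])}
print(alerts_for_clinician(queue))
```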

“We didn’t want clinicians getting fed up with a bunch of text messages and emails,” Ravi Parikh, an oncologist at UPenn leading the AI project, told Robbins.

At Stanford, the notifications do not include the patient’s probabilities.

“We don’t think the probability is accurate enough, nor do we think human beings — clinicians — are able to really appropriately interpret the meaning of that number,” said Stanford physician and clinical informaticist Ron Li, per STAT.

Entrusting AI with End-of-Life

Of the patients UPenn’s model predicted to be at high risk of dying within six months, 45% indeed did; only 3% of its low-risk cohort died. Northwest’s model showed similar results.

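To make the arithmetic behind those cohort figures concrete, here is an illustrative calculation of the death rate within each predicted-risk group, using simulated data rather than UPenn's:

```python
import numpy as np

rng = np.random.default_rng(2)
probs = rng.random(1_000)                      # predicted risk scores
high_risk = probs >= np.quantile(probs, 0.90)  # top decile, as at UPenn

# Simulate outcomes at roughly the reported rates (illustrative only).
died = np.where(high_risk,
                rng.random(1_000) < 0.45,      # ~45% of high-risk patients
                rng.random(1_000) < 0.03)      # ~3% of the low-risk cohort

print(f"high-risk death rate: {died[high_risk].mean():.0%}")
print(f"low-risk death rate: {died[~high_risk].mean():.0%}")
```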

“I wouldn’t think this is a particularly good use for AI unless and until it is shown that the algorithm being used is extremely accurate,” Eric Topol, a cardiologist and AI expert at Scripps Research in San Diego, told Robbins.

“Otherwise, it will not only add to the burden of busy clinicians, but may induce anxiety in families of affected patients.”

The models have also yet to be tested in a randomized, prospective trial, in which the AI would flag some patients while others received the usual considerations around end-of-life or palliative care conversations.

There is something disconcerting, but perhaps oddly appealing, about augmenting so deeply human a decision as how to spend what may be one’s final days — or how hard, and at what Pyrrhic cost, to fight to extend those days — with artificial intelligence, a memento mori in the morning.
