End-of-life AI

Pilot programs are using AI to help prompt clinicians and patients to have difficult discussions.

The question of how and when to prepare for death is among the most difficult and human of conversations, one that centers on our (perhaps unique) ability to grasp, turn, and examine each facet of our mortality, like a diamond under a loupe.

Yet, surprisingly, these important conversations are increasingly being guided by very non-human advice: artificial intelligence.

Knowing When

For doctors and patients, crucial but difficult decisions about end-of-life care cannot be made until a conversation about dying begins. But the taboo around death and the fear of discouraging patients often delay such conversations until it is too late.

Writing in STAT, Rebecca Robbins interviewed over a dozen clinicians, researchers, and AI developers and experts on the role of machine learning in addressing patients’ mortal concerns.

“A lot of times, we think about it too late — and we think about it when the patient is decompensating, or they’re really, really struggling, or they need some kind of urgent intervention to turn them around,” Stanford inpatient medical physician Samantha Wang told Robbins.

The nudge provided by AI may help doctors and patients have the difficult talk before it’s too late.

Death and Data

Multiple artificial intelligence models are being applied to palliative care; STAT examined AI systems at UPenn, Stanford, and the oncology clinic group Northwest Medical Specialties.

The models use various machine learning techniques to analyze the medical records of patients, availing themselves of vast troves of data — like, Scrooge-McDuck’s-vault troves — to generate mortality probabilities. These AI actuaries are trained on, and then tested against, data from patients who have already been treated, including diagnoses, treatments, and outcomes (discharge or death); some also incorporate socioeconomic data and insurance information, Robbins writes.

Outputs take various forms for various models: the AI at UPenn flags the 10% of patients it predicts are most likely to die within half a year, then winnows those down, while the one used at Northwest Medical Specialties produces a comparative prediction, estimating a patient’s mortality risk against that of their peers.

From there, clinicians receive notifications about the patients the algorithm deems at highest risk of death, prompting that difficult discussion. Those messages have to be considered and curated carefully; at UPenn, clinicians never receive more than six at a time, to avoid overwhelming doctors and generating alarm fatigue.
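The workflow described above — rank patients by predicted six-month mortality, flag the top decile, and cap notifications — can be sketched roughly as follows. This is a hypothetical illustration, not code from any of the systems STAT describes; the `Patient` class, the `triage` function, and the risk scores are all invented for the example.

```python
# Hypothetical sketch of the triage workflow: rank patients by a model's
# predicted six-month mortality, keep the top 10%, and cap the number of
# clinician notifications to limit alarm fatigue.
from dataclasses import dataclass

@dataclass
class Patient:
    name: str
    risk: float  # model-predicted probability of death within six months

MAX_ALERTS = 6  # UPenn reportedly sends clinicians no more than six at a time

def triage(patients: list[Patient]) -> list[Patient]:
    """Return at most MAX_ALERTS patients from the highest-risk decile."""
    ranked = sorted(patients, key=lambda p: p.risk, reverse=True)
    top_decile = ranked[: max(1, len(ranked) // 10)]
    return top_decile[:MAX_ALERTS]
```

The cap matters as much as the ranking: without it, a well-calibrated model could still bury clinicians in alerts they learn to ignore.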

“We didn’t want clinicians getting fed up with a bunch of text messages and emails,” Ravi Parikh, an oncologist at UPenn leading the AI project, told Robbins.

At Stanford, the notifications do not include the patient’s probabilities.

“We don’t think the probability is accurate enough, nor do we think human beings — clinicians — are able to really appropriately interpret the meaning of that number,” Stanford physician and clinical informaticist Ron Li said, per STAT.

Entrusting AI with End-of-Life

Of the patients UPenn’s model predicted to be at high risk of dying within six months, 45% indeed did; only 3% of its low-risk cohort died. Northwest’s model showed similar results.
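The figures above amount to comparing observed death rates across the model’s risk cohorts. A minimal sketch of that check, using made-up outcome lists that merely mirror the reported percentages (not real patient data):

```python
# Compare observed six-month death rates in a model's high-risk and
# low-risk cohorts. Each outcome is True if the patient died within
# the six-month window.
def cohort_death_rate(outcomes: list[bool]) -> float:
    """Fraction of a cohort that died within the six-month window."""
    return sum(outcomes) / len(outcomes) if outcomes else 0.0

# Illustrative cohorts shaped to match the reported 45% and 3% figures.
high_risk = [True] * 45 + [False] * 55
low_risk = [True] * 3 + [False] * 97
```

A large gap between the two rates shows the model stratifies risk usefully, even when — as Stanford notes — the individual probabilities themselves are not trusted enough to show clinicians.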


“I wouldn’t think this is a particularly good use for AI unless and until it is shown that the algorithm being used is extremely accurate,” Eric Topol, a cardiologist and AI expert at Scripps Research in San Diego, told Robbins.

“Otherwise, it will not only add to the burden of busy clinicians, but may induce anxiety in families of affected patients.”

The models have also yet to be tested in a randomized, prospective trial, in which some patients would be flagged by the AI while others receive the usual prompts for end-of-life or palliative care conversations.

There is something disconcerting, but perhaps oddly appealing, about augmenting so deeply human a decision as how to potentially spend one’s final days (or how hard, and at what Pyrrhic cost, to fight to extend those days) with artificial intelligence, a memento mori in the morning.
