End-of-life AI

Pilot programs are using AI to help prompt clinicians and patients to have difficult discussions.

The question of how and when to prepare for death is among the most difficult and human of conversations — one that centers on our (perhaps unique) ability to grasp, turn, and examine each facet of our mortality, like a diamond under a loupe.

Yet, surprisingly, these important conversations are increasingly being guided by very non-human advice: artificial intelligence.

Knowing When

For doctors and patients, crucial but difficult decisions about end-of-life care cannot be made until a conversation about dying begins. But the taboo around death and fear of discouraging patients often delays such conversations until it is too late.

Writing in STAT, Rebecca Robbins interviewed over a dozen clinicians, researchers, AI developers, and experts on the role of machine learning in addressing patients’ mortal concerns.

“A lot of times, we think about it too late — and we think about it when the patient is decompensating, or they’re really, really struggling, or they need some kind of urgent intervention to turn them around,” Stanford inpatient medical physician Samantha Wang told Robbins.

The nudge provided by AI may help doctors and patients have the difficult talk before it’s too late.

Death and Data

Multiple artificial intelligence models are being applied to palliative care; STAT examined the AIs in use at UPenn, Stanford, and Northwest Medical Specialties, a network of oncology clinics.

The models use various machine learning techniques to analyze patients’ medical records, availing themselves of vast troves of data — like, Scrooge-McDuck’s-vault troves — to generate mortality probabilities. These AI actuaries are trained on, and then tested against, data from patients who have already been treated, including diagnoses, treatments, and outcomes such as discharge or death; some also draw on socioeconomic data and insurance information, Robbins writes.
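
How such a model might be built is easier to picture with a toy example. The sketch below is not any hospital’s actual pipeline; it trains a generic scikit-learn classifier on synthetic stand-ins for the record fields the article mentions (diagnoses, treatments, outcomes), just to show how historical records become a per-patient mortality probability.

```python
# Minimal sketch, not the UPenn/Stanford/Northwest pipeline: all feature names,
# data, and model choices here are hypothetical stand-ins.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 5_000

# Synthetic stand-ins for structured record fields: age, active diagnoses,
# hospital days in the past year, prior admissions.
X = np.column_stack([
    rng.normal(70, 12, n),
    rng.poisson(4, n),
    rng.exponential(5.0, n),
    rng.poisson(1.5, n),
])

# Synthetic label: 1 = died within six months of the index encounter.
logit = 0.04 * (X[:, 0] - 70) + 0.3 * X[:, 1] + 0.05 * X[:, 2] - 2.5
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

# Train on past patients, evaluate on held-out past patients.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
model = GradientBoostingClassifier().fit(X_train, y_train)

# The output is a six-month mortality probability per patient, not a diagnosis.
probs = model.predict_proba(X_test)[:, 1]
print("Held-out AUC:", round(roc_auc_score(y_test, probs), 3))
```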

Outputs take various forms for various models: the AI at UPenn triages the 10% of patients it predicts are most likely to die within half a year, then winnows those down, while the one used at Northwest Medical Specialties makes a comparative prediction, estimating a patient’s mortality risk against that of their peers.
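
To make the two flagging styles concrete, here is a rough, hypothetical illustration (a guess at the shape of the logic, not the actual UPenn or Northwest code): one picks an absolute top decile of predicted risk, the other ranks each patient against their peers.

```python
# Hypothetical illustration of the two flagging styles; risk scores are random
# placeholders, and neither snippet reflects the real systems' logic.
import numpy as np

rng = np.random.default_rng(1)
probs = rng.random(200)  # per-patient six-month mortality estimates

# UPenn-style: start from the top 10% of predicted risk, to be winnowed further.
cutoff = np.quantile(probs, 0.90)
top_decile = np.flatnonzero(probs >= cutoff)

# Northwest-style: express risk relative to peers, e.g. as a percentile rank
# rather than an absolute probability.
percentile_rank = probs.argsort().argsort() / (len(probs) - 1)

print(f"{len(top_decile)} patients flagged in the top decile")
print(f"Patient 0 is riskier than {percentile_rank[0]:.0%} of peers")
```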

From there, clinicians receive notifications about the patients the algorithm deems at highest risk of death — prompting that difficult discussion. Those messages have to be considered and curated carefully; at UPenn, clinicians never receive more than six at a time, to avoid overwhelming docs and generating alarm fatigue.
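
A guardrail like that cap is simple to express in code. The sketch below is a hypothetical version: the cap of six comes from the article, but the function and data structures are assumptions made for illustration.

```python
# Hypothetical alarm-fatigue guardrail: surface at most six flagged patients
# per clinician, highest predicted risk first. Only the cap of six comes from
# the article; everything else here is an assumption.
from heapq import nlargest

MAX_ALERTS_PER_CLINICIAN = 6

def alerts_for(flagged_patients: dict[str, float]) -> list[str]:
    """Return at most six patient IDs, ordered by predicted mortality risk."""
    top = nlargest(MAX_ALERTS_PER_CLINICIAN, flagged_patients.items(), key=lambda kv: kv[1])
    return [patient_id for patient_id, _ in top]

# Eight patients were flagged, but only the six riskiest are surfaced.
flagged = {f"patient-{i}": risk
           for i, risk in enumerate([0.91, 0.84, 0.77, 0.73, 0.69, 0.66, 0.61, 0.58])}
print(alerts_for(flagged))
```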

“We didn’t want clinicians getting fed up with a bunch of text messages and emails,” Ravi Parikh, an oncologist at UPenn leading the AI project, told Robbins.

At Stanford, the notifications do not include the patients’ predicted probabilities.

“We don’t think the probability is accurate enough, nor do we think human beings — clinicians — are able to really appropriately interpret the meaning of that number,” said Stanford physician and clinical informaticist Ron Li, per STAT.

Entrusting AI with End-of-Life

Of the patients UPenn’s model predicted to be at high risk of dying within six months, 45% indeed did; only 3% of those in its low-risk cohort died. Northwest’s model showed a similar result.
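
Put in concrete (and entirely made-up) cohort sizes, those two percentages behave roughly like the model’s positive and negative predictive values; the sketch below borrows only the 45% and 3% figures from the article.

```python
# Back-of-the-envelope arithmetic with hypothetical cohort sizes; only the
# 45% and 3% rates come from the reported validation results.
high_risk_group = 1_000   # assumed number of patients flagged as high risk
low_risk_group = 9_000    # assumed number of patients labeled low risk

deaths_high = 0.45 * high_risk_group   # "45% indeed did"
deaths_low = 0.03 * low_risk_group     # "only 3% ... died"

print(f"About {deaths_high:.0f} of {high_risk_group} high-risk patients died "
      "(a positive predictive value of 45%).")
print(f"About {deaths_low:.0f} of {low_risk_group} low-risk patients died "
      "(roughly the 3% the low-risk label missed).")
```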

“I wouldn’t think this is a particularly good use for AI unless and until it is shown that the algorithm being used is extremely accurate,” Eric Topol, a cardiologist and AI expert at Scripps Research in San Diego, told Robbins.

“Otherwise, it will not only add to the burden of busy clinicians, but may induce anxiety in families of affected patients.”

The models also have yet to be tested in a randomized, prospective trial, wherein the AI flags some patients while others receive the usual prompts for end-of-life or palliative care conversations.

There is something disconcerting, but perhaps oddly appealing, about augmenting so deeply human a decision as how to potentially spend one’s final days — or how hard, and at what Pyrrhic cost, to fight to extend those days — with artificial intelligence, a memento mori in the morning.
