End-of-life AI

Pilot programs are using AI to help prompt clinicians and patients to have difficult discussions.

The question of how and when to prepare for death is among the most difficult and human of conversations — one that centers on our (perhaps unique) ability to grasp, turn, and examine each facet of our mortality, like a diamond under a loupe.

Yet, surprisingly, these important conversations are increasingly being guided by very non-human advice: artificial intelligence.

Knowing When

For doctors and patients, crucial but difficult decisions about end-of-life care cannot be made until a conversation about dying begins. But the taboo around death and fear of discouraging patients often delays such conversations until it is too late.

Writing in STAT, Rebecca Robbins interviewed more than a dozen clinicians, researchers, and AI developers and experts about the role of machine learning in addressing patients’ mortal concerns.

“A lot of times, we think about it too late — and we think about it when the patient is decompensating, or they’re really, really struggling, or they need some kind of urgent intervention to turn them around,” Stanford inpatient medical physician Samantha Wang told Robbins.

The nudge provided by AI may help doctors and patients have the difficult talk before it’s too late.

Death and Data

Multiple artificial intelligence models are being applied to palliative care; STAT examined AIs in use at UPenn, Stanford, and the oncology practice Northwest Medical Specialties.

The models use various machine learning techniques to analyze patients’ medical records, availing themselves of vast troves of data — like, Scrooge-McDuck’s-vault troves — to generate mortality probabilities. These AI actuaries are trained with, and then tested on, data from patients who have already been treated, including diagnoses, treatments, and outcomes (discharge or death); some also incorporate socioeconomic data and insurance information, Robbins writes.

Outputs take various forms for various models: the AI at UPenn flags the 10% of patients it predicts are most likely to die within half a year, then winnows those down, while the one used at Northwest Medical Specialties makes a comparative prediction, estimating a patient’s mortality risk against that of their peers.

From there, clinicians receive notifications about the patients the algorithm deems at highest risk of death — prompting that difficult discussion. Those messages have to be considered and curated carefully; at UPenn, clinicians never receive more than six at a time, to avoid overwhelming doctors and generating alarm fatigue.
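The rank-and-cap pattern described above can be sketched in a few lines. This is a hypothetical illustration, not UPenn’s actual system: the patient IDs, risk scores, and the `triage` function are all invented, and the 10% cutoff and six-alert cap come from the article.

```python
# Hypothetical sketch of the triage step: rank patients by a model's
# predicted six-month mortality risk, keep the top 10%, and cap clinician
# notifications at six to avoid alarm fatigue.

def triage(risk_scores: dict[str, float], top_fraction: float = 0.10,
           max_notifications: int = 6) -> list[str]:
    """Return patient IDs to notify clinicians about, highest risk first."""
    ranked = sorted(risk_scores, key=risk_scores.get, reverse=True)
    cutoff = max(1, int(len(ranked) * top_fraction))  # top 10% by risk
    return ranked[:cutoff][:max_notifications]        # at most six alerts

patients = {"p01": 0.82, "p02": 0.11, "p03": 0.64, "p04": 0.05,
            "p05": 0.77, "p06": 0.40, "p07": 0.91, "p08": 0.23,
            "p09": 0.58, "p10": 0.36}
print(triage(patients))  # top 10% of 10 patients -> one alert: ['p07']
```

With a larger panel, the notification cap, not the 10% cutoff, becomes the binding constraint — the design choice the UPenn team describes.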

“We didn’t want clinicians getting fed up with a bunch of text messages and emails,” Ravi Parikh, an oncologist at UPenn leading the AI project, told Robbins.

At Stanford, the notifications do not include the patient’s probabilities.

“We don’t think the probability is accurate enough, nor do we think human beings — clinicians — are able to really appropriately interpret the meaning of that number,” Stanford physician and clinical informaticist Ron Li told STAT.

Entrusting AI with End-of-Life

Of the patients UPenn’s model predicted to be at high risk of dying within six months, 45% indeed did; only 3% of its low-risk cohort died. Northwest’s model showed similar results.
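Read as model metrics, the 45% figure is the positive predictive value of the high-risk flag, and the 3% figure is the death rate in the low-risk group. A minimal sketch, using hypothetical cohort sizes (the article reports only the percentages, not the underlying counts):

```python
# Hypothetical cohort counts chosen to match the reported rates; the
# article gives only the percentages, not the underlying numbers.

def event_rate(died: int, total: int) -> float:
    """Fraction of a cohort who died within the six-month window."""
    return died / total

high_risk = event_rate(died=45, total=100)  # positive predictive value: 0.45
low_risk = event_rate(died=3, total=100)    # death rate among low-risk: 0.03

# A well-separated model keeps these far apart; here the high-risk group's
# death rate is roughly fifteen times the low-risk group's.
print(f"{high_risk:.0%} vs {low_risk:.0%}")  # prints "45% vs 3%"
```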

“I wouldn’t think this is a particularly good use for AI unless and until it is shown that the algorithm being used is extremely accurate,” Eric Topol, a cardiologist and AI expert at Scripps Research in San Diego, told Robbins.

“Otherwise, it will not only add to the burden of busy clinicians, but may induce anxiety in families of affected patients.”

The models have also yet to be tested in a randomized, prospective trial, in which some patients’ care would be guided by AI predictions while others receive the usual end-of-life or palliative care conversation considerations.

There is something disconcerting, but perhaps oddly appealing, about augmenting so deeply human a decision as how to potentially spend one’s final days — or how hard, and at what Pyrrhic cost, to fight to extend those days — with artificial intelligence, a memento mori in the morning.
