Second opinions in the age of ChatGPT

LLMs are helping patients double-check diagnoses — and sometimes catch what doctors miss.

ChatGPT saved a woman’s life.

That’s the gist of a viral post shared on Reddit. The author said his wife developed a fever and felt ill after a seemingly routine cyst removal. Her doctor had said the cyst wasn’t infected, so she wanted to wait it out, but ChatGPT urged them to go to the emergency room immediately.

The AI’s advice was sound: The woman was septic, and had the couple waited to seek medical attention, she could have gotten much sicker. Instead, she was “stable and doing fine,” according to the Redditor, who concluded his post by noting that his $20 ChatGPT subscription was money well spent.

This is just an anecdote (and an unverified one at that), but it’s emblematic of a much larger trend in the world of healthcare: An estimated 20% of Americans turned to large language models (LLMs), such as ChatGPT, Claude, and Grok, for answers to medical questions in 2024.

But can a chatbot really help anyone with their health?

Paging Dr. Google

Tapping into tech resources for health advice is nothing new. In the early 2000s, the creation of WebMD meant anyone with an internet connection could type their symptoms into a text box and receive a list of what might be wrong with them, ranked from most to least likely. 

More than two decades later, people regularly consult Dr. Google before their own doctors — WebMD, Healthline, and sites like them are among the most trafficked on the entire internet. But whether these searches are actually improving healthcare is debatable.

While online symptom checkers can alert people to possible health issues, which they can then follow up on with their doctors, their accuracy is low. A 2015 analysis of 23 such platforms determined that they listed the correct diagnosis first just 34% of the time. The correct diagnosis appeared anywhere within the top 20 results just 58% of the time.

For people already prone to anxiety, these platforms can actually induce stress by suggesting they have a serious health issue, even if the risk is low. Following the development of online symptom checkers, “cyberchondria” spread across the developed world like an infectious disease.

“AI systems are capable of far more than the previous generation of symptom checkers.”

Adam Rodman

It’s against this backdrop that LLMs enter the picture. Unlike symptom checkers, these tools can engage in dialogue, synthesize complex medical histories, and offer contextualized responses. And according to Adam Rodman, an internal medicine physician and director of AI programs at Beth Israel Deaconess Medical Center, they’re a major cut above their predecessors.

“The big difference between WebMD — which was a very basic symptom checker at the time — and LLMs is that LLMs are really, really good,” said Rodman, who is also an assistant professor at Harvard Medical School. “WebMD had relatively simple pattern matching, but these AI systems are capable of far more than the previous generation of symptom checkers. They are often right.” 

The research backs this up: A 2025 meta-analysis of 83 studies found that generative AIs have an overall diagnostic accuracy of about 52%. That’s equivalent to the accuracy of physicians, as determined by those same studies, and significantly higher than the 34% of symptom checkers.

Using LLMs wisely

With patients already using LLMs, it’s clear the tools are going to be part of the healthcare ecosystem, at least for now. The question then turns to how people should use them, and the Reddit poster may have the right idea.

Asking an LLM for a second opinion after first consulting a doctor — ideally, one who specializes in whatever might ail you — is a powerful use case, and there are ways to maximize the likelihood of getting a useful response, according to Rodman. The trick, he said, is getting the AI to consider the important information without leading it to any conclusions.

“My advice for people trying to get a second opinion would be to give as much context as possible,” said Rodman. “Make it clear that you want to know what else it could be, and then try to give as much objective data as possible…You may not want to provide the assessment in your doctor’s notes because that’s going to perhaps sway it.”

“AI changed my patient journey 100%.”

Anonymous cancer patient

The internet is rife with examples like the sepsis one that started this article, where a person sought a second opinion from an LLM, and it spotted something serious that a human doctor missed. A person who posted online that ChatGPT helped him through his prostate cancer diagnosis agreed to speak with Freethink on the condition of anonymity. 

“AI changed my patient journey 100%,” he said. “I was relying on a local surgeon’s opinion, who was negating some viable other treatments. [ChatGPT] suggested balancing his opinion with that of a radiation oncologist.” 

The AI directed him to a highly regarded cancer center, where he met with experts who have been treating him since. “ChatGPT actually relieved a lot of anxiety for me by providing information and a balanced approach,” he said. “I’ll be honest, I’m pretty damn grateful I had that tool to help me, and it’s still helping me decide what kind of treatments to pursue.” 

“Using LLMs to help explain what your healthcare team is thinking…is the killer use for the app.”

Adam Rodman

Even if an LLM doesn’t catch something major that a doctor missed, it can provide a patient with peace of mind that their physician is likely on the right track. 

Because their responses are written in conversational language, the tools can help patients better understand their health, too. A person could feed an LLM their medical records, for example, and ask it to summarize the information in an easily digestible way.

“Using LLMs to help explain what your healthcare team is thinking, especially when it’s really nuanced, to me, is the killer use for the app,” said Rodman.

Risks, realities, and the road forward

LLMs may be useful for patients, but they have drawbacks. 

People who consult LLMs for health advice and get wrong answers likely don’t post about their experiences as often as people who benefited from the AIs. This can skew the public’s perception of the tools’ accuracy — if every post you see online about consulting an LLM says the AI was right, you may start to think LLMs are always right.

This can lead to conflict with medical professionals, especially if a patient consults an LLM for their first opinion on an ailment — they might get stuck on the AI’s diagnosis despite their doctors thinking it’s unlikely. The fact that AIs’ responses are so confident and affirming, while doctors are sometimes short, cold, or distracted, can exacerbate this problem.

“I think that now, just to say it explicitly, there is a challenge to doctor authority,” said Rodman. “LLMs are sycophantic. They can make patients confident while being more wrong about [their condition] than WebMD ever could. So when they’re wrong, it’s more challenging.” 

However, if patients can maintain realistic expectations about what LLMs can and cannot do, the tools could improve healthcare for millions and, in notable cases, like the woman with sepsis, maybe even help save lives.

We’d love to hear from you! If you have a comment about this article or if you have a tip for a future Freethink story, please email us at tips@freethink.com.
