Media has a blind spot when covering the AI panic

The discourse on AI risk is shaped by two communities: rationality and effective altruism. Media coverage should reflect that.

“THE END OF HUMANITY” screamed the TIME magazine cover in June 2023, with two letters in “humanity” brightly highlighted: AI. 

It was seven months after OpenAI launched a user-friendly chatbot named ChatGPT. At the time, the media was flooded with doomsday scenarios about the “existential risks of AI” and claims that this was “the start of an AI takeover.” It reached absurd levels with headlines like “Humans Could Go Extinct When Evil Superhuman AI Robots Rise Up Like the Terminator!” Shortly after, stringent AI bills and policies were proposed around the world.

We were told that “AI will kill us all.”

The public’s understanding of emerging technologies, especially AI, depends heavily on how journalists frame their stories. As someone who has analyzed technology discourse for two decades, I found this round of mass hysteria particularly frustrating, so I decided to step back and try to understand why we were being bombarded with these fatalistic headlines. Who was pushing this narrative into the mainstream?

I initiated an investigative journey that uncovered the leading figures, organizations, and events shaping one of the most consequential debates of our time. In the process, I fell down the rabbit hole of two overlapping movements: “rationality” and “effective altruism.” The picture of their roles in financing and promoting AI doomerism quickly became clear, but when I looked for media coverage on these influential movements, I found little in-depth reporting.

For example, the biggest funders of these movements — billionaire Dustin Moskovitz, billionaire Jaan Tallinn, and former-billionaire-turned-felon Sam Bankman-Fried — were rarely mentioned. This specific trio is responsible for the proliferation of hundreds of organizations devoted to “existential risks from AI.” This seems like an important piece of the puzzle to share with the audience.

The following segments represent a small portion of my upcoming book on this topic. I present them today because of their relevance: They highlight the media’s crucial role in our shared AI discourse and suggest how it can do better. 

It’s essential — and totally possible — to improve the coverage of AI in general and AI doomerism in particular. 

“If it bleeds, it leads.”

The problems start with a false dichotomy. We are presented with two main visions of the future: utopian or dystopian. AI will save us all, or AI will kill us all.

This binary makes our AI discourse shallow and misleading. In reality, there are endless possibilities between those two extremes. The media does cover other framings, but they receive far less attention and debate; they are less “sexy.”

What the general public is getting is the science-fiction version of the AI story. Columbia University’s “How the Media Is Covering ChatGPT” study, released in May 2023, found that early coverage of generative AI leaned heavily on extreme utopian and dystopian depictions.

“It’s the Hollywood-ification of the public’s understanding of AI,” Nick Diakopoulos, a professor of communication studies at Northwestern University, told me.

So far, the dystopian depictions have far outweighed the utopian ones.

“What they’re not telling you about ChatGPT,” an analysis of AI media coverage in the UK in early 2023, found that headlines were often sensationalized and tended to focus on warnings of “impending dangers.” Just 13% of the headlines analyzed related to helpful or otherwise positive applications of AI.

Two famous phrases — “Bad is stronger than good” and “If it bleeds, it leads” — help explain the media’s choice to lean into the negative: It’s the news people will most likely read. 

  • In February 2023, The New York Times published a disturbing conversation between Kevin Roose and Microsoft’s new Bing chatbot. It has since become known as the “Sydney tried to break up my marriage” story. The front-page headline said, “Bing’s Chatbot Drew Me In and Creeped Me Out.” In an interview about Sydney, Microsoft’s Chief Technology Officer, Kevin Scott, mentioned it was “one of the most-read stories in New York Times history.”
  • In mid-2023, Steve Rose from The Guardian shared in a since-deleted tweet that “so far, ‘AI worst case scenarios’ has had 5 x as many readers as ‘AI best case scenarios.’” The Guardian’s headline for the “worst case scenarios” piece quoted the writer Eliezer Yudkowsky: “Everyone on Earth could fall over dead in the same second.”
  • Ian Hogarth, author of “We must slow down the race to God-like AI,” similarly shared that his essay was “the most read story” on the Financial Times website the day it was published. That piece claimed that God-like AI “could usher in the obsolescence or destruction of the human race.”

As for why these pieces perform so well, people are simply hardwired to gravitate toward negative coverage, according to Matthew Hutson from The New Yorker.

“In general, in psychology, people have a bias to pay more attention to negative things than positive things,” Hutson told me. “People are naturally inclined to engage more with threatening news, the dark side of AI versus the bright side.”

“News is inherently negative — it in no way surprises me that the negative scenario gets more clicks,” said Ryan Heath, who was at Axios at the time of my interview. 

The media also tends to view negative news as more worthy of publication since, usually, “where there’s smoke, there’s fire.”

“There’s no accountability in congratulating a company for achieving great AI,” said Heath. “It’s interesting and important to share it, but it’s not making anyone accountable, which is one of the big driving functions of the news.”

“Today’s highly competitive media industry creates bad incentives that discourage reporters from doing in-depth or nuanced journalism,” journalist Timothy B. Lee wrote in a 2024 essay for Asterisk Magazine. “Shallow or sensational stories about technology require less resources to produce and often attract more attention from readers.”

“There is great technology journalism being done today, but it tends to appear in specialist publications that cater to tech-savvy audiences,” Lee continued. “If we want a well-informed public, we need to figure out how to raise the quality of journalism consumed by ordinary people.”

Extreme implications. Extreme sentiment.

The media opting to focus on the negative is not a new concept for me — I covered the discourse around social media and Big Tech in my previous book, “The Techlash” — but the panic around AI has been so much more extreme. 

As part of my “Media Coverage of AI” research project, I asked prominent AI journalists why they think this has been the case. 

“You could potentially do so much more with AI than you ever could with Facebook, so I think people intuitively understand it,” said Heath.

“There have been a lot of transformative technologies, but not all of them are seen as generating existential risk,” said Hutson. “AI is different in that it can potentially have agency, and it can grow exponentially, so it’s kind of qualitatively different from all the other technologies.” 

“Social media is scary, but still just people being people,” he continued. “Maybe we changed our behavior somewhat, but it wasn’t creating new people with unforeseen capabilities in the way that AI can create new beings with unforeseen capabilities. It’s just a completely different kind of beast.”

The scientific uncertainty about AI’s trajectory and potential impacts creates a unique mystique.

“There’s just no way of knowing a lot of this stuff,” said Will Knight, a senior writer at WIRED. “I talked to Demis [Hassabis] one time, and he said, ‘Anybody who says that they know what this is going to do is wrong. We don’t know.’”

Karen Hao, who was at the Wall Street Journal at the time of our interview and is now at The Atlantic, further emphasized that the difference between AI and social media coverage comes down to the nature of the technology.

“Its implications are more extreme, so it generates more extreme sentiment,” she told me. “The long-term worries are actually based on real short-term worries, which will intensify if we do not take care of them right now. We can see the current harms, so we fear what would happen with all the different misuses later.”

“AI is particularly prone to this hysteria of danger because like 99% of our cultural fabric is made up of popular media, and everyone has grown up watching sci-fi about AI taking over the world in terms of Terminator becoming sentient or C-3PO walking around,” said Benj Edwards, Ars Technica’s senior AI reporter.

“We have all been primed and conditioned to expect this to happen,” he continued. “We’ve been talking about creating artificial life and fantasizing about it since ancient times, and it’s like we could become gods ourselves, creating a new life form. That’s a very powerful metaphor that has persisted through fiction for 5,000 years. So, it goes back. Way back.”

The missing context: rationality and effective altruism

Context isn’t a nice-to-have; it’s crucial for accurate reporting. But when news outlets uncritically quote warnings of an impending AI catastrophe, they rarely mention the two main movements behind this narrative: rationality and effective altruism.

Admittedly, working this context into stories is a hard task. 

Because their philosophical debates are filled with jargon and concepts that are hard to explain, the rationality and effective altruism movements are difficult to describe to a general audience. But omitting their role in shaping AI discourse leaves audiences with a distorted view of who’s driving the conversation and why.

  • Rationality: Originating from online hubs like the LessWrong forum and lengthy blog posts by thinkers like the aforementioned Eliezer Yudkowsky, the rationality movement strives to improve “reasoning and decision-making” and teach long-term strategic thinking.

    Its leading figures — Yudkowsky, Tallinn, Nick Bostrom, and Scott Alexander — see advanced AI as an existential risk to humanity. Among its leading organizations are the Machine Intelligence Research Institute (MIRI), co-founded by Yudkowsky in 2000, and the Center for Applied Rationality (CFAR), which is focused on “developing clear thinking for the sake of humanity’s future.”

    In 2016, co-founder Anna Salamon wrote that “CFAR’s mission is to improve the sanity/thinking skill of those who are most likely to actually usefully impact the world.” That same year, New York Times Magazine journalist Jennifer Kahn attended a CFAR workshop and was told by one of the participants that “self-help is just the gateway. The real goal is: Save the world.”
  • Effective altruism: Emerging as rationality’s twin sibling, effective altruism insists that charitable giving and policy priorities be guided by utilitarian ethics. Figures such as Moskovitz, Bankman-Fried, William MacAskill, and Toby Ord, and organizations such as the Centre for Effective Altruism (CEA) and Open Philanthropy, drove the AI narrative with this logic: If AI might one day wipe out humanity, even with low probability, that possibility warrants outsized attention and funding today.

Both movements have promoted this ideology by investing heavily in “AI safety” and “AI alignment,” work that aims to align future AI systems with human values. Effective altruism donors have injected hundreds of millions of dollars into AI safety through think tanks, research grants, scholarships, and, weirdly enough, luxurious research retreats (e.g., Wytham Abbey castle).

These subcultures have overlapping communities and shared values, and their borders are so blurry that they constantly blend into one another.

“I’m slightly conflating EA, rationalism, and AI doomerism rather than doing the hard work of teasing them apart,” Alexander wrote in a post on Astral Codex Ten, his blog that grew out of the rationalist community.

It’s “hard work” because, as David Morris explained in a January 2025 edition of his Dark Markets newsletter, “EAs and rationalists are effectively a Venn diagram forming something close to a circle.” 

Vox’s Kelsey Piper described it similarly on the “Good Robot” podcast:

There was pretty early on a ton of overlap in people who found the effective altruism worldview compelling and people who were rationalists. Probably because of a shared fondness for thought experiments with pretty big real-world implications, which you then proceed to take very seriously.

I don’t think it’s a cult, but it’s a religion. Most of the world’s recorded religions have developed ideas about how the world ends, what humanity needs to do to prepare for some kind of final judgment.

In the Bay Area and on Oxford’s campus, effective altruists started to hear from rationalists they were in community with about what an apocalypse could look like. And of course, since a lot of rationalists thought that AI was the highest-stakes issue of our time, they started trying to pitch people in the effective altruism movement, like, ‘Look, getting AI right is a major priority for charitable giving.’

MIRI and CFAR were explicitly pitched as the top donation priorities for anyone who wanted to save the world from an “unfriendly AI.” As for how unfriendly: Yudkowsky has speculated that giving an AI a goal as seemingly straightforward as “make diamonds” could lead to the death of everyone on Earth, as the AI would use the atoms that make up humans to meet its objective.

Such scenarios are riddled with holes, but at this point, the validity of the claims hardly matters — it’s upholding the collective storyline that counts.

“What they have achieved in terms of the AI debate is, I think, remarkable: They’ve taken the niche, practically dystopian-science-fiction idea of AI risk and made people take it seriously,” journalist Tom Chivers wrote in his book “The Rationalist’s Guide to the Galaxy: Superintelligent AI and the Geeks Who Are Trying to Save Humanity’s Future.”

A troubling influence

The media might fail to adequately acknowledge the role of the effective altruism and rationality movements in shaping the AI discourse, but members of the movements themselves are more than willing to take credit.

In November 2024, Oliver Habryka, who runs LessWrong, bragged about how the entangled community had influenced important decision makers. 

“I think the extent of our memetic reach was unclear for a few years, but there is now less uncertainty,” wrote Habryka. “Among the leadership of the biggest AI capability companies (OpenAI, Anthropic, Meta, Deepmind, xAI), at least 4/5 have clearly been heavily influenced by ideas from LessWrong.” 

Habryka then offered this “quick rundown” of the influenced:

Shane Legg is a DeepMind cofounder and early LessWrong poster directly crediting Eliezer [Yudkowsky] for working on AGI. Demis [Hassabis] has also frequently referenced LW ideas and presented at both FHI [Bostrom’s Future of Humanity Institute] and the Singularity Summit [organized by MIRI]. OpenAI’s founding team and early employees were heavily influenced by LW ideas (and Ilya [Sutskever] was at my CFAR workshop in 2015). Elon Musk has clearly read a bunch of LessWrong and was strongly influenced by Superintelligence [Bostrom’s book], which itself was heavily influenced by LW. A substantial fraction of Anthropic’s leadership team actively reads and/or writes on LessWrong.

The movements’ influence — and the media’s blind spot — extends into politics as well.

For example, when the Future of Life Institute (FLI) lobbies Washington to introduce stringent regulation on current AI models, the media can (and should) mention that FLI is no longer a small nonprofit: in 2021, it received more than half a billion dollars through a crypto donation.

By failing to include context about the people and foundational ideologies attempting to shape the AI landscape — who they are, what motivates them, and who’s backing them — journalists make it harder for the public to evaluate the hyperbolic claims and assess their credibility.

In the AI discourse, rationalists and effective altruists tend to extrapolate current tech trends indefinitely into the future until we arrive at one of two scenarios: the AI apocalypse or a post-Singularity golden age.

The overpromised utopia has its own flaws (e.g., inflated expectations), but in this piece, I’m focusing on the dystopian narrative because of its distinct impact on the here and now.

While the AI optimists build, the AI doomsayers seek to pause/stop their work by any means necessary. They are not the same.

Since, based on the doomers’ logic, the survival of the human species is at stake, everything is on the table: computer surveillance, shutting down open-source models, imposing civil and criminal liability on developers, giving governments exclusive control over labs, banning private commercial AI development…

The “ends justify the means” mentality spirals into dangerous territory with talk of violent acts against AI labs (burn them down) and “AGI developers” (a bullet through their head). The discourse even extends to nuclear exchanges between nations and air strikes against data centers.

These are all suggestions from the past two years. Imagined dystopian fears have turned into real dystopian “solutions.”  

Toward better AI reporting

Over the past year, the vibe around AI has shifted from threat to opportunity. In response, the doomers learned to pivot away from existential-risk jargon — “human extinction” by “superintelligence” — and toward more palatable “catastrophic dual-use” and “national security” framings. This shift allows some of the authoritarian proposals above to resurface under a new banner and with renewed urgency.

Journalists should take advantage of the fact that evidence of doomsayers moving the goalposts can be found all over the internet.

When talking to someone like Dan Hendrycks, director of the Center for AI Safety, for example, they might point out, “You say you’re only concerned with the next frontier models and that there’s nothing to worry about with the current models, but here are your lobbying efforts to ban not even the current models, but the previous ones.” Or, “You once said that slowing down America would cause China to slow down as well. Do you still think that’s the case?”

The purpose of journalism is to speak truth to power, and the media shouldn’t be giving this powerful group of people an easy pass. A public-service mentality is needed here. Some leading AI journalists (including the wonderful interviewees for my study) write about these power players, but that kind of reporting should be more widespread.

Effective altruists and rationalists will continue to adapt their strategy in response to the AI race, but their storyline will always remain the same: “We’re saving humanity from imminent doom.” The ask from the media, then, is not to denounce every effective altruist or rationalist’s (rephrased) claim, but to situate their warnings within the networks that produce them. 

Here are four actionable steps news organizations can take to raise the quality of their AI coverage:

  1. Provide context: Every time a news story evokes fears of out-of-control AI, ask three simple questions: Who is warning? What frameworks shaped those warnings? Who underwrites their work? Basically, whenever people make sweeping predictions in a state of uncertainty, ask what motivates such forecasts.
  2. Break the binary: Move beyond “utopia vs. apocalypse.” Correct misconceptions and explain what AI actually can and can’t do.
  3. Avoid anthropomorphism: Attributing human characteristics to AI misleads people about the risks it poses. Steer clear of language that implies AI has human-like motivations, feelings, or other human traits (AI “wants,” AI “behaves,” etc.).
  4. Reward nuance: Instead of the loudest doom-and-gloom pundits, highlight underrepresented, balanced voices that provide evidence for their claims.

As author and futurist Daniel Jeffries wrote in his essay “The Middle Path for AI,” “People love to spin tales of AI doom or AI utopia, but it’s time to take a realistic look at AI. The biggest problem is not whether we need more utopias or dystopias. What we most desperately need is a heavy dose of realism.”

An informed public, guided by rigorous, contextualized, nuanced stories, is our best defense against both overpromised dreams and paralyzing fears. Let’s shift the AI coverage toward the “AI realism” that the public and policymakers desperately need. 
