I’ve noticed a major vibe shift in the AI discourse lately. Maybe you have, too.
Two years ago, the air was thick with worry about the potential ramifications of AI getting better, and not just a lot better, but even a little bit better. The fear was that even a small improvement could let AI make itself better still, and faster, triggering a cascade of recursive self-improvement that ends in a runaway intelligence explosion.
This idea led to high-level fears that widespread unemployment was imminent, with entire classes of jobs getting wiped out, as well as existential anxiety that we were bringing a superintelligence into our midst with no understanding of how to control it — and that it could potentially be humanity’s downfall.
This concern — that AI could cause humanity’s extinction — was quite widespread. It was expressed by top AI labs (OpenAI, Anthropic, Google) and many of their employees, as well as a number of top AI researchers (Geoffrey Hinton, Yoshua Bengio). Major media outlets, including Time and the New York Times, published stories on the topic.
Policymakers were urged to do something to mitigate the risk immediately, and multiple nations took legislative action and established AI governance frameworks. Some set limits on the amount of compute that could be used to train AI models, arguing that more compute could lead to models that were too powerful and, therefore, too risky. In some places, politicians mulled creating regulatory oversight bodies for AI similar to the ones that exist for financial services or healthcare companies — their job would be to audit and test models and ensure labs reported regularly on their development.
Due in part to changes in the political realm, much of this regulatory push never came to pass. AI, however, has continued to improve at a blistering pace. Models have now been trained on more compute than the supposed threshold of danger — 10²⁶ FLOPs — including open-source models that anyone can download. AI can now code autonomously and solve hard problems in physics and maths. What we once thought of as the dangerous threshold for a handful of the largest models is now well within everyone's grasp.
Not everyone has been in agreement about the risk of AI, though.
Others, including myself, have argued that this line of thinking — that AI could be an existential threat — was fanciful and not backed by evidence. This group’s position has been that better technology has, traditionally, been beneficial to humanity and that diffusion isn’t simple and would take time — even if AI spread much more quickly than other technologies, there would still be enough time for society to adjust.
They came to this conclusion on three grounds:
- Historically, the human condition is to adjust to technology, and this adjustment happens at human speeds.
- We can’t just apply AI to any problems to instantly fix them — as my friend Matt Clifford says, there aren’t any “AI shaped holes in the world.”
- Real-world constraints bind any growth. We can’t say what these constraints will be in advance, but time, energy, ingenuity, coordination, or even materials can factor into the equation.
I once called the insistence that AI poses an existential risk an example of modern eschatology — a secular version of the old apocalypse myths of theology. Its believers demanded proof of AI's safety before the technology was even built.
The counterargument to it is based on engineering. It’s built on the premise that better technology isn’t built in a vacuum — during creation, it’s constantly running up against the friction of the real world.
It argues for an engineering approach to AI safety. We make tech safe as we build it. If we didn’t, nobody would use it. It doesn’t matter how fast you can make a car — nobody would drive it if it blew up every so often or turned right when you wanted it to go left. If there are problems with AI, as there are with every tech, we’ll solve them — we’ll have no choice, if we want to make progress.
Economics supports the counterargument, too. Real-world constraints are real. S-curves are real. When your next data center needs an order of magnitude more investment to deliver only linear gains in performance, that carries a real cost. The performance gain is still real, but the wonderful benefits don't make the cost disappear.
The vibe shift
I’m now sensing a collective waking up amongst the AI doomers.
It seems to have been largely triggered by an emerging consensus that GPT-5 wasn't all that great (I think it's amazing) and that the scaling curves have bent. It may also have been brought about by the lack of ill effects over the past year, despite a seemingly constant stream of new closed- and open-source model releases. We've also seen enterprises express dissatisfaction with AI — to them, it's progressing all too slowly.
Even the worries about chatbot psychosis seem overblown — the phenomenon is far from widespread, despite the release, in short succession, of incredibly sycophantic and reward-hacking coding models. Nor have there been any worldwide Pentagon hacks, even as models trained with more than 10²⁵ FLOPs — many with very limited safety training, at least as the US circa 2023 would have understood it — were released by ByteDance, Zhipu, Alibaba, Tencent, and DeepSeek.
Instead, what we saw was how we’re now a year or two into companies adopting generative AI in full force, or trying to, and hitting the same organizational inertia that every other digital transformation effort hit before. Better models will help — they already are — but AI is not replacing all jobs. It’s even beginning to feel like regular IT.
Ultimately, we’re leaving behind the existential anxiety and returning to a more prosaic worry: Technological transformation will cause major shifts in the workforce — likely faster than ever before — and the world as we know it is about to change.
Surviving technology
The core worry about AI has always been the unknown. Could we control it? Could it control us? What would it even mean for superintelligence to exist?
And worries about the unknown aren’t new.
In 1955, John von Neumann, a co-developer of the atomic bomb and widely considered one of the smartest men to have ever lived, penned an essay titled “Can we survive technology?” In it, he wrote that “for the kind of explosiveness that man will be able to contrive by 1980, the globe is dangerously small, its political units dangerously unstable … Soon existing nations will be as unstable in war as a nation the size of Manhattan Island would have been in a contest fought with the weapons of 1900.”
His argument in the piece is extremely logical, befitting a man of his intellect. It’s almost tautological in its simplicity. Technology increases people’s capabilities. Capabilities can be used for good or evil. Capabilities for evil, if sufficiently high, can be catastrophic. Applied to AI, it means that, as the technology gets more and more capable, we’re moving toward a world in which everyone will have access to a technology that can complete their homework — or potentially hack the computers controlling an entire nation’s energy grid.
It’s the digital equivalent of a nuclear warhead.
Seventy years have passed since von Neumann wrote his essay. The world population is about three times larger. World GDP has grown roughly 20x in nominal terms since 1955 and about 5x in real terms (that is, adjusted for inflation), which, spread across three times as many people, means real GDP per capita has roughly doubled. The power of technology helped create this surge. And yet, deaths from state-based armed conflicts have declined, even adjusting for the World Wars that happened in the first half of the century.
Don’t get me wrong — the world isn’t all rosy. Since von Neumann wrote his essay, there has been a rise in lone-actor terrorism, much of it inspired by online propaganda — less sophisticated than all-out war, but also much harder to prevent. Yet the net impact of those incidents has been far smaller than what von Neumann predicted (he was so convinced that the Soviets would drop a nuclear bomb on the US once they figured out how to build one that he advocated preemptively bombing Moscow in the late 1940s).
Why are we so much less likely to be hurt by acts of war despite technological advances — like nuclear weapons — making us so much more vulnerable?
There isn’t an easy answer, but it seems that, as everyone gets richer, the conditions that produce the worst Hobbesian outcomes of human nature become less compelling. Yes, some of that violence gets replaced by copycat, media-inspired killers, but there are far fewer of them than you (or von Neumann) might have expected. And technology means those killers are also much, much easier to catch.
None of this means that we can prove the future will be devoid of problems or “safe” for all humanity when AGI arrives. The future, as always, remains unknown. Maybe all white-collar jobs will disappear in the next decade. Maybe we’ll discover another architectural miracle like the modern transformer, and the machines will wake up. Maybe.
But what we do know is that advanced technology doesn’t inevitably translate into catastrophe, as von Neumann predicted it would.
In the meantime, it’s encouraging to see more of the industry adopting the engineering mindset. There is absolutely no shortcut to any tech diffusing through the entire economy. Workforces will adapt. Whole categories of jobs will disappear — this has already started — but new ones will emerge. Some might even seem like games, but they will be no less real than the jobs at today’s cross-border payment companies. Incredibly valuable, in other words.
We’re moving toward a time of a bit more realism. AGI is still in the offing, but grey goo from nanobots it is not. That hardly means we’re not headed for a science-fiction future — just that it isn’t destined to be dystopian. Every step forward is one we choose to make by solving the problems revealed by the previous step. I’m not sure we can ask for anything better.