No, AI won’t take all the jobs. Here’s why.

The fantasy of “total automation” can’t withstand the friction of real-world deployment.

Ever since I was a kid reading Isaac Asimov paperbacks under the covers, I’ve been hearing some version of the same prophecy repeated again and again: The machines are coming for our jobs.

“This time it’s different,” says whoever is issuing the warning. Yes, they always say that — but maybe with artificial intelligence (AI), it really is different. Maybe general-purpose machine intelligence will send us all to the breadline, making us as obsolete as horses when the car came around. Instead of hiring people, employers will fire up fleets of digital workers on the cheap. They’ll work 24 hours a day, seven days a week, and humans will live in a neon hell of mass unemployment, riots, and war.

It’s a classic apocalypse tale, utterly terrifying because it hits at our darkest fears about who we are and what we could be: Humans are the dominant life-form now, but what if suddenly the majority of us slipped from the top of the food chain to the bottom of it?

But if you actually consider the mechanics of integrating AI into the job market — unit economics, task decomposition, verification costs, new-market elasticity, labor bottlenecks, etc. — the idea that AI will take all our jobs quickly falls apart. Let’s break it down.

The real cost of digital workers

A recent MIT report concluded that 95% of companies’ attempts to integrate generative AI into their workflows fail. While the reporting on this story was overblown, many automation projects do fail, and fail badly, often because the bots make massive mistakes and run into endless edge cases. Taco Bell, for example, is rethinking its integration of AI into drive-thrus after a UK man crashed the system by ordering 18,000 waters and another customer, who had already ordered a large Mountain Dew, was repeatedly asked what they wanted to drink.

Often the folks who champion AI for low-level jobs assume these jobs are mindless and should, therefore, be easy to automate. But even those jobs require workers to make hundreds or even thousands of little decisions — people are so good at making these little decisions that we don’t even think of the process as thought. Placing ads on LinkedIn is hugely repetitive, but it’s not mindless. It’s threaded with dozens of intelligent turning points. Is this the right copy? Does it look good? Would this video work better with this copy? 

Getting AI to work reliably is often a nightmare because it comes up against the infinite messiness of life. Edge cases. Outliers. Unexpected left turns. Integrating it into even the places that seem like they should be the easiest to automate — like a fast food drive-thru — is a challenge because AI doesn’t make work disappear. It just changes the workflow and creates a different set of problems. 

Every team that tries to slot AI into production will slam into the same three bottlenecks:

  • Problem description
  • Iteration
  • Verification

Let’s start with describing the problem. If the person using the AI can’t say clearly what they want and what a good result looks like, an AI can’t help them. As the old consulting joke goes, machines will replace workers as soon as clients know exactly what they want. In other words, workers aren’t in danger.

Even if a team does know what it wants, that doesn’t make it easy to describe clearly. Designing the prompts and decision points for an agent is a task littered with pitfalls. 

And what people want changes. Any project is iterative. It changes as the task develops. As Steve Jobs said, “There’s just a tremendous amount of craftsmanship in between a great idea and a great product.” Keeping track of those changes so the agent stays on track is a project management job in itself. 

AI iteration is a slot machine. You don’t know if you’re going to pull a jackpot or dead-end for a day or two or three.

If that’s bad, it’s nothing compared to the problem of iteration. That’s where an AI project really gives you whiplash. Take coding agents, like Anthropic’s Claude Code, as an example. Some problems one-shot, meaning an AI gets them right the very first time. Some take a thousand runs and still don’t work right. If you have to prompt Claude 1,000 times to get working production code, that’s 999 errors and one good result. 

The problem is you can’t plan for AI iteration. It’s a slot machine. You don’t know if you’re going to pull a jackpot or watch in horror as you dead-end for a day or two or three on the same broken loop. 
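To see the spread, here’s a toy simulation (my own back-of-the-envelope sketch, not a measurement from any real project). It assumes every run of an agent on a hard task has an independent 5% chance of producing working code:

    import random

    # Toy model of "AI iteration as a slot machine": each attempt at a task
    # succeeds independently with probability p. Real agents are messier,
    # but even this simple model shows why iteration defies planning.
    def attempts_until_success(p: float) -> int:
        attempts = 1
        while random.random() > p:
            attempts += 1
        return attempts

    p = 0.05  # assumed: a hard task with a 5% chance of success per run
    runs = sorted(attempts_until_success(p) for _ in range(10_000))
    print(f"median attempts:  {runs[len(runs) // 2]}")        # ~14
    print(f"90th percentile:  {runs[int(len(runs) * 0.9)]}")  # ~45
    print(f"unluckiest run:   {runs[-1]}")                    # often 150+

Same machine, same task, wildly different days.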

Worse, sometimes you don’t get the luxury of iterations. Error tolerance rises or falls depending on the task, and you don’t always get a second chance. In automating tax filings, one error can mean the difference between an audit and a refund. In finance, it can mean a bank account drained or an unexpected $10,000 bill. In medicine, it can mean life and death.

That’s where planning and self-correcting scaffolds, tool use, retrieval, and chain-of-thought variants do real work, but even there, the uncertainty bites. Nothing is perfect. Tools can give AI a way to ground its decisions by looking up facts in a private database, but that doesn’t mean it gets things right every time. Or take chain of thought. Models can’t always explain what they’re thinking, so a reasoning output may not reflect how the model actually came to a decision internally, as research from Anthropic recently showed.

At best, these techniques are just hedges against the probability of making a wrong decision. The real world is messy, and both people and bots get things wrong because perfection is impossible in a world of infinite possibilities. 
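To make those hedges concrete, here’s a minimal sketch of a self-correcting scaffold. The generate and verify functions are hypothetical stand-ins for a real model call and a real test suite; this is a pattern, not a product:

    # Minimal sketch of a self-correcting scaffold: generate, verify, feed
    # the failure back in, retry. generate() and verify() are hypothetical
    # stand-ins for a real model API and a real test suite.
    def run_with_verification(task, generate, verify, max_attempts=5):
        feedback = ""
        for attempt in range(1, max_attempts + 1):
            candidate = generate(task, feedback)   # model call
            ok, report = verify(candidate)         # tests, linters, fact checks
            if ok:
                return candidate, attempt
            feedback = report  # ground the next try in what actually failed
        # Every scaffold has this escape hatch: past some point,
        # a human has to grab the wheel.
        raise RuntimeError(f"no verified result after {max_attempts} attempts")

Note the escape hatch at the end: every scaffold like this eventually hands the problem back to a human.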

The point isn’t that iteration is bad. It’s that it’s unpredictable, which wrecks old-school project plans. With humans, you can guess that an IT project to set up a new Exchange server will take X hours. With AI, you can’t predict how well the digital slot machine will work today. 

A growing practical skill of today’s AI-first coders is knowing when to grab the wheel back from the machine and just do it themselves. The longer they wait, the deeper the hole of wrong code gets, and the more work they have to redo. The system can turn a 10x acceleration into a 10x slowdown in a single turn of the wheel.

But the hardest part of all is verification. It’s the silent tax that compounds with scale. 

For anything that matters, you need a verification pass that goes way beyond a lazy skim.

A casual user might not check GPT’s citations if using the tool to understand their apartment lease, but if you’re a lawyer using the tool for work, you’ll be hunting down every link and triple-checking it — lawyers in the UK faced fines because they submitted docs with fake citations. Research agents that mostly cite correctly are fine for a brainstorm, but they’re useless for publishing or applications where lives and money are on the line. If your AI sales development representative sends an email that promises an imaginary discount to a client, you’ve got big problems. If it gets prompt-injected to exfiltrate your client list, you’ve got a disaster on your hands.

If you’re using these systems for anything that matters, you need a verification pass that goes way beyond a lazy skim. That means detail-oriented human work — you must check every claim, every diagram, every link, every word, every line of code, every outcome and citation and fact. And who’s best positioned to verify? The very people who are already good at whatever the AI is trying to do: the workers it’s supposed to replace.

Doctors can check medical claims. Senior programmers can check AI coding outputs. Strong copywriters can check that whatever GPT writes sings — they know a good turn of phrase when they read it and can make sure each paragraph flows from the one before it. 

That’s the biggest irony of AI work. If you’re not already good at the task it’s doing, you can’t tell if what it generates is good. You don’t have the knowledge or the context. If you don’t know French, you can’t tell whether a translation sounds clunky, or whether some new slang that sounds like the phrase you translated means your new commercial just told someone to eat shit.

Even if what an AI generates is technically correct, we still make many products for use by people. If your Genie 3 model generates a perfectly playable 3D game world on the fly, that’s amazing. But is it fun to play? Does it have little areas where players get stuck? Who is best at testing things like that? The very level designers you’re trying to replace with AI, the ones who spent decades learning how to make games fun for the people who will pay for them.
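You can put rough numbers on that verification tax. Everything below is an illustrative assumption on my part, not a measurement from any real deployment:

    # Back-of-the-envelope verification tax. Every number below is an
    # illustrative assumption, not a measurement.
    outputs_per_agent_per_day = 200  # emails, code diffs, claims, citations
    minutes_to_verify_each = 3       # an expert check that actually checks
    agents = 50                      # a modest "army of digital workers"

    review_hours_per_day = (outputs_per_agent_per_day * minutes_to_verify_each
                            * agents / 60)
    reviewers_needed = review_hours_per_day / 8  # 8-hour human workday

    print(f"{review_hours_per_day:,.0f} expert review hours per day")  # 500
    print(f"~{reviewers_needed:.0f} full-time human reviewers")        # ~63

Tweak the assumptions all you like; the linear scaling is the point.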

Establishing an army of digital workers at a company will not be a matter of “set it and forget it.” It’s going to take real oversight, layers of governance, cross-checking, monitoring, and auditing to keep the agents on track. As experienced coder Victor Taelin wrote on X:

The AI can only work for so long before it needs me. Here’s how [an] experiment went: 1. I wrote a full spec of the ‘next-gen HVM’…About 3 days later, I have a working prototype. I didn’t write more than 1% of that code. I spent 95% of that time playing games. From a point of view, the AI automated 95% of my job, if we measure by time alone. Yet, from another point of view, it automated 0% of my job. After all, without the expert (me) stepping in every 30 minutes, the AI wouldn’t be able to move past the very first module.

That looks like more jobs tending to the machines, just like the rise of cars meant the rise of mechanics to fix them.

The job becomes directing, instrumenting, and babysitting the machine. We go from laborer to conductor. We’ll see new skillsets bloom and new businesses be born: verification guilds that operate like Underwriters Laboratories for AI outputs, model risk professionals who adapt the bank-world playbook to statistical machines, and workflow designers who knit tools, models, and humans into something that produces consistent outcomes, not just cool demos.

Human work isn’t dying. It’s just changing.

Or changing again. We’ve already destroyed all the jobs in history 1,000 times over. You didn’t hunt a water buffalo for tonight’s dinner or tan leather to make your shoes. Someone else produced your food and clothing. You didn’t make candles to light up your house. You flipped a switch.

AI is just the latest labor revolution. It’s not magic. It’s not perfect. It’s not a drop-in replacement. It’s a new and complex workflow, a delicate dance that will take time to play out. And that brings us to the second major reason that AI is not taking all the jobs any time soon: economics.

The dismal science strikes back

AI isn’t cheap or free. If you think it is, you’ve been suckered by teaser rates and loss-leader pricing.

Remember, we’re not talking about these machines responding to a single prompt about what to cook for dinner. In this essay, we’re talking about AI agents running 24 hours a day, doing complex, long-running work over days or weeks and staying on target, like people.

But unlike people, AI doesn’t run on potato chips and Coke — it runs on billions of dollars in electricity and specialized AI chips. Let’s take a deep dive into the economics of robots, digital and physical.

You don’t need a degree in economics to know that long-running AI agents, doing real work, will be hugely expensive.

In a fantastic essay called “My AI skeptic friends are all nuts,” programmer Thomas Ptacek wrote that a common argument against code generated by large language models (LLMs) is that “the code is shitty, like that of a junior developer.” His response? “Does an intern cost $20/month? Because that’s what Cursor.ai costs.” In other words, if a digital worker costs just $20 a month, it’s still a good deal even if the quality is hit or miss. 

Great line. Just one problem. That’s not what Cursor costs. That’s just the teaser rate, designed to suck people in. The company recently put severe rate limits on that plan and faced a big backlash. But the backlash will have zero effect. Why? Because $20 a month is completely unsustainable. Those rate limits will get worse and worse as folks wake up to the real unit economics of AI agents.

The same happened with Claude Code recently. Anthropic rolled out Max plans at $200 a month that gave users as much compute as they wanted. Or at least that’s what subscribers thought. Only a few months later, the usage limits came crashing down. These hamstrung users so badly that some were trying to optimize their sleep schedules around the limits. OpenAI’s Sam Altman recently said his company loses money on its $200 a month subscription, too.

As the folks at Cline wrote in their excellent blog post about coding agents, this is going to keep happening:

The problem isn’t greed — it’s math. AI inference is a commodity, like electricity or gasoline. When you sell commodities on subscription, power users destroy your economics. A power user on a $200 monthly plan can easily consume $500 worth of AI inference per day. The provider bleeds money with every power user they attract.

That’s the loss leader at work: deliberately price everything low or at a loss to drive up demand, then hit customers with the real costs later.

Maybe you think $200 per month is expensive. It’s actually dirt cheap compared to what agents are eventually going to cost companies. You don’t need a degree in economics to know that long-running AI agents, doing real work, will be hugely expensive. Powering those digital brains requires massive amounts of electricity, cooling, and compute. Big Tech is currently spending trillions to build new AI data centers, and on top of everything else, the cost of paying people to run those centers will need to be factored into the equation, too.

Your future workforce, running on specialized neural net chips, is more likely to cost $5,000, $10,000, or $20,000 a month per digital worker, not $200. At my AI agent startup, I have a team of three engineers using Claude Code and OpenAI’s GPT-5 to assist their coding. Using API pricing, we’re currently averaging about $8,000 a month — that’s what $600 worth of subscriptions really costs. And that’s eight hours a day per programmer, not agents running uninterrupted 24 hours a day, seven days a week. 
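For the curious, the arithmetic behind numbers like these looks roughly as follows. The token prices and volumes are hypothetical placeholders, since real API rates vary by model and change constantly:

    # Rough sketch of agent unit economics. All rates are hypothetical
    # placeholders; real API prices vary by model and change constantly.
    price_per_m_input_tokens = 3.00    # USD per million, assumed
    price_per_m_output_tokens = 15.00  # USD per million, assumed
    input_tokens_per_hour = 2_000_000  # long contexts get re-read constantly
    output_tokens_per_hour = 200_000

    hourly = (input_tokens_per_hour * price_per_m_input_tokens
              + output_tokens_per_hour * price_per_m_output_tokens) / 1_000_000

    eight_hour_day = hourly * 8 * 22   # human-paced work, ~22 days/month
    always_on = hourly * 24 * 30       # the 24/7 "digital worker" scenario

    print(f"${hourly:.2f} per hour")                  # $9.00
    print(f"${eight_hour_day:,.0f}/month at 8h/day")  # ~$1,584
    print(f"${always_on:,.0f}/month at 24/7")         # ~$6,480

Run an agent around the clock and the bill roughly quadruples before you add tooling, retries, and oversight.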

We’re also likely to see taxes on AI agents if they really do bite into the labor force in any meaningful way, as the world lurches more and more into protectionism. In fact, it’s virtually guaranteed. That will raise the cost of running them as well.

As these unit economics hit, most companies will have to ask, “Is it cheaper to throw more people at a problem than to pay for AI?” In most cases, the answer will be “yes,” unless you have a hyper-specialized problem that AI can really help accelerate, like drug discovery. 

The prices will come down over time, but likely not for the leading-edge models — if the scaling laws are right, those will keep getting bigger and more expensive. Older models will get cheaper, but the bleeding edge of intelligence will come at a premium.

Eventually, costs come down and unlock more use cases. We then weave them into our workflow, and jobs really transform. But all this takes a lot more time than people think. It’s the process of creative destruction that powers modern economies. The lamplighter jobs eventually disappear, but they’re replaced by electricians and building contractors and more. And none of this happens overnight.

That brings us to the last reason AI won’t take all the jobs.

The lump of labor fallacy

People have been accusing robots of taking all the jobs for more than 100 years, and yet somehow we still have jobs. 

That’s because people tend to think that there are only so many jobs to go around and that we’re all fighting for them. Economists call that the “lump of labor fallacy,” and it is not how the job market works. 

Work changes. It expands. The amount of work to be done grows when we lower the cost of job creation. New technologies bring new possibilities and previously unimaginable new jobs. An 18th-century farmer would never be able to understand the job of a web designer because it’s built on the back of numerous black swan innovations — like electricity, computers, and the internet — that he couldn’t possibly imagine. 

We’re great at imagining all the jobs that will disappear because of a new technology, but terrible at predicting the ones that will be created because they don’t exist yet.

I talked about this in my last “Clear Windows” essay, “The age of industrialized imagination.” While it focused on how AI will change the entertainment industry for the better, the ideas in it can apply to the world of work more broadly: 

AI is nothing but a set of phenomenally powerful tools. Instead of imagining AI just doing everything and putting us all out of work, instead imagine a hybrid workflow. An actor performs on a real set. You film a scene, but in post-production, you “reshoot” it from a different angle generated by an AI that understands 3D space, saving you a fortune on reshoots. Costs collapse, and that’s a good thing. When the price of failure drops from $200 million to $20 million, or even $200,000, risk becomes your friend again. Studios take more risks again, and instead of “Fast and Furious 55,” we get something new. Cheaper creation doesn’t kill careers; it multiplies them. The future of work isn’t a world without people. It’s a world where people can do more, faster, and cheaper than ever before.

As James Bessen notes repeatedly in the Harvard Business Review, automation doesn’t just create or destroy jobs — it transforms them. When bank ATMs rolled out, obituary writers sharpened their quills for bank tellers, but bank teller employment didn’t collapse — it grew. That’s because ATMs lowered the cost of running a branch, which meant banks could open more branches, where the teller’s job changed from cash-handling to relationship sales and service. 

Demand elasticity matters, too. When you cut the cost of a resource, humans use more of it — that’s Jevons paradox in action. Applied to artificial intelligence, cheaper cognition will make us consume more intelligence overall, just as cheaper energy led to more energy use. A new market for intelligence will emerge. 
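A toy constant-elasticity model shows the shape of that effect. The elasticity value is an assumption; the point is the direction, not the decimals:

    # Toy constant-elasticity demand model: quantity Q scales with price P
    # as Q ∝ P ** -elasticity. The elasticity here is an assumption; the
    # point is the shape of the curve, not the exact numbers.
    old_price, new_price = 1.00, 0.10  # intelligence gets 10x cheaper
    elasticity = 1.5                   # >1 means demand is elastic

    quantity_multiplier = (old_price / new_price) ** elasticity
    spend_multiplier = quantity_multiplier * (new_price / old_price)

    print(f"usage grows {quantity_multiplier:.1f}x")     # 31.6x
    print(f"total spend grows {spend_multiplier:.1f}x")  # 3.2x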

In his essay “Why are there still so many jobs?” economist David Autor explains why this kind of thing keeps happening. Technologies automate specific tasks, but they also create new tasks, new complementarities, and new categories of work that we didn’t have words for before:

Automation does indeed substitute for labor — [but] Journalists and even expert commentators tend to overstate the extent of machine substitution for human labor and ignore the strong complementarities between automation and labor that increase productivity, raise earnings, and augment demand for labor … the interplay between machine and human comparative advantage allows computers to substitute for workers in performing routine, codifiable tasks while amplifying the comparative advantage of workers in supplying problem-solving skills, adaptability, and creativity.

But AI is different…

That’s theory and history. But what about right now, with LLMs crawling into every workflow like ivy up a brick wall? 

As real firms plug generative systems into actual jobs, we’re not seeing a jobs cliff despite scary, fake headlines. What we are seeing is AI giving workers — especially those on the bottom rung — a notable boost. In 2023, Stanford economist Erik Brynjolfsson and his coauthors published a working paper for the National Bureau of Economic Research that found that generative tools boosted the productivity of more than 5,000 call center agents by an average of 14%, with the biggest gains accruing to the least experienced people. 

Technology is not a pink slip. It’s a new baseline for what “junior” means. Jobs don’t vanish. Old job descriptions do, with work slithering into new shapes. Call it the shapeshifter economy.

Another stubborn fact about the world we live in that just doesn’t square with the theory that AI is going to eliminate all jobs: People aren’t having as many children, and the population is getting older fast. Jobs in caregiving, fixing and upgrading infrastructure, and the last mile of services — where people deal directly with customers — are all growing quickly. You can’t have a glut of workers when the working-age population is shrinking compared with the number of people who depend on them.

Even if AI were perfect and cheap, there will always be jobs we simply don’t want to hand over to machines.

The idea that AI will ever be able to do everything humans do perfectly is a fallacy. Every exponential curve eventually becomes an S curve. We’ll run into unexpected limits and walls. We already have with today’s models, and more are waiting for us around the corner. What we’ll end up with are imperfect machines, just like we have imperfect people. In some cases, the machines will be superhuman, able to do the job better than us, and in others, they’ll keep running into unexpected pitfalls they just can’t overcome without some new breakthrough.

The idea that they’ll ever be able to do everything perfectly and cheaply is nonsense. Today’s AI costs are deceptive. AI is a stack of compute, power, cooling, networks, memory, and maintenance, and the machines actually cost way more to run than developers are currently charging for them. Costs will fall over time, but if the scaling laws are right, leading-edge models will always be expensive. That their cost will ever be negligible is a fantasy. Intelligence does not scale to infinity. The real world has friction, and there are upper limits to how well machines — or flesh — can do anything. 

Even if AI were perfect and cheap, though, there will always be jobs we simply don’t want to hand over to machines. We got ATMs, but we still have human bank tellers because we want the option of talking to a person. In the restaurant world, we may one day have perfect waiter droids, but I’ll still want to interact with a friendly, funny person who makes me feel like a king while I’m dining, and I expect many other people will, too.

What will actually happen to work?

Yes, AI will get more and more ubiquitous in the future, but work won’t vanish — it’ll mutate. 

Here’s the most useful way to think about the next few decades. Imagine the cost of intelligence as a service cratering, like compute and bandwidth did over the last 40 years. As the curve bends, three things happen at once:

  1. We get more of everything. That was the entertainment thesis I laid out in my last essay — cheaper creation means more attempts, more risk-taking, more weirdness, more trash, and more treasure. The same logic scales to services, products, and experiences everywhere. When the price of failure drops, orthodoxy loses its grip, and risk-taking rises. The market tries more ideas. Most fail fast. More succeed than before. That translates directly into jobs because the set of things that clear the bar expands.
  2. We reshuffle the organization chart. Firms that simply bolt a model onto a legacy process get a sugar high and a hangover. Firms that use machines for roles where the systems are strongest and people for ones where they’re irreplaceable — because the jobs require a creative spirit, human judgment, taste, trust, context, or a layer of personal interaction that makes customers feel like they’re not yelling into a void — see compounding returns. The call center study didn’t show machines replacing agents. It showed junior agents climbing toward senior performance with a synthetic coach humming in the background. That’s very different from “everyone goes home.”
  3. Demand outruns substitution. The easiest way to test this claim is to walk into sectors that are underbuilt or underserved and try to imagine how making the work cheaper could lead to less of it. Consider elder care in a graying world. Would safer lifts, fall detection, agentic medication management, and robotic assistance shrink the workforce? Or would it expand care hours and quality so that the bottleneck becomes the number of trained humans? If a robot can feed and wash a wheelchair-bound senior, will she no longer want someone to talk to over dinner to stave off loneliness or to drive her to bingo night? Consider construction. We are tens of millions of housing units short in rich countries. If robots can lay bricks and AI crews can pull permits, does that kill jobs or unleash a building wave that requires every skilled pair of hands we can train? The answer is obvious if you’ve ever tried to find a contractor in a hot market.

Even in white-collar land, this logic holds. The first wave of prompt engineers was a placeholder. The real roles are designable and durable: 

  • AI product managers who can scope workflows in human language and machine constraints
  • AI operations (AIOps) teams that instrument, observe, and remediate 
  • Retrieval engineers who wrangle messy enterprise knowledge into semantic shape
  • Governance folks who implement the NIST and EU toolkits with adult supervision
  • Model risk managers who build sign-off loops that regulators will actually approve

You don’t get any of those without people.

Economics polices the outer boundary of all of this. Your digital person is not a flat $20 a month. Long-running, agentic work with tools, memory, and autonomy is a bill that grows with tokens, time, and ops. It will get dramatically cheaper, but slowly, and the cost will never fall to zero because demand expands to eat the gain. 

If you imagine a world in which every person has a staff of 15 digital agents, each churning day and night, you’re imagining a world with a lot of power and silicon costs. Data centers already chew up a nontrivial slice of the world’s electricity, and if current trends continue, the IEA projects that AI and data centers will pull significantly more over the next few years. The capex and opex curves shape where machine labor makes sense.

Bottom line: The future of work isn’t going to be a matter of simply choosing between AI and human workers. It’s going to be calculating what each task costs to do with people, with machines, and with hybrids when you include the cascading error costs, verification tax, and risk.

If you want a North Star, it’s this: We are moving from jobs to quests.

Once you adopt the right framing, you get a better perspective. A lot of rote, low-value internal writing is getting eaten by AI because it’s cheap to generate, cheap to verify, and the tolerance for errors is high. A lot of drafting with formal constraints — legal memos, PR statements, policy drafts — is getting augmented because the structure is predictable, but the stakes are high, so verification is essential. A lot of high-context, multi-stakeholder coordination is stubborn because the world model lives in overlapping human relationships, histories, and weird, unstated incentives — that’s where you give the agents tools, but let the humans conduct.

If you want a North Star, it’s this: We are moving from jobs to quests. Not Fiverrized gig labor writ eternal, but composable work that assembles a human team and an agent cohort around a specific outcome for a defined time. 

You can see it first in the indie edges of film, games, and startups because the cost of failure is lowest there. Bosses often forbid AI, but people use it anyway because it helps. There’s already a world of secret cyborgs. People use what works even if you don’t want them to. It moves into enterprise last because the compliance surface is bigger. In all cases, the director — the person who can decompose a vision into steps and orchestrate humans and machines into output — is the new irreplaceable layer. 

That is not a future without humans. That’s a future with more leverage per human. That’s the very definition of economic growth: doing more with less. It used to cost your ancestors 1,000 hours of labor to light their house with tallow candles. It costs you a few seconds to light your house with the flip of a switch. Leverage in action.

We are not utility-maximizing robots. We are status-seeking apes who love a good story and a good pair of hands.

Still, the skeptics persist: “What if we get real artificial general intelligence (AGI)?” 

First, define AGI. If you define it as “systems that can carry out any economically relevant task as well as a skilled human,” then you’re positing a power with awesome capability and a cost curve that becomes the economy itself. The most charitable version of the doom story says that a system like that makes human labor obsolete. The more adult version says that economics still matters (power, chips, energy, constraints) and that society still matters (policy, norms, rights, meaning). 

Even there, notice how quickly meaning enters the chat. Humans will still pay humans for human experiences and trust. We pay for barbers, therapists, tutors, and live shows even when cheaper options exist because a huge slice of value is social and symbolic. We are not utility-maximizing robots. We are status-seeking apes who love a good story and a good pair of hands. Goods and services that explicitly signal human craft are not going away. They tend to command higher prices as machine baselines rise.

AI will swallow tasks we were doing well but expensively, and force us to raise our game. It will open new categories of work we can barely name yet: outsourced verification houses, agent choreographers, compliance toolsmiths, retrieval architects, safety case designers, model-risk sheriffs. It will push more work to the edges, to small, weird, hungry teams, because the leverage is finally enough for them to matter at scale.

We are not running out of jobs. We’re running out of excuses to do them the old way.

The age of industrialized imagination is Act I. Act II is the age of industrialized intelligence everywhere, all at once. If we do it right, we get more builders, not fewer. More teachers, not fewer. More therapists, more caregivers, more inspectors, more conductors. A thousand new job descriptions bloom in the space between specification and verification, precisely because the tools moved the bottlenecks instead of erasing them.

This is a world where humans have more reach, more speed, and more leverage than any generation before us. A world where more people can do work that used to be reserved for the few because the tools are finally good enough.

If you want a useful slogan for the 2030s, it’s not “AI will take your job.” It’s “AI will move your bottlenecks.” Your job is to learn to specify crisply. Learn to verify. Learn to conduct a team of tireless weirdo machines that never sleep and then do the one thing machines can’t do, no matter how many tokens you throw at them: decide what matters.

Again, even if AI can magic up a 3D game from nothing, who will decide if it’s any fun to play? A level designer knows best where people get stuck. A crackerjack game veteran knows if the quests are tedious or incredible.

We are not running out of jobs. We’re running out of excuses to do them the old way. The rest is a choice. Embrace the future and don’t run from it. It’s coming anyway, and it will be beautiful for anyone who takes it in with open arms.

Choose to be fearless and embrace the possible. The rest will take care of itself.

