Imagine an AI at the Pentagon simultaneously issues two instructions: eliminate an advanced radar installation near Kaliningrad and reroute 20 Maersk container vessels around the Red Sea.
Within four seconds, both orders receive approvals — a brigadier’s click and a CFO’s e-signature. Following that brief interval, a hypersonic missile lights up the Baltic sky and $200 million in freight changes course.
This rapid decision-making is possible because approximately $1,000 in cloud-GPU compute replaced analytical work that previously cost millions of dollars in human hours.
Now, war as a video game isn’t new: drone pilots have guided strikes in Iraq from distant bunkers. What’s transformative about the above scenario is how it demonstrates cognition as an affordable commodity, one that’s only likely to get cheaper and better in the future.
Knowledge as a force multiplier
Today’s AIs are like superhuman analysts, and they give militaries the ability to run multiple parallel scenarios for any situation and choose the optimal path, quickly and cheaply. With them, the cost of answering a problem is reduced to computation cycles, and the time it takes to complete the OODA loop — observe, orient, decide, and act — is cut down to seconds.
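To make the idea concrete, here is a minimal toy sketch in Python. The scenario names and the scoring function are invented stand-ins, not any real planning system; the point is that once evaluation is just compute, every candidate course of action can be scored in parallel and the best one surfaced in seconds.

```python
from concurrent.futures import ProcessPoolExecutor
import random

# Toy illustration only: each "course of action" is scored by a stand-in
# simulation. In a real planning system the scoring model, scenario data,
# and parallelism would all be far more sophisticated.
def score_course_of_action(plan: str) -> tuple[str, float]:
    random.seed(plan)  # deterministic stand-in for a wargame simulation
    return plan, random.uniform(0.0, 1.0)

def decide(plans: list[str]) -> str:
    # The "orient" and "decide" steps of the OODA loop collapse into
    # parallel compute: evaluate every option, then pick the best.
    with ProcessPoolExecutor() as pool:
        scored = list(pool.map(score_course_of_action, plans))
    best_plan, _ = max(scored, key=lambda pair: pair[1])
    return best_plan

if __name__ == "__main__":
    options = ["flank_east", "hold_and_jam", "reroute_convoy", "strike_radar"]
    print(decide(options))
```

The bottleneck in such a loop is no longer the analysis itself but how many options you can afford to evaluate, which is to say, how much compute you can buy.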
The number of AI analysts a military can have — each of which can do almost every task — will be directly proportional to the amount of compute it can get. Whoever leverages the low-cost capabilities of AI the most will have a decisive advantage.
We’re already seeing concrete examples of this in action.
Palantir’s Maven system demonstrated a sensor-to-shooter loop of roughly a dozen seconds in recent NATO acceptance trials. During the 2025 Scarlet Dragon exercise, XVIII Airborne Corps operators achieved AI-assisted strikes approved in under 30 seconds, dramatically increasing decision throughput. Anduril’s Lattice platform targets engagement cycles of approximately 90 seconds from detection to strike.
Still, as impressive as this is, knowledge amplifying military effectiveness is nothing new — it’s been happening for as long as militaries have existed.
Knowing how to make gunpowder was once a huge military advantage: it rendered castles and armored knights obsolete, a fact best exemplified by the Ottoman bombards used against Constantinople in 1453. Napoleon leveraged operational strategy and mass mobilization to redefine warfare at Austerlitz. Industrial-era innovations, such as rail and telegraphy, restructured military logistics during the American Civil War, while World War II codebreaking and radar, from Bletchley Park to Midway, changed the outcomes of entire campaigns.
AI is just the next step-change in knowledge, and it is now poised to do for military analytics what the steam engine did for logistics: cut costs so dramatically it rewrites doctrine. Analytical prowess once costly and rare will become suddenly ubiquitous.
As advanced as today’s AI-based military solutions are, though, they’re only going to get better and cheaper — and the pace of improvement is likely to accelerate.
An example: The Department of Defense’s Replicator program is funding the development of AI-equipped air, sea, and land drones. Each flight streams telemetry from the drones into a common model-training pipeline. The data a $50K quadcopter collects today will refine the autonomy stack that flies its $30K successor next quarter. Data collected by that drone will train its successor, and so on.
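A rough sketch of that flywheel, with every class name and number invented purely for illustration (none of this reflects actual Replicator interfaces), might look like this:

```python
from dataclasses import dataclass, field

# Hypothetical sketch of the data flywheel described above: telemetry from
# each drone generation feeds the training set for the next.
@dataclass
class AutonomyStack:
    generation: int
    training_hours: float = 0.0  # stand-in for model quality

@dataclass
class Drone:
    stack: AutonomyStack
    telemetry_log: list[dict] = field(default_factory=list)

    def fly_sortie(self) -> None:
        # Every flight produces data, regardless of mission outcome.
        self.telemetry_log.append({"generation": self.stack.generation})

def train_next_generation(fleet: list[Drone]) -> AutonomyStack:
    total_data = sum(len(d.telemetry_log) for d in fleet)
    latest = max(d.stack.generation for d in fleet)
    # More fleet data feeds a more capable successor stack.
    return AutonomyStack(generation=latest + 1, training_hours=total_data * 0.5)

fleet = [Drone(stack=AutonomyStack(generation=1)) for _ in range(3)]
for drone in fleet:
    drone.fly_sortie()
print(train_next_generation(fleet))
```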
We’re seeing the same thing happen on the hardware side of AI development.
Affordable AI computation is accelerating chip design, and the better chips that result then lower computation costs further. Nvidia and Synopsys’ AI-assisted electronic design automation (EDA) tools have already shortened chip design cycles dramatically, perpetuating a cycle in which affordable compute accelerates itself, continually widening strategic advantages.
The challenges
For all the benefits AI offers the military, it brings with it new vulnerabilities.
DARPA red-team exercises have raised concerns about prompt-injection exploits (cyberattacks in which an LLM is tricked into doing something nefarious purely through carefully constructed wording or manipulated images) that could override an operator’s link and feed spoofed sensor data to redirect strike drones.
AI models capable of autonomous long-duration thought, ones that can work through complex chains of reasoning without involving a human in the process, amplify these risks, turning long stretches of unsupervised operation into significant strategic vulnerabilities.
Even the data used to train an AI can be a point of weakness.
A 2022 study found that malicious actors could “poison” the data sets used to train an AI, slipping in a small amount of data that eventually causes the model to engage in “sleeper-agent” behaviors. After replicating this in larger models, AI startup Anthropic proposed continuous, iterative evaluations to detect such manipulation early.
Ultimately, the solution seems to require knowing exactly what data goes into an AI during training and what comes out of it when it’s queried. Ideally, that data should be timestamped and hashed, with a fingerprint that can be traced through the system. Palantir’s hash-ledger, which is already in trial use, might well become an essential defense.
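To make that idea concrete, here is a minimal sketch of a generic hash-ledger pattern, assuming nothing about Palantir’s actual implementation: each training record gets a timestamp and a SHA-256 fingerprint chained to the previous entry, so any later tampering breaks the chain.

```python
import hashlib
import json
from datetime import datetime, timezone

# Illustrative sketch only: fingerprint each training record and chain it to
# the previous ledger entry. This is a generic provenance pattern, not any
# vendor's implementation.
def add_record(ledger: list[dict], record: dict) -> dict:
    previous_hash = ledger[-1]["hash"] if ledger else "0" * 64
    payload = json.dumps(record, sort_keys=True)
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "record_hash": hashlib.sha256(payload.encode()).hexdigest(),
        "previous_hash": previous_hash,
    }
    entry["hash"] = hashlib.sha256(json.dumps(entry, sort_keys=True).encode()).hexdigest()
    ledger.append(entry)
    return entry

ledger: list[dict] = []
add_record(ledger, {"source": "sensor_feed_A", "content": "example training sample"})
add_record(ledger, {"source": "open_web_scrape", "content": "another sample"})
print(ledger[-1]["hash"])  # a fingerprint traceable back through every entry
```

In practice, a ledger like this would sit alongside the training pipeline, so any sample that reaches the model can be traced back to a timestamped, verifiable source.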
The vulnerabilities inherent in AI systems are just one way they could pose a problem for militaries.
Powerful, affordable AI means private groups with cloud-computing access can wield strategic analytical engines previously exclusive to nation-states. From cybersecurity to scientific research, OpenAI’s o3 model is already a stronger analytical engine than what the Pentagon might have had just a few years ago.
For now, silicon production remains concentrated: TSMC’s chip dominance and the export controls built around it highlight continued dependency risks. Cheap, capable AI could initially flatten power structures, but control may reconsolidate among the entities that dominate AI infrastructure.
If every country has to rely on a handful of AI providers, many of them private and many American, are those countries still meaningfully independent?
Any successful defense is a combination of a country’s resources and its people. If AI takes over the functions many of those people perform, the decisions that get made are more likely to be aligned with the models than with the people using them. Claude, ChatGPT, and DeepSeek each have a personality, and those personalities are far less idiosyncratic than one might expect.
By using another country’s AI, a military would essentially be replacing its entire management team with foreign software.
While this sort of takeover has never really happened short of annexation, we’ve seen glimpses of it in monetary policy: when countries have replaced their own currency with the US dollar, the result has not been an increase in their independent decision-making capacity.
Looking ahead
Ultimately, the safest path forward may be lined with guardrails, like the 2023 update to DoD Directive 3000.09, which requires four-star review before autonomous weapons can be deployed.
It’s likely policymakers will try to ensure rigorous oversight and transparency, while military planners embed deep red-teaming into regular exercises. Additional machine-time brakes, such as capped autonomous runtimes before human review, rigorous model validation, and air-gapped model escrow, could prove critical as well.
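One of those brakes, the capped autonomous runtime, could in principle be as simple as a timer wrapped around the model’s planning loop. The sketch below is purely illustrative, with invented names and a placeholder review hook:

```python
import time
from typing import Callable

# Hypothetical sketch of a "machine-time brake": the autonomous loop may run
# for a capped wall-clock budget, after which it must pause for explicit
# human sign-off before continuing. All names here are invented.
MAX_AUTONOMOUS_SECONDS = 300  # policy-set cap on unsupervised runtime

def run_with_brake(step: Callable[[], None], human_approves: Callable[[], bool]) -> None:
    window_start = time.monotonic()
    while True:
        step()  # one iteration of the model's autonomous planning loop
        if time.monotonic() - window_start >= MAX_AUTONOMOUS_SECONDS:
            if not human_approves():  # blocks until an operator decides
                break  # review denied: halt autonomous operation
            window_start = time.monotonic()  # review granted: reset the clock
```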
This need for caution might, for the first time in technological development, force militaries to opt to go slower, not faster. But those who are too slow to adapt risk obsolescence, their latency turning from a comfort into an existential risk.