Governing AGI: Model laws, chip wars, and sovereign AI

The US has stumbled in regulating past technologies. With AGI on the horizon, failure this time could reshape the global balance of power.
Sign up for Rohit Krishnan’s Strange Loop Canon Substack
Essays about innovation, progress, technology, business, science, and complex systems.

Historically, the US government hasn’t excelled at regulating technology. Nobody really has. Maybe it’s because tech evolves too quickly, or because most politicians aren’t developers: they don’t fully understand what they’re trying to regulate, and they can’t see all the potential consequences of their actions. 

Take the 1990s, when US lawmakers capped exported software at 40-bit encryption keys to keep strong cryptography out of foreign hands. They didn’t foresee tech companies balking at the need to maintain separate US-only and international versions, and as a result of their regulations, the weaker standard became the global default, undermining security everywhere, even in the US. 

We now have AI capable of completing multi-hour tasks, and the length of tasks it can complete is doubling every seven months. AGI is on the horizon. And the US government is once again attempting to shape the future of tech through regulation. It’s had limited success so far — and the pace of improvement means that it needs to decide its next steps relatively soon.

Model restrictions

When AI started to emerge in the public consciousness as potentially something big — circa the release of ChatGPT — the US government, despite being composed of multiple factions with competing interests, made a coordinated policy push to try to slow AI’s development by regulating the models themselves. It was afraid of the future and sought to control the immensely powerful groups building these technologies with crude thresholds and onerous paperwork.

The White House Office of Science and Technology Policy (OSTP) introduced the AI Bill of Rights in 2022. In 2023, President Joe Biden signed the Executive Order on Artificial Intelligence, which emphasized safety over progress. That same year, the National Institute of Standards and Technology (NIST) — the US agency tasked with developing standards and guidelines in science and technology — released its AI Risk Management Framework.

Many of those early initiatives have since fallen apart. They ran into a fundamental problem: measuring and enforcing compliance. There are no agreed-upon technical standards and no major oversight committees to hold developers accountable. The boundaries they did set sounded good — compute thresholds of 10²⁶ FLOPs — but models have since zoomed past them. Many of the policies tried to answer incredibly complex questions, essentially attempting to encode centuries of legal and ethical precedent into software. That’s slow, messy work, and the would-be enforcers of AI rules — the digital Teamsters — didn’t succeed.

In 2025, the second Trump administration accelerated the demise of this safety-first approach. Once in office, President Trump quickly rescinded Biden-era AI directives and issued Executive Order 14179, which emphasized innovation and competitiveness, taking the guardrails off AI developers. 

Border control

Looming over the debate over how to regulate AI models has been the specter of the best ones ending up in the hands of America’s biggest geopolitical foe, China.

To make frontier AI models, you need three things: engineering talent, energy, and semiconductor chips. Since 2018, the US has treated control of advanced AI chips as a national‑security issue and has used regulations — and political relationships — to ensure they stay out of China’s hands whenever possible:

  • In 2018, the US made advanced AI chips a national security issue by passing the Export Control Reform Act (ECRA), the first permanent statutory export control authority since the Cold War.
  • In October 2022, the Biden administration banned exports of high‑performance GPUs and some chip‑making tools to China. 
  • In January 2023, Washington convinced the Netherlands and Japan to halt the sale of equipment used to manufacture semiconductors to China. 
  • In December 2024, the Bureau of Industry and Security expanded export controls to cover high‑bandwidth memory chips and more manufacturing equipment, forcing Samsung and Micron to obtain licenses to ship to China.
  • In January 2025, the outgoing Biden administration issued the AI Diffusion Framework. It would have required licenses for exports of high‑end chips, and even model weights, to most of the world, creating a tiered system that essentially banned shipments to China, but it was rescinded by the Trump administration.

“We are protecting our technologies with a ‘small yard, high fence’ approach,” former National Security Advisor Jake Sullivan said repeatedly, describing the US’s plan to tightly control a small number of highly valuable hardware components. 


Did it work? Partially. Chinese-made chips, such as Huawei’s Ascend 910B and 910C, are said to be roughly four years behind Nvidia’s leading chips, but the gap in model capability is far smaller: Kai-Fu Lee, founder of China-based AI company 01.AI, told Reuters that Chinese models were just three months behind US models as of March 2025. 

More importantly, China is starting to learn to work around the regulatory hurdles through upskilling, domestic manufacturing, and maybe even subterfuge — during the DeepSeek saga, there were persistent rumors that China was getting its hands on restricted Nvidia chips via intermediaries in Singapore and elsewhere.

The nature of the chip war is about to change, and that change could shift the race in China’s favor. As AI adoption grows, the number of chips used to run models — a process called inference — will soon exceed the number used to train them. Unlike training, inference can often rely on older or less specialized chips, a market where China’s domestic production could provide substantial cost and reach advantages.

Sovereign AI

The US has been able to maintain a lead in the race to AGI so far, but China is very close behind. If you exclude the very top models from OpenAI and Anthropic, Chinese models — Qwen, DeepSeek, Kimi, GLM, etc. — are highly comparable, and new ones emerge almost every day. Almost all of them are open source.

As America’s lead narrows, the stakes in the race are getting higher. What used to sound like hyperbole — “AI will take everyone’s jobs” — is starting to sound at least partly plausible. And with AGI — however you define it — approaching, we’re now seeing $1 billion pay packages for engineers, $100 billion investments in capex, and $1 trillion valuations for tech companies. These figures are too high to ignore — the US economy is going to be dramatically affected by AI.

Software-based AI regulations were designed to give the US government control over the speed of AI development. That proved impossible, and now the guardrails are off. Hardware-based regulations were designed to ensure only “we” could use AI, not “them.” Those controls were partially successful, but the lead they bought is evaporating. 

That leaves the US facing the inevitable: The most powerful technology the world has ever seen — one that could replace a big chunk of the $100 trillion of spend that goes to labor every year — will likely soon be available to the US and the rest of the world’s most powerful nations. 

This is going to dramatically change the makeup of society and the global power dynamic. 


The key question of AI governance is essentially a question of how to handle this new concentrated power. As the political theorist Hannah Arendt argued, new technology radically alters human affairs, and the government’s role is to preserve plurality and constrain the domination that such power enables. 

And this is hard. As we’ve seen, simple software regulations don’t quite work or make sense. Constraining hardware can work, but it requires becoming more draconian than we would like to be, and it only addresses geopolitical domination. Abandoning AI entirely isn’t an option if the US wants to remain globally competitive. That’s why it’s complicated. Coming to grips with it means answering questions like:

  • How sovereign should AI be? Is it like the cloud, data storage, networking equipment, power plants, solar panels? What’s the analogy?
  • How much indemnity should a model’s creator receive for what the model does? 
  • How would that apply to the creators of open-source models? 
  • Should the US have federal AI laws? If so, what should they include and how should they be enforced? 
  • What would be the benefits of letting the free market sort it out? And when is even the right time to get involved?

The US has stumbled before in regulating transformative technologies — from encryption to social media — and this is the most powerful one yet, with the potential to redefine work, wealth, and global influence. The real challenge now is whether the US can govern this power appropriately — in a way that protects society, preserves America’s competitive edge, and fosters innovation, all at the same time.
