News

Why the New US AI Order Is a Big Deal

By Chloe Maluleke

OpenAI CEO Sam Altman testifies at a US Senate Judiciary Committee hearing, "Oversight of A.I.: Rules for Artificial Intelligence," in Washington, DC, United States.

Image: Xinhua

When the White House moves to pre-empt state laws that regulate artificial intelligence, we need to sit up and ask who wins and who loses. A draft executive order reportedly under consideration directs the US Department of Justice to form an AI Litigation Task Force whose job is to sue US states that pass AI laws the federal government deems burdensome. That is not just a policy shift; it is a power shift.

Innovation vs. State Autonomy: The Stakes Are High

The draft order aims to block state AI laws by arguing they infringe on the Commerce Clause or duplicate federal regulation. 

More concretely, if a state passes a law requiring AI firms to disclose how they train models, or to prevent bias in AI outputs, the order could label that law "onerous" and subject the state to legal challenge and the loss of federal funding. The draft reportedly names California's and Colorado's transparency and algorithmic-discrimination laws as targets.

The stated aim? Create “one national standard” rather than 50 different state laws. Proponents argue that a patchwork of rules slows down innovation, raises compliance burdens and undermines US global competitiveness in AI. The other side argues this is a dangerous centralising move.

There are key numbers:

In July, the US Senate voted 99-1 to strip from a major bill a provision that would have barred states from regulating AI for ten years, an attempt to block state action entirely.

The federal broadband funding programme at stake, the Broadband Equity, Access, and Deployment (BEAD) Program, is valued at US$42 billion. States that pursue AI regulations deemed unacceptable could lose access to it.

In short: a national AI policy debate is turning into a fight about federal vs. state power.

Why This Matters: Protective Regulation or Innovation Stifler?

State AI regulations are not just symbolic. Many states are acting because of real harms: algorithmic discrimination, misinformation, deepfakes, lost jobs, and privacy violations. The Electronic Frontier Foundation calls the draft order "deeply misguided" because state laws are often the earliest protections for citizens against AI abuse.

When the federal government says “states regulate too much,” it risks leaving gaps where regulation is necessary. Without state guardrails, a handful of large firms could dictate how AI is used in everything from employment and housing to policing and health. That concentration of power may accelerate innovation, but it also accelerates risk.

And there’s a constitutional dimension. Legal scholars point out that only Congress can legally pre-empt state laws under the Commerce Clause; the executive branch cannot unilaterally decide state laws are invalid. 

If the draft order takes effect, expect prolonged court battles, regulatory uncertainty, and a chilling effect on state-level experimentation.

Hidden Risks: When “Innovation” Becomes a Shield

Behind the rhetoric of “winning the AI race” lies a simpler reality: the largest tech firms prefer uniform standards, fewer jurisdictions to deal with, and lower compliance costs. They have lobbied hard against state laws. In the draft order, big-tech lobby groups are described as worried about a “patchwork” of state regulations. 

So what does this mean for citizens?

States that attempt to protect their residents could be punished with the loss of billions in federal funding.

Innovation might speed ahead in legal terms, but without parallel safeguards, harms might grow unchecked.

The balance between commercial power and the public interest could tilt toward the former.

A Call for Balanced Governance, Not Blanket Power

A national AI strategy is both necessary and overdue. But centralising power through an executive order that threatens states with lawsuits and funding cuts is not the answer. Innovation should not mean deregulation by decree. States' rights are not relics; they are essential laboratories of democracy and protection. If the US truly wants safe, trustworthy AI, its rules should not be imposed from the top down alone; they should be shaped with the public interest in mind, at both state and federal levels.

If the draft becomes reality, we will see whether America advances not just as an AI leader, but as a nation that still honours the diversity of its states, its governance, and its people. In the end, technological dominance without strong values and checks risks becoming a different form of vulnerability.


Written By:

Chloe Maluleke

Associate at BRICS+ Consulting Group 

Russia & Middle East Specialist

** MORE ARTICLES ON OUR WEBSITE https://bricscg.com/

** Follow https://x.com/brics_daily on X/Twitter for daily BRICS+ updates