NDAA Push Fails & Republicans Move to Block State AI Laws

House Majority Leader Steve Scalise told reporters Tuesday that Republican leaders are "looking at other places" to include federal preemption of state AI laws after their push to add the provision to the National Defense Authorization Act stalled amid Republican infighting.

The proposal would impose a 10-year moratorium on state AI regulations—preventing states like California from enacting laws that govern AI development, deployment, or safety requirements. GOP leaders previously tried to attach the same provision to Trump's tax and spending bill earlier this year before stripping it out amid internal Republican resistance.

Trump has publicly urged Congress to pass the ban, either in the NDAA or as separate legislation, and reportedly considered an executive order targeting state AI measures before lawmakers asked him to hold off. The White House's argument is straightforward: avoid a "patchwork" of state laws that could hamper innovation as the U.S. competes with China for AI dominance.

"There's still an interest in making sure that you don't have states like California that wreck the ability to innovate in artificial intelligence, similar to what Europe did to wreck their innovation," Scalise said Tuesday.

Let's examine what's actually happening here.

The Innovation Argument's Convenient Framing

The case for federal preemption centers on preventing regulatory fragmentation. Companies investing billions in AI don't want to navigate 50 different state regulatory frameworks. Compliance costs increase. Legal uncertainty grows. Innovation supposedly suffers.

This argument would be more convincing if the federal government were proposing comprehensive AI regulation with clear safety standards, transparency requirements, and accountability mechanisms. Instead, they're proposing to block state action while offering no federal framework in return.

That's not streamlining regulation—it's preventing regulation. The "patchwork" concern becomes cover for maintaining a regulatory vacuum that benefits AI companies at the expense of state-level democratic accountability.

Scalise's comparison to Europe is telling. Yes, European AI regulation is more stringent than U.S. approaches. Whether that "wrecked" European innovation or simply required companies to account for societal impacts remains contested. But the implicit argument is clear: any regulation that constrains AI development is definitionally bad for innovation, and innovation trumps all other concerns.

The States' Rights Party Suddenly Loves Federal Power

The Republican Party traditionally champions states' rights and federalism—letting states serve as "laboratories of democracy" where different regulatory approaches can be tested. Unless, apparently, those experiments might constrain AI companies.

This philosophical inconsistency hasn't gone unnoticed within Republican ranks. Marjorie Taylor Greene, Sarah Huckabee Sanders, and Ron DeSantis all opposed including AI preemption in the NDAA. Their objections center on federalism principles—states should retain authority to regulate activities within their borders, particularly when the federal government offers no alternative framework.

Senate Majority Leader John Thune acknowledged the tension: "The White House is working with senators and House members to try and come up with something that works but preserves states' rights. Right now, both sides are kind of dug in."

The split reveals competing Republican priorities: business interests that want regulatory certainty (meaning no regulation) versus conservative principles about limiting federal power. When those interests conflict, watch which one wins.

What State AI Laws Actually Do

California's AI safety legislation—the kind Scalise wants to preempt—includes requirements like safety testing for high-risk AI systems, transparency about training data and capabilities, and mechanisms for addressing harms. These aren't radical restrictions. They're basic accountability measures that other industries routinely face.

The argument that such requirements "wreck innovation" assumes innovation should proceed without safety constraints or public accountability. That's not how we treat pharmaceuticals, automobiles, aviation, or financial services. Industries that can cause significant public harm face regulation. AI companies argue they're special.

State-level AI legislation often fills gaps where federal action lags. When Congress fails to address emerging technology risks, states step in to protect their citizens. Federal preemption without replacement regulation doesn't solve the patchwork problem—it eliminates accountability entirely while Congress continues indefinitely "studying" the issue.

The China Competition Deflection

Scalise invoked China competition to justify blocking state AI laws: "AI is where a lot of new massive investment is going. You're seeing companies invest in $5 billion, $10 billion, $20 billion, real money. We want that money to be invested in America."

This framing treats AI development as zero-sum competition where any constraint on U.S. companies advantages China. But China's AI development proceeds regardless of California's regulatory choices. And if the concern is truly about competitiveness, stronger federal standards—not regulatory absence—would provide the certainty and legitimacy that supports sustainable development.

The China argument also ignores a fundamental question: compete toward what end? If winning the AI race means deploying systems without safety constraints, transparency requirements, or accountability mechanisms, have we won anything worth having?

What This Reveals About AI Governance

The federal preemption push demonstrates how AI policy debates get framed exclusively around industry interests. Innovation, investment, and competition dominate discussions while questions about safety, accountability, and democratic governance get dismissed as innovation-killing regulation.

This isn't unique to AI. It's a familiar pattern where emerging technologies claim exemption from normal regulatory oversight during their "critical innovation phase," which conveniently never ends. By the time comprehensive regulation becomes politically feasible, industry practices are entrenched and harder to constrain.

For business leaders navigating AI adoption, the regulatory uncertainty cuts both ways. Yes, companies want clear rules. But absence of regulation doesn't provide certainty—it provides exposure. When AI systems cause harm and no regulatory framework exists, companies face unpredictable litigation risk and reputational damage.

At Winsome Marketing, we help teams understand that sustainable AI adoption requires accountability frameworks whether or not government mandates them—because customers, employees, and partners increasingly demand transparency about AI use regardless of regulatory requirements. Betting your strategy on regulatory absence might work short-term, but it's not a foundation for long-term trust.
