Congress Again Drops Bid to Block State AI Laws: What It Means for Employers and Tech Businesses
Insights
12.05.25
Key lawmakers just stripped from the National Defense Authorization Act (NDAA) a provision that would have stopped states from enforcing their own artificial intelligence laws, handing a setback to President Trump’s latest push to block state regulation of AI. Tech companies and AI developers were hoping Congress would impose a federal standard that would wipe out the growing patchwork of state rules, but Tuesday’s announcement means that employers remain in the middle of an increasingly fragmented regulatory landscape. What happened, what do we expect next, and what should employers and businesses do?
What Happened?
Over the last six months, some congressional Republicans and the White House have repeatedly tried to block states from passing or enforcing AI laws, aiming to attach preemption language to high-priority legislation:
- First attempt – the “pause” dies in the Senate. Earlier this year, senators voted 99–1 to strip a five-year pause on state AI laws from a major budget bill after it had already been watered down from an initial 10-year moratorium.
- Second attempt – the NDAA gambit. In November, President Trump publicly urged Congress to revive the effort and either attach a federal standard to the NDAA or pass a standalone bill, warning that a “patchwork” of state rules would undermine US competitiveness.
- Tech and the White House lead the push. Major AI developers lined up behind a federal standard, arguing that state-by-state rules would slow innovation. White House AI czar David Sacks led an eleventh-hour push to insert preemption language into the NDAA.
- Scalise says “not here.” But on Tuesday, House Majority Leader Steve Scalise (R-LA) confirmed that the AI moratorium is out of the defense bill, saying the NDAA “wasn’t the best place” for preemption. Another key Republican, Sen. Josh Hawley (R-MO), immediately posted a reaction on X, saying “Good. This is a terrible provision and should remain OUT.”
What Do We Expect Next?
Here’s what employers and businesses can expect in the coming months.
Another Preemption Attempt – Just Not Through the NDAA
Scalise has already said that AI preemption language could show up in “other places,” and Trump has floated both standalone bills and a possible executive order. We’re likely to see a narrower preemption bill in 2026 focused on “national competitiveness” and critical-infrastructure AI, with carve-outs for discrimination and safety.
Growing State Patchwork
This latest setback means state and local efforts will continue and probably accelerate. Even a short list of existing and proposed state- and local-level AI regulation includes:
- Colorado’s AI Act
- California’s disclosure mandates
- New York City’s bias audit rules
- Illinois’s notice and bias requirements
- Virginia’s expected second run at AI regulation now that the state has turned blue
- Dozens of pending bills nationwide regulating automated decision-making technology (ADMT).
What Employers Should Do Now
Here are some steps you can take now to prepare, no matter which way the debate shakes out. Even if federal lawmakers block states from passing their own measures, plaintiffs’ attorneys and state regulators will use existing laws and regulations to hold employers accountable.
1. Map Your AI Tools – Especially High-Risk Uses
Refresh your centralized inventory and make sure it clearly identifies tools that affect the areas below (a simple inventory sketch follows this list):
- Hiring, promotion, and layoffs
- Performance scoring and productivity tracking
- Predictive scheduling and workforce planning
- Monitoring, sentiment, or voice analysis
- Safety, misconduct, or incident prediction
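For employers ready to operationalize this step, here is a minimal sketch of what a structured inventory might look like, written in Python. The tool names, vendor, and risk-category labels are hypothetical placeholders, not terms drawn from any statute.

```python
# A minimal inventory sketch (illustrative only). Tool names, vendors, and
# category labels below are hypothetical, not statutory terms.
from dataclasses import dataclass, field

# Risk categories mirroring the list above.
HIGH_RISK_USES = {
    "hiring", "promotion", "layoffs",
    "performance_scoring", "productivity_tracking",
    "predictive_scheduling", "workforce_planning",
    "monitoring", "sentiment_analysis", "voice_analysis",
    "safety_prediction", "misconduct_prediction", "incident_prediction",
}

@dataclass
class AITool:
    name: str                                        # e.g., an ATS screening module
    vendor: str
    uses: set = field(default_factory=set)           # what the tool affects
    jurisdictions: set = field(default_factory=set)  # where it is deployed

    @property
    def high_risk(self) -> bool:
        # Flag the tool if any of its uses falls in a high-risk category.
        return bool(self.uses & HIGH_RISK_USES)

inventory = [
    AITool("resume-screener", "ExampleVendor", {"hiring"}, {"NY", "IL"}),
    AITool("shift-optimizer", "ExampleVendor", {"predictive_scheduling"}, {"CO"}),
]

for tool in inventory:
    print(f"{tool.name}: high-risk={tool.high_risk}, states={sorted(tool.jurisdictions)}")
```

Keeping the inventory in a structured, queryable form makes it easier to answer the jurisdiction-specific questions raised in the next step.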
2. Build or Update Your “State Patchwork Strategy”
Treat state AI regulation like you already treat wage-hour or data-privacy patchworks:
- Track key jurisdictions (NYC, Illinois, Colorado, California, Texas, and any states where you have substantial headcount).
- Identify the strictest applicable rules on notice, audits, and documentation, and decide where it makes sense to standardize up (see the sketch after this list).
- Coordinate HR, Legal, and IT so that policy changes (e.g., new AI features in an ATS) are assessed against state requirements before deployment.
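To illustrate “standardizing up,” here is a minimal sketch that merges requirements across jurisdictions by taking the most demanding value in each dimension. The jurisdiction names are real, but the requirement values are illustrative placeholders rather than summaries of the actual statutes.

```python
# A minimal "standardize up" sketch. The requirement values below are
# illustrative placeholders; confirm current requirements with counsel.
RULES = {
    "NYC":      {"bias_audit": True,  "candidate_notice_days": 10},
    "Illinois": {"bias_audit": False, "candidate_notice_days": 0},
    "Colorado": {"bias_audit": True,  "candidate_notice_days": 0},
}

def strictest(jurisdictions):
    """Merge rules by taking the most demanding value in each dimension."""
    merged = {"bias_audit": False, "candidate_notice_days": 0}
    for j in jurisdictions:
        rule = RULES[j]
        merged["bias_audit"] = merged["bias_audit"] or rule["bias_audit"]
        merged["candidate_notice_days"] = max(
            merged["candidate_notice_days"], rule["candidate_notice_days"]
        )
    return merged

# A tool deployed in NYC and Colorado must satisfy both at once:
print(strictest(["NYC", "Colorado"]))
# {'bias_audit': True, 'candidate_notice_days': 10}
```

Complying once with the merged, strictest profile is often simpler than maintaining separate processes in every state where you operate.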
3. Prepare for Bias Testing Requirements
Even if federal preemption moves forward, workplace-related outcomes (discrimination, disparate impact, applicant screening) are likely to remain carved out of any federal standard. Start working with an auditor to help build the following (a worked example of one common bias metric appears after this list):
- Data-retention models
- Bias measurement protocols
- Documented rationale for each tool’s use
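As one concrete example of a bias measurement protocol, the “four-fifths rule” from the EEOC’s Uniform Guidelines compares each group’s selection rate to the rate of the group selected most often, and NYC’s bias audit rule builds on a similar impact ratio. The sketch below uses made-up applicant counts purely to show the calculation.

```python
# A worked example of the "four-fifths rule" impact ratio from the EEOC's
# Uniform Guidelines. The applicant counts below are made up for illustration.
groups = {
    "group_a": (48, 100),  # (selected, total applicants)
    "group_b": (30, 100),
}

rates = {g: selected / total for g, (selected, total) in groups.items()}
highest_rate = max(rates.values())

for group, rate in rates.items():
    impact_ratio = rate / highest_rate
    status = "review" if impact_ratio < 0.8 else "ok"
    print(f"{group}: selection rate {rate:.2f}, impact ratio {impact_ratio:.2f} ({status})")

# group_b: 0.30 / 0.48 = 0.625, which falls below the 0.8 threshold,
# so outcomes for that group warrant closer review and documentation.
```

An impact ratio below 0.8 does not by itself establish discrimination, but it is the kind of red flag that auditors, regulators, and plaintiffs’ attorneys look for, which is why the documented rationale noted above matters.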
4. Update Vendor Contracts Now
Ask the right questions of your AI vendors, and make sure your contracts address:
- Clear disclosure of training data sources
- Audit rights
- Outcome-based risk mitigation commitments
- Adherence to NIST AI RMF or comparable frameworks
5. Build a Cross-Functional AI Governance Team
Involve HR, Legal, IT, Security, and others. No matter how the federal fight ends, employers who can show intentional governance will be better protected. You can start with these 10 steps.
Conclusion
If you have any questions, contact your Fisher Phillips attorney, the authors of this Insight, any attorney in our AI, Data, and Analytics Practice Group, or any member of our Government Relations team. Make sure you are subscribed to the Fisher Phillips Insight System to stay updated.
Related People
- Usama Kahf, CIPP/US (Partner)
- Braden Lawes (Senior Government Affairs Analyst)