
Trump Calls for Federal Standard to Block State AI Laws: What Employers and Tech Companies Should Know Now

11.19.25

President Trump just injected fresh urgency into the national AI regulatory debate by asking Congress to resurrect an initiative to block or curb states from passing their own AI-related laws. In a social media post on Tuesday, the president urged Congress to once again pursue legislation to block state-level AI regulation and instead create one federal standard, warning that a patchwork of state rules threatens US innovation and global competitiveness. This development marks the beginning of the next chapter of a remarkable six-month saga in D.C. over whether states should be allowed to regulate AI at all. What will happen? And what should employers and tech developers do?

The Message

Trump’s message that reignited the debate came on Truth Social late yesterday afternoon:

Investment in AI is helping to make the U.S. Economy the “HOTTEST” in the World, but overregulation by the States is threatening to undermine this Major Growth Engine… We MUST have one Federal Standard instead of a patchwork of 50 State Regulatory Regimes. If we don’t, then China will easily catch us in the AI race.

Trump floated adding the measure to the National Defense Authorization Act (NDAA) – a common vehicle for controversial policy riders – or passing it as a standalone bill. House Majority Leader Steve Scalise had already announced on Monday that GOP leadership is actively exploring the NDAA route for an AI preemption law.

The Summer Saga: How We Got Here

The president’s call comes after months of back-and-forth on Capitol Hill, where lawmakers debated – and repeatedly reshaped – a proposal to pause or block state AI laws.

  • The Opening Shot: A Full 10-Year Ban on State AI Regulation – Back in June, House Republicans passed a bill that would have barred states from passing or maintaining any new or existing AI laws for 10 years.
  • Senate Leaders Water Down the Proposal – Days later, Senate negotiators softened the measure to simply block states from receiving federal tech funding if they regulated AI – early evidence the ban faced resistance, especially from states prioritizing privacy, safety, or ADMT-style controls.
  • Officials Cut Ban to Five Years – As negotiations continued, the proposal was reduced from 10 years down to five, aligning with centrists and industry voices who argued a shorter pause might be workable.
  • Senate Ultimately Walks Away – In July, the Senate voted 99-1 to drop the pause entirely, recognizing that more consideration was needed before advancing the proposal.

Why This Matters for Employers and Tech

That brings us to the present: the president publicly called for Congress to resurrect a moratorium on state-level AI laws. What forces are shaping the debate?

The Pressure For a National AI Standard Is Growing

Republican leadership, major tech CEOs, and the White House are pushing the same message: the US risks losing ground to China unless it creates a consistent national framework.

State AI Regulation is Rising Fast

AI laws at the state and local level are mounting fast, and more are expected in 2026:

  • Colorado’s AI Act
  • California’s disclosure mandates
  • New York City’s bias audit rules
  • Illinois’s notice and bias requirements
  • Virginia is expected to take another run at AI regulation now that the state has turned blue
  • Dozens of pending ADMT-style bills nationwide

Employers are Caught in the Middle

Companies with multi-state operations already face fragmentation, inconsistent timelines, and overlapping obligations. You’re seeing:

  • Divergent definitions of “automated decision tools”
  • Conflicting notice and transparency rules
  • Different approaches to AI-assisted hiring and monitoring
  • Expanding obligations around bias testing and documentation

What Happens Next? 3 Scenarios to Watch

Our Government Relations team sees three scenarios as most likely to unfold in the coming months:

Scenario 1: The NDAA Becomes the Vehicle

Highly plausible. Both Congress and the administration routinely use the NDAA to move contested policies. If the AI standard is attached to the NDAA, look for:

  • A shorter moratorium than 10 years
  • Possible carve-outs for safety, discrimination, or copyright
  • A focus on “national competitiveness” language

Scenario 2: A Standalone Federal AI Bill

Harder but not impossible. This would require:

  • A narrower preemption clause
  • Outcome-focused regulation (a framework suggested by Congressman Jay Obernolte at this summer’s FP AI Conference)
  • Compromise on labor-market issues like hiring algorithms and monitoring tools

Scenario 3: No Federal Action and States Surge Forward

Perhaps it’s no coincidence that the National Conference of State Legislatures (NCSL) just reaffirmed its opposition to a federal preemption standard and has been actively lobbying Congress this week to hold off on any such law. If Congress listens to its appeal and stalls on a federal law, expect a 2026 wave of:

  • ADMT rules modeled on California and Colorado
  • New York-style hiring audit laws spreading
  • California expanding its disclosure mandates
  • Sector-specific rules in healthcare, insurance, finance, and workforce management – Virginia is a probable test case for such action

What Employers Should Do Now

Regardless of how Congress resolves this fight, the regulatory risk is already here. Remember, too, that if the federal bill merely withholds federal tech funding from states that regulate AI, some states may conclude they can do without that money and press ahead anyway – and others may challenge in court a federal law that interferes with their authority to regulate AI. Take these steps now to prepare:

1. Map Your AI Tools, Especially For High-Risk Use Cases

Create a centralized inventory covering:

  • Hiring and promotion systems
  • Performance monitoring
  • Productivity scoring
  • Sentiment/voice analysis
  • Predictive scheduling
  • Safety/incident prediction

2. Build a “State Patchwork Strategy”

Track obligations in key jurisdictions. Prioritize compliance with:

  • California (security disclosure mandates)
  • Colorado (AI Act)
  • Illinois (notification)
  • Virginia (renewed notification push)
  • New York (algorithmic hiring audits)
  • Emerging ADMT-style bills

3. Prepare for Bias Testing Requirements

Even if federal preemption moves forward, workplace-related outcomes (discrimination, disparate impact, applicant screening) are likely to remain protected carve-outs. Start working with an auditor to help build:

  • Data-retention models
  • Bias measurement protocols
  • Documented rationale for each tool’s use

4. Update Vendor Contracts Now

Ask the right questions of your AI vendors. Make sure to consider:

  • Clear disclosure of training data sources
  • Audit rights
  • Outcome-based risk mitigation commitments
  • Adherence to NIST AI RMF or comparable frameworks

5. Build a Cross-Functional AI Governance Team

Involve HR, Legal, IT, Security, and others. No matter how the federal fight ends, employers who can show intentional governance will be better protected. You can start with these 10 steps.

6. Watch the NDAA Closely

If the AI language is included, it could become law fast. You’ll want to know immediately whether preemption applies, whether hiring tools are covered, and whether workplace monitoring tools fall under any exemptions. The best way to stay up to speed? Make sure you are subscribed to the Fisher Phillips Insight System to receive the latest developments straight to your inbox.

Conclusion

If you have any questions, contact your Fisher Phillips attorney, the authors of this Insight, or any attorney in our AI, Data, and Analytics Practice Group or on our Government Relations team. Make sure you are subscribed to the Fisher Phillips Insight System to stay updated.

Related People

  1. Benjamin M. Ebbink, Partner – 916.210.0400
  2. Danielle Kays, Partner – 312.260.4751
  3. Braden Lawes, Senior Government Affairs Analyst – 202.916.7176

Service Focus

  • AI, Data, and Analytics
  • Privacy and Cyber
  • Government Relations

Industry Focus

  • Tech

©2025 Fisher & Phillips LLP. All Rights Reserved. Attorney Advertising.
