
Congressional Republicans Propose 10-Year Ban on State AI Laws: What It Could Mean for Employers

Insights

5.15.25

An influential House committee just advanced a sweeping proposal to impose a decade-long moratorium on state-level laws regulating artificial intelligence – a move that could dramatically reshape the regulatory landscape for employers across the country. If enacted, the provision would halt new or existing state laws targeting AI systems, models, and automated decision-making tools, effectively pausing local efforts to address AI risks. While the proposal released on May 11 and approved by committee on May 14 is buried within a broader tax and budget bill and faces steep procedural hurdles, its inclusion is a clear signal that congressional Republicans are serious about curbing state authority on AI. What do employers need to know about this intriguing development?

[Ed. Note: The budget bill passed by the House on May 22 included the 10-year ban on state and local governments enforcing laws or regulations that govern AI.]

What’s in the Proposal?

The draft legislation from the House Energy and Commerce Committee would prohibit states from enacting or enforcing “any law or regulation regulating artificial intelligence models, artificial intelligence systems, or automated decision systems” for a full 10 years.

That timeframe is raising eyebrows across the political spectrum. While some lawmakers and business leaders argue that a federal moratorium would create much-needed breathing room to craft national standards, others see it as a blanket shield that would benefit Big Tech at the expense of state autonomy.

Why Now?

This push comes at a time when the federal government has yet to pass comprehensive AI legislation, leaving a vacuum that states have rushed to fill. According to the National Conference of State Legislatures, over 550 AI-related bills have been introduced by at least 45 states in 2025 alone, covering everything from workplace bias and privacy rights to deepfakes and content labeling.

  • Colorado passed a first-in-the-nation law targeting algorithmic bias in employment and other key sectors, set to take effect in early 2026. A recent attempt to soften the law failed to pass, setting the stage for the nation’s first comprehensive AI state law impacting employers.
  • California has led the way with aggressive proposals like SB 1047, a sweeping AI safety bill that Governor Newsom vetoed last year after intense industry lobbying. This year, we’re monitoring the “No Robo Bosses” Act and other bills seeking to regulate AI decision-making tools in employment and other key areas.
  • Illinois’s AI Video Interview Act regulates AI-driven hiring assessments – but is much narrower in scope than Colorado’s law and other proposals.
  • New York, Connecticut, and Vermont are among the other states pursuing AI oversight tailored to local priorities, including transparency, discrimination prevention, and protection from addictive AI systems. (Read more here.)
  • Virginia passed a detailed AI law that would have impacted employers, but it was vetoed by Governor Youngkin before it could take effect.  

The House proposal would effectively wipe out these initiatives, locking the door on state enforcement before many laws even take effect.

Big Tech Pushing For Unified Approach

Major tech firms have lobbied hard for a unified federal approach to AI. They don’t want a chaotic 50-state patchwork of rules that could stifle innovation, drive up compliance costs, and cause administrative headaches – especially in the human resources space. Venture capital firms, particularly those tied to Silicon Valley, are also backing federal preemption to protect emerging startups from what they describe as burdensome state laws.

This is a notable shift from past fights over privacy law, where tech firms generally opposed federal regulation. In the AI arena, they’re embracing it – as long as it neutralizes tougher state standards.

Some federal officials are echoing those concerns. Rep. Jay Obernolte (R-CA), a member of the Energy and Commerce committee and Chair of the House Bipartisan Task Force on AI, argued that Congress needs to act fast “before the states get too far ahead.”

Many States Not on Board With Aggressive Approach

While industry groups are eager for national uniformity, state lawmakers and consumer advocates are sounding the alarm. Critics say the proposal is less about consistency and more about delay – a “preemption without protection” strategy that removes guardrails without offering anything in their place.

Democratic leaders in multiple states – including Rep. Brianna Titone (D-CO) and Rep. Monique Priestley (D-VT) – have criticized the proposal as reckless and dangerous (subscription required), warning it would strip states of their ability to respond to real harms already emerging from AI systems. “This is a free-for-all on AI,” said Titone, one of the architects of Colorado’s impending law. “People want to see regulation, not have it be stripped away in this reckless way.”

Even within Colorado, the debate is splitting Democrats. Governor Jared Polis has voiced support for a temporary federal pause (subscription required), while suggesting a shorter two- to four-year moratorium might make more sense and allow Congress time to act in a more thoughtful way. This comes in the wake of Polis expressing frustration with his state legislature’s inability to refine the impending Colorado AI law before next year’s effective date.

[Ed. Note: On May 16, a bipartisan collection of 40 state Attorneys General wrote a letter to Congressional leadership urging them not to pass the state AI law ban. You can read a copy of the letter here.]

Will It Survive?

The proposed moratorium may not ultimately make it into law. Because it’s part of a tax bill moving under budget reconciliation, it must comply with strict Senate rules that limit provisions to those directly tied to federal spending. Legal experts and even Senate aides have questioned whether a state preemption clause like this one can survive a challenge under the Byrd Rule (which prohibits including provisions in reconciliation bills that are “extraneous” to the federal budget, such as those with no direct impact on government spending or revenue).

But even if this version fails, insiders expect the preemption fight to return in the next year.

What’s Next?

The Energy and Commerce Committee began debating the measure on Tuesday as part of the larger budget negotiations. Early Wednesday, the committee voted along party lines (29-24) to advance the package. It still faces further mark-ups and procedural hurdles before officially becoming a part of the budgetary mega-bill requested by President Trump. We will have a better sense of the AI preemption provision’s chances of survival in the coming weeks.

[Ed. Note: As noted above, the budget bill passed by the House on May 22 included the 10-year ban. It now moves to the Senate for further debate.]

What Employers Need to Know

So what does this mean for your business? Whether you're currently using AI or just evaluating it, this proposal could have major implications for your compliance strategy. Here’s what to watch – and do:

1. Temporary Uniformity – But Long-Term Uncertainty
A moratorium might offer near-term clarity by halting conflicting state rules. But with no federal AI law in place, it would also extend the current vacuum. That makes long-term planning difficult for employers seeking safe, lawful AI deployment strategies.

2. Don’t Confuse Preemption With Protection
Even without state AI laws, AI-driven decisions are still subject to anti-discrimination laws, such as the ADA, Title VII, and other existing state-level workplace statutes. Liability wouldn’t disappear; it would just shift arenas.

3. Audit Now, Not Later
Employers should audit AI tools for explainability, bias, and disparate impact (regardless of the recent executive order taking on that legal theory) and not wait for regulators to catch up. If your vendor can’t explain how their system works or prove it’s compliant with civil rights laws, it’s time to rethink that partnership. Here are the essential questions you should ask your AI vendor before deploying AI at your organization.

4. AI Governance is the Name of the Game
Risk assessments, transparency, and human oversight remain essential tools for preventing AI-based discrimination. Read AI Governance 101: The First 10 Steps Your Business Should Take to plot out the best course for your organization.

5. Prepare for Whiplash
If the moratorium passes and then is later repealed or overturned on procedural grounds – or if federal rules conflict with emerging international standards – businesses may face another rapid pivot. Design your compliance systems with flexibility in mind. And stay up to speed on the latest developments by subscribing to the Fisher Phillips Insight system.

Want to Learn More About AI?

Join Fisher Phillips for our third-annual AI Conference for business professionals this July 23 to 25, in Washington, D.C. Learn more and register here.

Conclusion

If you have any questions, contact your Fisher Phillips attorney, the authors of this Insight, any attorney in our AI, Data, and Analytics Practice Group or on our Government Relations team. Make sure you are subscribed to the Fisher Phillips Insight system to receive the latest developments straight to your inbox.

Related People

  1. Amanda M. Blair, Associate – 212.899.9989
  2. Benjamin M. Ebbink, Partner – 916.210.0400
  3. Braden Lawes, Senior Government Affairs Analyst – 202.916.7176

Service Focus

  • AI, Data, and Analytics
  • Government Relations


©2025 Fisher & Phillips LLP. All Rights Reserved. Attorney Advertising.
