
What Responsible AI Use Means for Employers: 4 Takeaways from FP’s Recent Capitol Hill Testimony

Insights

2.11.26

Efforts on Capitol Hill to rein in artificial intelligence technology would be like taking a wiffle ball bat to a tidal wave. That’s the message Dave Walton, partner at Fisher Phillips and Co-Chair of the firm’s AI, Data, and Analytics team, delivered to the U.S. House Committee on Education and the Workforce earlier this month. During a hearing on “Building An AI-Ready America: Adopting AI At Work,” Walton emphasized that AI is likely to create more jobs than it displaces – and that employers need to take measures to govern themselves now as the legal landscape around these tools evolves. As federal lawmakers consider legislation to place guardrails around AI use, Walton cautioned them not to go too far, particularly since established employment and labor laws already apply. AI can actually help retain employees, monitor workplace safety and health, and reduce turnover, he said. Here are four main takeaways from Walton’s testimony on AI regulation to help your business stay ahead of the game as the technology evolves.


1. Past technological innovations have led to job growth.

How artificial intelligence will ultimately embed itself into our economy and the workplace will likely be a case of history repeating itself, Walton said. While workers may not be needed for certain tasks, history suggests AI will actually increase total employment opportunities.

💡 This is known as the Jevons Paradox: improved efficiency reduces costs, which increases demand and ultimately expands overall employment, even as specific tasks require less human effort. This pattern has characterized every major automation technology since the Industrial Revolution.

Right now, AI presents massive uncertainty because we don’t yet know what new jobs will be created. Walton warned lawmakers that overreactive regulation could scare employers away from deploying the technology. In turn, this could limit workers’ access and exposure to AI, making the system less democratic because workers would have less of a role in shaping the emerging technology.

2. Your legal obligations haven’t changed.

Employers, regulators, and advocates should keep in mind that employee rights under federal collective bargaining, workplace safety, or minimum wage laws still apply even when AI is being used.

For example, surveillance targeting union activity violates the law no matter how it’s done. But monitoring for safety, compliance, productivity, quality, or security is generally legal and can be extremely beneficial for both employees and your business.

🗝️ What’s key: transparent policies and firewalls against targeting protected union activity. The goal should be to ensure that monitoring serves legitimate purposes, operates openly, stays proportionate to business needs, and includes real safeguards against misuse. It’s not the technology that needs to be regulated, according to Walton; it’s the output of the technology and how you use it.

3. AI is already being used in the workplace to help manage risks and boost compliance.

Remember, AI isn’t limited to chatbots, image creation, or the typical generative tools proliferating across the internet, like ChatGPT. Employers are using AI technology to better recruit talent and personalize employee onboarding, as well as to monitor scheduling and compensation practices. AI can also greatly improve safety practices by, for example, ensuring personal protective equipment is properly worn, monitoring environmental hazards, or alerting supervisors when exposure limits are reached.

🦺 Safety First. “Since the average direct and indirect costs of a lost-time workplace injury exceed $80,000, organizations can often justify AI safety programs based on preventing just a few incidents annually,” according to Walton.

4. Smart employers are adopting governance structures for their AI systems, even in the absence of federal AI-specific regulations.

Common themes among corporate AI compliance plans include disclosure practices, cross-team oversight, and auditing for bias and data privacy risks, among other controls. Many companies have adopted proactive policies like requiring that humans be involved in major employment decisions, training employees extensively on the AI tools they use at work, and having an incident response plan in place for when AI malfunctions.

⭐ Be Proactive. Walton emphasized the importance of employers governing themselves, “because if they don’t take measures to govern, somebody will.” If your company is unsure of where to start when it comes to mitigating AI risks, Fisher Phillips offers multiple services, including AI-bias auditing, that can help safeguard your business.

Conclusion

Want more? Listen to Dave’s February 3, 2026 testimony before the U.S. House Committee on Education and the Workforce or read the full transcript. If you have any questions, contact your Fisher Phillips attorney, the authors of this Insight, or any attorney in our AI, Data, and Analytics Practice Group or on our Government Relations team. And make sure you are subscribed to the Fisher Phillips Insight System to receive the latest developments straight to your inbox.


