“Regulate AI Outcomes, Not AI Tools.” Congressman Shares Vision for AI Regulation + 5 Tips for Employers
8.01.25
Speaking at last week’s FP AI Conference, Congressman Jay Obernolte set out to debunk two misconceptions about artificial intelligence. The first is that AI is largely unregulated. The second is that we need to pass myriad new laws in response to the rise of AI use. It’s already illegal to steal people’s money whether you use AI to do it or not, he said – just like it’s already illegal to make race-based employment decisions. Touting the just-released America’s AI Action Plan, he and other federal leaders aim to protect people from harmful and malicious AI use by regulating outcomes rather than the tools used to achieve them. Here are five points Congressman Obernolte shared about his vision for AI regulation, as well as five tips from FP’s AI, Data, and Analytics Practice Group leaders.
From left: Dave Walton, Co-Chair of FP's AI Practice Group; Rep. Jay Obernolte (R-CA); Erica Given, Vice Chair of FP's AI Practice Group
5 Insights on the Future of AI Regulation
Congressman Jay Obernolte (R-CA) is clearly excited about the future of AI, and his background illustrates his commitment: He is a computer engineer and video game developer – and the only member of Congress with a graduate degree in artificial intelligence. He's also the Co-Chair of the bipartisan House Task Force on Artificial Intelligence. Here are five key ways he said the federal government is approaching AI regulation:
1. Relying on Federal Agency Knowledge: Rep. Obernolte noted that sectoral regulators are best equipped to handle issues that arise in their areas of expertise. For example, the Equal Employment Opportunity Commission (EEOC) can assess AI risks when it comes to employment discrimination in hiring tools. The Occupational Safety and Health Administration (OSHA) is best positioned to address AI risks related to workplace safety, such as the way monitoring systems are used in manufacturing. His viewpoint: It’s better to teach the agencies about AI than to teach the AI sector about workplace anti-discrimination and safety compliance.
2. Taking a “Hub-and-Spoke” Approach: The same AI technology may be risky in one use but not in another. The FDA, for example, oversees safety for medical devices, a high-risk area where an AI tool can have a very different impact than it would in a lower-risk setting. An AI diagnostic tool, for instance, may be fine for a workplace wellness app but not appropriate for diagnosing cancer. A hub-and-spoke model allows each agency (a spoke) to take a different approach depending on how the technology is used.
3. Protecting People from Malicious AI Use: Law enforcement agencies need the tools to combat malicious uses of AI such as cyber fraud and theft. Rep. Obernolte pointed out that AI presents new ways to commit crimes, but we don’t need new laws defining what’s illegal. Fraud and theft are already illegal; what’s changing is how these crimes are committed and how people can be protected against them.
4. Avoiding a Patchwork of Laws: As more states consider AI legislation, we’re at risk of creating a Balkanized approach to regulation among the 50 states. Rep. Obernolte fears this would stifle innovation and entrepreneurship. “Congress needs to clarify where the interstate commerce guardrails are and where states are free to be the laboratories of democracy that they always have been,” he said.
5. Encouraging Bipartisan Support: AI regulation is not a partisan issue, and Congress is capable of taking swift action. The Congressman said he’s confident we will see bipartisan action and emphasized that his vision for AI regulation is to prevent malicious use while encouraging innovation and entrepreneurship.
Rep. Jay Obernolte (R-CA) speaks at FP's AI Conference
5 Tips for Employers
As the federal government continues to shape its rules on AI use, employers will want to stay ahead of the curve and proactively address policies and compliance efforts in the workplace. We asked our AI, Data, and Analytics Practice Group leaders – Dave Walton and Erica Given – to share their top five tips:
1. Create an AI Governance Plan: Governance is about building a process, following the process, and documenting the process. Click here for key steps to ensure your AI technology aligns not only with your company values and customer expectations but also with the legal standards that courts and government investigators are beginning to adopt.
2. Safeguard Your Business from AI Hallucinations and Deepfakes: AI “hallucinations” occur when generative AI produces incorrect or outright false information that sounds all too real – and your employees may mistakenly rely on it. AI deepfakes are more intentional: cybercriminals use them to impersonate real people and infiltrate organizations. Both can cause serious damage to the businesses involved. Here are some steps you can take to protect your business from AI hallucinations, and 10 things you can do to ensure you don’t fall for a deepfake scam.
3. Track Litigation Trends: Lawsuits over AI use in the workplace are popping up all over the country and the ultimate court rulings will surely influence employer policies and practices. Here are just a few issues to track:
- AI Call-Monitoring Lawsuits Are Heating Up
- Discrimination Lawsuit Over Workday’s AI Hiring Tools Can Proceed as Class Action
- AI Screening Systems Face Fresh Scrutiny
4. Review the White House AI Action Plan: The Trump Administration’s plan, announced on July 23, identifies more than 90 federal policy goals that aim to create a roadmap for achieving “global AI dominance” in innovation, infrastructure, and international diplomacy and security. It will have a huge impact not only on AI developers and the tech sector but also on many employers and employees throughout the US workforce. You can read more about America’s AI Action Plan here, as well as the top 10 employer takeaways.
5. Keep Up with State Regulations: State lawmakers aren’t waiting for Congress to step in. From active regulations to proposed bills, states are moving full speed ahead to define how AI technologies – especially in hiring and employment – can and should be used. Here’s a rundown of what you should be tracking at the state level.
Conclusion
If you have any questions, contact your Fisher Phillips attorney, the authors of this Insight, or any attorney in our AI, Data, and Analytics Practice Group or on our Government Relations team. Make sure you are subscribed to the Fisher Phillips Insight System to receive the latest developments straight to your inbox.
Related People
- Erica Given, Partner
- Lisa Nagele-Piazza, Lead Content Counsel