Landmark AI Discrimination Bill Stalls Out in California Legislature, But Other AI Measures Advance
Insights
9.03.24
California lawmakers knocked back a chance to pass a groundbreaking AI discrimination bill that would have required employers to provide notification – and perhaps an accommodation – to workers when artificial intelligence is used in certain critical ways during hiring or employment. The bill, which would have also required employers and AI developers to establish robust governance programs and conduct impact assessments to mitigate algorithmic discrimination, was poised for passage, but lawmakers ultimately could not agree on a final version before the legislative session ended at midnight on Saturday night. We expect legislators to take another crack at such a proposal in 2025, so this isn’t the last you’ve heard of it. Meanwhile, three other AI-related bills advanced to the governor’s desk and will be closely monitored before the September 30 deadline for action. What do employers need to know?
California’s AI Discrimination Bill Would Have Changed the Game
Again, it’s worth tracking the bill that failed to pass the legislature this time around because odds are we’ll see something like it again come 2025 – and because it would have absolutely changed the employment landscape as we know it. AB 2930 would have ushered in a new era for employers using AI and AI developers alike. Among the critical elements:
- Any business that deployed an AI system (or any automated decision system) to make a consequential decision about a worker or applicant (think hiring, firing, pay, promotion) would have had to provide advance notice about the AI use. In some cases, employers would have had to accommodate workers upon request and use an alternate process instead of AI.
- Both employers using AI systems and AI developers would have had to perform an impact assessment before the system was first deployed and every year thereafter, reviewing how the system operates and the steps taken to ensure the AI outputs are valid and reliable. They also would have had to submit these assessments to the government each year.
- Employers and AI developers would have also needed to create robust governance programs that contain safeguards designed to track, measure, and manage the risks of algorithmic discrimination.
California would have joined Colorado as one of two states that have taken the most significant steps when it comes to combatting AI discrimination. With California lawmakers punting in 2024, we’ll now see if other states venture into this same territory in the near future.
Lawmakers Pass Controversial AI Safety Bill
As perhaps the broadest AI measure on the docket this session, you may have already heard a lot about this one. The “Safe and Secure Innovation for Frontier Artificial Intelligence Systems Act,” SB 1047, has made headlines as large AI developers and tech leaders line up to either publicly support or oppose it. The bill aims to declaw the potential for AI systems to be used to threaten public safety and security – such as by developing nuclear or biological weapons of mass destruction or aiding in crippling cyberattacks. The largest and most powerful AI developers would need to conduct thorough testing, implement safety protocols for their systems, and be ready to fully shut them down if a risk arises.
Even if you’re not a tech developer, the bill’s potential to impact the availability and functionality of AI tools could affect any employer or employee who uses them. From a broader standpoint, it might signal how the state views this technology and the direction state leaders want to take in addressing it.
Already, many Silicon Valley leaders are pressuring Governor Gavin Newsom to reject the bill before his September 30 deadline. We will be keeping a close eye on this measure as it could shape policy for states throughout the country.
Another Measure Will Tackle “Digital Replicas” in the Entertainment Industry
The entertainment industry regularly creates and uses digital replicas of actors in a variety of ways – one of the hot issues during the actors’ strike last year. AB 2602, which also passed the legislature, would make a digital replica contract provision retroactively unenforceable if it:
- permits creating and using a digital replica of an individual’s voice or likeness either in place of work the individual otherwise would have performed or to train a generative AI system; and
- does not clearly spell out the proposed uses of the digital replica or generative AI system.
The bill targets situations where there is an imbalance of power, such as when the individual who is at risk of losing work because of a digital replica is not represented by legal counsel or a labor union. The party with the power to create or use digital replicas would be required to notify the individual by February 1, 2025, that a prohibited provision is unenforceable.
State Agencies Could Soon Be Pushed to Take AI Action
The “Artificial Intelligence Accountability Act” focuses on the use of AI by state agencies. SB 896, which also passed this session and is awaiting action by the Governor, would require various agencies to produce reports on the state’s potential best uses of generative AI tools and perform a joint risk analysis of AI’s potential threats to California’s critical energy infrastructure. It would also require state agencies to notify people when generative AI is being used in communications with them.
Want a Full Recap?
Aside from all the AI happenings, the end of this year’s legislative session was busy as usual – with many of the bills under consideration relating to the workplace. Click here to read our recap of the top 10 labor and employment law bills you should track over the next month.
Conclusion
We will continue to monitor developments as they unfold. Make sure you subscribe to Fisher Phillips’ Insight System to gather the most up-to-date information on AI and the workplace. If you have any questions, contact your Fisher Phillips attorney, the authors of this Insight, any attorney in our California offices, or any attorney in our Artificial Intelligence Practice Group.