California Unveils Landmark AI Policy Blueprint: What Businesses Need to Know (And Do) Now
6.24.25
California just released the most comprehensive, forward-looking AI policy framework we've seen from any US jurisdiction to date – and it is expected to lay the groundwork for legislation and regulation we could see emerge in the next year. Even as Congress debates whether to ban state-level AI laws for the next decade, California is charging forward with a detailed blueprint that could also set the tone for AI regulation nationwide. So what does the June 17 California Report on Frontier AI Policy mean for your business? And how should you respond?
💥 FP’s AI Conference will focus on the future of AI regulation and what your business needs to do to prepare. We’ll hear from two members of the bipartisan House Task Force on AI, as well as state lawmakers and analysts – register now to hear first-hand about AI regulation and much more!
The Big Picture: California Steps Into the Leadership Void
The June 17 report (which you can read here) was prepared by an expert working group at the request of Governor Gavin Newsom. You might recall that, after he vetoed the legislature’s attempt at regulating AI systems last year, Newsom called upon AI experts to recommend a framework for a proposed law that would balance safety with innovation. Last week’s report is the result of that request.
Blueprint in a Nutshell
The report focuses squarely on the most advanced forms of AI: “frontier” foundation models like those powering large language models (LLMs), multi-modal AI systems, and emerging AI agents. Rather than proposing specific legislation, it offers a detailed policy framework built on one central idea: “Trust but verify.”
Key themes include:
- Balance of Innovation and Risk: The report stresses California's unique role as both AI innovator and potential global policy leader. It urges proactive guardrails without stifling innovation.
- Transparency as Cornerstone: A recurring theme is the lack of meaningful transparency among AI developers, with particular gaps around training data, risk mitigation, downstream impact, and internal safety evaluations.
- Evidence-Based Policymaking: Given AI’s rapid evolution, the report emphasizes policies that actively generate new evidence through transparency, adversarial testing, third-party audits, and adverse event reporting.
- Urgency of Early Action: Drawing historical analogies (tobacco, climate change, internet security), the report argues that early policy windows rarely stay open for long. It contends that acting now may avoid repeating costly mistakes from prior technological revolutions.
Why is Regulation Necessary?
The drafters of the report provide a laundry list of reasons why they believe guardrails are necessary – and don't pull any punches when it comes to describing what they see as potential risks related to AI:
- AI capabilities are expanding fast. The report cites multi-step reasoning, agentic autonomy, and advanced code generation as examples of rapidly expanding capabilities that heighten risk, noting that we are careening toward “artificial general intelligence” – the point at which AI matches human-level generalized cognitive ability and can learn new skills autonomously.
- Malicious use risks are escalating. The report says that the use of AI to fuel cyberattacks, bioweapon development, and misinformation campaigns is a clear and present danger to human safety.
- Loss of control scenarios are increasingly plausible. The report notes that “reward hacking” (where an AI system manipulates its own objectives to maximize success in unintended ways, often by exploiting loopholes) and displays of deceptive behavior by AI in safety tests have further heightened concerns among AI experts.
What’s Next in California? Legislative Action is Coming
This report is widely viewed as the intellectual blueprint for likely legislation in the coming year. While Governor Newsom vetoed last year’s SB 1047 proposal citing concerns over premature regulation, the landscape has shifted dramatically in just the last nine months:
- Model capabilities have advanced at a speed even AI leaders admit has outpaced prior projections.
- Internal model reports now disclose higher risk levels across cyber, biosecurity, and autonomy domains.
- Industry leaders are increasingly acknowledging that independent evaluation and safety disclosures may be inevitable.
- Public attention to AI dangers is rising sharply, with fears of a federal vacuum as Congress debates limiting state action.
It’s almost certain that California lawmakers will review this report and craft new legislation drawing heavily from its recommendations: transparency mandates, third-party safety evaluations, whistleblower protections, adverse event reporting, and clear disclosure thresholds tied to compute power, model size, or deployment scale.
The National Context: States and Congress Now on a Collision Course
California isn’t alone. As we detailed recently, New York is moving aggressively with its own first-of-its-kind AI safety law requiring pre-deployment safety protocols and independent audits (see full article).
At the same time, Congress is considering legislation that would either ban states outright from passing AI laws for ten years or strongly dissuade them from doing so (most current update here). That proposal underscores growing tension between the federal government and the states over the role states will play in AI regulation.
For businesses, this creates enormous uncertainty: will you face overlapping, evolving state requirements? Or will Congress preempt state experimentation altogether?
California’s report is highly aware of this debate. It repeatedly calls for:
- Harmonization across jurisdictions to avoid fragmented compliance burdens
- Preserving state authority where federal gaps remain unaddressed
- Creating a template that could scale nationally or internationally
Employer Takeaways: 5 Things Your Business Should Do Now
California and New York often set the tone for regulation across the country, and many companies adopt nationwide standards that treat compliance with these states as the minimum baseline. And while nothing is imminent, this report signals the likely future shape of AI obligations. Smart businesses should consider acting now:
1. Conduct an Internal Transparency Audit
- Do you know how your vendors train and test their models? Review our Insight that provides the essential questions you should ask your AI vendors.
- Can you document your own AI supply chain, training data sources, and risk mitigation steps?
- Are internal safety tests credible, repeatable, and documented?
2. Start Evaluating Third-Party Risk Assessment Options
- Independent safety evaluations are coming. Begin exploring how you would vet, select, and work with external auditors.
- Build governance capacity to oversee these processes.
3. Reassess Your Whistleblower Policies
- Expect future mandates to protect employees who flag safety, bias, or misuse concerns related to AI systems.
- Begin reviewing whether your existing policies adequately cover AI-related disclosures.
- Track a federal proposal that would make it illegal to retaliate against employees who speak up about AI-related risks (read our summary here).
4. Build Internal Adverse Event Reporting Mechanisms
- The report recommends tracking real-world AI failures and near misses.
- Establish internal pathways to log, investigate, and escalate AI incidents.
5. Create an AI Governance Framework
- Establish clear internal ownership: designate cross-functional teams responsible for AI oversight, including legal, compliance, IT, and business leadership.
- Build scalable policies: create documentation protocols, model inventories, and ongoing monitoring procedures to manage your AI use as technology evolves.
- Read our Insight, AI Governance 101: The First 10 Steps Your Business Should Take, to stay ahead of the curve.
Want to Learn More About AI Regulation? 📃 Join Fisher Phillips for our third-annual AI Conference for business professionals July 23-25 in Washington, D.C. Learn more and register here.
Further Reading
- California Governor Issues Landmark AI Policy Report
- WebProNews: California Leads With Frontier AI Policy Framework
- CalMatters: Artificial Intelligence Regulations
- Transparency Coalition: Guide to the California Report on Frontier AI Policy
- GovTech: As Regulation Ban Looms, California Issues Frontier AI Study
- TechPolicyPress: California Governor's Report Sidesteps AI Liability
Conclusion
If you have any questions, contact your Fisher Phillips attorney, the authors of this Insight, any attorney in our AI, Data, and Analytics Practice Group, any attorney in our California offices, or our Government Relations team. Make sure you are subscribed to Fisher Phillips’ Insight System to receive the latest developments straight to your inbox.
Related People
- Benjamin M. Ebbink, Partner
- Usama Kahf, CIPP/US, Partner
- Richard R. Meneghello, Chief Content Officer