Court Upholds California AI Transparency Law, Rejecting X.AI’s Trade Secret Defense: 6 Action Steps for Employers
A California federal court denied a request by Elon Musk’s X.AI to block enforcement of the state’s AI training data transparency law, rejecting the company’s claims that the disclosure requirements would destroy trade secrets and violate free speech rights. The March 5 ruling comes as California Attorney General Rob Bonta expands his office’s AI enforcement capabilities, signaling that the state intends to aggressively regulate AI regardless of federal inaction. For employers using or developing AI systems, the decision underscores a few critical lessons about California’s approach to AI oversight and what compliance will require going forward. Here’s our recap and six action steps you can take now.
Training Data Law Raises Trade Secret Concerns
California’s Artificial Intelligence Training Data Transparency law took effect January 1, 2026, requiring generative AI developers to publicly post summaries of the datasets used to train their systems. The law mandates disclosure of:
- Whether datasets include personal data or copyrighted content
- When the data was collected
- What modifications were made to the datasets
- How the data is used in training
X.AI sued California Attorney General Rob Bonta in December 2025, arguing the law is a “trade-secrets-destroying disclosure regime” that would “gut the AI industry” and give competitors a roadmap to reverse-engineer rival models. The company sought a preliminary injunction to halt enforcement while the lawsuit proceeds.
Court Rejects Business Concerns and Permits Law to Remain
U.S. District Judge Jesus Bernal denied the injunction on March 5, finding X.AI failed to show its case is likely to succeed. The court rejected both X.AI’s trade secret and free speech arguments.
- On trade secrets, Judge Bernal acknowledged that training datasets could potentially be trade secrets, but found X.AI’s “generalized, abstract pleading” failed to demonstrate its datasets are distinct from competitors’ in a way that merits protection. The court said X.AI’s “resort to generalizations and hypotheticals about the AI model development industry make it difficult for the Court to find that Plaintiff has carried the heavy burden of showing a likelihood of success in proving that trade secrets are at play here.”
- On vagueness, the court rejected X.AI’s argument that terms like “dataset” and “data point” are undefined in the statute, noting the company “seems to understand and use with ease ‘dataset’ throughout its Complaint.”
- On free speech, Judge Bernal ruled X.AI had not shown the law violates First Amendment rights at this stage in the case.
The law remains in effect, and X.AI must comply with disclosure requirements while its lawsuit continues.
California’s Expanding AI Enforcement
The X.AI ruling didn’t happen in isolation. Attorney General Bonta is simultaneously building what he calls an “AI oversight, accountability and regulation program” to strengthen California’s AI enforcement amid what he characterizes as federal regulatory gridlock.
The California legislature is also considering legislation that would require the Attorney General’s office to establish a formal program to build in-house AI expertise.
What Employers Should Do Now
1. Determine if California’s transparency law applies to you: The law covers “developers” of generative AI systems. If your organization builds AI models (not just uses third-party AI tools), evaluate whether you must comply with the disclosure requirements. The law applies to California operations but could also reach out-of-state companies doing business in California.
2. Review AI vendor compliance: If you use third-party AI systems (like chatbots, content generators, or decision-support tools), confirm your vendors comply with California’s transparency law. Non-compliance could create reputational or contractual risk for your organization.
3. Review and audit your AI-related intellectual property: The court's rejection of X.AI’s trade secret claims turned largely on the company’s failure to specifically articulate what made its datasets distinct. If your organization develops AI systems, document precisely what proprietary data, processes, or methodologies you believe qualify as trade secrets, and why they meet the legal standard.
4. Audit AI outputs for prohibited content: Given Attorney General Bonta’s focus on AI-generated sexual content and other harmful outputs, review whether your AI systems could produce content that violates California law. This includes:
- Sexually explicit content, especially involving minors
- Non-consensual intimate images
- Content encouraging self-harm or illegal activity
- Discriminatory outputs
5. Document AI governance practices: California’s enforcement expansion means more scrutiny of how companies govern AI use. Document your AI oversight processes, including:
- How you select and vet AI vendors
- What guardrails you’ve implemented
- How you monitor AI outputs
- What you do when problems are identified
6. Prepare for multi-issue investigations: With California expanding its AI enforcement program, a single complaint can trigger a broader investigation. If California contacts you about any AI-related issue, assume regulators will examine your entire AI program, not just the specific allegation.
Conclusion
We’ll continue monitoring California AI enforcement and related litigation. Make sure you’re subscribed to Fisher Phillips’ Insight System for updates. If you have questions about California AI laws, AI transparency requirements, or AI governance practices, contact your Fisher Phillips attorney or any member of our AI, Data, and Analytics Practice Group or our Privacy and Cyber Practice Group.
