New York Governor Signs Sweeping AI Safety Law: What Businesses Can Do in 2026 to Prepare for a New Era
12.23.25
New York has officially joined California at the forefront of US artificial intelligence regulation. Governor Kathy Hochul just signed a revised version of the Responsible AI Safety and Education Act (RAISE Act) into law on Friday, establishing strict safety obligations for developers of the most advanced AI systems. The final law is narrower and less punitive than the version passed in June, but still marks one of the most consequential state AI safety laws enacted to date. Here’s what businesses need to know about what actually became law – and some specific steps you can take in 2026 to prepare for the January 1, 2027 effective date.
AI Regulation Focused on Catastrophic Risk: Who is Covered?
Unlike most enacted and proposed state AI laws, which focus on bias, discrimination, or consumer deception, New York’s RAISE Act targets a very specific risk category: catastrophic harm caused by highly capable AI systems.
The law applies to “frontier models,” the largest and most advanced AI systems capable of enabling serious real-world harms like cyberattacks, bioweapon development, and large-scale infrastructure damage. Coverage is limited to:
- AI developers with more than $500 million in annual revenue
- Companies that develop or operate frontier AI models in New York
Earlier versions of the bill focused on compute-cost thresholds (e.g., $100 million in training costs). Lawmakers removed those provisions during negotiations and replaced them with a revenue-based trigger, bringing the law more closely in line with California’s SB 53 framework.
“Core 4” Obligations for Covered AI Developers
Covered companies must comply with four core mandatory safety and transparency requirements.
1. Safety and Risk Assessment Plans
Developers must:
- Create and follow written safety protocols
- Assess how their systems could cause “critical harm” to people or property
- Implement safeguards to prevent or mitigate those risks
2. Mandatory Incident Reporting (72 Hours)
AI developers must report critical safety incidents to the state within 72 hours of determining that an incident occurred. This reporting timeline is significantly shorter than California’s 15-day window and was one of the most contentious points during negotiations.
3. Ongoing Oversight by New State AI Office
The law creates a new AI oversight office within the New York Department of Financial Services (DFS). That office will require covered developers to register with the state, assess fees to fund oversight, issue regulations and guidance, and publish annual reports on AI safety risks.
4. Enforcement and Penalties
The New York Attorney General will enforce the law. The final version still contains no private right of action that would allow aggrieved individuals to file lawsuits in court. Instead, the AG’s office will be able to levy penalties that are lower than those in the June version but still substantial:
- Up to $1 million for a first violation
- Up to $3 million for subsequent violations
When Does the Law Take Effect?
The law takes effect January 1, 2027, giving regulators time to set up the new oversight office and giving covered companies time to prepare.
What Changed From the June Version?
The signed law reflects substantial compromises from the version originally passed by state lawmakers in June (which you can read about here). Most notably, lawmakers replaced the compute-cost threshold with a revenue-based trigger and lowered the maximum penalties.
What About the White House’s Recent Attacks on State AI Laws?
The RAISE Act was signed soon after President Trump issued an executive order authorizing federal lawsuits against states that pass AI laws viewed as hindering innovation. Meanwhile, some Congressional Republicans are pushing proposals that would limit or preempt state-level AI regulation.
But the EO faces all-but-certain legal challenges, and it remains to be seen whether Congress can muster the votes to pass a comprehensive federal law covering the entire country. Until then, multistate businesses should expect a patchwork of state-by-state compliance requirements.
What Employers and Businesses Should Do Now
Even if you are not developing frontier AI models, you should stay on top of this new law and use 2026 to prepare for the January 1, 2027 effective date.
- Vendor diligence: Ask AI vendors whether their models fall within frontier definitions and how they manage safety risks (and read our list of top questions to ask your AI vendors).
- Contracting: Consider whether AI safety disclosures or incident-notification provisions belong in procurement agreements.
- Governance: Maintain clear internal AI governance policies so your business is not caught off-guard by downstream regulatory obligations (and read our AI Governance 101 guide here).
- Watch the other states: New York and California are setting early benchmarks that many businesses may treat as the regulatory floor, but other states may follow with their own variations.
- Track federal developments: Federal intervention could reshape or preempt parts of this framework, so make sure you are subscribed to Fisher Phillips’ Insight System to get updated when key developments occur.
Conclusion
If you have any questions, contact your Fisher Phillips attorney, the authors of this Insight, or any attorney in our AI, Data, and Analytics Practice Group, our New York City office, our Government Relations team, or our Tech Industry Team. Make sure you are subscribed to Fisher Phillips’ Insight System to receive the latest developments straight to your inbox.