Colorado Moves to Replace AI Law’s Bias Audit Requirements With Transparency Framework: 5 Action Steps for Employers
Colorado’s first-in-the-nation artificial intelligence law could look very different by the time it takes effect, thanks to a proposal just released by key policymakers. A state working group released a sweeping proposed rewrite on March 17 that would strip out the original law’s most burdensome requirements (including mandatory bias audits) and replace them with a streamlined transparency-and-notice framework focused on disclosure, correction rights, and human review. The proposal would also push the law’s effective date back to January 1, 2027, giving employers additional time to prepare. If the proposal clears the legislature, employers using AI tools for hiring and other consequential decisions will face a much more workable compliance landscape. Here’s what you need to know and a five-step action plan.
Quick Background
- Colorado passed the nation’s first comprehensive AI antidiscrimination law in 2024, originally set to take effect February 1, 2026. The law imposed sweeping obligations on both AI developers and the employers deploying those tools, including bias audits, risk impact assessments, and extensive disclosures. Almost immediately, the tech and business communities pushed back hard, arguing the requirements were unworkable and would stifle innovation.
- After failed attempts to revise the law during the 2025 legislative cycle, lawmakers pivoted and pushed the effective date to June 30, 2026.
- Governor Jared Polis convened a working group of technology industry representatives, consumer advocates, and business groups to find common ground.
- On March 17, that group released a proposed rewrite, and Polis immediately endorsed it.
What the Proposed Rewrite Would Do
The proposed bill, officially titled the "Automated Decision Making Technology in Consequential Decisions" framework, would replace the original law’s audit-heavy approach with a transparency-and-notice model. Here are the key provisions employers need to understand:
- Narrower scope – but employment is still squarely covered. The proposal focuses on “covered ADMT,” which is automated decision-making technology that “materially influences” a consequential decision. Employment and employment opportunities are explicitly included in the definition of consequential decisions, alongside housing, credit, education, insurance, health care, and essential government services. Routine scheduling, administrative routing, and workflow management are carved out.
- Common AI tools are off the hook. Spell-check, calculators, spreadsheets, robocall filters, and general-purpose large language models like ChatGPT are excluded, as long as they aren’t specifically configured or marketed for use in consequential decisions.
- Upfront notice to applicants and employees. Employers using covered AI tools in hiring or employment decisions must provide clear, conspicuous notice to job applicants and employees that AI is being used. This can be satisfied through a public-facing notice (such as a link or posting reasonably near the point of interaction) rather than individualized disclosures at every touchpoint.
- Post-adverse outcome disclosures. When an AI-assisted decision results in an adverse outcome (such as a rejection, a termination, or a denial of an opportunity), the employer must, within 30 days, provide the affected individual with a plain-language explanation of the AI’s role, the categories of data the system used, instructions on how to request correction of inaccurate personal data, and information on how to request human review.
- The right to human review. Workers and applicants who receive an adverse AI-assisted decision can request meaningful human review and reconsideration “to the extent commercially reasonable.” That human reviewer must have actual authority to override the decision, must be trained for the role, and cannot simply defer to the system’s output.
- Shared liability between developers and deployers. One of the most contested issues in the original law was who bears responsibility when AI goes wrong. The proposed rewrite splits liability based on relative fault. Developers are responsible for harms that arise from their systems being used as intended. Employers (as “deployers”) are responsible for their own independent decisions, including using AI in ways the developer didn’t intend or authorize. Indemnification clauses that would shift a party’s own liability to the other are void as against public policy.
- Enforcement stays with the AG – no private lawsuits. The Colorado Attorney General has exclusive enforcement authority. There is no private right of action under this law. Violators get a 90-day cure period before the AG can seek civil penalties, unless the violation was knowing or repeated.
- Effective date pushed. Finally, as noted, the effective date of the law would shift to January 1, 2027.
What This Means for Employers
If this proposal becomes law, the compliance picture (and timing) for employers changes significantly compared to the original 2024 law. Gone are the bias audit requirements and broad risk impact assessment mandates. What remains is a focused set of transparency and process obligations – notice, disclosure, correction rights, and human review – that are demanding but navigable.
What’s Next?
The framework still needs to be turned into legislation, introduced, and passed before the session ends in May. Prior rewrite attempts collapsed under competing pressures from lawmakers and lobbyists who weren’t part of the negotiating table, and the same dynamics could play out again.
That said, the proposal has meaningful momentum. It earned unanimous support from the working group and the Governor, and Senate Majority Leader Robert Rodriguez (the original bill’s sponsor) told reporters that while he sees areas needing adjustment, he is pleased the transparency and discrimination provisions remain in the proposal.
What Should Employers Do Now? 5 Action Steps
The bill isn’t law yet, so sweeping compliance overhauls aren’t warranted. But there are smart preparatory steps employers can take right now:
- Audit your AI tools. Take stock of every AI-assisted tool involved in hiring, performance evaluation, compensation, or other employment decisions affecting Colorado workers. Understand what each tool does, how it influences decisions, and what data it uses.
- Conduct a bias audit. Even if Colorado ultimately doesn’t require bias audits, they are still a good idea. Discrimination in employment decisions remains illegal under state and federal law, and we are seeing more discrimination lawsuits by applicants when employers use AI tools in the recruiting, screening, or interviewing process – regardless of whether the tool qualifies as an ADMT.
- Talk to your vendors. The proposed law places documentation obligations on AI developers. Ask your vendors now what they can provide: intended use cases, data categories, known limitations, and guidance on meaningful human review. If they can’t answer these questions, that’s important information.
- Watch the legislature closely. The session ends in May, and whether and how this framework moves through the Capitol will determine what employers actually need to do. The best way to stay up to speed is to ensure you are subscribed to the Fisher Phillips Insight System.
- Be aware of possible federal movement. The White House is actively attempting to block state AI laws like Colorado’s, and may even resort to suing to block this law. We’ll monitor any litigation and update employers as needed.
Conclusion
Make sure you are subscribed to Fisher Phillips’ Insight System to receive the most up-to-date information directly to your inbox. We will continue to monitor the situation and provide updates as they unfold. For more information, contact your Fisher Phillips attorney, the authors of this Insight, any attorney in our Denver office, or any attorney in our AI, Data, and Analytics Practice Group.

