Another Employer Faces AI Hiring Bias Lawsuit: 10 Actions You Can Take to Prevent AI Litigation

Insights

8.15.25

An unsuccessful job applicant is suing Sirius XM Radio in federal court, claiming the company’s AI-powered hiring tool discriminated against him based on his race. In Harper v. Sirius XM Radio, LLC, filed on August 4 in the Eastern District of Michigan, the plaintiff alleges that the company’s AI system relied on historical hiring data that perpetuated past biases – resulting in his application being downgraded despite his qualifications. The lawsuit accuses Sirius XM of violating federal anti-discrimination statutes through its use of AI tools in hiring – an allegation we’re seeing more and more often. Here’s what you need to know about this case and how it fits into the larger puzzle of workplace AI litigation – plus 10 best practices you should follow as a result.

Case Summary: Harper v. Sirius XM Radio

  • Court: U.S. District Court, Eastern District of Michigan (2:25-cv-12403)
  • Filed: August 4, 2025
  • Judge: Hon. Terrence G. Berg
  • Complaint: Available by clicking here

In Harper, a job applicant proceeding pro se (that is, representing himself without an attorney) filed suit in a Michigan federal court.

  • Arshon Harper alleges that Sirius XM relied on an AI-powered hiring system (iCIMS Applicant Tracking System) that embedded historical biases into its evaluation process, resulting in his rejection from approximately 150 positions despite his alleged qualifications in the IT field.
  • According to the complaint, the iCIMS Applicant Tracking System analyzed application materials and assigned scores based on data points that proxy for race (such as educational institution, home zip code, and employment history), which Harper contends disproportionately disadvantaged African-American candidates.
  • He claims this automated scoring caused his candidacy to be downgraded and his applications to be eliminated before he could advance to later stages of the hiring process.

Harper asserts two legal theories: disparate treatment, alleging intentional discrimination in the design or use of the AI tool; and disparate impact, claiming the tool’s outcomes had an unlawful discriminatory effect even if the bias was unintentional.

He brings race discrimination claims under Title VII of the Civil Rights Act and Section 1981, and also seeks to expand his lawsuit into a class action to sweep in all other similarly situated applicants. In addition to compensatory and punitive damages for lost wages and emotional distress, Harper seeks injunctive relief requiring Sirius XM to discontinue or significantly modify its use of the AI screening tool.

It is important to note that these are only allegations at this stage of the litigation. Harper’s complaint reflects his version of events only. Sirius XM has not yet filed its response to the lawsuit, and the company will have an opportunity to contest these claims in court. Additionally, the court has not made any findings of fact or law.

Other AI-Related Employment Matters We’re Tracking

Harper isn’t the first lawsuit filed against an employer over alleged AI misuse, and it won’t be the last. Here’s a quick roundup of the other ongoing matters we’re tracking involving allegations of AI misuse in the employment context.

Aon Consulting (ACLU administrative actions)

  • Posture: FTC complaint (May 30, 2024) and earlier EEOC charge – no lawsuit yet.
  • Allegations: ACLU challenges three Aon hiring tools (ADEPT-15, vidAssess-AI, gridChallenge) as discriminatory against people with disabilities and certain racial groups; also alleges deceptive “bias-free” marketing.
  • Why it matters: Employers could be liable for vendor bias, even if they didn’t design the tool.
    Read our full analysis ➜

Intuit / HireVue (ACLU charges in CO + EEOC)

  • Posture: Administrative charges filed March 19, 2025; investigations pending.
  • Allegations: Deaf Indigenous applicant claims automated video interview lacked proper captioning; denial of requested CART accommodation allegedly tainted results, raising ADA, Title VII, and state law issues.
  • Why it matters: Do accessibility and accommodation requirements apply equally to AI tools?
    Read our full analysis ➜

Mobley v. Workday (Nationwide ADEA collective in N.D. Cal.)

  • Posture: Certified as a nationwide ADEA collective on May 16, 2025.
  • Allegations: Screening algorithms allegedly disadvantaged applicants age 40+; earlier filings also alleged race and disability bias.
  • Why it matters: One vendor’s system can create mass exposure across many employers.
    Read our full analysis ➜

Epic Games / Llama Productions (SAG-AFTRA ULP – AI displacement)

  • Note: While this matter is not court litigation, it remains an important development to track – especially for unionized employers.
  • Posture: Unfair Labor Practice (ULP) charge filed May 19, 2025, alleging replacement of union voice actors with an AI-generated voice without bargaining.
  • Allegations: The union claims that the company’s use of AI to generate the voice of Darth Vader in its video game violates federal collective bargaining law.
  • Why it matters: AI substitutions can trigger NLRA issues, especially in unionized workplaces.
    Read our full analysis ➜

10 Action Steps for Employers Using AI in Hiring

1. Establish an AI Governance Program – Develop clear systems and guardrails for AI use before deployment, consistent with the NIST AI Risk Management Framework. Include a human oversight component, specify roles and responsibilities, and regularly evaluate outcomes to ensure your governance measures are working.

2. Vet and Audit Your Vendors – Require vendors to document their bias testing, data sources, and accessibility features. Build in contractual assurances for nondiscrimination, data transparency, cooperation with audits, and indemnification. If you need guidance on approaching this dynamic, a separate Insight of ours reviews the questions to ask at the outset of the relationship and along the way.

3. Be Transparent With Candidates – Consider clearly communicating when and how AI tools are used in the hiring process. Transparency may soon be required in some jurisdictions and is emerging as a best practice nationwide.

4. Offer and Publicize Accommodation Options – Consider whether you can offer applicants a path to request alternative assessments or human review. Options might include specialized equipment, alternative test formats, or modified interview conditions. This may not be possible in all circumstances or at every step of the application process, but document any accommodation pathway you offer and make it visible in job postings and application portals.

5. Align AI-Driven Questions With Job Requirements – When providing questions or criteria to an AI tool, make sure they relate directly to the role’s essential functions. Avoid irrelevant prompts or stock materials pulled from unknown sources that could introduce bias.

6. Retain Human Oversight in Decision-Making – Train HR and Talent Management teams to review and, when appropriate, override AI recommendations. Regularly audit hiring outcomes to ensure fairness and compliance with applicable laws.

7. Document Your Process and Rationale – Maintain detailed records of how hiring decisions are made, including the objective criteria used and any adjustments to AI-driven scores. Avoid relying on vague or opaque “fit scores” that can’t be explained.
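As a purely illustrative sketch (not legal guidance), here is one way a team might structure a per-decision record so that each screening outcome can later be explained and audited. All field names here are hypothetical, not a reference to any particular ATS:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional

@dataclass
class ScreeningDecisionRecord:
    """One auditable record per applicant screening decision (hypothetical schema)."""
    applicant_id: str                # internal ID, not a name, to limit PII in logs
    requisition_id: str              # the job posting being screened for
    criteria: list[str]              # objective, job-related criteria applied
    ai_score: Optional[float]        # raw tool output, if an AI tool was used
    ai_tool_version: Optional[str]   # vendor/model version, for later audits
    human_reviewer: Optional[str]    # who reviewed or overrode the AI output
    human_override: bool = False     # True if a person changed the AI-recommended outcome
    rationale: str = ""              # plain-language reason for the decision
    decided_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
```

The point of a structure like this is that every field answering “why was this candidate rejected?” is captured at decision time, rather than reconstructed after a charge is filed.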

8. Conduct Regular Accessibility Audits – Test your systems for compliance with disability accommodation requirements, and correct any accessibility gaps promptly.

9. Monitor for Disparate Impact and Adjust – Run periodic analyses to identify disparities across protected classes (age, race, gender, disability, etc.). Treat significant disparities as red flags and take steps to mitigate them.
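For the statistical piece, one common first-pass screen is the EEOC’s four-fifths (80%) rule of thumb: a selection rate for any group that is less than 80% of the highest group’s rate is generally treated as a red flag for adverse impact. Below is a minimal sketch of that check, assuming you can export applicant-level outcomes with self-reported demographic data; the group labels and data format are hypothetical:

```python
from collections import defaultdict

def impact_ratios(outcomes):
    """Compute per-group selection rates and ratios vs. the highest-rate group.

    `outcomes` is an iterable of (group_label, was_selected) pairs, e.g.
    exported from your applicant tracking system.
    """
    applied = defaultdict(int)
    selected = defaultdict(int)
    for group, was_selected in outcomes:
        applied[group] += 1
        selected[group] += int(was_selected)

    rates = {g: selected[g] / applied[g] for g in applied}
    top_rate = max(rates.values())
    # Four-fifths rule of thumb: flag any group whose selection rate
    # falls below 80% of the most-selected group's rate.
    return {g: {"rate": rate, "ratio": rate / top_rate, "flag": rate / top_rate < 0.8}
            for g, rate in rates.items()}

# Hypothetical data: Group B's ratio is 0.10 / 0.20 = 0.50 < 0.8, so it is flagged.
sample = [("A", True)] * 20 + [("A", False)] * 80 + [("B", True)] * 5 + [("B", False)] * 45
print(impact_ratios(sample))
```

A screen like this is a starting point, not a legal conclusion – statistically significant disparities should prompt deeper analysis and review with counsel.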

10. Stay Informed on Legal Developments – Track new legislation, court rulings, and agency guidance that could affect your AI practices. Make sure you are subscribed to Fisher Phillips’ Insight System to get regular updates about relevant developments.

Conclusion

We will continue to monitor AI litigation and related developments and provide the most up-to-date information directly to your inbox, so make sure you are subscribed to Fisher Phillips’ Insight System. If you have questions, contact your Fisher Phillips attorney, the authors of this Insight, or any attorney in our AI, Data, and Analytics Practice Group.

Related People

  1. Michael R. Greco
     Regional Managing Partner
     303.218.3655
  2. Karen L. Odash
     Associate
     610.230.2165

Service Focus

  • AI, Data, and Analytics
  • Litigation and Trials
  • Employment Discrimination and Harassment


©2025 Fisher & Phillips LLP. All Rights Reserved. Attorney Advertising.
