Can Your AI Chat History Be Used Against You in a Lawsuit? 5 Practical Takeaways for Employers as Courts Start to Split
If you or your employees use ChatGPT or other generative artificial intelligence (GenAI) tools during a lawsuit, are the AI chat histories and other archived data fair game during discovery, or are they protected by the attorney-client privilege or work-product doctrine? As the use of GenAI tools expands into business operations and transforms employment litigation, more courts are beginning to address this critical question. We’ll cover two recent federal court decisions that reached nearly opposite conclusions and offer five practical takeaways for employers.
Quick Background
During the discovery phase of a lawsuit, the parties are required to collect and exchange evidence, including electronically stored information (ESI) to understand the facts of the case. However, discovery requests are limited by relevance, proportionality, and other rules, and certain materials and communications are protected. For example:
- The attorney-client privilege shields certain communications from discovery if they are between a client and their attorney, intentionally kept confidential, and for the purpose of obtaining or providing legal advice.
- The work product doctrine protects certain materials from discovery if they were prepared in anticipation of litigation, such as legal strategies, notes, and analyses.
Just as we predicted in our FP Forecast 2026, AI-generated ESI – especially from notetakers, meeting summaries, auto-drafted emails, and chat assistants – is becoming a core discovery battlefield in employment cases, as highlighted by the two recent cases discussed below. We also recently covered the rise of the “ChatGPT plaintiff,” how AI is transforming employment litigation and driving up defense costs, and what in-house counsel can do about it.
Warner v. Gilbarco, Inc. – Pro Se Plaintiff’s ChatGPT Interactions Were Protected “Work Product”
In an ongoing employment discrimination case, a federal court in Michigan ruled on February 10 that materials reflecting the pro se plaintiff’s use of ChatGPT to help draft filings for the lawsuit were protected by the work-product doctrine.
Key Facts and Arguments
A woman sued her former employer in 2024, alleging gender, race, and national origin discrimination, as well as retaliation, under Michigan and federal law. She filed her suit pro se, which means she represented herself without the aid of a lawyer. During discovery, the company asked the court to compel the plaintiff (who admitted during a deposition to using ChatGPT to help draft filings for the lawsuit) to produce “all documents and information concerning her use of third-party AI tools in connection with this lawsuit” – including, for example, her prompts/queries and the AI outputs.
- The company argued that the ChatGPT materials were relevant, discoverable, and not protected by the work product doctrine since the plaintiff is self-represented and not an attorney, and because she voluntarily disclosed such information to ChatGPT, a third party. In a December 23 court filing, the company claimed that “any ruling to the contrary would effectively confer upon generative AI tools the status of ‘attorney,’ and would give plaintiffs the encouragement and license to displace actual human lawyers with generative AI tools and assert privilege and work product protections for all communications with the tool.”
- The plaintiff objected, arguing that such discovery requests were for her “internal analysis and mental impressions – i.e., her thought process – rather than any existing document or evidence, which is not discoverable as a matter of law.”
How the Court Ruled
The court rejected the company’s arguments and sided with the plaintiff, holding that the ChatGPT materials were not discoverable because they were not relevant (or, even if relevant, not proportional) and, in any event, protected by the work-product doctrine.
- The court said that the pro se plaintiff was permitted to assert work-product protection, and that she did not waive it by using ChatGPT because such waiver must be “to an adversary or in a way likely to get in an adversary’s hand” and “ChatGPT (and other generative AI programs) are tools, not persons, even if they may have administrators somewhere in the background.”
- The court also agreed with the plaintiff that the company’s theory “would nullify work-product protection in nearly every modern drafting environment, a result no court has endorsed.”
The court therefore declined to overrule the plaintiff’s attorney-client privilege and work-product objections to the AI materials, though it did not specifically address the privilege issue.
United States v. Heppner – No Privilege or Protection for Criminal Defendant’s AI-Generated Legal Advice
One week after the Warner v. Gilbarco decision, a federal district court in New York reached a nearly opposite conclusion. This court held that when an AI user communicates with a publicly available AI platform in connection with a pending criminal investigation, the communications are not protected by the attorney-client privilege or the work-product doctrine. While the circumstances in this case differ significantly from those in the case discussed above, the court’s approach demonstrates how disputes over whether certain AI materials are privileged or protected may play out differently across jurisdictions.
Key Facts and Arguments
After Bradley Heppner was indicted on various fraud-related charges, FBI agents arrested him and executed a search warrant at his home. During that search, the agents seized numerous documents and electronic devices, including documents reflecting Heppner’s communications with Claude, a GenAI platform operated by Anthropic. (Heppner pleaded not guilty to all charges and is awaiting trial, which is set to begin in April.)
- The defendant’s counsel asserted privilege over the Claude communications, claiming that Heppner created those documents for the purpose of speaking with counsel to obtain legal advice and by inputting information he had learned from counsel. The attorney, however, admitted that he had not directed Heppner to run Claude searches.
- The government agreed to temporarily set the Claude documents aside and asked the court to rule that neither the attorney-client privilege nor the work-product doctrine protected them.
How the Court Ruled
The court held that the Claude communications were not protected by the attorney-client privilege because they were:
- not between Heppner and his counsel – and the court even quoted this recent commentary (JOLT Digest) to suggest that communications with platforms like Claude could never be privileged because all recognized privileges require “a trusting human relationship.”
- not confidential, especially given that Claude users must consent to the company’s privacy policy, which states that Anthropic reserves the right to collect users’ inputs and Claude’s outputs and disclose them to various third parties – including governmental regulatory authorities.
- not created by Heppner for the purpose of obtaining legal advice, as his lawyer did not direct him to communicate with Claude (though the court noted that if that had been the case, this third prong would’ve been a closer call as “Claude might arguably be said to have functioned in a manner akin to a highly trained professional who may act as a lawyer’s agent within the protection of the attorney-client privilege”).
The court also ruled that the work-product doctrine did not apply because even if the Claude documents were prepared “in anticipation of litigation,” Heppner was not acting as his counsel’s agent because he “communicated with Claude of his own volition” (and his counsel conceded that the Claude documents did not reflect his strategy at the time they were created).
5 Practical Takeaways for Employers
Though these cases reflect two courts’ initial forays into this very new subject matter, a split appears to be emerging over whether interactions with generative AI tools can be protected by the attorney-client privilege or the work-product doctrine. As a result, employers may benefit from reassessing how GenAI is used across their organizations. Here are five steps you should consider taking now to help protect – or, when advantageous, leverage – AI-related materials in litigation:
- Vet your AI vendor before deploying their technology. When choosing an AI vendor for your organization, evaluate critical areas to ensure the AI system won’t expose you to unanticipated legal liability or disastrous reputational harm. (For example, the Heppner decision highlights how using AI platforms may be viewed as disclosing information to a third party – potentially waiving privilege.) We covered the essential questions to ask your AI vendor, and for more personalized assistance throughout the entire selection process, contact a member of our AI, Data, and Analytics Team.
- Limit or ban GenAI usage for sensitive or confidential HR matters. AI use for business purposes or day-to-day operations is unlikely to be viewed by courts as privileged information, so you should consider training your managers and HR staff to limit or avoid AI use for hiring, firing, or other personnel matters that could potentially become relevant in litigation. This may be especially true in jurisdictions seeking to regulate, if not outright ban, the use of AI when making such determinations (such as in California, where lawmakers are once again considering a “No Robo Bosses” bill – read more here). Beyond HR-specific protocols, make sure to refresh your broader policies as needed (here are 10 things all workplace AI policies should include).
- Tread carefully when using AI for legal advice. You should train your leadership (from C-level executives to day-to-day managers) on the risks tied to using GenAI for legal advice. When non-lawyers use GenAI to research a new legal claim or to help draft communications with a plaintiff’s attorney before engaging defense counsel – or without involving legal counsel at all – those AI chat histories are very likely discoverable and could become evidence used against the company in court. Consider enacting policies for management that require any AI use for legal advice to be directed or approved by in-house or outside counsel and conducted within secure, company-approved systems. Searches and queries should be clearly labeled as prepared at the direction of counsel – but again, tread carefully here as this is a new and unsettled area of the law.
- Anticipate discovery risks and determine the impact on record retention. Work with counsel to identify where AI data resides and how to preserve it appropriately. If you have questions, contact any member of our eDiscovery and Digital Workplace team.
- Make AI-related discovery requests when advantageous. If opposing parties have used AI tools, their prompts and outputs may be discoverable if they are relevant to the claims or defenses in the case. As the use of GenAI becomes more ubiquitous, employees are likely to rely on it for answers to everyday questions outside their areas of expertise, including legal issues. We recommend working with counsel to challenge privilege claims, especially if the plaintiff used third-party AI tools without any attorney direction.
Want more? Last month, the DOL released a roadmap for training workers for AI literacy, and FP’s Dave Walton testified on Capitol Hill about what responsible AI use means for employers. Our attorneys also recently covered best employer practices for using AI-driven tools to screen resumes, moderate job interviews, conduct gamified hiring assessments, scan candidates’ social media, boost employee engagement and retention, and support employee performance management.
Conclusion
This emerging question – whether AI-generated legal advice can be protected by the attorney-client privilege or work-product doctrine – is poised to become a frequently litigated issue and may be treated inconsistently across jurisdictions. We will continue to monitor developments in this area and provide updates as warranted, so make sure you are subscribed to Fisher Phillips’ Insight System to get the most up-to-date information directly to your inbox. If you have questions, please contact your Fisher Phillips attorney, the authors of this Insight, or any member of our Litigation Practice Group or our AI, Data, and Analytics Practice Group.