A New York court has ruled that content generated by the artificial intelligence platform Claude is not protected by attorney-client privilege, even when shared with legal counsel. The February 17, 2026, decision in United States v. Heppner (No. 25-cr-00503-JSR) marks a significant development in the emerging legal landscape surrounding the use of generative AI in legal practice, clarifying the boundaries of confidentiality when utilizing these tools.
The case centered on defendant Michael Heppner, who used the publicly available AI tool to analyze his potential legal exposure. After receiving AI-generated analyses, Heppner shared them with his defense team. When federal agents seized his computer during a search, the government sought access to the AI-generated content. Judge Jed S. Rakoff ultimately sided with the government, finding no basis for privilege or work-product protection.
The court’s decision hinged on several key factors. Crucially, the judge emphasized that Claude, as a third-party platform, offered no reasonable expectation of confidentiality. The AI-generated materials were not created at the direct instruction of counsel, nor were they created specifically to facilitate the provision of legal advice. Simply transmitting the AI’s output to an attorney did not retroactively imbue it with privilege, a point underscored by the court’s reasoning that the privilege applies to confidential communications between a lawyer and client, not to documents that later become useful to counsel.
This ruling aligns with established principles of attorney-client privilege, which protects confidential communications made for the purpose of seeking or providing legal advice. As the court made clear, AI systems themselves are not lawyers or clients, and communications with them are not automatically privileged. The decision reinforces that privilege is not a blanket shield extending to any document that eventually finds its way into the hands of an attorney.
The court likewise addressed the work-product doctrine, which protects materials prepared by or at the direction of counsel in anticipation of litigation. The judge found that the AI-generated materials did not qualify as work product, as they were neither prepared under counsel's direction nor reflective of the defense team's litigation strategy. This echoes a distinction courts are beginning to draw between AI data created at counsel's direction for litigation purposes and data generated independently for exploratory purposes, as highlighted in recent cases.
A separate case, Tremblay v. OpenAI, Inc. (No. 23-cv-03223-AMO), offered a contrasting perspective, though within a different context. In that case, plaintiffs alleging copyright infringement conducted targeted testing of ChatGPT before filing a lawsuit. The court determined that unused prompts, account data, and testing results constituted "opinion work product" prepared in anticipation of litigation. The court, however, limited waiver to only the specific prompts and outputs affirmatively relied upon in the pleadings, preventing a blanket waiver of all related materials.
Experts warn that the risk of waiving privilege is heightened when sensitive data is input into GenAI tools that retain data, reuse it, or use it for model training. To mitigate these risks, legal professionals are advised to use secure, enterprise-level GenAI platforms with robust confidentiality protections.
Practical recommendations for preserving privilege and work-product protection include treating GenAI as a supervised assistant, with prompts and outputs generated under counsel's direction and subject to review. Clearly labeling protected materials as privileged or work product is also advised, though such labels are not dispositive on their own. Maintaining detailed records of AI activity, including metadata, is crucial, as these logs could reveal litigation strategy. Consideration should also be given to incorporating GenAI data into electronically stored information (ESI) agreements and seeking Rule 502(d) orders to further minimize waiver risk.
As courts continue to grapple with the implications of AI in legal practice, the focus is likely to remain on the level of supervision, the purpose of the AI’s use, and the reasonable expectation of confidentiality. The Heppner decision serves as a cautionary tale, demonstrating that casual or unsupervised use of public GenAI tools can generate discoverable and unprotected material.