Big Law Warns After Court Rules AI Chats Aren’t Privileged

In United States v. Heppner, a federal judge found private chats with Anthropic’s Claude lack attorney‑client privilege. Major firms warned clients and updated engagement letters.

In February, Judge Jed Rakoff of the U.S. District Court for the Southern District of New York ruled that private conversations a defendant had with Anthropic’s chatbot Claude were not protected by attorney‑client privilege. The defendant, Bradley Heppner, had used Claude on his own after receiving a grand jury subpoena; the FBI later seized 31 documents generated in those chats from his home.

Rakoff gave three reasons the chats lacked privilege: Claude is not a licensed attorney; Anthropic’s privacy policy allows sharing user data with third parties; and Heppner used the chatbot without direction from counsel. The judge wrote that no attorney‑client relationship “could exist” between a user and a platform such as Claude. He left open the possibility, however, that a chatbot used at counsel’s direction “might arguably be said to have functioned in a manner akin to a highly trained professional who may act as a lawyer’s agent within the protection of the attorney‑client privilege.”

More than a dozen large U.S. law firms issued client advisories, embedded privilege warnings in engagement agreements, and set new rules on client use of AI tools after the ruling. New York firm Sher Tremonte added language to a March engagement letter stating that “disclosure of privileged communications to a third‑party AI platform may constitute a waiver of the attorney‑client privilege.”

Law firms advised clients to avoid entering legal questions into public chatbots and to use closed, enterprise AI systems when available. Some firms recommended that if a lawyer directs a client to use an AI tool, the client should record that direction in the chatbot prompt, for example by writing “I am doing this research at the direction of counsel for X litigation.” Debevoise & Plimpton recommended that approach to help preserve privilege under the Kovel doctrine, which can extend protection to non‑lawyers acting as an attorney’s agent.

Courts have issued conflicting opinions on whether AI chats can receive legal protection. In Warner v. Gilbarco, a judge found a self‑represented plaintiff’s ChatGPT conversations qualified as work product, reasoning that software is a tool and using it does not automatically disclose information to an adversary. A Colorado court in Morgan v. V2X reached a similar result for a pro se litigant but ordered the plaintiff to disclose which AI tool was used and barred feeding confidential discovery into platforms that permit data training.

Legal practitioners describe a developing split: represented parties who consult consumer chatbots on their own risk waiving privilege, while self‑represented litigants in some civil cases may receive protection for AI‑assisted materials. Justin Ellis of MoloLamken said additional judicial decisions will be needed to clarify when AI chats can be treated as privileged.

Separately, some courts are testing AI to assist judges with heavy dockets. The Los Angeles Superior Court is piloting a tool called Learned Hand to summarize filings, organize evidence and draft rulings in civil matters.

Firms continue to revise engagement letters, send client notices and provide step‑by‑step guidance on using AI safely while litigation over how privilege applies to AI proceeds through the courts.