AI, Privilege, and the First Waiver Fight: What Heppner and Warner Really Mean for the Legal AI Landscape
Two federal rulings on the same day reached opposite conclusions about AI and privilege—mapping the emerging legal framework.

Ravi A Magia, Esq.
Legal Engineer

On the same February day in 2026, two federal courts looked at AI and privilege and reached seemingly opposite conclusions. One treated consumer AI use as a privilege killer while the other treated AI as a protected work-product tool. Together, they sketch an emerging legal framework: public, consumer AI is a third party that can destroy privilege, but AI used inside a properly structured, counsel‑directed framework can sit comfortably under traditional doctrine.
The Heppner Ruling: Consumer AI as a Privilege Risk
In United States v. Heppner (S.D.N.Y.), a criminal defendant used the free, consumer version of Anthropic’s Claude to research his case. He fed in information from his lawyers, generated dozens of AI-written analyses about potential defenses, and shared them with his legal team. When the FBI later seized his devices, those documents were on them. The government moved for a ruling that the AI-generated documents were not protected.
Judge Jed Rakoff agreed that neither attorney-client privilege nor work-product protection applied:
No attorney–client relationship with AI: Claude is not a lawyer, and its terms expressly disclaim providing legal advice.
No reasonable expectation of confidentiality: Anthropic’s consumer privacy terms allow collection of user inputs, model training on that data, and disclosure to third parties and even government authorities. On that basis, the court found that using consumer Claude destroyed the confidentiality needed for privilege.
No work product without attorney direction: The documents were created by Heppner on his own initiative, not by or at the direction of counsel, so they did not qualify as protected work product.
Judge Rakoff did, however, leave an opening. He noted that if counsel had directed Heppner to use Claude, the tool might arguably function like a nonlawyer agent assisting the attorney, bringing it closer to traditional privilege doctrine.
The Warner Ruling: AI as a Protected Litigation Tool
That same day, in Warner v. Gilbarco, Inc. (E.D. Mich.), Magistrate Judge Anthony Patti reached a very different conclusion on AI and work product. The plaintiff, Sohyon Warner, is a pro se employment-discrimination litigant – both the client and her own lawyer – who used OpenAI’s ChatGPT to help prepare her case.
The defendants sought sweeping discovery: essentially everything related to her use of third-party AI tools in connection with the lawsuit. Judge Patti said no on two independent grounds:
Work product protection: Warner’s AI use was part of her case preparation and therefore fell within Rule 26(b)(3)(A)’s protection for materials prepared in anticipation of litigation.
Proportionality and relevance: The request was neither relevant nor proportional under Rule 26(b)(1) and would have exposed Warner’s internal analysis and mental impressions rather than underlying evidence.
The key move in Warner was Judge Patti’s treatment of waiver. Defendants argued that sending prompts and drafts to ChatGPT disclosed work product to a third party. Judge Patti rejected that framing on two fronts:
Privilege waiver vs. work-product waiver: Voluntary disclosure to a third party may waive attorney–client privilege, but work-product waiver generally requires disclosure to an adversary or in circumstances where disclosure to an adversary is likely.
AI is a tool, not a person: The opinion characterized ChatGPT and similar systems as tools, not “persons,” even if humans administer the service in the background. If there is no person on the other end, there is no third party to whom work product has been disclosed.
On that reasoning, using AI as part of one’s internal drafting and strategy process does not, by itself, waive work-product protection.
Why These Opinions Matter
The two rulings certainly do not share identical facts – a criminal defendant acting outside counsel’s direction vs. a pro se civil litigant whose AI use was itself her litigation preparation – but together they expose the core questions that will shape AI and privilege law:
Is an AI platform a third party, or is it a tool through which a party or lawyer develops their own analysis?
Do broad consumer terms of service, standing alone, eliminate any reasonable expectation of confidentiality?
How should doctrine distinguish between client-initiated AI use and AI work directed and supervised by counsel?
Read literally, Heppner’s reliance on consumer privacy terms could sweep far beyond AI, potentially destabilizing the confidentiality assumptions underlying everyday cloud tools. Warner, by contrast, anchors the analysis in longstanding work-product principles: what is the user actually doing, and would compelled disclosure expose their mental impressions?
Practical Takeaways for Law Firms and Legal Teams
Until more courts weigh in, firms and legal departments can reduce risk and still capture AI’s benefits by focusing on a few concrete steps.
1. Get consumer AI out of client work
Free or standard consumer tiers of tools like Claude, ChatGPT, and Google’s Gemini were not designed for privileged legal work; their terms typically allow expansive data use and sharing. After Heppner, treating those tools as safe for client-sensitive prompts is untenable.
Address this with technical controls, not just a policy memo: block or restrict consumer AI on firm devices, monitor for “shadow AI” usage, and route client-related AI work through approved channels.
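For firms with in-house tooling, the blocking step can be as simple as a hostname check in an egress proxy or endpoint agent. A minimal sketch of that check – the domain list here is illustrative, not exhaustive, and any real deployment would maintain it centrally:

```python
# Illustrative blocklist check for consumer AI endpoints.
# The domains below are examples only; a production list would be
# maintained and updated by the firm's security team.
CONSUMER_AI_DOMAINS = {
    "chat.openai.com",
    "chatgpt.com",
    "claude.ai",
    "gemini.google.com",
}

def is_blocked(hostname: str) -> bool:
    """Return True if the hostname or any parent domain is on the blocklist."""
    parts = hostname.lower().split(".")
    # Check the full hostname and each parent domain against the set.
    return any(".".join(parts[i:]) in CONSUMER_AI_DOMAINS for i in range(len(parts)))

print(is_blocked("claude.ai"))        # True
print(is_blocked("docs.google.com"))  # False
```

The same check works whether it runs in a proxy plugin, a DNS filter, or a device-management agent; the point is that enforcement lives in infrastructure rather than in a policy memo.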
2. Use enterprise AI that matches existing cloud confidentiality models
The problem in Heppner was not AI per se, but consumer-grade terms. Enterprise legal AI platforms are built more like other trusted legal infrastructure: tenant isolation, no training on customer data, and clear contractual confidentiality obligations with no unilateral disclosure rights.
Courts have long accepted similar models for cloud email, document management, and eDiscovery. Enterprise AI that follows the same pattern is on far firmer ground than consumer chatbots used ad hoc.
3. Make attorney direction and review explicit
Judge Rakoff’s opinion turned in part on the absence of attorney involvement, while expressly leaving open the possibility that AI used at counsel’s direction could be treated like a traditional agent. That is a strong signal.
Build “lawyer in the loop” into your AI workflows: require attorney supervision for AI-assisted analysis, ensure outputs are reviewed and revised by counsel, and document the lawyer’s role so you can later show that the work was prepared by or at counsel’s direction.
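Documentation is easier to produce later if it is captured at the moment of review. One way to make the supervision record concrete is a short structured log entry per AI-assisted work product; the field names below are hypothetical, chosen to capture who directed the work, who reviewed it, and when:

```python
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import json

@dataclass
class AISupervisionRecord:
    """Hypothetical record of attorney direction over an AI-assisted document.

    Captures enough to later show the work was prepared by or at
    counsel's direction: matter, document, directing and reviewing
    attorneys, the tool used, and the review timestamp.
    """
    matter_id: str
    document: str
    directing_attorney: str
    reviewing_attorney: str
    tool: str
    reviewed_at: str

record = AISupervisionRecord(
    matter_id="2026-0042",
    document="draft_motion_outline.docx",
    directing_attorney="A. Partner",
    reviewing_attorney="A. Partner",
    tool="enterprise-ai",
    reviewed_at=datetime.now(timezone.utc).isoformat(),
)
print(json.dumps(asdict(record), indent=2))
```

Whether this lives in a document-management system field or a standalone log matters less than that it exists contemporaneously, rather than being reconstructed after a discovery dispute arises.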
4. Talk to clients about how they use AI
Corporate clients are already using AI tools in contracts, deals, investigations, and HR. From a discovery and privilege perspective, unsupervised client prompts to consumer tools are now a live issue.
Ask what platforms they use, how data is handled, and whether training is enabled. Provide written guidance on what is appropriate for sensitive matters and, where necessary, bring AI-assisted work under your direction and into your enterprise tooling.
5. Update AI policies and keep watching the law
If your firm does not yet have a clear policy for AI in client work, these rulings make that gap hard to defend. Policies should cover which tools are allowed, when enterprise versions are mandatory, what data can be used, and how attorney oversight works.
This area will evolve quickly. Differences between criminal and civil contexts, consumer and enterprise tools, and attorney-directed versus client-initiated use will matter. Treat policies and training as living documents.
Where August Fits
The emerging pattern in these cases points toward a model that many firms are already adopting: AI embedded in existing legal workflows, running on enterprise infrastructure, and operating under attorney direction.
August is built for exactly this legal landscape. Firms use August to power litigation and transactional workflows – from chronologies and deposition prep to contract review and due diligence – in environments designed for confidentiality: isolated deployments, no training on customer data, and deep integrations with tools like Microsoft Word and Outlook. Our Playbook builder lets firms encode their own standards and frameworks into repeatable AI workflows that can be safely extended to clients under the firm’s supervision.
For legal teams, the question is no longer whether to use AI, but how to use it without creating avoidable privilege and discovery risk. These first AI privilege rulings suggest a clear direction: move away from unsupervised consumer tools and toward supervised, enterprise-grade AI like August that fits comfortably within existing privilege and work-product doctrine. And as federal courts increasingly express frustration – and issue sanctions – over hallucinated briefs that submitting lawyers failed to vet, choosing a legal AI tool like August that grounds its output in verifiable citations becomes critical.
If you’d like to see what that looks like in practice, you can schedule a brief demo or run a limited evaluation of August on real matters.
Case citations: United States v. Heppner, No. 25‑cr‑503 (JSR) (S.D.N.Y. Feb. 17, 2026); Warner v. Gilbarco, Inc., No. 2:24‑cv‑12333 (E.D. Mich. Feb. 10, 2026).
About August:
August is a workflow-native AI platform built for legal teams. ISO 27001 certified, zero training on customer data, single-tenant architecture. Learn more at august.law.