Protecting Privilege in the Age of Legal AI
The Heppner ruling made one thing clear: which AI tools attorneys use, and how they use them, can directly determine privilege outcomes. Consumer tools and purpose-built platforms are not the same.
Artificial intelligence is rapidly reshaping legal work, faster than many firms have had time to assess the risks. Tasks that once took hours or days can now be done in minutes, freeing attorneys to focus on higher-value judgment calls. But increased efficiency brings real risk and serious ethical stakes. That raises a question legal professionals can't afford to ignore: How do you adopt AI without putting attorney-client privilege at risk?
Navigating Bias and Confidentiality Risks in AI Tools
Bias is one of the harder risks to spot in legal AI. Consumer AI tools learn from historical data, and when that data reflects systemic inequities, their outputs can quietly carry those inequities forward, often in ways that are easy to overlook and with disproportionate consequences for marginalized clients and communities.
To maintain justice and fairness, attorneys must actively verify that the AI systems they use operate neutrally. Purpose-built, commercial-grade AI models address bias in the system itself, with safeguards tailored to the legal environment.
Another challenge arises when legal professionals adopt AI tools without fully understanding how they handle sensitive data. In most industries, using AI poorly might be a compliance issue; in law, it is a question of duty. Protecting attorney-client privilege when using AI isn't just a best practice; it's an ethical responsibility with serious repercussions if breached. When sensitive information is fed into AI tools without proper safeguards, it can trigger an AI privilege waiver, sometimes without anyone realizing it until it's too late.
In February 2026, that risk reached a federal courtroom. In United States v. Heppner, a judge ruled that 31 AI-generated legal documents were not protected by attorney-client privilege because the consumer AI tool used to create them shared data with third parties by design.
This ruling sent a clear signal: the real question isn't whether AI is being used, but how legal teams can use the right AI tools responsibly while safeguarding their clients.
The answer lies in understanding the structural difference between consumer AI tools and purpose-built legal platforms—and why that distinction now carries privilege implications that every litigation team needs to evaluate.
Why Attorney-Client Privilege Is Uniquely Vulnerable to AI
Attorney-client privilege protects communications between a lawyer and their client. But that protection hinges entirely on keeping those communications confidential.
Most large language models (LLMs) are designed to learn from user inputs. When an attorney pastes case-specific details into a consumer AI platform, that information may be stored or disclosed to third parties under the platform's terms of service. The attorney may not realize this is happening. The client almost certainly does not. And that puts all parties at risk.
Lessons from United States v. Heppner AI Privilege Ruling
Bradley Heppner learned this the hard way. Facing federal securities fraud charges in late 2025, the former CEO of GWG Holdings typed what he had learned from his defense attorneys into the consumer version of Claude, generating 31 documents analyzing potential defense strategies, and later shared them with his counsel. In February 2026, a federal judge ruled that none of those documents were privileged.
The court pointed to several key factors, including the absence of an attorney-client relationship with the tool and the platform's data practices, which permitted data collection and disclosure to third parties.
The risk extends beyond legal practice. In 2023, Samsung engineers inadvertently exposed proprietary source code and internal meeting notes by entering them into ChatGPT. Those leaks involved trade secrets, not privilege, but the mechanism is the same: information goes in, and what happens next depends entirely on the system receiving it.
What the Bar Requires
In July 2024, the ABA issued Formal Opinion 512, its clearest guidance to date on generative AI. Under this framework, voluntarily sharing privileged information with a third party, absent an applicable exception, can waive privilege. The opinion grounds responsible use of AI in legal practice in three Model Rules:
Rule 1.1 – Competence
Lawyers must understand the benefits and risks associated with the technology they use, including AI. Technological competence is not optional. In Heppner, the court did not ask whether the defendant intended to waive privilege. It asked whether the tool's data practices were understood before use. They were not. That is exactly the kind of gap Rule 1.1 exists to prevent.
Rule 1.6 – Confidentiality
Attorneys must take reasonable steps to prevent unauthorized disclosure of client information and obtain informed consent before entering any confidential data into self-learning AI tools. Boilerplate engagement-letter language isn't enough; informed consent requires thoughtful explanation and education. The Heppner ruling turned on this point directly: the platform's terms of service permitted data collection and disclosure to third parties, so the privileged information was shared the moment it was entered. No breach, no hack, just a terms-of-service agreement that no one read closely enough.
Rule 5.3 – Supervision
AI tools are treated as nonlawyer assistants. Attorneys remain responsible for how those tools handle client information; delegating a task does not delegate the ethical obligation. When Heppner used a consumer AI tool to analyze his defense strategy, there was no attorney supervising how that tool processed the information. The 31 documents it generated were treated as unprotected precisely because no lawyer controlled the workflow that created them.
Taken together, these rules establish that AI can be a powerful resource, but only with proper safeguards and human oversight. Missteps can lead to serious consequences, including confidentiality breaches and inadvertent privilege waivers.
State bar guidance reinforces this direction. North Carolina's 2024 Formal Ethics Opinion 1 and Texas Opinion 705 both address the duty to evaluate AI tools for confidentiality risks before use.
Attorneys have an affirmative duty to understand how the AI tools they use handle data. Ignorance is not a defense. Under ABA Opinion 512, it is a competence failure.
Questions Every Legal Team Should Ask Their AI Vendors
Not every AI tool carries the same risk. In Warner v. Gilbarco, Inc., a federal court reached the opposite conclusion from Heppner, finding that AI-assisted analysis remained protected under the work product doctrine.
What made the difference? The AI tool operated under counsel's direction, within a secure workflow, and was backed by contractual confidentiality protections. The court did not penalize the use of AI; it validated a disciplined, defensible approach to using it.
That distinction—between uncontrolled consumer tools and purpose-built, secure platforms—should guide how every legal team evaluates their AI workflows.
Before adopting any AI platform, legal teams should establish clear security standards and must be able to answer the following (the sketch after this list shows one way to turn these questions into a repeatable due-diligence checklist):
- Does the tool train on client data? Verify this in the contract, not the marketing copy.
- Where is data stored, and is it encrypted in transit and at rest?
- Are AI processes isolated from other users' data? Shared environments create exposure that isolated ones don't.
- What certifications does the vendor hold? ISO 27001 and SOC 2 Type 2 are the baseline indicators of independently verified security standards.
- What happens to data after the engagement ends? Retention and deletion policies are frequently overlooked and rarely negotiated.
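One way to make this diligence repeatable is to encode the questions as a structured checklist that procurement and IT can score the same way for every vendor. The Python sketch below is purely illustrative: the field names, the vendor, and the answers are all hypothetical, and the real answers belong in contracts and audit reports, not in code.

```python
from dataclasses import dataclass

@dataclass
class VendorAssessment:
    """Illustrative due-diligence record for a legal AI vendor (hypothetical fields)."""
    name: str
    trains_on_client_data: bool       # verify in the contract, not the marketing copy
    encrypted_in_transit: bool
    encrypted_at_rest: bool
    isolated_tenant_environments: bool
    certifications: set[str]          # e.g., {"ISO 27001", "SOC 2 Type 2"}
    retention_policy_documented: bool # what happens to data after the engagement ends

REQUIRED_CERTS = {"ISO 27001", "SOC 2 Type 2"}

def meets_baseline(v: VendorAssessment) -> bool:
    """Return True only if every baseline question has an acceptable answer."""
    return (
        not v.trains_on_client_data
        and v.encrypted_in_transit
        and v.encrypted_at_rest
        and v.isolated_tenant_environments
        and REQUIRED_CERTS <= v.certifications
        and v.retention_policy_documented
    )

# Hypothetical example: a consumer tool that fails the baseline.
consumer_tool = VendorAssessment(
    name="GenericConsumerAI",
    trains_on_client_data=True,
    encrypted_in_transit=True,
    encrypted_at_rest=False,
    isolated_tenant_environments=False,
    certifications=set(),
    retention_policy_documented=False,
)
assert not meets_baseline(consumer_tool)
```

Note that the check is all-or-nothing by design: a vendor that fails any single question fails the baseline, because there is no partial credit on privilege.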
These are not edge-case concerns. They are the baseline for evaluating legal AI data security and protecting privilege in any AI-enabled workflow.
Those questions have clear, verifiable answers. And the difference between a platform that can answer them and one that cannot is, after Heppner, the difference between a defensible workflow and an exposed one. Here is what a purpose-built approach to these questions looks like in practice.
How Prevail Approaches AI Security and Privilege Protection
Consumer AI models are powerful, but their data practices reflect general-purpose design; they are not built to safeguard privileged information. Using them for confidential legal work isn't a flaw in the technology. It's a tool mismatch, like storing privileged documents in an unlocked filing cabinet, and after Heppner that mismatch carries demonstrable privilege consequences.
Purpose-built legal AI exists precisely to answer those questions well. Platforms designed for law, like Prevail's CheckMate, integrate security and compliance from the ground up. With authority to operate with federal agencies, CheckMate is designed specifically to counter these risks, giving lawyers confidence that the right infrastructure is in place.
How Prevail Protects Confidential Client Data
Isolated AI Environments
Client data is never used to train models, retained in shared datasets, or exposed across users. The separation is built into the architecture, not a policy setting that can change in the next terms-of-service update.
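As a conceptual sketch of what tenant isolation can mean at the cryptographic level, consider deriving a distinct encryption key per client, so material encrypted for one tenant is unreadable under any other tenant's key. This illustrates the principle only and does not describe Prevail's internal architecture; the sketch assumes Python's cryptography library.

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

def derive_tenant_key(master_key: bytes, tenant_id: str) -> bytes:
    """Derive a distinct 256-bit key per tenant from a master secret."""
    return HKDF(
        algorithm=hashes.SHA256(),
        length=32,
        salt=None,
        info=tenant_id.encode(),  # binds the derived key to this tenant
    ).derive(master_key)

# Two tenants never share a key, so one client's ciphertext is opaque to another.
key_a = derive_tenant_key(b"master-secret-from-kms", "client-a")
key_b = derive_tenant_key(b"master-secret-from-kms", "client-b")
assert key_a != key_b
```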
Encryption in Transit and at Rest
All data is protected at every stage of the workflow. There is no unencrypted state.
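In practice, transit encryption is typically handled by TLS, while at rest the point is that stored content exists only as ciphertext. A minimal illustration of at-rest encryption, again assuming Python's cryptography library rather than describing Prevail's actual implementation:

```python
from cryptography.fernet import Fernet

# In production, keys would live in a managed KMS/HSM, never in source code.
key = Fernet.generate_key()
cipher = Fernet(key)

record = b"Privileged deposition excerpt"
stored = cipher.encrypt(record)  # only this ciphertext is ever written to disk
assert cipher.decrypt(stored) == record
```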
ISO 27001 Certified and SOC 2 Type 2 Attested
Prevail is the first vendor in the testimony management space to hold both, independently verified through regular third-party audits and penetration testing. Full details on Prevail’s security architecture are available at prevail.ai/security.
Platform-Based Workflows
Sensitive content stays within Prevail. It does not migrate to personal devices or unsecured tools.
Strict Access Controls and Authentication
User-defined permissions govern who sees what. SSO and SAML authentication integrate with existing identity management, and all access is logged and auditable.
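A hedged sketch of the pattern, with hypothetical names throughout: every access attempt is checked against user-defined permissions and written to an audit log whether it succeeds or not. In a real deployment, SSO/SAML authentication would establish the user's identity before this check ever runs.

```python
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
audit = logging.getLogger("audit")

# Hypothetical user-defined permissions: matter ID -> authorized users.
PERMISSIONS = {"matter-0042": {"alice@examplefirm.com"}}

def fetch_transcript(user: str, matter: str) -> str:
    """Check permissions before returning content, and log every attempt."""
    allowed = user in PERMISSIONS.get(matter, set())
    audit.info(
        "time=%s user=%s matter=%s access=%s",
        datetime.now(timezone.utc).isoformat(),
        user, matter, "granted" if allowed else "denied",
    )
    if not allowed:
        raise PermissionError(f"{user} is not authorized for {matter}")
    return "transcript contents"
```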
The Heppner court noted that enterprise AI tools with contractual confidentiality protections could present a materially different privilege analysis. Prevail's architecture is built to meet that standard.
Moving Forward With Purpose-Built AI Tools
The Heppner ruling did not change the law. It applied longstanding principles to a new and increasingly common behavior. AI is not going away. And the pace of AI adoption does not change the obligation to protect client information. It raises the stakes for meeting it.
The difference is not whether you use AI. It is whether the tools you use are designed to meet the standards your work requires. Use the wrong tool and you risk exposing privileged information without realizing it. Use the right one and you can capture everything AI offers without compromising confidentiality.
To read more about how Prevail answers every question on that checklist, schedule a demo.
Real-time deposition analysis is here. See CheckMate in action.
