
How to Use AI in Law Practice Without Violating Confidentiality

Learn how lawyers can use AI tools safely while protecting client confidentiality, maintaining compliance, and avoiding ethical violations.

Vivan Marwaha

Head of Marketing

Client confidentiality is the reason most attorneys hesitate before adopting AI tools, and it's a legitimate concern. Rule 1.6 requires attorneys to make reasonable efforts to protect client information from unauthorized disclosure, and that obligation extends to every platform and vendor they bring into their practice.

What happens to client data once it enters a third-party AI system is a question attorneys are responsible for answering before they act. Working through it is also what makes responsible AI adoption possible.

Why Confidentiality Concerns Matter with AI

The confidentiality risk in AI tools isn't hypothetical. Many platforms available to attorneys today operate as cloud-based systems that retain user inputs, and some use those inputs to improve their models. An attorney who submits a client contract or a deposition summary as part of a prompt may be feeding sensitive information into a system with no obligation to keep it private.

The lack of transparency around how some AI platforms handle data compounds the problem. Terms of service that technically disclose data retention are often not read before use, and the attorneys most likely to use consumer-grade tools are often the ones with the least institutional guidance about the risks involved.

Attorney-client privilege doesn't disappear the moment data leaves a firm's servers, but privilege protection requires that information remain confidential. Introducing client data into an unsecured third-party system creates a factual question about whether that condition was met.

What the Rules Say About Confidentiality

ABA Model Rule 1.6 governs confidentiality of client information. Under the rule, attorneys must make reasonable efforts to prevent unauthorized disclosure of information related to the representation.

The comments to Rule 1.6 extend that obligation explicitly to technology. Attorneys are responsible for evaluating the security practices of vendors and platforms they use when those platforms handle client information. The fact that a breach or disclosure originates with a third-party vendor doesn't shift responsibility away from the attorney who chose and used that vendor.

"Reasonable efforts" is the operative standard, and the ABA has indicated it requires attorneys to consider the sensitivity of the information involved and what additional safeguards are practically available. For AI tools handling client data, that standard calls for more than a glance at a platform's homepage.

When AI Use Becomes a Confidentiality Risk

The risk isn't evenly distributed across every AI use case in legal work. It concentrates around specific practices. Uploading contracts or pleadings directly into a public AI tool is the most common scenario where risk becomes concrete. Consumer-grade AI platforms were built for general use. The data protections appropriate for legal work aren't part of their default configuration.

Submitting deposition transcripts or medical records for summarization introduces personally identifiable client information into a system with unclear retention practices. Depending on the platform and the information involved, this can implicate both Rule 1.6 and applicable privacy regulations.

Using AI tools that explicitly retain prompt history or train on user inputs is a category of risk attorneys should understand before adopting any platform. Reviewing the data handling section of a vendor's terms before use is due diligence, not optional reading. 

Types of AI Tools and Their Risk Levels

Public AI Tools

General-purpose AI tools carry the highest confidentiality risk for legal work. They were designed for broad audiences and their data practices reflect that. Some retain prompt history by default. Some use inputs for model training. When client information goes into these tools, attorneys often have little visibility into what happens next.

That doesn't mean general AI tools have no place in legal work. Tasks that don't involve client data, such as researching general legal concepts or drafting internal communications with no client details, can be handled without creating a confidentiality problem.

Legal-Specific AI Platforms

Platforms built for legal work typically incorporate data protections that general tools don't. Many offer zero-retention agreements, meaning the platform won't store or train on inputs. Some carry compliance certifications relevant to professional services. The security posture is meaningfully different from consumer tools, and that difference matters when the work involves client documents. 

The due diligence is still necessary. Not all legal AI platforms are built to the same standard, and attorneys should confirm specific data handling practices before using any platform with client information.

Private or Enterprise AI Systems

Private deployment or enterprise configurations give firms the highest level of control over how AI handles data. These systems can be configured to process information within the firm's own infrastructure, with no third-party retention of client data.

For firms handling sensitive matters or subject to heightened regulatory requirements, this model represents the clearest path to AI adoption without confidentiality compromise. The implementation requires more investment than a platform subscription, but the trade-off is a precise understanding of where client data goes.

Best Practices for Protecting Client Data When Using AI

Limit the Client Information That Goes In

When AI assistance is needed for a specific matter, removing or substituting client names and identifying details before submitting a prompt reduces the information at risk without compromising the usefulness of the output.
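As a minimal illustration of this practice, a firm could substitute known client identifiers with neutral placeholders before any prompt leaves its systems. This is a sketch, not a substitute for a vetted redaction tool; the identifier list and placeholder scheme here are hypothetical:

```python
import re

def redact(text: str, identifiers: dict[str, str]) -> str:
    """Replace known client identifiers with neutral placeholders
    before text is submitted to any third-party AI tool."""
    for real_name, placeholder in identifiers.items():
        # Whole-word, case-insensitive match so partial names aren't altered
        text = re.sub(rf"\b{re.escape(real_name)}\b", placeholder,
                      text, flags=re.IGNORECASE)
    return text

# Hypothetical identifiers for a single matter
matter_identifiers = {
    "Acme Manufacturing": "[CLIENT]",
    "Jane Doe": "[WITNESS-1]",
}

prompt = "Summarize the indemnification clause between Acme Manufacturing and Jane Doe."
print(redact(prompt, matter_identifiers))
# → Summarize the indemnification clause between [CLIENT] and [WITNESS-1].
```

A simple substitution pass like this handles only the identifiers the firm knows to list; it won't catch indirect identifying details, which is why attorney review of each prompt still matters.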

Review Vendor Privacy Policies Before Adoption

The specific questions to answer are whether the platform retains inputs and whether data is used for training purposes. These terms determine whether using the platform with client information is compatible with Rule 1.6.

Use Platforms Built for Legal Work

The gap in data protection between consumer AI tools and legal-specific platforms is meaningful. Evaluating a platform's security posture is a manageable task, and it belongs earlier in the adoption process than most firms place it.

Restrict Internal Access

Limiting which attorneys and staff can use firm AI tools, and for which tasks, reduces the opportunity for inadvertent disclosure.

Put the Policy in Writing

Documented expectations about which tools are approved, which data can be used, and what review is required are the practical foundation for consistent confidentiality protection across the firm.

Attorney Data Security and Vendor Due Diligence

Choosing an AI vendor is a professional responsibility decision, and the questions attorneys ask before adoption should reflect that. The first area to evaluate is data storage and retention. Where is information processed? Does the vendor retain inputs after the session ends, and under what circumstances can vendor employees access stored data? These conditions determine whether using the platform with client information is compatible with Rule 1.6.

Encryption practices matter for data in transit and at rest. Vendors providing services to law firms should confirm that client information is encrypted throughout its lifecycle within their system.

Compliance certifications provide an independent reference point. SOC 2 Type II is a relevant standard for legal technology vendors, as it requires an independent audit of security controls. Other certifications may be relevant depending on the practice area.

Contractual protections should be confirmed before any client information enters a platform. A data processing agreement that includes confidentiality obligations and limits on data use gives attorneys a formal basis for the security representations the vendor has made.

Building a Safe AI Workflow for Law Firms

A safe AI workflow is built around a clear understanding of which parts of the work involve client information and which don't.

Summarizing publicly available case law or drafting internal process documents doesn't require client data. These tasks can be handled with AI assistance on any platform without confidentiality implications.

When client-specific work is involved, the workflow should begin with identifying what information actually needs to go into the AI tool. In many cases, the answer is less than attorneys initially assume. A contract clause can be reviewed for readability without including the client's name. A motion structure can be drafted from a sanitized fact pattern with identifying details removed.

Any AI output involving client matters should go through attorney review before use. Staff training is the operational piece that makes workflow policies functional. Attorneys and staff who understand why these practices exist are more likely to follow them consistently than those given a list of rules without context.

Common Mistakes Lawyers Make When Using AI

Assuming AI Tools Are Private by Default

Consumer AI platforms don't operate under the same confidentiality assumptions attorneys apply to internal systems. Treating a general-purpose AI tool as a private research environment creates real exposure.

Uploading Entire Case Files

The scope of information submitted in a prompt should match what the task actually requires. Submitting a full case file when a summary of the core facts would serve the same purpose introduces far more client information than necessary.

Skipping Vendor Policy Review

The data practices of an AI platform aren't disclosed in the interface. They're in the terms of service and privacy policy, and the provisions around data retention should be read before client information is involved.

Relying on Output Without Independent Review

If AI-generated output contains client information introduced during the session, that output needs to be reviewed before it goes anywhere outside the firm.

Operating Without an Internal AI Policy

Firms without clear guidance on AI tool use are leaving confidentiality decisions to individual judgment across every attorney and staff member using these tools. Written policies are what make consistent practice achievable.

Key Takeaways

AI can be used in legal practice without creating confidentiality problems. Getting there involves understanding which tools carry real risk and building workflows that control what client information enters AI systems in the first place. Vendor evaluation is part of that groundwork.

The obligation under Rule 1.6 doesn't adjust based on how useful a tool is. Attorneys who treat that obligation as the starting point for AI adoption, rather than an obstacle to it, end up with a more durable approach to both technology and client protection.

Have questions about protecting client confidentiality while adopting new technology? Contact August Law to discuss responsible innovation in legal practice.

Let's Talk Further

Request a demo or email us—we’ll spin up a live workflow for you, free of charge, in under a week.
