
Is AI Ethical for Lawyers? What the ABA Says About AI in Legal Practice

Is AI ethical for lawyers? Learn what the ABA says about AI in legal practice, including confidentiality, supervision, and ethical responsibilities.

Vivan Marwaha

Head of Marketing

If you've been holding off on AI tools because you weren't sure where the ethics landed, the ABA has weighed in. AI use in legal practice is permitted, and the professional rules attorneys already operate under cover it the same way they cover everything else in your practice. Understanding which rules apply and where attorneys have run into trouble can help you use AI confidently and correctly.

Why AI Ethics Has Become a Major Issue for Lawyers

A few years ago, AI in legal practice meant predictive analytics and e-discovery platforms. Today it means drafting assistants and research tools available to attorneys at firms of any size with a subscription.

AI’s proliferation happened fast enough that courts saw its consequences before most bar associations had issued formal guidance. Judges began asking whether AI was used to prepare filings, and attorneys started turning up in sanctions proceedings for submitting research they hadn't verified. The judicial system was paying attention before the profession had finished deciding what its official position was.

Disclosure requirements are already in place in several jurisdictions, and that list continues to grow. The ethics question stopped being theoretical the moment attorneys started getting sanctioned.

What the ABA Model Rules Say About AI

The ABA's approach has been consistent: apply the existing Model Rules to AI. The coverage those rules provide is more complete than many attorneys realize.

Rule 1.1 — Competence

Rule 1.1 requires competent representation. Comment 8 was amended in 2012 to add keeping abreast of "the benefits and risks associated with relevant technology" to what competence requires. That language applies directly to AI, including understanding where it fails.

Rule 1.6 — Confidentiality

Rule 1.6 governs confidentiality of client information. Every platform an attorney uses to handle client data falls within its reach, including AI tools with data retention practices that may conflict with privilege protections. 

Rules 5.1 and 5.3 — Supervision

These rules address supervision of attorneys and non-attorney staff. The supervising attorney is responsible for work product leaving the firm regardless of how it was produced, including first drafts generated by AI. 

Rule 3.3 — Candor Toward the Tribunal

Rule 3.3 requires candor in court filings. Submitting fabricated citations, regardless of their source, falls within it. This is the rule at the center of the most consequential AI sanctions cases to date. 

The through line across these rules is professional judgment. AI is a tool that requires it, and the rules were written to govern exactly that. 

ABA Guidance on Technology Competence

Technology competence, as the ABA defines it, is practical rather than technical. Attorneys need to understand what a tool does in practice and where it fails. 

AI tools used in legal work can produce output that reads as authoritative while being factually wrong, and better prompting reduces errors at the margins without eliminating them. An attorney reviewing AI output without that understanding has no reliable basis for catching errors before they become professional responsibility problems.

The ABA's position is unambiguous: blind reliance on any tool is incompatible with the duty of competence, and attorneys must verify outputs before use. That obligation doesn't scale back based on how confident the output looks.

Confidentiality and Client Data Risks

Consumer-grade AI tools were built for broad audiences, and their data practices reflect that. Many operate under terms of service that allow inputs to be retained and used in ways incompatible with attorney-client privilege. 

Privileged information entered into a third-party system without appropriate protections can lose those protections. Vendor security practices vary considerably, and the ABA makes clear that evaluating those practices before adoption is the attorney's responsibility.

Before using any AI tool with client data, attorneys should ask vendors direct questions: How is data stored? Who can access it? Does a formal data processing agreement exist that addresses professional responsibility requirements? Unclear answers are informative.

Supervisory Responsibilities When Using AI

The obligation to supervise extends to technology. Rules 5.1 and 5.3 establish that supervising attorneys are responsible for the work product that leaves their firm, and AI-generated first drafts fall within that responsibility.

For attorneys managing junior staff, there's a specific consideration worth accounting for: associates and paralegals who are comfortable with AI tools may also be more susceptible to overreliance on them. Reviewing their AI-assisted work requires enough familiarity with where those tools fail to catch what they might miss.

Establishing internal expectations in writing is what gives the supervisory structure substance. An informal understanding about appropriate AI use, however well-intentioned, can leave attorneys exposed.

Real-World Examples of AI Ethics Failures

Two federal cases became the clearest illustrations of what AI ethics failures look like in practice.

In Mata v. Avianca, attorneys submitted a brief citing cases that ChatGPT had fabricated, complete with docket numbers and party names that didn't correspond to actual cases. The attorneys hadn't verified any of them. The court sanctioned them, and the case became the most widely circulated example of what skipping verification looks like when it goes wrong. 

Park v. Kim, decided by the Second Circuit in early 2024, involved an AI-generated brief submitted without adequate attorney review. The court referred the matter for potential discipline.

Both cases prompted courts to begin issuing standing orders requiring disclosure of AI use in filings. That trend has continued across jurisdictions and shows no sign of slowing.

Best Practices for Ethical AI Use in Law Firms

The ABA's ethical framework for AI doesn't foreclose its use. Attorneys who understand where the obligations fall can work within them productively. 

Treat AI as a first-pass tool. Use it for research and drafting, then apply attorney judgment on accuracy and legal soundness before anything goes out. The output is a starting point.

Verify every citation independently. Every case an AI tool returns should be confirmed in a reliable legal database before it appears in a filing. The Mata sanctions exist as a practical reminder of what skipping that step produces. 

Evaluate tools before using them with client data. Confidential materials should only go into platforms built for legal use, with explicit data protections in place. Review vendor terms and data processing practices before adoption. 

Build a written AI policy. It should specify which tools are approved, for which tasks, and what review is required before work leaves the firm.

Train everyone who uses these tools. An AI policy that goes unread offers no protection.

The Future of AI Ethics in the Legal Profession

The ABA's decision to apply existing rules to AI signals a deliberate pace. State bars in several jurisdictions moved faster, and the ABA's Formal Opinion 512, issued in July 2024, now addresses generative AI directly at the national level, with more specific guidance likely to follow.

Judicial expectations are developing on their own timeline. Disclosure requirements vary significantly by court today, and attorneys can reasonably expect those requirements to become more standardized, with greater specificity about what adequate supervision of AI-assisted work looks like.

Transparency requirements are also expanding outside the legal profession in ways that will eventually inform bar guidance. Firms that have established considered internal practices now will have less adjusting to do when formal requirements arrive.

Key Takeaways

AI use in legal practice is ethical and permitted. The ABA has made that clear by applying the Model Rules to it rather than restricting it.

What those rules require is consistent with good practice regardless of what tools are being used: understand the technology, protect client data, review what goes out under your name, and maintain oversight of everyone in your firm who touches client work. Attorneys who do that with AI are practicing responsibly, and the work product is better for it.

Have questions about using AI tools in your legal practice while maintaining compliance with professional responsibility rules? Contact August Law to discuss responsible adoption strategies.

Let's Talk Further

Request a demo or email us—we’ll spin up a live workflow for you, free of charge, in under a week.
