
A Lawyer’s Practical Guide for Malpractice Insurance Compliance in the Age of AI

First Drafts Team
Our In-House Panel of Lawyers, Engineers, and Other Experts


Introduction

The legal profession is on the cusp of another technological revolution, this time driven by artificial intelligence (AI). From research and drafting to document review, AI tools promise unprecedented efficiency. With these powerful capabilities, however, come new and significant risks, particularly concerning professional malpractice and client confidentiality. Malpractice insurers are beginning to ask about AI policies, but specific mandates are likely still a few years away, pending more data from AI-related malpractice cases. That gap creates a crucial window for law firms to proactively implement robust AI governance frameworks, not just to satisfy future insurer demands, but to safeguard their practice and clients now.

The Current Landscape

Malpractice insurers are in a transitional phase. They recognize the need for AI policies but lack sufficient data from malpractice cases to mandate specific requirements. This creates both an opportunity and a challenge for law firms. The opportunity lies in proactively establishing best practices before rigid requirements emerge. The challenge is navigating this uncharted territory without clear guidance.

The core challenge lies in the very nature of many generative AI models. They can produce polished, coherent text that gives an "illusion of competence," yet be riddled with inaccuracies, most notably "hallucinated" case citations: references to legal authorities that do not exist. Recent cases involving AI-generated briefs with hallucinated citations underscore the urgency of this issue. These incidents typically follow a pattern: an attorney uses AI to generate polished-looking work product, the hallucinated citations go undetected through traditional review, and the errors are discovered only when opposing counsel or the judge attempts to verify the cases. This isn't just an academic concern; it is leading to real-world consequences, such as judges striking briefs laden with phantom cases.

Step 1: Transparency In Use

One of the most critical yet overlooked aspects of AI use in legal practice is transparency. Compounding the malpractice risk is a pervasive reluctance among legal professionals to openly admit their use of AI. There's a stigma: a fear of being perceived as lazy, incompetent, or cutting corners.

This secrecy is dangerous. If senior attorneys and co-counsel are unaware that a draft was AI-assisted, they may review it with a traditional lens, missing the subtle but critical errors AI is prone to making, such as entirely fabricated citations for otherwise sound legal principles. Traditional review generally focuses on spelling, grammar, and glaring legal missteps, not necessarily on verifying every factual allegation and legal citation, especially if the overarching legal argument seems correct.

AI-generated content thus requires different scrutiny—specifically verification that cited cases exist and support the stated propositions. Without knowing AI was used, reviewers may not perform these additional checks. Firms should mandate disclosure whenever AI assists in document creation, and foster a culture of transparency surrounding the use of AI. This isn't about shaming or penalizing AI use; it's about ensuring appropriate review procedures are followed.

Step 2: Mandatory Verification

It's tempting to believe that Silicon Valley will eventually "fix" AI hallucinations. However, a fundamental characteristic of many current large language models (LLMs) makes this unlikely. These models are designed to respond, to complete a prompt. Asked to provide a legal citation for a proposition that isn't readily supported by their training data, they won't simply say "I don't know." Instead, they will often construct a plausible-sounding, yet entirely fictional, citation.

It therefore falls to lawyers to act as a post-processing filter, verifying and correcting what the AI has generated before filing. A "Shepardize everything" approach is crucial: firms should use tools like LexisNexis's Document Analysis feature or Westlaw's Quick Check to check every citation. This step, which can often be performed by support staff, catches the vast majority of hallucinated citations (though not all). It must become as routine as a spell-check.
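As a purely illustrative sketch, and not a substitute for Shepard's, KeyCite, or attorney review, support staff could first pull a checklist of citation-like strings out of a draft and then run each one through a citator by hand. The regular expression below is an assumption for demonstration only and covers just a few common federal reporter formats:

```python
import re

# Rough pattern for reporter-style citations such as "123 F.3d 456" or
# "567 U.S. 890". Real citation formats vary widely; this only builds a
# checklist for manual verification, it does not validate anything.
CITATION_PATTERN = re.compile(
    r"\b\d{1,4}\s+(?:U\.S\.|S\. Ct\.|F\.(?:2d|3d|4th)?|F\. Supp\.(?: 2d| 3d)?)\s+\d{1,4}\b"
)

def citation_checklist(draft_text: str) -> list[str]:
    """Return the unique citation-like strings found in a draft so each
    one can be checked against Shepard's or KeyCite before filing."""
    return sorted(set(CITATION_PATTERN.findall(draft_text)))

if __name__ == "__main__":
    # Hypothetical draft text; the case names and citations are invented.
    sample = ("Plaintiff relies on Smith v. Jones, 123 F.3d 456 (9th Cir. 1997), "
              "and Doe v. Roe, 567 U.S. 890 (2012).")
    for cite in citation_checklist(sample):
        print("Verify before filing:", cite)
```

Anything a script like this flags still goes through a citator and a reviewing attorney; anything it misses is exactly why the human check must remain mandatory.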

Step 3: The Three-Tiered List Approach

To navigate this complex landscape, firms should consider implementing a clear, tiered AI usage policy. This "Whitelist, Greylist, Blacklist" model, sketched in simple form after the list below, offers a structured approach:

  1. The Whitelist: This category includes AI tools that are generally approved for use, often with specific guidelines.
    • Examples: First Drafts, Westlaw, LexisNexis.
    • Critical Caveat: Even with whitelisted tools, a non-negotiable final step must be to verify all citations using traditional case-checking tools like Shepard's or KeyCite, and a final attorney review of the work product for accuracy and completeness before filing.
  2. The Greylist: These are AI tools or applications that can be used, but with more significant restrictions and requiring explicit approval, often on a case-by-case basis.
    • Examples: Using a general-purpose LLM like ChatGPT for brainstorming non-confidential legal theories, or generating a very generic, non-client-specific letter template.
    • Restriction & Approval Process: Use might be contingent on not inputting any client data, or the output being strictly for internal research and never client-facing without extensive human redrafting and verification. Approval should come from a designated "AI Reviewer," an "AI Czar," or a committee knowledgeable about the specific risks and capabilities of the tool in question. They would define permissible uses and necessary safeguards.
  3. The Blacklist: This category contains AI tools and practices that are strictly prohibited due to unacceptable risks, primarily concerning client confidentiality and data security, or known unreliability.
    • Examples: Using a free AI recording and transcription tool to take notes on confidential attorney-client discussions when its data retention and usage policies are unclear or unfavorable. Relying on a general-purpose chatbot such as ChatGPT to generate citations or client-facing work product, given its known propensity for critical errors in legal contexts.
    • Rationale: The risk of client data being absorbed into the model's training set or otherwise breached, and the high likelihood of unreliable outputs, make these uses untenable.
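For firms that want to make the tiered policy machine-readable, for example to drive an internal intake form or approval workflow, a minimal sketch might look like the following. The tool names, tier assignments, and the default-to-greylist rule are illustrative assumptions mirroring the examples above, not requirements from any insurer:

```python
from enum import Enum

class Tier(Enum):
    WHITELIST = "approved, subject to mandatory citation verification"
    GREYLIST = "case-by-case approval by the firm's designated AI reviewer"
    BLACKLIST = "prohibited"

# Hypothetical tool-to-tier map; entries mirror this article's examples and
# would differ from firm to firm.
AI_TOOL_POLICY = {
    "first drafts": Tier.WHITELIST,
    "westlaw": Tier.WHITELIST,
    "lexisnexis": Tier.WHITELIST,
    "chatgpt (non-confidential brainstorming only)": Tier.GREYLIST,
    "free recording/transcription apps": Tier.BLACKLIST,
}

def lookup_policy(tool: str) -> str:
    """Look up a tool's tier, defaulting to greylist review for anything
    not yet classified, so new tools never bypass approval."""
    tier = AI_TOOL_POLICY.get(tool.lower(), Tier.GREYLIST)
    return f"{tool}: {tier.name.lower()} ({tier.value})"

if __name__ == "__main__":
    print(lookup_policy("Westlaw"))
    print(lookup_policy("Some brand-new drafting tool"))
```

Defaulting unknown tools to the greylist is a deliberate choice in this sketch: it forces a conversation with the AI reviewer before anyone adopts a new tool.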

Step 4: Ongoing Education and Adaptation

The AI landscape is evolving rapidly. AI tools are already providing a significant, and increasing, competitive advantage, and firms that ignore them risk falling behind. However, unchecked adoption courts disaster. The malpractice risks associated with AI hallucinations and data privacy breaches are too severe to ignore.

Policies will need to be reviewed and updated regularly as tools improve and new risks emerge. And the time for law firms to act is now. By implementing a clear, practical AI policy and fostering a culture of transparency and rigorous verification, firms can harness the benefits of AI while diligently managing its inherent risks. Waiting for insurers to dictate terms or for a catastrophic error to force a change is a gamble no prudent firm should take.

Ready to shave hours off of drafting litigation documents?
Start a free trial in minutes and see the difference!