The AI “Hallucination” Panic

There has been a wave of alarm about generative AI producing fabricated cases, invented quotes, or incorrect statements of law: so-called “AI hallucinations.” Those incidents have prompted serious discussion about competence, supervision, verification, and accountability. Lawyers have been referred to bar authorities, called out in judges’ orders, and, in some places, threatened with discipline.

The alarm is understandable. Fabricated authority or factual invention can distort adjudication, mislead courts, and harm opposing parties. But improper or misleading use of case law by human attorneys is longstanding, widespread, and sometimes intentional, and courts and disciplinary systems have not always reacted as forcefully to those human failures as they now react to AI errors. That asymmetry deserves attention.

Human Misuse of Case Law Is Nothing New

Misuse of case law by human lawyers is neither new nor rare. Lawyers frequently misquote cases, overstate holdings, or cite authorities for propositions they do not actually support. Common problems include mischaracterizing the factual predicates of precedent, citing dicta as binding law, and attributing legal rules to cases that stand for the opposite proposition. These errors range from sloppy research to deliberate misrepresentation. Judges regularly call out counsel’s misstatements in opinions, and appellate reversals sometimes rest on inaccurate framing of precedent.

Many misuses resemble intentional misconduct, yet they are often treated more leniently than AI errors are today. Knowingly misrepresenting a case can be ethical misconduct (and sometimes fraud on the court). Still, courts often respond with admonitions, corrected briefing, modest sanctions, or no action at all. Selective citation, misleading paraphrase, and false factual claims about cases often appear intentional, yet referrals for discipline, public reprimands, and disbarment remain relatively rare compared with the vigorous reactions now seen when AI invents cases.

This disparity has several causes: judicial reluctance to impose severe sanctions on fellow officers of the court, the difficulty of proving intent, and institutional norms that favor adversarial correction over formal discipline.

AI hallucinations expose systemic verification gaps, but those gaps predated AI. AI errors feel novel and dramatic: an authoritative-sounding paragraph that invents cases or law looks like blatant deception, even when it is unintentional. Media coverage amplifies the shock. Yet the underlying problem is perennial: courts and opposing counsel cannot rely on the mere presence of a citation. Verification requires checking primary authorities, and the duty of competence includes confirming that cited cases exist and say what counsel claims. Relying on secondary summaries, accepting bundled citations unchecked, and assuming the other side did the homework are human failures that AI makes more visible but did not create.

Policy and Practice Implications

Apply equal verification and accountability standards to human and machine work.

  • If citing a nonexistent case risks discipline regardless of who produced it, lawyers must verify authorities whether a brief was drafted by a person or generated by an AI tool.

Focus on substance and intent, not just mechanism.

  • Intentional misrepresentation is an ethical violation irrespective of AI use. Genuine negligence should be met with proportionate remedies aimed at correcting the record and protecting litigants.

Provide clear guidance on AI use without minimizing human misconduct.

  • Require disclosure of AI assistance, verification of citations and facts, and supervisory responsibility for junior lawyers who use these tools.

Invest in training and systems.

  • Law firms and courts need robust citation-checking processes, primary-source verification training, and technological aids that flag nonexistent or inconsistent authorities; a minimal sketch of such a check appears after this list.

Calibrate sanctions to culpability.

  • Apply a spectrum of responses (corrections, evidentiary remedies, fines, bar referrals) based on whether the misrepresentation was negligent, reckless, or intentional — regardless of whether AI played a role.
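
For concreteness, here is a minimal sketch of the kind of flagging aid the “Invest in training and systems” item above describes: it pulls citation-shaped strings out of a draft and marks any that cannot be matched against a verified list. The regex pattern, the sample draft, the flag_unverified_citations helper, and the hard-coded verified set are all illustrative assumptions rather than any existing product; a real tool would check each citation against a citator or primary-source database.

```python
# Minimal, hypothetical sketch: extract citation-like strings from a draft brief
# and flag any that cannot be matched against a set of verified authorities.
# The regex and the verified set are placeholders; a production tool would query
# a citator or primary-source database instead of a hard-coded collection.
import re

# Very rough pattern for U.S. reporter citations such as "123 F.3d 456".
CITATION_RE = re.compile(r"\b\d{1,4}\s+[A-Z][A-Za-z.\s]{0,15}\d*[a-z]*\s+\d{1,4}\b")


def flag_unverified_citations(brief_text: str, verified_citations: set[str]) -> list[str]:
    """Return citation strings found in the brief that are not in the verified set."""
    found = {m.group(0).strip() for m in CITATION_RE.finditer(brief_text)}
    return sorted(c for c in found if c not in verified_citations)


if __name__ == "__main__":
    draft = (
        "Plaintiff relies on Smith v. Jones, 123 F.3d 456 (9th Cir. 1997), "
        "and on Doe v. Roe, 999 F.4th 111 (1st Cir. 2030)."  # second cite is deliberately fictitious
    )
    # In practice this set would be built by confirming each cite against primary sources.
    verified = {"123 F.3d 456"}
    for cite in flag_unverified_citations(draft, verified):
        print(f"UNVERIFIED: {cite} -> confirm against the reporter before filing")
```

Even a simple check like this only flags candidates for human review; it does not replace reading the cited case, which remains counsel’s duty however the brief was drafted.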

Conclusion

AI hallucinations in legal writing are serious and deserve close attention. But treating them as a new ethical category that warrants harsher treatment than comparable human errors is misleading. The profession already faces pervasive problems with misused, misquoted, and distorted case law, some of it indistinguishable from intentional misconduct. Fair, consistent standards that hold lawyers accountable whether errors come from a human or an algorithm, together with better verification, disclosure, and training, are the right path forward.