The 8 Most Common Types of AI Hallucinations/False Citations in Case Law

"I am significantly concerned about the last three types. Reading and accurately interpreting judgments is inherently complex. I've spent years carefully analysing and re-analysing judicial decisions, recognising distinctions, and developing my interpretation of what is expressed. Colleagues frequently interpret judgments differently, leading to legal challenges and subsequent reviews by higher courts. The creeping presence of AI hallucinations in this interpretative process presents an unresolved challenge."

Ad/Marketing Communication

This legal article/report forms part of my ongoing legal commentary on the use of artificial intelligence within the justice system. It supports my work in teaching, lecturing, and writing about AI and the law and is published to promote my practice. Not legal advice. Not Direct/Public Access. All instructions via clerks at Doughty Street Chambers. This legal article concerns AI Hallucinations in Case Law and the Types of AI hallucinations.

AI Hallucinations in Case Law/False Citations

Introduction

A question frequently raised during my recent presentations on AI law, particularly concerning false citations and AI hallucinations in case law, is whether these false citations represent a solvable problem or a permanent challenge we must manage. This question isn’t straightforward, as it first requires us to define clearly the exact nature of the problem we face.

One important point often missed is that the real issue isn’t always the glaringly obvious hallucinations/false citations, but rather the subtle ones. To understand this fully, let me first outline how these hallucinations/false citations manifest.

As I was reviewing four recent cases involving false citations/AI hallucinations in case law, along with reflections from my presentations and talks, I realised that I have not yet dedicated a blog post precisely to this subject. Below, I provide a breakdown of what I consider the most common types, ranked from the easiest to detect to the most insidiously deceptive.

Common False Citations/AI Hallucinations in Case Law

1. Fabricated Case and Citation

The easiest to spot. Both the parties and the citation details are invented. These are pure fabrications and, thankfully, tend to be obvious once cross-checked.

2. Wrong Case Name, Right Citation

The citation leads to a real case, but the parties are reversed or misnamed. This can create confusion, especially with older or similarly named cases.

3. Right Case Name, Wrong Citation

The parties are correct, but the citation is wrong or entirely fabricated. This is often overlooked, particularly if the user recognises the case name and assumes the rest follows.

4. Conflated Authorities

Multiple real cases are mashed together. Correct principles and party names may be present, but they are misattributed or merged into a non-existent hybrid. This creates legal Frankensteins that may sound credible but don’t exist.

5. Correct Law, Invented Authority

The legal principle is sound, but a case is invented to back it up. This type of hallucination is particularly insidious because it reinforces a valid point with a phantom authority.

6. Real Case, Misstated Facts or Ratio

This one is particularly difficult to spot and is at times subject to considerable nuance. A real case is cited, and the citation is likely correct, but the legal principle, outcome, or factual context is misrepresented, sometimes subtly. This type of false citation/AI hallucination is especially dangerous, as it corrupts legitimate authority from within.

7. Misleading Paraphrase of Secondary Authority

Language is pulled from a secondary source (e.g. textbooks, articles, or headnotes) and rewritten inaccurately while retaining the tone of scholarship. The citation may be to a real author, but the wording and emphasis are misinterpretations.

8. Real Citations citing False Citations

A source is referenced that may itself contain one of the errors above, and it is then cited in legitimate cases or scholarly articles. This also includes chains of AI-generated references, where one false citation is repeatedly cited by another. For instance, a judge may cite a false citation and comment on it, which can be read as lending it apparent legitimacy, further embedding the error in legal discourse.

Comment

I will continue adding to this list as further examples emerge. However, I believe the first five hallucinations will become easier to identify as the technology advances, especially by those tracking these issues closely.

By contrast, I am significantly concerned about the last three types. Reading and accurately interpreting judgments is inherently complex. I’ve spent years carefully analysing and re-analysing judicial decisions, recognising distinctions, and developing my interpretation of what is expressed. Colleagues frequently interpret judgments differently, leading to legal challenges and subsequent reviews by higher courts. The creeping presence of false citations/AI hallucinations in this interpretative process presents an unresolved challenge.

I think the problem is solvable, but not without a significant amount of human oversight. That process may be extraordinarily time-consuming, raising critical questions about who bears the financial burden of the task. I am not sure how we address this issue, and it raises further questions: how can we ensure consistency in identifying and correcting false citations/AI hallucinations when human interpretations of judgments already vary significantly? Could new regulatory frameworks or guidelines effectively mitigate the risks associated with false citations/AI hallucinations in legal contexts, and if so, what form should these take? I look forward to exploring these issues in further legal articles, so please let me know your thoughts.

This article is part of my broader legal commentary available through my Substack newsletter.
Subscribing ensures you receive immediate updates, in-depth analysis, and exclusive legal insights as they are published.
➔ [Subscribe here to stay informed].

What are false citations or AI hallucinations in case law?

AI hallucinations in case law are outputs where an AI tool invents, distorts, or misattributes legal authorities, citations, or what a judgment actually decided. Sometimes the error is obvious (a made-up case). More often, it hides inside something that looks plausible, like a real case name with the wrong citation, or a real authority with a subtly misstated ratio.

Why do false citations or AI hallucinations matter in legal writing and litigation?

False citations can contaminate pleadings, skeleton arguments, advice, and academic commentary. They can mislead other parties and the court. There is a risk that errors spread, are repeated, and start to look legitimate through repetition, especially where a false authority is cited, discussed, and then re-cited by others.