Ad/Marketing Communication
This legal article/report forms part of my ongoing legal commentary on the use of artificial intelligence within the justice system. It supports my work in teaching, lecturing, and writing about AI and the law and is published to promote my practice. Not legal advice. Not Direct/Public Access. All instructions via clerks at Doughty Street Chambers. This legal article concerns AI Law.

Introduction
I’m taking some time to catch up on the many Fabricated Legal Authority/AI hallucination cases from July; there are more than you might expect. First up: a federal court in the US has issued a striking 51-page order sanctioning lawyers who relied on fabricated legal citations. The full order can be read here.
Fabricated Citations: What Happened in Johnson v Dunn?
The case involved an incarcerated claimant who alleged that the defendant, a former commissioner of the Alabama Department of Corrections, had submitted false case citations in two motions. The defendant’s three attorneys confirmed, both in writing and during a hearing, that the citations were entirely generated by ChatGPT and were hallucinations. As the court succinctly put it, “…In simpler terms, the citations were completely made up.”
The court was faced with the critical task of determining the appropriate sanction for such severe professional misconduct. In doing so, it emphasised:
“The court must determine an appropriate sanction. Fabricating legal authority is serious misconduct that demands a serious sanction. In the court’s view, it demands substantially greater accountability than the reprimands and modest fines that have become common as courts confront this form of AI misuse. As a practical matter, time is telling us – quickly and loudly – that those sanctions are insufficient deterrents. In principle, they do not account for the danger that fake citations pose for the fair administration of justice and the integrity of the judicial system. And in any event, they have little effect when the lawyer’s client (here, an Alabama government agency) learns of the attorney’s misconduct and continues to retain him.”
Court’s Core Principles for AI-Related Sanctions
In determining a reasonable and proportionate sanction, the court set out three core principles. Any sanction must:
- Have sufficient deterrent force to make the misuse of AI unprofitable for lawyers and litigants.
- Correspond to the extreme dereliction of professional responsibility that fabricated citations reflect, whether generated by artificial or human intelligence.
- Effectively communicate that invented authorities have no place in a court of law.
Mitigation Efforts by the Lawyers Involved
The lawyers involved raised several mitigating points, including:
- Accepting full responsibility.
- Acknowledging their failure to properly verify the citations.
- Arguing the lapse was reckless but not intentionally deceptive.
- Highlighting their embarrassment resulting from media coverage, along with tightened firm policies.
- Emphasising remedial actions, such as giving lectures to students on AI risks and conducting an independent investigation into AI guidance.
The Court’s Reasoning: Why Mild Sanctions Aren’t Enough
It was suggested that a modest fine and a formal warning were appropriate in all the circumstances. The Court disagreed:
“Having considered these cases carefully, the court finds that a fine and public reprimand are insufficient here. If fines and public embarrassment were effective deterrents, there would not be so many cases to cite. And in any event, fines do not account for the extreme dereliction of professional responsibility that fabricating citations reflects, nor for the many harms it causes. In any event, a fine would not rectify the egregious misconduct in this case.
The court finds that (1) a public reprimand paired with a limited publication requirement, (2) disqualification, and (3) referral to applicable licensing authorities are necessary to rectify the misconduct here and vindicate judicial authority. Disqualification fits well: lawyers should know that if they make false statements in court proceedings, they will no longer have the professional opportunity to participate in those proceedings. Similarly, litigants should have assurance that false statements will not be allowed in their cases, and no court should be required to allow an attorney responsible for making false statements in the proceedings to continue in the proceedings. Likewise, a public reprimand with limited publication fits: it makes other clients, counsel, and courts aware of the lawyer’s misconduct so that they may assess whether any measures are needed to protect their proceedings. Finally, the referral to licensing authorities is a bare minimum in the light of the primary nature of a lawyer’s professional responsibility not to make things up.
The court further finds that no lesser sanction will serve the necessary deterrent purpose, otherwise rectify this misconduct, or vindicate judicial authority. [named attorneys] are well-trained, experienced attorneys who work at a large, high-functioning, well-regarded law firm. They benefitted from repeated warnings, internal controls, and firm policies about the dangers of AI misuse. They have regular access to gold-standard legal research databases. They must have known they would be deeply embarrassed in this kind of situation, and that there could be harsh consequences with the court and their law firm. And yet here we are. The reality that this lapse in judgment presented in the most spectacularly unforced fashion underscores the need for more than a fine and reprimand.”
The Final Order: Sanctions Imposed by the Court
Accordingly, the Court ordered that:
- The lawyers were publicly reprimanded.
- To give effect to the reprimand, they were ordered to provide a copy of the order to their clients, opposing counsel, and the presiding judge in every pending state or federal case in which they were counsel of record, as well as every attorney in their firm.
- The Clerk of Court was directed to submit the sanctions order for publication in the Federal Supplement.
- The lawyers were disqualified from further participation in the case.
- The order was referred to the state bar associations in all jurisdictions where the attorneys were licensed.
- The Clerk of Court was directed to serve the order on the general counsel of the Alabama State Bar and other applicable licensing authorities.
Comment
The hallucinations here were likely Type 1 to 3, which are the most easily detected. It’s unfortunate that the attorneys did not identify these errors earlier. During a recent presentation, I was asked whether fabricated legal authorities or AI hallucinations were limited to junior members of the profession lacking experience and overly relying on AI tools. In response, I shared my own experience of how hallucinations almost found their way into my legal work, partly inspiring the Natural and Artificial Law project.
The reality is that no one is immune from this risk. If you use AI in any aspect of your legal practice, there remains a possibility that hallucinations will creep in. From the AI Hallucination Tracker, you’ll note that even highly experienced lawyers and judges have fallen foul of similar errors. This case exemplifies how senior counsel, working within respected firms, remain vulnerable to critical lapses of judgement caused by AI-generated misinformation. Despite robust internal policies, explicit warnings, and strong professional incentives to avoid negligence, these experienced lawyers still fell into error, demonstrating clearly that no one is immune from AI-related missteps. We must all remain vigilant.
On a separate note, I must again express caution: the court’s decision to set out the “five problematic citations across two motions” in full, along with their false legal principles, raises concerns. By openly documenting these hallucinations, the court may inadvertently exacerbate the very issue it seeks to resolve. For further discussion on how citing fabricated legal authorities might lead to their inadvertent inclusion in authentic legal databases, please see [here].
What do you think? Is there a foolproof way of solving the hallucination crisis? If you’ve found this article useful, please consider subscribing to my Substack newsletter, where I regularly share broader legal commentary. Many of you regularly read these articles, which is great, but comparatively few subscribe, so your support would be appreciated. Subscribe here.
Final Word from O3 Pro
I’m reintroducing a final paragraph provided by AI about my article above. This is where I offer a premium model the chance to comment or critique the preceding discussion. Here is its response:
“So, is a fool‑proof solution feasible? In the strict sense, no: language models will hallucinate as long as they exist and humans will err as long as deadlines loom. Yet a combined strategy—automatic citation validation, compulsory disclosure, and a professional culture that treats AI output as unverified hearsay—can push the residual risk below the threshold that threatens the integrity of proceedings.” o3 Pro 2/8/2025
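To make the quote’s first safeguard concrete: "automatic citation validation" could, in its most basic form, mean extracting reporter-style citations from a draft and checking each against a verified source before filing. The sketch below is purely illustrative and makes loud assumptions: `VERIFIED_CITATIONS` is a hypothetical stand-in for an authoritative legal research database, and the function names are my own, not any real product's API.

```python
import re

# Hypothetical stand-in for an authoritative citation database. A real
# validator would query a legal research service, not a hard-coded set.
VERIFIED_CITATIONS = {
    "576 U.S. 644",
    "410 U.S. 113",
}

# Matches simple US reporter citations of the form "volume reporter page",
# e.g. "576 U.S. 644". Real citation grammars are far more complex.
CITATION_PATTERN = re.compile(
    r"\b(\d{1,4})\s+(U\.S\.|F\.\s?Supp\.\s?\d?d?)\s+(\d{1,4})\b"
)

def extract_citations(text: str) -> list[str]:
    """Pull reporter-style citations out of free text, normalised to
    'volume reporter page' form."""
    return [" ".join(m.groups()) for m in CITATION_PATTERN.finditer(text)]

def validate(text: str, verified: set[str]) -> dict[str, bool]:
    """Map each citation found in the text to whether it appears in the
    verified set. A False value flags a citation needing human checking."""
    return {c: c in verified for c in extract_citations(text)}
```

Note the design choice: an unmatched citation is flagged for human review, not silently dropped; the point is to treat AI output as unverified until a person confirms it, exactly as the quote suggests.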
