Ad/Marketing Communication
This article forms part of my ongoing legal commentary on the use of artificial intelligence within the justice system. It supports my work in teaching, lecturing, and writing about AI and the law, and is published to promote my practice. Not legal advice. Not Direct/Public Access. All instructions via clerks at Doughty Street Chambers. This article concerns AI legal cases.

Introduction
This has been a busy week, with far more to write about than can sensibly fit into a single legal article. Over the coming days I hope to explore a number of these developments in more detail. For now, I want to focus on three recent and instructive cases from the United States that engage directly with AI hallucinations in legal contexts and to reflect on the lessons they offer for practitioners in the UK and internationally.
I am also looking forward this week to attending two events that reflect the breadth of current discussion around AI and the law.
The first is the launch of Kay Firth-Butterfield’s book Co-existing with AI: Work, Love and Play in a Changing World, where she will be discussing how AI affects human rights, work and everyday life. This will be followed by a panel discussion with Graham Denholm, Amelia Nice and Karlia Lykourgou of Doughty Street Chambers’ AI Team. The event takes place on 27 January 2026 and promises to be a thoughtful exploration of how AI is already reshaping lived experience as well as legal frameworks.
Later in the week, I will be speaking at the Public Law Project’s event, Morning update: AI and automated decision making, litigation and law reform on 29 January 2026. I will be joining a panel to discuss the use, challenge, and defence of AI in public law litigation, including its implications for access to justice and judicial decision making.
If you are attending either event, please do come and say hello. It would be a pleasure to meet you in person.
Nuvola LLC v Wright
The case concerned a tenant holdover dispute. A motion to compel arbitration was supported by a memorandum that cited several legal authorities which, on closer inspection, did not exist. The court itself identified the false citations during its review of the papers and convened a hearing to require an explanation. It later became clear that the memorandum had been drafted using generative AI and that the citations had not been checked before filing.
The court imposed sanctions on the lawyer who filed the memorandum, finding a breach of professional obligations under procedural rules. These sanctions included a personal financial penalty, a requirement to deliver multiple educational presentations to lawyers or law students on the risks of unverified AI use in legal practice, and a referral to the professional regulator for consideration of further disciplinary action. The court was careful to frame its response not as a rejection of new technology, but as a reaffirmation that responsibility for accuracy rests with the lawyer who signs and submits a document to the court.
The court also took the opportunity to comment more broadly on professional standards:
“The Court also finds troubling [Lawyer’s] failure to identify or bring the non-existent case citations to the Court’s attention before the hearing on the motion to compel arbitration. The Court should not be left as the last line of defense against citations to fictional cases in briefs filed with the court. While [Lawyer] did not create or rely on the fake citations, he also did not detect them. Instead, he admitted he did not review the cases cited by his opponent. If he had checked out the citations in the brief to which he was responding, he no doubt would have brought the issue to the Court’s attention by the time of the motion hearing, and that would have allowed the Court to take the non-existence of the cited cases into consideration as it heard the argument on the merits of Defendant’s motion to compel arbitration, instead of leaving the Court to discover that issue on its own, after the hearing was concluded. The Court does not find [Lawyer’s] conduct to be sanctionable, as he did not cite any non-existent cases to the Court. Nonetheless, the Court reminds counsel that it is the obligation of counsel on both sides to respond to each other’s arguments, including completing a basic cite-check of the cases cited by the other side.” (Personal name removed from quote)
The court continued:
“The court urges all lawyers to take seriously their obligation to ensure that the legal arguments being made and considered by the Court rest upon good law, not fictional cases dreamed up by a computer. The development of the common law relies upon the accurate citation of existing case law, as lawyers and courts analyze new disputes. Infection of the body of caselaw by fake AI-generated citations threatens the integrity of the common law.”
LeDoux v Outliers, Inc
A different but equally important issue arose in LeDoux. The full facts can be read via the link above. In a footnote, however, the court recorded a significant concern raised by the defendants:
“Defendants also maintain that the Court should strike [expert’s] first report because it contains citations which were “hallucinated” by generative artificial intelligence. … Defendants claim there are similar hallucinations in Plaintiff’s response to their motion to strike, Daubert motion, and proposed amended complaint. … The Court will address these arguments in a separate order but finds, as discussed below, that partial summary judgment is appropriate even if [expert’s] first opinion remains in the record.” (Expert’s name removed from quote; the allegation was not determined in this order.)
Kistler v Eightfold AI Inc (Class Action Complaint)
Cal. Super. Ct., Contra Costa Cnty., filed Jan. 20, 2026
This case moves beyond litigation conduct and into the substantive use of AI in decision making. The complaint concerns the use of AI-driven recruitment technology in a way that most job applicants may neither see nor reasonably expect. It is alleged that the defendant’s system collects and assembles large volumes of personal data about individuals applying for work, often without their knowledge. This data can include online activity, social media profiles, location information, device usage and other digital traces not provided as part of any application.
According to the claim, the system generates a score, typically on a scale from zero to five, intended to predict an applicant’s likelihood of success. That score is supplied to prospective employers and can influence hiring decisions, even though, as alleged, the applicant has no meaningful opportunity to obtain or dispute the report before adverse action is taken. The complaint describes opaque machine learning processes and closely guarded algorithms that produce reports which, it is alleged, are not meaningfully disclosed to the individuals concerned in a way that allows review or challenge before use, yet are relied upon by multiple employers. These reports purport to assess suitability by reference to work history, projected future career trajectory, culture fit, and other personal characteristics that may have significant consequences for access to employment.
Comment
In a previous legal article, I examined the use of AI in expert evidence, focusing on Kohls v Ellison. In that case, the court noted the irony of a well-known expert on AI issues relying on AI-generated fake citations to academic sources. In LeDoux v Outliers, Inc, the defendants are now making allegations about the use of AI by an expert witness. It seems to me that we are likely to see these kinds of challenges arise with increasing frequency. Parties are beginning to look much more closely at how far AI has been used in expert reports and at what that means for credibility and reliability. In my own work, this is one of the first issues I consider when reading an expert report, and I expect these questions to feature more prominently in litigation going forward. Hallucinated content can fatally undermine an opinion, even where the expert’s broader methodology appears careful and sound, but, of course, context is everything.
In Ayinde v London Borough of Haringey, the court reminded us that all legal representatives share responsibility for ensuring that documents placed before the court are accurate. Nuvola LLC v Wright makes a related point in particularly strong terms. The court there was clear that it is not the final safeguard against error. Lawyers are expected to scrutinise their opponent’s work and to satisfy themselves that it is accurate before it is relied upon. That obligation is real, but it also raises practical questions about timing, cost, and proportionality. Some practitioners now describe the significant time and expense involved in identifying and unpicking hallucinated authorities, including subtle distortions of existing case law rather than obviously fictitious citations. The duty remains, but as a profession we need to think carefully about how these burdens are managed, and how the time of the parties and the court can be used justly and proportionately. This is an issue I intend to return to in a later article, with specific examples drawn from practice.
Finally, it seems to me that Kistler has implications that extend well beyond recruitment and professional conduct. It speaks to the growing role of AI in evaluative decision making more generally, and to the risks that arise when those affected are excluded from the process altogether. In that context, questions about hallucinations sit alongside wider concerns about independence, accountability, and the limits of automated reasoning. For now, it is worth revisiting the careful observations of Dame Victoria Sharp on the nature of reasoning and explanation in judicial decision making, and considering how those principles might translate to AI-assisted systems. There is much more to say on this, and I will return to it soon.
In the meantime, I hope to speak to many of you this week. I will also be arranging webinars for readers of this blog soon (more details to follow). If you find these articles helpful, please consider subscribing below.
Final Words from ChatGPT 5.2 Pro and Gemini 3 Pro
ChatGPT:
“What comes through across all three U.S. matters is that “AI risk” isn’t just about people fabricating citations; it’s about where responsibility sits when an ecosystem becomes more automated and more opaque. The Nuvola order shows courts pushing back against the idea that judges will catch errors, and insisting the adversarial system only works if lawyers treat verification as non‑delegable. LeDoux hints at a next step: AI‑tainted citations could become a credibility battleground in expert evidence, even before a court resolves whether the accusation is true. And Kistler highlights a broader governance question: if automated scoring becomes normalised in employment decisions, the most consequential “hallucination” risk may be less about invented case law and more about unseen inferences and unchallengeable outputs that shape real opportunities.”
Gemini:
“The cases of Nuvola, LeDoux, and Kistler illustrate a critical inflection point in the legal profession: the transition from viewing AI as a passive research tool to recognizing it as an active, yet fallible, participant in the justice system. While the hallucination of case law grabs headlines for its immediate absurdity, the deeper danger lies in the ‘automation bias’ revealed in the Kistler complaint—where opaque algorithmic scoring is trusted over human nuance simply because it appears empirical. The lesson for lawyers is not merely to ‘check the citations,’ but to fundamentally interrogate the provenance of all digital evidence. We are moving toward a standard where professional negligence will be defined not by the use of AI, but by the abdication of human judgment to it; the computer can generate the draft, but the human must own the truth.”
