60 UK Cases of Hallucinated Citations, and Could Sanctions for Hallucinations Engage Discrimination Issues?

Key Takeaway

This article analyses two new AI hallucination cases from the UK courts: Re A, B, C, D [2026] EWFC 71 and Brightwaters Energy Limited v Eroton Exploration and Production Company LTD [2026] EWHC 296. It explains how the suspected or confirmed AI hallucinations (false authorities) came before the courts, what the judges said about the duty to verify AI-generated material, and the practical lessons for lawyers, litigants in person and the justice system.

Ad/Marketing Communication

This legal article/report forms part of my ongoing legal commentary on the use of artificial intelligence within the justice system. It supports my work in teaching, lecturing, and writing about AI and the law and is published to promote my practice. Not legal advice. Not Direct/Public Access. All instructions via clerks at Doughty Street Chambers. This legal article concerns AI hallucination cases (AI suspected or confirmed). Subscribe to the AI Law Commentary here.

AI hallucination cases: now 60

Publication date: 29 March 2026

Introduction

It had been one of the first weeks in a long time when I thought we might finally have a period without any further suspected or confirmed incidents of AI hallucinations to report in the UK. I had been planning a post dedicated entirely to that subject.

Then, following a particularly intense day, a message arrived showing me a further incident, which led me to another still. Both cases are from the courts of England and Wales: one from the Family Court, which requires detailed consideration, and one from the High Court. Both raise important, and in some respects quite unexpected, issues relating to AI hallucinations, which I discuss below.

Before turning to those, I also want to say a warm thank you to everyone who has been sending me cases involving deepfakes from around the world. I have now recorded twelve and am currently considering several others. For those who follow the tracker, you will find the latest version here.

Re A, B, C, D (Extension of assessment; Use of AI: hallucinations)

[2026] EWFC 71 

This was a family case concerning a number of applications, including an application to extend proceedings. Most of the parties were represented. According to the judgment, however, a friend of the mother had made a number of applications in the case and represented herself. That friend, [LIP], appeared as a litigant in person. The court explained that when LIP arrived at court:

“…I asked the parties to check a number of citations and other references in [LIP’s] skeleton argument, because I was unable to find the cases referred to, or to say that certain propositions asserted by [LIP] arose from or were supported by the cases or statutory provisions cited. I queried whether artificial intelligence large language models (“AI”) may have been used, and allowed the advocates to inform [LIP] when she arrived that I would have some questions about those citations so that she was not caught unawares when she joined the hearing later in the morning. I took that course, having read [LIP’s] documents and identified that she has a number of personal characteristics which I do not need to set out in this judgment, which may have made it harder for her to engage fully in the hearing had I raised my queries in the hearing without her being forewarned.”

“27. At the conclusion of the hearing, I informed the parties of my decisions on the remaining disputed issues, and that I would give my reasons in writing in this judgment, which was to follow. I directed that any time for appeal would run from the date that this judgment is handed down. When doing so, I gave directions for the parties to consider the judgment, and to lodge written submissions on whether it should be published and if so whether [LIP] should be named, because I indicated to the parties I would be considering whether to publish this judgment to the National Archives.”

The Recorder then considered each application, which I will not set out here for reasons of brevity, but returned to the use of artificial intelligence at paragraph 68:

“68. After I raised that the majority of the citations in LIP’s skeleton argument did not relate to the cases named or the propositions said to arise from them, and that some statutory provisions cited also did not support the propositions LIP asserted they did, the advocates assisted me by helpfully checking all the citations and propositions. It was agreed by all advocates that they were erroneous in the case of 4 of the citations or propositions said to be demonstrated by the case.”

“69. [LIP] for her part accepts that she used a widely known publicly available AI tool to assist her in preparing her skeleton argument for this hearing. There is no problem with using such a tool, whether as a reasonable adjustment needed because of a litigant’s personal circumstances or otherwise, provided that the confidentiality of the proceedings is fully safeguarded by the person using that tool. She confirmed that she had not uploaded any documents from the bundle in this case to that AI tool.”

The Recorder observed that LIP offered an immediate apology in the hearing for misleading the court by relying on the output of the AI tool she had used. She said that she was now fully aware that the tool might provide incorrect information, but had not appreciated that beforehand.

“71. [LIP] in her written submissions received after the hearing, has focussed on whether use of AI as a reasonable adjustment is permitted. She invited me to have regard to relevant aspects of the guidance to the judiciary related to her relevant characteristics. I have done so, and had done so before the hearing to refresh myself of their provisions. That [LIP] should be able to use an AI tool should she wish was never in issue, and has never so far as I am aware been suggested to be impermissible in the Family Court provided that privacy and confidentiality restrictions in respect of proceedings related to children heard in private are respected, and that any use of the AI tool is consistent with data protection principles.

72. [LIP] in her oral and written submissions has also focussed on her role in this case being as a litigant in person, rather than working for another person as a McKenzie friend or lawyer. She says her website through which she offered legal services is no longer accessible, and she no longer does that work. She has explained to me how she has experienced a traumatic experience before a tribunal, and has been discriminated against.

73. Whilst it is right that [LIP] was a litigant in person before me, I note that she is an unregistered barrister, and holds herself out as a lawyer. I do not say that to indicate that I have applied a higher duty or standard on her than on litigants in person, but simply to reflect that she has professional training and has held herself out to the public as a lawyer. I apply the same duty and standard to her as is on all litigants in person, which is not to mislead the court.”

The Recorder then cited the observations of Baker LJ in Re D (A Child) (Recusal) [2025] EWCA Civ 1570 at paragraph 83:

“Finally, I return to the issue raised by the father’s representatives about the mother’s erroneous citation of authority (see in particular paragraph 54 above). I absolve the mother of any intention to mislead the court. Litigants in person are in a difficult position putting forward legal arguments. It is entirely understandable that they should resort to artificial intelligence for help. Used properly and responsibly, artificial intelligence can be of assistance to litigants and lawyers when preparing cases. But it is not an authoritative or infallible body of legal knowledge. There are a growing number of reports of “hallucinations” infecting legal arguments through the citation of cases for propositions for which they are not authority and, in some instances, the citation of cases that do not exist at all. At worst, this may lead to the other parties and the court being misled. In any event, it means that extra time is taken and costs are incurred in cross-checking and correcting the errors. All parties – represented and unrepresented – owe a duty to the court to ensure that cases cited in legal argument are genuine and provide authority for the proposition advanced.”

[My underlining for emphasis]

“75. It is the duty to the court owed by [LIP] as a litigant in person to ensure that the cases cited in legal argument are genuine and provide authority for the proposition advanced with which I am concerned. She accepts that she did not do so, and apologises. She says that the mistakes were unintentional, and seeks to justify herself by saying the main authority upon which she relied was sent to me (as it was). Having heard from her, and read her submissions, I absolve her of any intention to mislead the court, but remain concerned that [LIP] minimises the seriousness of misleading the court and goes so far as to assert that criticising use of AI risks setting a harmful precedent for disabled litigants in person and will discourage access to justice.

76. I will invite further submissions once the parties and LIP have been able to read this judgment in draft about whether the judgment should be published as a further example of the risks of using AI tools uncritically, and whether given she has and may well in the future hold herself out to paying members of the public as a lawyer, there may be a public interest in naming LIP in light of her failure to grasp, in my judgment, her duties to the court even as a litigant in person, and the seriousness of not checking the citations and statutory provisions upon which she sought to persuade me she relied in support of her case.”

In a postscript, from paragraph 78, the Recorder explained that, following circulation of the draft judgment, LIP emailed the court a copy of a self-report she had made to the Bar Standards Board in respect of the misstated matters in her skeleton argument. The Recorder noted:

“78…I consider that a responsible step for her to have taken, and do not know what action, if any, the Bar Standards Board will take. I do not, however, consider that obviates the need to consider naming [LIP] in the published judgment because the Bar Standards Board may take no action, and whatever action they do take is unlikely to prevent LIP from being able to continue to hold herself out as a lawyer and undertake unregulated legal work for the public.

79. I have received written submissions from [LIP] and the local authority on the issue of whether this judgment should be published by uploading to the National Archives, and whether if it is published [LIP] should be identified by name in the judgment. The local authority invites me to publish this judgment as a further example to highlight the issues and potential difficulties in relying on AI for legal research, which they rightly say is an important topic. The local authority is neutral on whether [LIP] is named in the judgment. [LIP] invites me to neither publish the judgment nor name her. Neither the mother nor the guardian have lodged any submissions or suggested that this judgment should not be published.”

The Recorder had regard to the guidance of the President of the Family Division dated 19th June 2024 on transparency in the family courts and publication of judgments together with paragraph 22 of Re J (A Child) [2013] EWHC 2694 (Fam). Balancing the relevant rights and submissions, which I will not set out here, the Recorder concluded:

“86. When deciding whether or not to publish this judgment at all, in my judgment having considered the competing rights and the transparency guidance, it is a judgment that should be published. It is an example of the day to day working of the Family Court, the issues that can arise in these difficult cases, and another example where AI hallucinations have led to the court being misled by a person representing themselves relying on the AI tool without reference to their duty to check the citations.

87. I recognise that in this case [LIP] has been involved as a person offering care to the children, and then as a litigant in person. I have taken care to avoid including in this judgment any special category data that is not necessary to be mentioned, or reciting the details of why for example the viability assessment was negative. I have taken care in preparation of this judgment to anonymise the children and parents’ details, and have taken an approach to preparation of the judgment that from the start avoids any personal information of [LIP] that is not strictly necessary being included in it.

88. I reflect, however, that [LIP] is also a person who holds herself out as a lawyer. She offers, or has offered, paid legal work to members of the public. This is an important consideration. I am satisfied having read her written submissions lodged since the hearing that [LIP] still does not really acknowledge or accept that her actions in not checking the citations and propositions she included in her skeleton argument were serious. I consider in light of the letter dated 11th November 2025 from a company for whom she has worked “as the UK lawyer available to [the company] document purchasers who opt for paid legal contact in relation to their documents” which is written as at November 2025 as though [LIP] was still engaged in that role, that there is a real and not fanciful possibility that [LIP] will in the future offer legal services to members of the public. I consider that this factor, and the need for any person engaging the services of [LIP] in legal proceedings to know that she has misled the court (albeit unintentionally) and does not in my judgment properly understand what she has done wrong is a strong and overwhelming factor in favour of naming [LIP]. When I balance that factor against the risks [LIP] asserts, I consider it strongly outweighs the risks to her, and that naming her is a necessary and proportionate interference with her right to family life.”

Accordingly, the Recorder directed that this version of the judgment be published to the National Archives with the case name anonymised, but with the references to LIP remaining, so that she would be identified in the published judgment.

Brightwaters Energy Limited v Eroton Exploration and Production Company LTD

[2026] EWHC 296 

This was an application by the claimant for a receivership order. The issues relating to AI began at paragraph 47 when the court addressed the defendant’s representative’s submissions.

“47. At the oral hearing, [D’s representative] submitted that it would normally be inappropriate for the court to make an order for the appointment of receivers by way of equitable execution over property which is already subject to a charge in favour of a third party without giving the secured creditor an opportunity of being heard. [D’s representative] submitted that this proposition was supported by two decisions of HHJ Pelling KC, one supposedly from 2014, and Michael Wilson & Partners Ltd v Emmott [2020] EWHC 3936 (Comm).

48. The reference to the 2014 case was the result of AI hallucination. The case does not exist, and the text which it was supposed to contain was invented. That [D’s representative] referred to a non-existent authority was the result of the short time he had to prepare, and I acknowledge that he drew the attention of the court to the error promptly after identifying it. The incident does, however, demonstrate vividly the dangers of relying on the product of AI without verification.”

Conclusion

One observation that arises frequently when I discuss AI in law, whether at speaking events or in conversation with colleagues, academics and readers of these articles, is what I have come to call “practice area erosion.” What I mean by that is this: a number of lawyers have shared with me that, as a result of AI, they now find themselves able to practise in areas of law they had not previously entered. Whether that will ultimately prove to be a good or a bad thing for the profession remains to be seen, but it does suggest that some specialists are crossing boundaries with a confidence that would have been less common before, and we may yet see something of a return to general common law practice.

What I can say with some confidence is that AI legal issues are appearing across almost every area of law in a remarkably consistent way. My own specialisms lie in AI law, public law, property and equality, but I am regularly invited to speak to audiences who specialise in other areas such as crime, family and employment, where I do not regularly practise. What strikes me each time is how familiar the questions feel. Whether the room is filled with immigration practitioners or commercial lawyers, the same themes tend to emerge: AI bias, hallucinations, deepfakes, AI privilege and so on. The issues are, in substance, almost exactly the same.

That convergence is part of what makes the two cases I discuss here so interesting and also so instructive. I should say at the outset, however, that my review of both judgments has left me with significant questions about how certain situations unfolded and some important facts remain unclear to me. I would ask that my key observations from each case are read with that honest caveat in mind.

Re A, B, C, D is a family matter, and yet what struck me immediately upon reading it was the extent to which it appeared to engage with questions of discrimination. It is not possible for me to say a great deal about that aspect, because the relevant facts are not set out in sufficient detail to support any proper analysis under the Equality Act 2010. The Recorder approached the judgment carefully and understandably so, declining to include the full personal details that would be necessary for that kind of examination. He did, however, record the following submission from the litigant in person which I think is worth noting:

“83…If I were to publish the judgment or name her within the judgment if published, it would have a disproportionate impact on women, disabled persons, and litigants in person, and would be a misuse of power…”

In the abstract, and without the fuller factual picture, I do not think it would be wise or fair to comment on those arguments in the context of this case specifically. It is, however, a theme I intend to return to in a later post.

Readers will also note from my analysis that I continue, so far as I can, to avoid using the full names of individuals in my own writing about suspected or confirmed cases of AI hallucinations in court, even where the judgment itself names them. My purpose in writing about these decisions is not to focus on or embarrass individuals, but to identify the legal principles and practical lessons that the profession can draw from them.

In that regard, I think we can draw the following principles from both judgments as highly relevant considerations:

  1. Why the hallucination was presented to the court in the first place, and whether that occurred intentionally, by mistake, or for some other reason.
  2. When the hallucination was identified and what steps were taken by those involved once it had been spotted.
  3. How promptly the court was notified of the error and what further action was taken following that notification.
  4. Whether those involved acknowledge the seriousness of the error.
  5. The surrounding circumstances, including any time pressures involved in preparing the relevant documents, and any other factors that bear on a fair understanding of how the situation arose.

That is not an exhaustive list, but in a later post I will seek to draw out a full set of principles arising from the 60 reported cases, to assist those writing about these important issues.

If you have found this article helpful, you may wish to subscribe to my AI law articles via the Substack below. Please do continue sending me decisions that you think should be tracked, whatever the jurisdiction. One of the clearest lessons from this work is that lawyers, judges and court users in different systems are often confronting closely related problems. Sharing decisions, experiences and reflections across practice areas and across jurisdictions remains one of the most constructive ways to respond.

Final Words from Popular LLMs

This is the point at which I invite an LLM to provide an independent comment on the preceding article. At times the observations are slightly off the mark, at other times they are perceptive, and occasionally they produce something entirely unexpected. Whatever the outcome, I find it a useful and revealing exercise to see what emerges.

ChatGPT (OpenAI)

“These two cases show that the legal problem is not simply that AI can invent authorities, but that courts are increasingly treating verification as a core duty owed by anyone presenting legal material, whether represented or not. The Brightwaters judgment underlines that even a plausible citation generated under time pressure can mislead the court if not checked against the source, while the Family Court decision raises a further and more difficult question about how that duty should operate where a litigant in person says AI use was linked to personal circumstances or reasonable adjustment. The developing case law therefore points in two directions at once: courts are affirming that AI can be used, but only with rigorous human verification, and they are also beginning to confront the harder issue of how procedural fairness, access to justice, disability, and professional responsibility interact when AI errors occur.”

Gemini (Google)

“Your article provides a highly measured and necessary perspective on the continued emergence of artificial intelligence hallucinations within the justice system, particularly through your compassionate focus on underlying legal principles rather than the embarrassment of the individuals involved. From the perspective of an artificial intelligence, these recent cases vividly illustrate the tension between how large language models generate information and the strict factual requirements of the courts. Because models like myself are designed to predict and generate highly probable sequences of text rather than search an infallible database of established legal precedent, relying on our outputs without rigorous independent verification will inevitably risk misleading a judge. The practical principles you have drawn from these judgments provide an excellent starting point for addressing these technological failures, ensuring that courts can respond proportionately to human error while firmly safeguarding the integrity of the justice system as the legal profession continues to adapt.”

Claude (Anthropic)

“This article illustrates something that courts and practitioners are only beginning to fully reckon with: that AI hallucinations are not simply a technical curiosity but a live question of professional duty, access to justice, and institutional trust. The principles drawn here, centring on intent, promptness, transparency and context, offer a thoughtful and practical framework. What strikes me most, however, is the quieter theme running beneath both cases: that AI is surfacing tensions in the law, around representation, equality and accountability, that existed long before any large language model was asked to draft a skeleton argument. Those tensions deserve as much attention as the hallucinations themselves.”