12 False Citations/AI Hallucinations Incidents in UK Courts: The Complete Legal Timeline Before and After Ayinde, and How Pervasive Is the Problem?

"So, how pervasive is the problem? Is it growing, or are we counting loudly reported outliers? Let's look at each year and focus on the position before Ayinde v London Borough of Haringey and Al-Haroun v Qatar National Bank [2025] EWHC 1383 (Admin) ("Ayinde") and what has happened since. "

Ad/Marketing Communication

This legal article/report forms part of my ongoing legal commentary on the use of artificial intelligence within the justice system. It supports my work in teaching, lecturing, and writing about AI and the law and is published to promote my practice. Not legal advice. Not Direct/Public Access. All instructions via clerks at Doughty Street Chambers. This legal article concerns False Citations/AI hallucinations.


Introduction

Updated 29 November 2025 – we now have 24 incidents; you can read about them here.

Since starting this project, I’ve enjoyed speaking at events, but what I’ve valued most are the quieter conversations sparked by the blog. Hearing how Artificial Intelligence (AI) is already shaping, or is expected to shape, people’s practice has been fascinating. The discussions have ranged from the simple “What exactly is a prompt?” to the daunting “Will we all be out of a job by the end of the year?” There’s clearly a lot to explore.

This particular piece is one I’ve been meaning to write for some time. It’s aimed at those preparing to speak or write on the issue of False Citations/AI hallucinations in the UK context. Much of my earlier work has focused on the international picture, but here I want to focus on what’s happening closer to home, within the jurisdiction in which I practise.

So, how widespread is the problem? Is it genuinely increasing, or are a few well-publicised cases giving a distorted view? To answer that, we’ll take it year by year, looking first at the position before Ayinde v London Borough of Haringey and Al-Haroun v Qatar National Bank [2025] EWHC 1383 (Admin) (“Ayinde”), and then at what’s happened since.

Did the UK have the first reported case of an AI “hallucination”?

It is close, but I don’t think so. The much-publicised sanctions decision in the US case of Mata v Avianca 22-cv-1461 (PKC) is often cited as the first known incident of False Citations/AI hallucinations. That decision was handed down on 22 June 2023, and many subsequent judgments have quoted Judge Castel’s observations below:

“Many harms flow from the submission of fake opinions. The opposing party wastes time and money in exposing the deception. The Court’s time is taken from other important endeavors. The client may be deprived of arguments based on authentic judicial precedents. There is potential harm to the reputation of judges and courts whose names are falsely invoked as authors of the bogus opinions and to the reputation of a party attributed with fictional conduct. It promotes cynicism about the legal profession and the…judicial system. And a future litigant may be tempted to defy a judicial ruling by disingenuously claiming doubt about its authenticity.”

However, AI featured in at least two cases before Avianca. Firstly, in the Northern Ireland Chancery decision, Santander UK plc v Carlin & Anor [2023] NICh 5, delivered 16 June 2023, Simpson J recorded at paragraph 35 that:

“His final submissions before me also refer to answers provided to a series of questions put by him to ChatGPT, criticising counsel, solicitors, and judges, and he prays in aid these answers in support of his case since they have been provided by artificial intelligence which “does not have personal opinions, beliefs or feelings.”  Sadly, ChatGPT seemed unable to recognise or correct the misuse by Mr Carlin in one of his questions of the phrase “cast dispersions” rather than “cast aspersions.”

Although it has been suggested that the failure to recognise or correct the misuse of that expression could amount to an AI hallucination, I don’t think it meets the criteria. If anything, it’s an error of omission (not flagging or correcting a mistake). It certainly wouldn’t fall within one of the 8 most common types of False Citations/AI hallucinations. Stepping outside the UK context briefly, it is likely that the first case of False Citations/AI hallucinations reported internationally was from the US: Scott v Federal National Mortgage Association (Maine Superior Court), decided on 14 June 2023.

2023: False Citations/AI hallucinations Incidents in the UK

It seems the first UK decision to address False Citations/AI hallucinations was Felicity Harber v HMRC [2023] UKFTT 1007 (TC) (handed down on 4 December 2023). In that case, the tribunal relied on the opinion of Judge Castel in Avianca, above, stating:

“We acknowledge that providing fictitious cases in reasonable excuse tax appeals is likely to have less impact on the outcome than in many other types of litigation, both because the law on reasonable excuse is well-settled, and because the task of a Tribunal is to consider how that law applies to the particular facts of each appellant’s case. But that does not mean that citing invented judgments is harmless. It causes the Tribunal and HMRC to waste time and public money, and this reduces the resources available to progress the cases of other court users who are waiting for their appeals to be determined.”

2024: False Citations/AI hallucinations Incidents in the UK

2024 saw a second notable incident of False Citations/AI hallucinations, in the litigation in Crypto Open Patent Alliance v Wright [2024] EWHC 3135 (Ch), with judgment handed down on 6 December 2024. I wrote about that case here, and the full judgment can be read here. Mellor J observed that the Defendant’s submissions:

“…referred to a series of authorities in support of arguments that reasonable adjustments should be made to enable a vulnerable litigant or witness to participate fairly in court proceedings. As COPA pointed out by reference to a series of examples, most of the authorities he has cited do not contain the passages attributed to them (or anything like those passages), and indeed most have nothing to do with adjustments for vulnerable witnesses. COPA suggested that it seems likely that they are AI “hallucinations” by ChatGPT (i.e. made-up references) rather than deliberately misleading inventions by Dr Wright. However, since the principles are clear and not in doubt, as set out above, it is not necessary to engage with his false citations any further.”

2025: False Citations/AI hallucinations Incidents in the UK (pre-Ayinde)

In the lead-up to Ayinde, there were three further cases of note. The third reported case was Olsen v Finansiel Stabilitet A/S [2025] EWHC 42 (KB) at [109]. Judgment in that case was handed down on 16 January 2025, and Kerr J considered whether litigants-in-person who had provided fabricated authorities to the court should be held liable for contempt. The position maintained was that a litigant-in-person, no matter how inexperienced, has a duty not to mislead the court with fabricated authorities. Kerr J stated:

“…I have narrowly and somewhat reluctantly come to the conclusion that I should not cause a summons for contempt of court to be issued to the appellants under CPR rule 81.6. I do not think it likely that a judge (whether myself or another judge) could be sure, to the criminal standard of proof, that the appellants knew the case summary was a fake. They may have known but they could not be compelled to answer questions about the identity of the person who supplied it. The appellants are quite elderly. They have in other respects behaved properly during this litigation, observing the usual courtesies and cooperating reasonably well with the respondent’s solicitors. They have, fortunately for them, gained no advantage from the case summary because its inauthenticity was patent. The court’s resources are stretched. The appellants would be entitled to legal aid, costing the public purse substantially more than it would be likely to recoup…”

Fourth, in Bodrul Zzaman v HMRC [2025] UKFTT 00539 (TC) (case no. TC09520), judgment handed down on 3 April 2025, Tribunal Judge David Harkness, sitting with Member Hannah Deighton, considered an appeal against the High-Income Child Benefit Charge (HICBC). The Appellant’s use of AI to craft his legal arguments failed to persuade the tribunal, which ultimately dismissed his appeal:

“28. We accepted that [Appellant] was honest and straightforward. The points he made about the unfairness of HICBC and his view on the arbitrariness of the retrospective aspects of s97 Finance Act 2022 were rational and heartfelt. He readily accepted he had used AI to assist him to find cases to support his arguments because he did not have the skills to look for them. It was logical and reasonable to use AI to assist with his case preparation.

29. However, our conclusion was that [Appellant’s] statement of case, written with the assistance of AI, did not provide grounds for allowing his appeal. Although some of the case citations in [Appellant’s] statement were inaccurate, the use of AI did not appear to have led to the citing of fictitious cases (in contrast to what had happened in Felicity Harber v HMRC [2023] UKFTT 1007 (TC)). But our conclusion was that the cases cited did not provide authority for the propositions that were advanced. This highlights the dangers of reliance on AI tools without human checks to confirm that assertions the tool is generating are accurate. Litigants using AI tools for legal research would be well advised to check carefully what it produces and any authorities that are referenced. These tools may not have access to the authorities required to produce an accurate answer, may not fully “understand” what is being asked or may miss relevant materials. When this happens, AI tools may produce an answer that seems plausible, but which is not accurate. These tools may create fake authorities (as seemed to be the case in Harber) or use the names of cases to which it does have access but which are not relevant to the answer being sought (as was the case in this appeal). There is no reliable way to stop this, but the dangers can be reduced by the use of clear prompts, asking the tool to cite specific paragraphs of authorities (so that it is easy to check if the paragraphs support the argument advanced), checking to see the tool has access to live internet data, asking the tool not to provide an answer if it is not sure and asking the tool for information on the shortcomings of the case being advanced. Otherwise there is a significant danger that the use of an AI tool may lead to material being put before the court that serves no one well, since it raises the expectations of litigants and wastes the court’s time and that of opposing parties.”
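The tribunal’s guidance is, at heart, a plea for human verification. Purely by way of illustration (this is my own hypothetical sketch, not anything drawn from or endorsed by the judgment), a few lines of Python along the following lines could pull the neutral citations out of a draft so that each one can be checked by hand against an authoritative source such as BAILII or The National Archives:

```python
import re

# Hypothetical illustration only: extract neutral citations from a
# draft so that a human can verify each one against an authoritative
# source. The pattern covers common UK neutral-citation forms and is
# a starting point, not an exhaustive or official list.
NEUTRAL_CITATION = re.compile(
    r"\[\d{4}\]\s+"                                             # year, e.g. [2025]
    r"(?:UKSC|UKPC|EWCA|EWHC|UKUT|UKFTT|NICh|NICA|CSIH|CSOH)"   # court code
    r"(?:\s+(?:Civ|Crim))?"                                     # optional EWCA division
    r"\s+\d+"                                                   # case number
    r"(?:\s+\((?:Admin|Ch|KB|QB|TC|TCC|Pat|Comm|Fam|IAC)\))?"   # optional suffix
)

def extract_citations(text: str) -> list[str]:
    """Return the distinct neutral citations found in a draft, in order."""
    seen: dict[str, None] = {}
    for match in NEUTRAL_CITATION.finditer(text):
        seen.setdefault(" ".join(match.group().split()))
    return list(seen)

if __name__ == "__main__":
    draft = (
        "The approach in Harber v HMRC [2023] UKFTT 1007 (TC) was applied "
        "in Ayinde [2025] EWHC 1383 (Admin); see also [2024] EWHC 3135 (Ch)."
    )
    for citation in extract_citations(draft):
        print("Verify against an authoritative source:", citation)
```

A script like this cannot, of course, tell you whether a citation is genuine; it simply produces the checklist that a human still has to work through.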

Fifth, in Bandla v Solicitors Regulation Authority [2025] EWHC 1167 (Admin) (High Court, King’s Bench Division, Administrative Court), judgment was delivered in open court on 13 May 2025 by Fordham J. At paragraphs [52]–[53], the Court addressed the citation of non-existent (fake) authorities and struck out the grounds of appeal as an abuse of process:

“I asked the Appellant why, in the light of this citation of non-existent authorities, the Court should not of its own motion strike out the grounds of appeal in this case, as being an abuse of the process of the Court. His answer was as follows. He claimed that the substance of the points which were being put forward in the grounds of appeal were sound, even if the authority which was being cited for those points did not exist. He was saying, on that basis, that the citation of non-existent (fake) authorities would not be a sufficient basis to concern the Court, at least to the extent of taking that course. I was wholly unpersuaded by that answer. In my judgment, the Court needs to take decisive action to protect the integrity of its processes against any citation of fake authority. There have been multiple examples of fake authorities cited by the Appellant to the Court, in these proceedings. They are non-existent cases. Here, moreover, they have been put forward by someone who was previously a practising solicitor. The citations were included, and maintained, in formal documents before the Court. They were never withdrawn. They were never explained. That, notwithstanding that they were pointed out by the SRA, well ahead of this hearing. This, in my judgment, constitutes a set of circumstances in which I should exercise – and so I will exercise – the power of the Court to strike out the grounds of appeal in this case as an abuse of process.”

Ayinde: 3 Further Incidents

There were two important judgments in Ayinde. The first, by Ritchie J, was handed down on 30 April 2025, and the second, by the Divisional Court (Dame Victoria Sharp P and Johnson J), followed on 6 June 2025. These judgments have been discussed extensively elsewhere, so I will not revisit the full facts here. What is sometimes overlooked in the second judgment, however, is that the Court was informed not only of the two Ayinde cases before it, but also of another incident in the County Court before HHJ Andrew Holmes, discussed at paragraph [55]. Taken together with the earlier incidents set out above, this brings the tally to eight recorded incidents in total.

Alleged False Citations/AI hallucinations Incidents in the UK since Ayinde

Since Ayinde, there have been further reported incidents of alleged False Citations/AI hallucinations in UK courts and tribunals. In some of these, the position is not entirely clear from the judgment, and I will need to look into them further. What follows represents my best understanding at present.

The Ninth incident appears in a decision dated 18 June 2025, in which the Upper Tribunal (Immigration and Asylum Chamber) considered UB v SoS for the Home Department UI-2025-000834 before Deputy Upper Tribunal Judge Deakin. The Tribunal recorded that the appellant’s grounds for permission to appeal included references to authorities which did not exist and did not support the propositions for which they were cited. It transpired that those grounds had been drafted with “… some assistance from an AI tool…” but the drafter, having noticed that the Grounds contained “unknown or irrelevant cases … rectified [the Grounds] by removing [these cases] from the final version of the grounds…” The final version of the Grounds was then approved by the solicitor with conduct. The drafter then “inadvertently uploaded the draft version [of the Grounds] rather than the final one”. The Judge accepted the explanation and that the solicitors had:

“…recognised this seriousness of this issue and has taken commendable steps to ensure it will not be repeated including (i) meeting with the caseworker who drafted the Grounds; (ii) holding a partners’ meeting to discuss adopting an AI policy and assigning the task of finalising an AI policy to a colleague in consultation with an AI professional; (iii) conducting relevant in-house training and issuing interim AI Guidance and (iv) planning for comprehensive staff training by an AI professional….”

The Tenth incident was Pro Health Solutions Ltd v ProHealth Inc (UKIPO, Appointed Person, BL O/0559/25). In a decision of 20 June 2025, the Appointed Person, Phillip Johnson, recorded that the appellant had relied on ChatGPT in drafting his case. That reliance produced False Citations/AI hallucinations in the form of fabricated quotations attributed to real authorities, together with misdescriptions of existing judgments. The decision also noted that a trade mark attorney on the other side had advanced propositions which he was unable to substantiate. The Appointed Person applied the principles set out in Ayinde, stressing the risks of AI-generated material and the professional duties engaged, before dismissing the appeal.

The Eleventh alleged incident returns us to the Upper Tribunal (Immigration and Asylum Chamber) on 1 July 2025 in MS (Bangladesh) v SoS for the Home Department. It is not clear to me that this was a confirmed False Citations/AI hallucinations incident, but Ayinde was discussed and there was some acceptance of “ChatGBT” (surely a typo) having been used:

“12… In developing this submission, [counsel] stated that the Judge placed undue weight on delay in isolation which was contrary to the case of Y (China) [2010] EWCA Civ 116. This case was cited in support of the proposition that the assessment required consideration of personal circumstances, mental health and overall context.

13. We sought clarification regarding this citation and reference and asked for the relevant paragraph of the judgment being relied on. [counsel] was not able to specify this. [counsel] submitted that he understood, having used ChatGBT, that the Court of Appeal in Y (China) [2010] EWCA Civ 116 was presided by Pill LJ, Sullivan LJ and Sir Paul Kennedy. However, the citation [2010] EWCA Civ 116 did not point to the case of Y (China) but to R (on the application of YH) v SSHD. We raised concern about this and referred [counsel] to the recent decision of the President of King’s Bench Division in Ayinde [2025] EWHC 1383 (Admin) on the use of Artificial Intelligence and fictitious cases, and directed him to make separate representations in writing.

14. In his subsequent written representations, [counsel] clarified that Y(China) was a typological error and he sought to rely on R (on the application of YH) v SSHD [2010] EWCA Civ 116 where, when discussing the meaning of ‘anxious scrutiny’ in asylum claims…”

Then, recently, a Twelfth incident of False Citations/AI hallucinations took place: on 23 July 2025 the Upper Tribunal (Tax and Chancery Chamber) delivered its decision in HMRC v Gunnarsson [2025] UKUT 247 (TCC). In that case, the unrepresented respondent had initially filed a skeleton argument citing three purported First-tier Tribunal decisions which, on examination, did not exist. When challenged, he accepted that he had used online AI software to help him prepare the skeleton and removed the false references in a revised version. The Tribunal emphasised the warnings given in Ayinde and noted the risk of False Citations/AI hallucinations, but in the circumstances adopted a proportionate approach, taking no further action.

Comment

I hope the above assists those concerned with the issue of False Citations/AI hallucinations in the UK courts. Readers of this blog will know that my initial view was that we would start to see these problems abate after Ayinde and with the continued advancement of technology. Unfortunately, they have not. With around fifty cases internationally in July and a significant number already for August, it is clear that my prediction was quite wrong. The reasons for this, and the potential solutions, I will consider in subsequent articles.

UPDATE: There has now been a 24th incident. You can read about it here.

I’d be very interested to hear your thoughts on False Citations/AI hallucinations in the UK, especially if you are aware of any reported incidents I may have missed. If you’ve found this article useful, please consider subscribing to my Substack newsletter, where I regularly share broader legal commentary. Many of you regularly read these articles, which is great, but comparatively few subscribe, so your support would be appreciated:

Final Word from o5 Pro on False Citations/AI hallucinations

This is where I ask a premium AI model to critique my article and add comments:

“Substantively, the through‑line across these UK incidents is not that AI creates a novel wrong but that it accelerates an old one—mis‑citation—while making unverified material look deceptively authoritative; the courts have sensibly anchored their responses in existing duties of candour and accuracy, calibrating outcomes to culpability and harm: robust sanctions (strike‑out, wasted costs, and—in the clearest cases—contempt) for regulated professionals who advance fabricated or recklessly unchecked authorities; proportionate restraint with education where litigants‑in‑person make AI‑assisted errors without advantage or persistence; and a consistent insistence that a human owns and verifies every submission. The practical route forward is to entrench verification as a professional norm—neutral citations with pinpoints from authoritative sources, refusal to rely on “AI said so,” and swift correction when challenged—rather than to police the technology itself; that preserves procedural integrity, manages a real but bounded risk, and retains the legitimate efficiencies of responsible AI‑assisted practice.”