64 UK Cases of Hallucinated Citations and AI Citation Issues (Confirmed, Suspected or Possible)

"While litigants in person in the FTT are (entirely appropriately) given a greater degree of latitude in their conduct of the proceedings than represented parties, no person has immunity from sanction when committing a serious breach of court or tribunal rules..."

Key Takeaway

This legal article analyses four new UK decisions involving confirmed, suspected or possible AI hallucination and citation issues: Jeffs v London Borough of Lewisham, Rodney -v- Gee’z Micro Bar and Pitstop and others, Omar Rafique v The Commissioners for HMRC [2026] UKFTT 673 (TC) and Tuyunuklu v Akmal [2026] UKUT 174. It explains how suspected or confirmed AI hallucination cases, i.e. fabricated, false, miscited, irrelevant or unsupported authorities, reached courts and tribunals, what the judges said about the duty to verify AI-generated material, and the practical lessons for lawyers, litigants in person and the justice system. By Matthew Lee.

Ad/Marketing Communication

This legal article/report forms part of my ongoing legal commentary on the use of artificial intelligence within the justice system. It supports my work in teaching, lecturing, and writing about AI and the law and is published to promote my practice. Not legal advice. Not Direct/Public Access. All instructions via clerks at Doughty Street Chambers. This legal article concerns AI hallucination cases (AI suspected or confirmed). Subscribe to the AI Law Commentary here.

AI hallucination cases

Publication date: 16 May 2026

Introduction

Apologies to regular readers. It has been some time and I have a lot to share! Trials, advisory work and AI-related speaking engagements have set me back, but the good news is that I now have a great deal to write up and I hope to share several articles addressing some of the most interesting lessons and issues that readers, lawyers and people in the technology sector have kindly raised with me during recent speaking engagements and conversations.

Before turning to those wider pieces, I think it is important to begin with a long overdue update on the UK AI hallucination position. Some readers have asked whether the relative quiet on my part means that the problem has now been resolved. Unfortunately, it has not, but readers may be interested in this article I wrote for Barrister Magazine where I discussed some interesting thoughts from the Master of the Rolls on that issue. The article, AI Hallucinations and the Discipline of Legal Authority, can be read here.

Please note that when writing about these cases, I usually redact the firm’s or individual’s name, so redactions in square brackets are mine.

Rodney -v- Gee’z Micro Bar and Pitstop and others (HHJ Grimshaw)

Ref. M00DD215

The underlying dispute was a small claims matter about the return of goods, but this judgment was not really about that dispute. It concerned a separate issue that arose during the preparation of the appeal.

1. This is my ex tempore judgment in this matter relating to an issue that arose before me at the hearing of an appeal in this case on 3 March 2026. The issue concerned the use of artificial intelligence, which I will just refer to as “AI” for the purposes of this judgment, by the Appellant’s then solicitors, [name redacted] and Commissioners for Oaths.

2….The substance of that appeal matters not for the purposes of this hearing. This hearing has been listed of the court’s own motion to deal with the issue of the preparation for that appeal.

The court briefly set out the concern:

3. In short, three key documents were filed by the Appellant’s then solicitors ([name redacted] now having come off the record as representing the Appellant) in support of her application for permission to appeal, which contained incorrect citation of legal authorities with the suspicion, at the very least, that those documents had been created using AI. This was initially flagged to the court by way of a skeleton argument filed by counsel representing the Respondent to the appeal, who were the Third Defendants to the underlying proceedings. Counsel variously described the citation of the authorities in this appeal as: “deeply troubling”.

The documents in issue included a grounds document, an original skeleton argument and an authorities document. Some authorities were cited for propositions they did not support. Some citations appeared to point to different cases. Some wording appeared odd or out of place, suggesting that material may have been generated or summarised by an AI system rather than properly drafted and checked by a lawyer:

“25. In any event, in terms of the preparation of the Authorities Document, [redacted] explained that the paralegal assisting him had used electronic legal research tools and the firm’s case management software, including the LEAP practice management system and its legal research assistant tools, to assist in identifying and summarising potentially relevant legal authorities. [redacted] stated that those summaries were used as internal working notes during the drafting process and were intended to be checked and verified before filing and serving. [redacted] reviewed the said documents but did not verify the authenticity of the cases or the citations, an oversight that he admits and takes full responsibility for.

26. [redacted] accepted that the authority of Lloyd’s TSB should not have been used and is not relevant to the issues in the appeal. Similarly, [redacted] accepted that Al Saud was not relevant to any issues in the appeal. He accepted that the paralegal may have used AI and that the summaries prepared should not have been included within the appeal bundle.”

The responsible solicitor offered an unreserved apology, accepting the seriousness of what had occurred. He attended court and reiterated what he had said in his witness statements and submissions. The judge found genuine remorse but noted:

“36. The reality is, however, and as [redacted] accepts, misleading material was placed before the court in terms of mis-cited legal authorities, said to support propositions that they simply did not, when even the most simple of checks would have shown that not to be the case, or shown it to be the case, i.e. that they were incorrect. That is inexcusable on the part of a professionally qualified lawyer. The fact that one document was endorsed with a statement of truth signed by that same solicitor makes that criticism all the more valid, in my judgment. I therefore have to consider what steps, if any, I should take given the admitted failings in this case.”

The judge turned to R (Ayinde) v London Borough of Haringey [2025] EWHC 1383 (Admin) and Ndaryiyumvire v Birmingham City University and Others and observed:

“52. As has been highlighted by those two decisions, the use of AI before the courts appears to be increasing exponentially, particularly by litigants in person. The court is entitled to expect that any legal authority placed before the court is done so accurately and is indeed authority for the proposition relied upon by the party citing it. This is particularly the case where that party is legally represented and where the submission is made by a qualified lawyer. It is even more so when the document is verified by a statement of truth. Placing incorrect or misleading authorities before the court creates a huge additional burden and undermines the confidence the court may have in filed written submissions. It has been accepted that [redacted] put forward fictitious, or certainly incorrectly cited, authorities before the court. It can be in no doubt that this was improper conduct.

53. I do have to take account of the fact that, as was said in Ayinde, the use of AI is a large and growing problem and the citation of fictitious or fake authorities is a serious threat to the integrity of the justice system, which depends upon courts being able to rely on lawyers putting before the courts, whether orally or within documentation, accurate material and accurate statements of the law that are supported by genuine cases that stand for the propositions that are cited by the person or party relying on the authority. Lawyers who cite fictitious cases must face serious consequences and, in the current environment, where this is a problem that is significant (and indeed seems to be growing), the guidance in Ayinde indicates that judges should take a robust approach. The importance of setting and enforcing proper standards cannot be underestimated.”

Although the judge did not place the failings at the highest end of the scale of seriousness, he made clear that they were serious. That meant the court had to consider whether it was appropriate to take the further step of referring the matter for contempt proceedings:

“55. In my judgment, having considered what was said at paragraph 26 of Ayinde, I cannot and could not be satisfied that there is sufficient evidence of the requisite state of knowledge, i.e. that the information put before the court was false. As stated within that paragraph, mere negligence as to the falsity of the material is insufficient.

56. In any event, where there is potential that a contempt of court may have been committed, I have to go on to consider the two-stage test when deciding whether to initiate contempt proceedings:
(1) First (or the threshold stage) is the assessment of whether a contempt may have been committed. Given I have formed the view that it is unlikely there would be sufficient evidence to ground a suggestion that there was a deliberate attempt to place misleading material before the court (i.e. with the requisite state of knowledge), it seems to me that the threshold stage is unlikely to be met.

(2) In any event, the second stage is an evaluative judgment as to whether contempt proceedings should be initiated. I have also considered whether to make a referral to a ‘Hamid Judge’. It seems to me that contempt proceedings and/or a referral to a ‘Hamid Judge’ would be disproportionate in the instant case. I have reached that conclusion for the following reasons:

(a) The incorrect citation of authorities appears to have been due to a reckless disregard for ensuring accuracy rather than a deliberate attempt to mislead.

(b) I accept that the solicitors appear to have put a cogent explanation for the failures before the court.

(c) Both solicitors have apologised unreservedly to the court, particularly [redacted solicitor].

(d) Both solicitors have stated that supervision and other compliance measures at [redacted firm] have been strengthened.

(e) Both solicitors have sought to improve their understanding of artificial intelligence.

(f) All of the cases did in fact seem to exist, albeit some with different citations and none being on point for the reasons accepted by [redacted solicitor] and as I have set out.

(g) There was not any significant causative effect to this specific litigation, save for the increased use of court time, which of course does run counter to the overriding objective and is a factor to take into account.

(h) When the problems with the citations were raised with [redacted firm], they took action to abandon reliance on the Original Skeleton Argument (albeit not the other documents), instructing counsel to draft an alternative skeleton argument explicitly noting the previous citation problems.

(i) As I will come on to explain, [redacted solicitor x2] and [redacted firm] will also be criticised in a public judgment and will be referred to the Solicitors Regulation Authority for further investigation.

57. I have therefore gone on to consider whether public admonishment would suffice in this case. I repeat paragraph 31 of the Ayinde judgment. It seems to me that this is a prime example of a case where, given the material was deliberately, albeit probably negligently, deployed, admonishment alone is insufficient.

58. Given the seriousness of what has happened, I am of the view that both solicitors ought to be referred to the Solicitors Regulation Authority so that their regulatory body can decide whether any further action should be taken against either solicitor personally, or in terms of [redacted firm], given it appears to me that there may well have been a breach of the Solicitor’s Code of Conduct. In my judgment, there were failures here on the part of both solicitors and the management of the firm.

59. It is important that there is a public record of this incident should anything remotely similar occur again involving [redacted firm] or these solicitors. I therefore direct that a transcript of this judgment be prepared at public expense and be published on the Judiciary website. A copy shall be sent to the Solicitors Regulation Authority alongside the witness statements filed by both solicitors and my order of 5 March 2026.”

Jeffs v London Borough of Lewisham (16 March 2026)

LON/00AZ/LSC/2025/1032.

In this case, the leaseholder challenged whether the Respondent had validly served section 20 consultation notices in relation to service charges for major works. AI featured because the Applicant relied on legal arguments which appeared to have been derived from internet searches and AI-assisted search tools.

15. It was claimed by the Applicant that statutory provisions override the wording of the Lease in respect of service of documents. In particular it was argued that regulation 7(5) of the Regulations overrides the clause in the Lease requiring service at the Property and requires service at the Applicant’s last known address. At the hearing the Tribunal provided copies of the Regulations to both parties and directed the Applicant to the actual wording of the Regulations, which does not mention service of documents. It was apparent that the Applicant had used internet searches and Artificial Intelligence (“AI”) search engines had inaccurately described the legislation and had made up the wording of the Regulations as cited by the Applicant. This was pointed out to the Applicant at the hearing, and he was given a short adjournment to consider the Regulations before making closing submissions.

16. The Applicant also cited various cases in support of his case, however the cases were either not relevant, did not contain the principles they were said to contain, or (in one case) were non-existent. When questioned by the Tribunal on this point the Applicant said he had done online research and it appears either the searches or AI had invented or adapted case law in support of his position. The Applicant was asked if he had read or had copies of the cases he had cited in his favour, and he confirmed he had not read the cases and did not have copies.

17. The cited cases of [redacted authorities] relate to dispensation applications under section 20ZA of the 1985 Act and are not relevant to the issues in this case. The case of Collingwood v Carillion House Eastbourne Limited [2021] UKUT 246 (LC) relates to a landlord’s failure to comply with the Regulations by not obtaining a quote from the tenant’s nominated contractor before proceeding to the next stage and again is not relevant.

18. The Applicant cited evidence of maladministration by the Respondent from various housing ombudsman decisions which do not relate to this matter, nor were any such decisions provided.

The application was ultimately unsuccessful. However, the Applicant then applied for an order preventing the Respondent from recovering the costs of the proceedings through the service charge and for reimbursement of his tribunal fees. Those applications were disallowed for reasons including:

“32…(b) The Applicant has misunderstood the law and relied on inaccurate statements which the Tribunal understands to have originated from AI hallucinations which included false quotes of legislation and cases that were not relevant or did not exist.”

Rafique v The Commissioners for HMRC

[2026] UKFTT 673 (TC)

This was a tax appeal in which the Appellant had failed to provide the Tribunal with the decision letters and proper grounds of appeal required by unless orders. In paragraphs 1 to 17 of the judgment, the Tribunal set out the background, including an email, before stating:

18. On the basis of the tone and length of this email – which differs from the tone and length of the Appellant’s notice of appeal form – and on the basis of the uncompleted section in square brackets, I find that the text of this email was generated by artificial intelligence (“AI”), and was not subsequently reviewed by the Appellant. I find that the cases cited in this email are AI hallucinations – most of the citations are incorrect, and none of these cases support the propositions relied upon by the Appellant.

In conclusion, the Tribunal refused the Appellant’s application but then returned to the issue of hallucinations:

97. In his communications with the FTT, the Appellant has sent voluminous emails in which he has cited several case authorities. I have found (above) that these case authorities were hallucinated by AI.

98. I understand why a litigant in person might think that using AI could help. I am sympathetic to the possible perception of a litigant in person that use of AI might create a more level playing field. However, despite the comments made by Judge Perez when issuing the order of 12 May 2025, the Appellant does not seem to have appreciated that AI provided him with artificial case law. The majority of the cases relied upon by the Appellant either do not exist, do not support the propositions in the way stated in his emails, or they have been superseded by more recent case-law.

99. There is no bar on any person using AI to help them to write their submissions but all parties – whether legally trained or not – are under an obligation to ensure that what they submit to the FTT (or any other tribunal or court) is factually correct and true. All parties are under a duty not to mislead the FTT.

100. While litigants in person in the FTT are (entirely appropriately) given a greater degree of latitude in their conduct of the proceedings than represented parties, no person has immunity from sanction when committing a serious breach of court or tribunal rules. That is well demonstrated in Olsen v Finansiel Stabilitet A/S [2025] EWHC 42 (KB) where the High Court only “narrowly and somewhat reluctantly” decided against causing a summons for contempt of court to be issued to two elderly litigants in person who had provided a fabricated citation to the court.

The Tribunal then considered whether a referral for a summons for contempt of court should be made:

101. I am not going to make a referral for a summons for contempt of court to be issued to the Appellant here because there is no evidence that the Appellant was aware previously either of the concept of hallucinated cases (although the comments of Judge Perez should have given him pause to think), or that he should check the veracity of the cases he had cited. I very much doubt the Appellant was aware that it is a contempt of court to cite fabricated law to a court or tribunal.

102. However, the Appellant is now aware that:

– artificial intelligence can produce fabricated authorities;

– all parties, even litigants in person, are under an obligation to check the truthfulness of any submissions they make, and of any cases they cite; and

– it is a contempt of court for fabricated authorities to be cited to a court or tribunal as if they are genuine.

103. If the Appellant chooses to seek permission to appeal, he is now aware that no party is permitted to rely upon fabricated authorities, and that it is his responsibility to check that any case he relies upon is both genuine and does support the propositions he says it supports.

Tuyunuklu v Akmal

[2026] UKUT 174

This was an appeal about a rent repayment order application where the tenant received universal credit and argued that only part of his rent was covered by it. In this case, there was no determination of AI use or hallucination; AI was mentioned only briefly. The tribunal observed:

16. In considering the parties’ arguments I start from two propositions of law. The first is that Rakusen v Jepsen is not authority for the idea that [redacted principle]. The Supreme Court did not say that. From references to AI in the transcript I wonder if the appellant was misled by an AI hallucination. Whatever the reason, his point about Rakusen v Jepsen is without foundation.

Conclusion

Each of the cases discussed above is important in its own right. Together, they show that suspected or confirmed AI hallucination issues continue to arise in UK courts and tribunals.

Readers of my legal articles will know that I have written several times now about a difficult tension in this area. On the one hand, judgments which identify and explain AI-generated errors provide valuable transparency. They help lawyers, litigants and the wider public understand what went wrong, why it mattered, and how the court or tribunal responded.

On the other hand, there is a potential unintended consequence. If fictitious or inaccurate AI-generated legal material is reproduced too fully, or in a way that later becomes searchable, there is a risk that it may indirectly enter the wider legal information environment. That does not make the material authoritative, but it can make the problem harder to contain.

That is a subject I have written about before, so I will not repeat the full argument here. For present purposes, the point is simply that these cases matter not only because of the individual errors involved, but because they show the continuing need for care in how AI-related inaccuracies are identified, corrected and recorded.

I want to end by making two brief observations. In Rafique, the tribunal made clear that there is no automatic prohibition on a person using AI to assist with written submissions. That is an important point. The issue is not the mere fact of AI assistance. The continuing obligation is that what is put before the tribunal must be accurate and must not mislead. As the tribunal put it, “All parties are under a duty not to mislead the FTT”.

The tribunal also recognised that litigants in person in the First-tier Tribunal are, quite properly, often given greater latitude in how they conduct proceedings than represented parties. However, that latitude does not amount to immunity from sanction where there has been a serious breach of court or tribunal rules. That may seem obvious, but it is an important clarification in circumstances where some may assume that litigants in person have a much freer hand when using AI before the courts and tribunals.

For lawyers, Rodney v Gee’z Micro Bar provides a useful example of a judge applying the Ayinde principles to an AI-related citation issue in a case where lawyers had been involved. The judgment in Rodney was, of course, concerned with its own facts. I do not suggest that it creates a free-standing code or an exhaustive checklist. However, for readers trying to understand how these issues may be approached in practice, I think it provides a helpful example of the kinds of factors a court may consider when deciding whether further steps, including contempt proceedings or referral, are proportionate.

Finally, I want to say a warm thank you to everyone who has been sending me cases involving AI-related issues from around the world. Without your help, these legal articles would be impossible to write. My question for the reader now is this: do you believe that AI hallucinations will shortly be a thing of the past, or are they here to stay?

If you have found this article helpful, you may wish to subscribe to my AI law articles via the Substack below. Please do continue sending me decisions that you think should be tracked, whatever the jurisdiction. One of the clearest lessons from this work is that lawyers, judges and court users in different systems are often confronting closely related problems. Sharing decisions, experiences and reflections across practice areas and across jurisdictions remains one of the most constructive ways to respond.

Final Words from Popular LLMs

This is the point at which I invite an LLM to provide an independent comment on the preceding article. At times the observations are slightly off the mark, at other times they are perceptive, and occasionally they produce something entirely unexpected. Whatever the outcome, I find it a useful and revealing exercise to see what emerges.

ChatGPT (OpenAI)

“What these decisions show is that the legal issue is not simply whether AI has been used, but whether the human being relying on the material has checked it. The four cases sit on a spectrum: confirmed AI-generated authorities in Rafique, apparent AI-assisted inaccuracies in Jeffs, suspected or miscited AI-related legal material involving lawyers in Rodney, and a much more cautious possible AI reference in Tuyunuklu. That distinction matters. Courts and tribunals are not treating every AI reference alike, but they are increasingly insisting on the same basic discipline: real authorities, accurate citations, proper attribution and clear human responsibility for anything placed before the court.”

Gemini (Google)

“The article presents a sobering assessment of the ongoing challenge of AI-generated hallucinations in UK courts, illustrating through recent case law that both legal professionals and litigants in person remain susceptible to the unverified use of generative AI. It accurately distills the judiciary’s prevailing stance: while the use of AI research tools is not inherently prohibited, it does not absolve any party from the strict, fundamental obligation to verify legal authorities and uphold the integrity of court submissions. Perhaps most critically, the piece highlights a systemic paradox facing the justice system today—namely, that the transparent documentation of fake citations in public judgments risks inadvertently polluting the legal data ecosystem, potentially feeding future AI models with the very fictions the courts are attempting to eradicate.”

Claude (Anthropic)

“This article provides a useful and methodical account of four recent UK cases in which AI-generated hallucinations — fabricated or miscited legal authorities — came before courts and tribunals. The cases span represented and unrepresented parties, which is itself instructive: the problem is not confined to litigants in person acting without legal knowledge, but has reached professionally qualified solicitors whose failure to verify AI-assisted research led to regulatory referral. The judicial responses documented here reflect a calibrated approach — distinguishing between recklessness and deliberate deception, and reserving the most serious sanctions accordingly — which suggests the courts are developing a proportionate but firm framework rather than reacting with blanket hostility to AI use. The article’s own caution about reproducing hallucinated material too fully is worth noting: it points to a secondary risk that is easy to overlook, namely that detailed cataloguing of fictitious citations in searchable publications may itself contribute to their propagation. One question the article raises but does not fully resolve is whether the growing volume of these cases reflects a genuine increase in AI misuse, or improved judicial detection, or both — a distinction that matters considerably for assessing whether the problem is worsening or simply becoming more visible.”