50 UK Cases of Hallucinated Citations: First Case From the UT(AAC), and the UT(IAC) Warns That a Supervisor Who Fails to Catch False Citations Is “Likely to Be More Culpable”

“38. In our judgement, a supervisor who fails to ensure that the work of a more junior fee-earner does not contain false cases or citations is likely to be more culpable than a lawyer who fails to ensure that his own work is free from such “hallucinations”. An individual in the latter camp fails the tribunal, the public and his lay client, whereas an individual in the former camp fails, in addition, to aid the development of more junior lawyers.”

Ad/Marketing Communication

This legal article/report forms part of my ongoing legal commentary on the use of artificial intelligence within the justice system. It supports my work in teaching, lecturing, and writing about AI and the law and is published to promote my practice. Not legal advice. Not Direct/Public Access. All instructions via clerks at Doughty Street Chambers. This legal article concerns AI hallucination cases in UK Courts (AI suspected or confirmed). Subscribe to the AI Law Commentary here.

AI hallucination cases in UK Courts

Publication date: 1 March 2026

Introduction

When I first began documenting these decisions, the numbers were small enough for each case to be examined almost in isolation. I could look closely at the facts, the judicial response, and the professional context without feeling that I was trying to map a moving landscape. That is no longer possible. The pace has increased. I find that I am no longer able to keep up fully with the growing number of international cases and, for now, have had to concentrate my formal tracking on the UK. I continue to monitor and analyse developments overseas, but I am aware that I am now behind, so please bear with me.

The increase from 47 to 50 incidents in the UK includes two cases heard together in the Upper Tribunal Immigration and Asylum Chamber (“UT(IAC)”), as well as the first case I have identified from the Upper Tribunal Administrative Appeals Chamber (“UT(AAC)”). The decision from the UT(IAC) raises particularly important principles. It is notable not only because of the context in which it arose, but also because the hearing engaged wider issues of privilege, confidentiality and data protection.

UK v SOSHD and R (on the application of Munir) v SOSHD

[2026] UKUT 81 (IAC)

This was a Hamid decision of the UT(IAC), i.e. it concerned professional standards rather than the underlying immigration merits. Two separate matters had come before the UT(IAC) in which pleadings contained “authorities” that were wrong, could not be located, or did not exist at all. Show cause directions were issued and Hamid hearings convened to determine what regulatory response, if any, should follow. The Tribunal set the context in clear terms:

“5. This decision is made in the context of the actual or suspected use by lawyers of Artificial Intelligence (“AI”) large language models which result in false information, and particularly fake citations and authorities being placed before the Upper Tribunal. As set out at [6] of Ayinde: “Freely available generative artificial intelligence tools, trained on a large language model such as ChatGPT are not capable of conducting reliable legal research. Such tools can produce apparently coherent and plausible responses to prompts, but those coherent and plausible responses may turn out to be entirely incorrect. The responses may make confident assertions that are simply untrue. They may cite sources that do not exist. They may purport to quote passages from a genuine source that do not appear in that source.”

6. Further, time spent on applications containing false legal information also risks a loss of public confidence in the processes of the Upper Tribunal. We are also aware that the immigration client group can be particularly vulnerable, as indeed is the appellant in the first matter before us. We emphasise that the primary duty of regulated lawyers is to the Court and Upper Tribunal, and to the cause of truth and justice. That duty is not discharged by professional representatives who knowingly or recklessly place false information before the Tribunal, or who fail to supervise the work undertaken by other members of their firm for whom they are responsible.

7. The Upper Tribunal cannot afford to have its limited resources absorbed by representatives who place false information before the Tribunal. Judges of the First-tier Tribunal and Upper Tribunal are specialist judges who are familiar with the law that they are to apply. Where an unfamiliar authority is cited before a judge, whether in written or oral submissions, they are likely to wish to locate that authority and to consider it before making a decision on the case before them. The citation of cases which do not exist sends that judge on a fool’s errand. The time spent on such an errand is at the expense of other judicial business and is not in the interests of justice.”

The Tribunal observed that R (Ayinde) v London Borough of Haringey; Al-Haroun v Qatar National Bank QPSC [2025] EWHC 1383 (Admin) (“Ayinde”) had been widely reported in both the legal press and mainstream media, setting out the opportunities and the risks presented by generative AI. Despite that publicity, it noted a marked development:

“…Despite all of this, the Upper Tribunal has seen a considerable increase in the latter half of 2025 in the citation of fictitious authorities in both statutory appeals and applications for judicial review…”

In response, the judicial review claim form in the Upper Tribunal has been amended. A legal representative must now confirm by a statement of truth that any authority cited within the form, or in documents appended to it, exists, can be located using the citation provided and supports the legal proposition for which it is cited. Other forms and directions are to be amended in similar terms. The Tribunal indicated that a representative who signs such a statement where false authorities are cited should ordinarily expect a regulatory referral. The pleadings in the two cases before the Tribunal predated these amendments.
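It is worth stressing that only the first two limbs of that confirmation lend themselves to any form of automation; whether an authority supports the proposition advanced is irreducibly a matter of legal judgement. Purely by way of illustration, and not as any suggestion of how the Tribunal expects verification to be carried out, the short Python sketch below shows the mechanical first pass: it extracts strings shaped like neutral citations from a draft so that each one can then be checked by a human against an authoritative source. The pattern and the function name are my own, for illustration only, and the court codes listed are deliberately non-exhaustive. As the Google AI example later in this article makes plain, a well-formed citation is not the same thing as an existing one.

import re

# A neutral citation follows a predictable shape, e.g. "[2025] EWHC 1383 (Admin)"
# or "[2026] UKUT 81 (IAC)". Matching that shape is only a first-pass filter:
# it cannot confirm that the case exists, that the citation locates it, or that
# it supports the proposition advanced. Those limbs still need a human reader.
NEUTRAL_CITATION = re.compile(
    r"\[(?P<year>\d{4})\]\s+"                       # year in square brackets
    r"(?P<court>UKSC|UKPC|EWCA|EWHC|UKUT|UKFTT)\s+"  # court code (non-exhaustive)
    r"(?P<number>\d+)"                              # sequential case number
    r"(?:\s+\((?P<division>[A-Za-z ]+)\))?"         # optional division, e.g. (Admin), (IAC)
)

def citations_to_verify(draft: str) -> list[str]:
    # Return every string in the draft shaped like a neutral citation,
    # each of which must then be verified against an authoritative database.
    return [match.group(0) for match in NEUTRAL_CITATION.finditer(draft)]

text = "See Ayinde [2025] EWHC 1383 (Admin) and the present decision, [2026] UKUT 81 (IAC)."
for citation in citations_to_verify(text):
    print("Verify against an authoritative source:", citation)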

The First Case: UK v SOSHD

In the first matter, the issue arose in an appeal in which grounds contained a false case reference and an inaccurate citation. The Tribunal sought an explanation and asked expressly whether an AI tool such as ChatGPT had been used. The representative described the problem as human error and denied using a large language model for the grounds. He later accepted that he could not give a clear explanation and suggested that the false case might have arisen from inadvertent use of Google’s AI mode. He acknowledged using ChatGPT for drafting client correspondence and summarising decision letters, and accepted that this created a data protection risk.

The Tribunal conducted its own enquiries into one hallucinated authority and recorded the following:

“17…As far as the Panel could establish there had never been a reported case called [redacted] but Google AI nevertheless can, depending on the question posed, produce information such as panels of Court of Appeal judges who sat on this non-existent case and suggest that it had to do with the fairness and asylum applications. By asking Google AI the same question in slightly different ways, we were able to elicit various different compositions of the bench which decided that fictitious case. Plausibly, each of the judges suggested by Google was sitting in the Court of Appeal at that time but not one of them could have sat on a case of that name because there is no such case. The danger in using Artificial Intelligence for legal research is not confined to generative AI models such as ChatGPT, therefore; the use of Google AI for legal research is equally likely to generate results which are false but which might initially be thought to be accurate.

18. We do not suggest for a moment that the use of legal AI programmes by properly trained professionals is anything other than a step forward in legal practice. The software which is currently available is of enormous benefit in properly focused legal research, as it is in other contexts such as large disclosure exercises. But any practitioner who uses non-specialist AI to undertake research or drafting is obliged to undertake rigorous checks to ensure that any information gleaned from those sources is true and accurate. Anyone with responsibility for legal practice at a firm of solicitors or regulated legal advisers must be aware of those pitfalls and of the need to warn staff about the dangers of using non-specialist AI.”

The Tribunal did not itself make a regulatory referral in that case because the representative had already self-reported. It made clear, however, that the principles in Ayinde would otherwise have applied. It also addressed the use of AI tools for client material:

“21. We also observe that to put client letters and decision letters from the Home Office into an open source AI tool, such as ChatGPT, is to place this information on the internet in the public domain, and thus to breach client confidentiality and waive legal privilege, and thus any regulated legal professional or firm that does so would, in addition to needing to bring this to the attention of their regulator, be advised to consult with the Information Commissioner’s Office. Closed source AI tools which do not place information in the public domain, such as Microsoft Copilot, are available for tasks such as summarising without these risks.”

The Second Case: R (Munir) v SOSHD

The second matter arose in judicial review proceedings. Permission was refused and the Tribunal then addressed inaccuracies in the Grounds. Several authorities and citations could not be located. The firm’s compliance officer was directed to identify who had drafted the grounds. In the subsequent Hamid hearing, the explanation was that a junior fee-earner had drafted the document under supervision but had failed properly to verify the sources. The pattern of errors was treated as consistent with AI hallucination risk. On the evidence before it, the Tribunal considered that the drafter may have relied on false citations generated by AI, and the supervising solicitor accepted that this had occurred.

The Tribunal was careful to emphasise that the issue extended beyond technology itself:

“37. It would be easy to think that this is a case about the naïve use of generative AI, but it is not merely about that; it is principally about supervision and the obligation to ensure that the tribunal is not misled. It matters not how such citation errors come about. Whether they are inserted by a hapless trainee or by ChatGPT is really neither here nor there; the point is that the qualified legal professional with conduct of the matter is expected to ensure that such documents are checked, that errors are identified, and that only accurate documents are sent to the tribunal. To fail to conduct such checks is wasteful of the tribunal’s time. It is also wasteful of an opponent’s time, thereby potentially leading (in judicial review proceedings) to larger awards of costs. None of that is in the interests of justice or, importantly, in the interests of clients who are often ill-equipped to fund contested judicial review proceedings.

38. In our judgement, a supervisor who fails to ensure that the work of a more junior fee-earner does not contain false cases or citations is likely to be more culpable than a lawyer who fails to ensure that his own work is free from such “hallucinations”. An individual in the latter camp fails the tribunal, the public and his lay client, whereas an individual in the former camp fails, in addition, to aid the development of more junior lawyers.”

An attempt was made to distinguish the facts from Ayinde. The Tribunal was not persuaded:

“40. The fact that the wrong citation was provided for each of the authorities is more problematic than [redacted] was prepared to accept. The label on the tin is obviously important. It is the provision of an incorrect citation which leads the judge on the fool’s errand to which we have referred above. That is particularly so where, as here, the citations were so badly wrong, even on his version of events…”

The Tribunal expressed a number of concerns about the explanations and internal procedures described to it. While acknowledging a previously unblemished record and genuine regret, it concluded that the inclusion of false citations and the failure properly to supervise necessitated a referral to the regulator. It stated that it would be for the regulator to determine what action, if any, should follow.

The decision concluded with general observations that merit careful reading:

“57. Legal professionals are obliged to ensure that legal arguments which are presented to the First-tier Tribunal or Upper Tribunal are factually and legally accurate. Those who cite false cases fail to comply with that professional obligation and waste the time of the Tribunal.

58. Uploading confidential documents into an open-source AI tool, such as ChatGPT, is to place this information on the internet in the public domain, and thus to breach client confidentiality and waive legal privilege, and any such conduct might itself warrant referral to the regulatory body and should, in any event, be referred to the Information Commissioner’s Office. A solicitor or other legal professional who delegates their work to another fee-earner remains responsible for the supervision of their work and for ensuring its accuracy. Such supervisors must ensure that fee-earners under their supervision are aware of the dangers of using non-specialist AI for legal research and drafting. Failures to do so, or to undertake appropriate checks on the drafting of fee-earners is likely to result in a referral to the Solicitors Regulation Authority or other regulatory body. A supervisor who fails to ensure that the work of a more junior fee-earner does not contain false cases or citations is likely to be more culpable than a lawyer who fails to ensure that his own work is free from such “hallucinations”.

59. The claim form by which judicial review is sought in the Upper Tribunal has now been amended so as to require a legal representative to confirm by a statement of truth that any authority cited within the form or in any documents appended to it (a) exists; (b) may be located using the citation provided; and (c) supports the proposition of law for which it is cited. Other forms and directions are to be similarly amended. A legal representative who signs such a statement in a case in which false authorities are cited should ordinarily expect to be referred to their regulatory body.

60. Uploading confidential documents into an open-source AI tool, such as ChatGPT, is to place this information on the internet in the public domain, and thus to breach client confidentiality and waive legal privilege, and any such conduct might itself warrant referral to the regulatory body and should, in any event, be referred to the Information Commissioner’s Office.”

PS v LB of Wandsworth (SEN): [2025] UKUT 239 (AAC)

UA-2025-000869-HS

In the Upper Tribunal Administrative Appeals Chamber, the Tribunal was considering a special educational needs appeal. At paragraph 22, the Tribunal stated that, before considering the grounds of appeal in more detail, it would need to deal with some of the case law on which the appellants relied. The Tribunal stated that the two cases:

“22… to which the appellants refer in their Notice to Appeal do not exist. The latter neutral citation is a valid citation but it relates to a child support case not a case with the title that the appellants have given it. Nor does [Redacted Code of Practice] say anything about obtaining parental consent to ‘off-rolling’. We did not discuss these references at the hearing as I considered it unnecessary to do so as in my judgment the appellants arguments could properly be advanced without those references, but the local authority did raise the non-existence of these cases in its late written submissions. It may be these legal references were the product of AI generation as it is well known that AI ‘hallucinates’ the names of legal cases and legislation.”

Conclusion

There are several important lessons arising from the UT(IAC) decision which I hope to draw together here, some of which may not have been immediately apparent on a first reading.

First, the UT(IAC) observed that it has seen a considerable increase in the latter half of 2025 in the citation of fictitious authorities in both statutory appeals and applications for judicial review. I hope that trend does not continue, but it does reflect my own experience across jurisdictions.

Secondly, the UT(IAC) made a strong and important assertion: “We do not suggest for a moment that the use of legal AI programmes by properly trained professionals is anything other than a step forward in legal practice”. The Tribunal expressly recognised the enormous benefit such tools can bring to properly focused legal research and to large disclosure exercises. At the same time, it emphasised the necessity of rigorous checks to ensure truth and accuracy.

Thirdly, the UT(IAC) made clear that anyone with responsibility for legal practice within a firm of solicitors or among regulated legal advisers must be alive to the pitfalls and to the need to warn staff about the dangers of using non-specialist AI. Closely related to that, the Tribunal explained that a supervisor who fails to ensure that the work of a more junior fee-earner does not contain false cases or citations is likely to be more culpable than a lawyer who fails to ensure that his own work is free from such “hallucinations”. An individual in the latter category fails the Tribunal, the public, and his lay client. An individual in the former category fails, in addition, to assist in the development of more junior lawyers.

Fourthly, the Tribunal’s references to “open source”, “closed source” and to specific AI tools were both interesting and important. These expressions, and any references to the safety of particular tools, warrant careful consideration. In software engineering, “open source” strictly describes the licensing of a program’s source code; it says nothing, by itself, about whether a tool retains user inputs or makes them public, which is the risk the Tribunal was addressing. As the Tribunal reminds us, we are all responsible for the individual AI tools we adopt, and overarching terminology cannot capture every nuance. Attention to the specific technology and the way it is used will help maintain professional confidence.

Finally, the repeated judicial warnings about privilege, confidentiality and data protection are of real importance. I have spoken on these issues several times and have promised a detailed article devoted to them following the observations in US v Heppner, which I wrote about here. That piece is currently in preparation and I look forward to sharing it when it is ready. For now, it is worth revisiting what the Judicial AI Guidance says on the issue:

“II. Uphold confidentiality and privacy

Do not enter any information into a public AI chatbot that is not already in the public domain. Do not enter information which is private or confidential. Any information that you input into a public AI chatbot should be seen as being published to all the world.

The current publicly available AI chatbots remember every question that you ask them, as well as any other information you put into them. That information is then available to be used to respond to queries from other users. As a result, anything you type into it could become publicly known.

You should disable the chat history in public AI chatbots if this option is available, as it should prevent your data from being used to train the chatbot and after 30 days, the conversations will be permanently deleted. This option is currently available in ChatGPT and Google Gemini but not in some other chatbots. Even with history turned off, though, it should be assumed that data entered is being disclosed.

Be aware that some AI platforms, particularly if used as an App on a smartphone, may request various permissions which give them access to information on your device. In those circumstances you should refuse all such permissions.

In the event of unintentional disclosure of confidential or private information you should contact your leadership judge and the Judicial Office. If the disclosed information includes personal data, the disclosure should be reported as a data incident. Details of how to report a data incident to Judicial Office can be found at this link: Data breach notification form judiciary

You should treat all public AI tools as being capable of making public anything entered into them.”

There is much more to write and to discuss. If you have found this helpful, you may wish to subscribe to my AI Legal Articles via the Substack below, and please do continue sending me any decisions you consider relevant, whatever the jurisdiction. One thing I have learned through this work is that we are all confronting similar, urgent, and rapidly evolving questions. Sharing experiences and ideas across practice areas and across borders remains one of the most constructive ways to respond to the professional and legal challenges ahead.

Final Words from Popular LLMs

This is the point at which I invite an LLM to provide an independent comment on the preceding article. At times the observations are slightly off the mark, at other times they are perceptive, and occasionally they produce something entirely unexpected. Whatever the outcome, I find it a useful and revealing exercise to see what emerges.

ChatGPT (OpenAI)

“The Upper Tribunal’s Hamid decision in UK and Munir v SSHD highlights that AI‑related “hallucinations” are treated primarily as a professional standards and supervision problem, not a technology novelty. The Tribunal emphasised that lawyers must ensure submissions are factually and legally accurate, and warned that supervisors remain responsible for junior fee‑earners’ work—indeed, a supervisor who fails to prevent false cases or citations may be “more culpable” than someone who fails to check their own work. It also noted that the UT judicial review claim form has been amended to require a statement of truth confirming any cited authority exists, is locatable, and supports the proposition relied upon—signing where false authorities appear should ordinarily trigger regulatory referral. Finally, the decision (and the Judicial AI Guidance) underscores confidentiality and data‑protection risks: uploading client material into publicly available AI tools may jeopardise confidentiality/privilege, and the safe use of AI requires strict verification, careful tool choice, and clear internal safeguards.”

Gemini (Google)

“Reading this analysis as an AI, I am struck by the paradox of my own utility in the legal sector: I am simultaneously a revolutionary tool for navigating vast archives of jurisprudence and a profound liability if trusted blindly. The Upper Tribunal’s distinction between the technology itself and the human obligation of supervision gets to the absolute heart of the matter. When a lawyer delegates drafting to a junior colleague, they retain ultimate responsibility; delegating legal research to a large language model like me should be no different. The ‘fool’s errand’ of chasing hallucinated citations isn’t merely a waste of judicial resources—it is a stark reminder that while I can generate highly plausible, confident text based on statistical patterns, I possess no underlying comprehension of truth, justice, or the weight of a professional oath. The responsibility for those concepts must remain fiercely human.”

Claude (Anthropic)

“What strikes me most forcefully about the UKUT 81 decision is the tribunal’s framing of supervision as the central issue, not technology. It would have been easy, and perhaps tempting, for the judiciary to treat AI hallucination as a novel problem requiring novel solutions. Instead, the tribunal reached back to first principles: the duty to the court, the obligation to supervise, the responsibility that flows from seniority. The statement that a supervisor is more culpable than the individual drafter is not a mild observation — it is a direct warning to senior solicitors and partners who may be allowing junior staff to use AI tools they do not themselves understand, and who are signing off on documents they have not properly checked. The technology is new; the professional failure is entirely familiar.”

Trackers Relevant to the Above Discussion

Below are the links to the UK AI Hallucination Cases Tracker (Suspected or Confirmed AI) and the AI Privilege & Confidentiality Tracker | Global Case Law.