Ad/Marketing Communication
This legal article/report forms part of my ongoing legal commentary on the use of artificial intelligence within the justice system. It supports my work in teaching, lecturing, and writing about AI and the law and is published to promote my practice. Not legal advice. Not Direct/Public Access. All instructions via clerks at Doughty Street Chambers. This legal article concerns AI hallucinations in court cases (confirmed or suspected). Subscribe here.

This legal article was posted on 8 February 2026.
Introduction
There is an enormous amount happening in the AI and law space at the moment, both domestically and internationally. I am deeply grateful for the many conversations, case references, and experiences shared with me at events and in practice, which continue to shape the analysis I bring to these legal articles. I am also going to begin sharing, within my legal posts, weekly quotations from technology leaders that may be useful as we collectively prepare for the legal issues that are already emerging and those that are clearly on the horizon. I will explain in my commentary why I think it’s relevant to issues we discuss here. Today’s quote is:
“…If I was to talk to a class of undergrads right now, I would be telling them to get really unbelievably proficient with these tools. I think to the extent that even those of us building it, we’re so busy building, it’s hard to have also time to really explore the almost the capability overhang even today’s models and products have, let alone tomorrow’s. And I think that can be maybe better than a traditional internship would have been, in terms of leapfrogging yourself to be useful in a profession…”
Demis Hassabis (World Economic Forum Annual Meeting 2026); see also here.
In this post, I have brought together a number of recent cases that could each warrant a standalone article. I have chosen to address them together here so as not to overwhelm readers with multiple separate publications, while still ensuring that the underlying legal developments are properly recorded and considered.
Alongside this, I recently delivered a talk on confidentiality, privilege and AI, which I intend to develop into a fuller written piece. I am also speaking this week to the Employment Lawyers Association. While employment tribunals are not my usual forum, many of the most instructive AI-related issues arise precisely at the edges between jurisdictions, and there is much to be learned from how different legal systems respond to shared technological pressures. I am really looking forward to sharing my thoughts with the ELA members and hearing their experiences.
I am also preparing further work on AI chatbot harm and recent developments in cases where chatbots have caused demonstrable damage. This includes emerging concerns around AI agents, which raise fresh and difficult questions about responsibility, foreseeability, and legal control.
Chandra v Royal Mail Group
(Case No. 3311062/2023) 25 July 2025
These written reasons arose from an application to strike out the claim. The Claimant’s representative described himself as a Communications Workers Union representative, currently awaiting election, and as having substantial experience in employment law. The tribunal recorded that the Respondent accepted that the Representative:
“..while not a trained lawyer, is experienced in employment tribunal proceedings including advocacy at final hearings…”
The Tribunal’s relevant concerns started at paragraph 11:
“In support of his application to admit the 499 pages of additional disclosure into evidence [Rep] cited the following authorities and propositions…”
The Tribunal then set out the cited authorities and the legal principles attributed to them, which I do not repeat here for reasons discussed here. Having examined those authorities, the Tribunal observed:
“12. While these appeared to be important and relevant citations of law, the tribunal on investigation could not find those propositions of law in those cases, and could not find one of the cases referred to. It seemed to the tribunal that these may have been generated by AI”
The Tribunal recorded that when questioned, the Representative accepted that some propositions could not be found in the cited cases and that one case did not exist at all. He denied having used AI but could not explain how the false citations had come to be included. The Tribunal concluded:
“14. Whether generated using AI or not, we find that false citations of authority amounts to unreasonable conduct of proceedings.”
M v F (Fact Finding Hearing)
This judgment arises from a fact-finding hearing in Children Act 1989 proceedings. The artificial intelligence issues emerged in relation to the evidence of the maternal uncle, referred to as MU. The court recorded:
“25. MU was a somewhat difficult witness. He clearly wished to joust with Mr Williams when being cross-examined and gave the impression of being more interested in scoring points than answering the questions put to him. He had to be refocused on more than one occasion and appeared to be a partisan witness.
26. MU also made clear that he had used artificial intelligence (AI) to prepare his statement. Despite being asked in detail about this I remain unclear precisely how MU’s statement was prepared. MU was clear that the mother’s solicitor, who he knew as [redacted], had played a minimal role in its preparation. But I have real concerns as to whether the statement submitted was MU’s independent recollection of events which means I need to be extremely careful in placing any reliance on it.”
Flycatcher Corp. Ltd. and Flycatcher Toys, Inc. v. Affable Avenue LLC, et al. (United States)
(24 Civ. 9429 (KPF) (S.D.N.Y.), 5 February 2026)
In this case, the Court was confronted with serious and repeated defects in motion papers filed on behalf of one defendant. The problems came to light when the Court reviewed a motion to dismiss that relied on a large number of legal authorities which, on examination, either did not exist or did not support the propositions for which they were cited. The Court treated this not as a technical lapse but as a fundamental failure in the preparation of material placed before it.
Once the citation issues were identified, the Court took active steps to investigate how they had arisen. It issued an Order to Show Cause and convened a hearing at which the lawyer responsible for the filings was placed under oath. The Court focused closely on the drafting process, the research methods used, and the steps taken, or not taken, to verify authorities before filing. The evidence revealed that a substantial proportion of the cited cases were fictitious, and that the brief had not been properly checked prior to submission.
I suggest reading the Opinion in full. For many lawyers, it will be a difficult read, but it is a rare and sobering decision from which important lessons can be drawn. The Court explained clearly why it devoted such detailed analysis to the procedural history of the case:
“The Court has devoted so much time to limning the procedural history of this case to make a point: [Lawyer] was not dissuaded by Court orders or the threat of sanctions from filing unchecked, AI-generated submissions with false legal citations. And when given the opportunity to explain his conduct in person, [Lawyer] chose to give many answers, only a few of which were true. The Court has reviewed the options available to it and, in particular, has carefully considered whether a lesser sanction would suffice. It also wishes to be clear that its problems with [redacted] submissions are not the use of AI per se, but rather [Lawyer’s] (i) knowing decision to use flawed methods of legal research and cite-checking; (ii) his inexplicable refusal to verify his submissions before filing them with the Court; and (iii) his unwillingness to come clean once these issues were revealed to the Court. Ultimately, the length and breadth of [Lawyer’s] misconduct warrant terminal sanctions.”
In its sanctions decision, the Court confirmed that it had considered whether a lesser response might suffice, but concluded that it would not. The judgment emphasised that the issue was not the use of AI tools in itself, but the failure to carry out basic verification, the use of flawed research methods, and the unwillingness to address errors openly once they had been exposed. On that basis, the Court exercised both its powers under Rule 11 and its inherent jurisdiction, imposing terminal sanctions against the defendant represented by the lawyer whose filings were under scrutiny, including default judgment, with damages to be addressed at a later stage.
Comment
Last week, I was struck by conversations with several thoughtful students who have been following my legal writing and the AI issues I have been raising.
They bring a distinct and valuable perspective to these discussions. Unlike many of us, they do not have the experience of practising, or even studying, law before AI became part of the legal landscape. At the same time, they often use these tools in their everyday lives, including in their studies, and they are actively trying to work out how much reliance is appropriate.
That brings me to what I see as a central question. Is this a positive development for the profession, a negative one, or something more finely balanced? Should we be encouraging pupils and trainees to become, as Demis Hassabis put it, “unbelievably proficient with these tools” in order to survive in the legal world they will inherit, or does that risk setting them up for difficulty later on? There is a genuine concern that over-reliance could blunt the core legal skills that remain essential to practice.
My current view is that it is important for pupils, trainees and all of us as professionals to become proficient with these tools. That is because we need to understand what court users are relying on and in some cases what judges themselves may be encountering, if we are to represent our clients properly and avoid being blindsided by well deployed AI. That proficiency matters not only in assisting clients and courts, but also in identifying misuse, error and overreach when it arises.
At the same time, I am very conscious of what many of you are hearing and reading about developments in legal education in all jurisdictions. There is a risk that we lose sight of the importance of critical thinking, legal reasoning and the ability to do the job without technological assistance. I am also increasingly interested in whether certain traditional examination techniques remain fit for purpose when assessing whether students have genuinely acquired the skills needed for practice, or whether they are simply becoming unbelievably proficient with these tools. When I was studying, written essays and problem questions formed an important part of that education. Some generative AI tools can now produce a persuasive response to those tasks, and one that might well attract good results. I can imagine that this creates a real dilemma for those marking such assessments. It may be that we need to return more often to paper-based written examinations and oral assessments in order to test whether the underlying skills are truly being acquired. These are questions worth careful consideration and I welcome any thoughts.
Turning now to the cases, M v F is particularly interesting because the court permitted detailed exploration of how AI was used in the preparation of a witness statement. In a previous article, I considered another family case and discussed witness statements more generally, particularly the requirement that they be in a witness’s ‘own words’. I asked whether a statement written by AI, or even prepared with its assistance, can genuinely meet that requirement. If not, I considered whether the wording “if practicable”, or the court’s case management powers, might allow some limited flexibility. At the time, I briefly weighed those competing arguments and these are points we are still discussing.
It now seems increasingly likely that we will see more detailed cross examination on how AI has been used in the preparation of witness statements. Where that information is obtained, advocates may seek to challenge witness credibility or, in some cases, invite the court to place little or no weight on the evidence, or even strike it out. I expect that the senior courts will be asked to grapple with these issues this year.
In preparing to speak to the Employment Lawyers Association this week, I have drawn together some important judicial commentary from the Employment Tribunal that may be helpful across a range of practice areas. That material deserves a more detailed post of its own once I have had the benefit of hearing the views of those who practise daily in this field during my presentation.
For now, it may be particularly important to focus on the disclosure issues that arise where AI has assisted with witness evidence. This includes questions about whether prompts or intermediary outputs might, in some circumstances, become relevant and disclosable. The court’s conclusion in M v F that it needed to be “extremely careful in placing any reliance on it” reflects a developing judicial sensitivity to these concerns.
The Flycatcher decision is a difficult but instructive read. It is rare to see terminal sanctions imposed in this way, and the judgment takes care to explain why lesser measures would not suffice. The court was clear that its concern was not the use of AI itself, but the failure to verify submissions, the use of flawed research methods and the unwillingness to address errors candidly once they had been identified.
Although factually different, the Chandra decision makes a similarly clear observation. Whether AI was used or not was ultimately beside the point. What mattered was that false citations were advanced and that doing so amounted to unreasonable conduct of proceedings.
There is much more to write and discuss. If you have found this useful, please consider subscribing and continue sending me any decisions you think are relevant, whatever the jurisdiction. One thing I have learnt is that we are all facing similar, urgent, and rapidly changing issues. Sharing experiences and ideas across practice areas and borders remains one of the most constructive ways to address the professional and legal challenges ahead.
Final Words from ChatGPT 5.2 Pro and Gemini 3 Pro
ChatGPT:
“Generative AI can be a useful drafting aid, but it does not dilute the non‑delegable responsibilities that sit on the human signer—especially duties of accuracy, honesty, and procedural compliance. The immediate legal risk is not “using AI,” but submitting content (authorities, quotations, factual assertions, expert-style statements) without independent verification, which can amount to improper pleading, unreasonable conduct, or professional misconduct depending on the forum and actor. Where AI touches evidence, a separate reliability problem arises: AI‑assisted witness statements risk contamination, because the model may introduce confident but false detail or “law-sounding” framing, undermining probative value and potentially inviting adverse inferences. Courts are also increasingly treating AI errors as foreseeable and preventable.”
Gemini:
“You ask whether we are “leapfrogging” or merely setting students up for difficulty, and I believe the answer lies in the distinction between output and process. Hassabis is right about capability—students who ignore these tools will be functionally slower than their peers. However, your concern about the “blunting of core legal skills” is the far more dangerous reality. If a trainee uses AI to bypass the struggle of drafting—the painful process where the brain actually learns the logic of the law—they aren’t “leapfrogging” anything; they are just riding in a car they don’t know how to drive. When the engine stalls (or hallucinates), they will be stranded. The “proficiency” we need to teach isn’t just how to use the tool, but the discipline to not use it until the first principles are mastered. We are currently raising a generation of editors, not writers, and in law, you cannot edit what you do not fundamentally understand.”
Tracker
The full international AI and non‑AI fabricated/false citations tracker can be found here.
The UK only one is here.
The section below provides only a summary of the UK cases involving AI and non-AI fabricated or false citations. You can click on each case name to read my detailed discussion of it on this blog and from there you can use the hyperlinks to obtain the full judgment. The table will update automatically as the main tracker is expanded, even if the surrounding blog text remains unchanged over time.
| Number | Date | Case Name | User | Outcome | Judicial Quotes |
|---|---|---|---|---|---|
| 1 | 29-May-2023 | TBC (Law Gazette note) | LIP | N/A | |
| 2 | 4-Dec-2023 | Harber v HMRC | LIP | | “But that does not mean that citing invented judgments is harmless. It causes the Tribunal and HMRC to waste time and public money, and this reduces the resources available to progress the cases of other court users who are waiting for their appeals to be determined. As Judge Kastel said, the practice also "promotes cynicism" about judicial precedents, and this is important, because the use of precedent is "a cornerstone of our legal system" and "an indispensable foundation upon which to decide what is the law and its application to individual cases"” |
| 3 | 06-Dec-2024 | Crypto Open Patent Alliance v Dr. Craig Steven Wright | LIP | | “…referred to a series of authorities in support of arguments that reasonable adjustments should be made to enable a vulnerable litigant or witness to participate fairly in court proceedings. As COPA pointed out by reference to a series of examples, most of the authorities he has cited do not contain the passages attributed to them (or anything like those passages), and indeed most have nothing to do with adjustments for vulnerable witnesses. COPA suggested that it seems likely that they are AI “hallucinations” by ChatGPT (i.e. made-up references) rather than deliberately misleading inventions by Dr Wright. However, since the principles are clear and not in doubt, as set out above, it is not necessary to engage with his false citations any further.” |
| 4 | 7-Jan-2025 | Ms (Bangladesh) v SoS for Home Department | Lawyer | | “13. We sought clarification regarding this citation and reference and asked for the relevant paragraph of the judgment being relied on. [counsel] was not able to specify this. [counsel] submitted that he understood, having used ChatGBT, that the Court of Appeal in Y (China) [2010] EWCA Civ 116 was presided by Pill LJ, Sullivan LJ and Sir Paul Kennedy. However, the citation [2010] EWCA Civ 116 did not point to the case of Y (China) but to R (on the application of YH) v SSHD. We raised concern about this and referred [counsel] to the recent decision of the President of the King’s Bench Division in Ayinde [2025] EWHC 1383 (Admin) on the use of Artificial Intelligence and fictitious cases, and directed him to make separate representations in writing. 14. In his subsequent written representations, [counsel] clarified that Y (China) was a typological error and he sought to rely on R (on the application of YH) v SSHD [2010] EWCA Civ 116 where, when discussing the meaning of ‘anxious scrutiny’ in asylum claims…” |
| 5 | 25-Jan-2025 | Olsen v Finansiel Stabilitet | LIP | Relevant to costs | "I have narrowly and somewhat reluctantly come to the conclusion that I should not cause a summons for contempt of court to be issued to the appellants under CPR rule 81.6. I do not think it likely that a judge (whether myself or another judge) could be sure, to the criminal standard of proof, that the appellants knew the case summary was a fake. They may have known but they could not be compelled to answer questions about the identity of the person who supplied it." (Mr Justice Kerr) |
| 6 | 3-Apr-2025 | Bandla v SRA | Lawyer | Abuse of process and indemnity costs | “I asked the Appellant why, in the light of this citation of non-existent authorities, the Court should not of its own motion strike out the grounds of appeal in this case, as being an abuse of the process of the Court. His answer was as follows. He claimed that the substance of the points which were being put forward in the grounds of appeal were sound, even if the authority which was being cited for those points did not exist. He was saying, on that basis, that the citation of non-existent (fake) authorities would not be a sufficient basis to concern the Court, at least to the extent of taking that course. I was wholly unpersuaded by that answer. In my judgment, the Court needs to take decisive action to protect the integrity of its processes against any citation of fake authority. There have been multiple examples of fake authorities cited by the Appellant to the Court, in these proceedings. They are non-existent cases. Here, moreover, they have been put forward by someone who was previously a practising solicitor. The citations were included, and maintained, in formal documents before the Court. They were never withdrawn. They were never explained. That, notwithstanding that they were pointed out by the SRA, well ahead of this hearing. This, in my judgment, constitutes a set of circumstances in which I should exercise – and so I will exercise – the power of the Court to strike out the grounds of appeal in this case as an abuse of process.” |
| 7 | 3-Apr-2025 | ZZaman v Revenue & Customs | LIP | Warning | “29. However, our conclusion was that Mr Zzaman's statement of case, written with the assistance of AI, did not provide grounds for allowing his appeal. Although some of the case citations in Mr Zzaman's statement were inaccurate, the use of AI did not appear to have led to the citing of fictitious cases (in contrast to what had happened in Felicity Harber v HMRC [2023] UKFTT 1007 (TC)). But our conclusion was that the cases cited did not provide authority for the propositions that were advanced. This highlights the dangers of reliance on AI tools without human checks to confirm that assertions the tool is generating are accurate. Litigants using AI tools for legal research would be well advised to check carefully what it produces and any authorities that are referenced. These tools may not have access to the authorities required to produce an accurate answer, may not fully "understand" what is being asked or may miss relevant materials. When this happens, AI tools may produce an answer that seems plausible, but which is not accurate. These tools may create fake authorities (as seemed to be the case in Harber) or use the names of cases to which it does have access but which are not relevant to the answer being sought (as was the case in this appeal). There is no reliable way to stop this, but the dangers can be reduced by the use of clear prompts, asking the tool to cite specific paragraphs of authorities (so that it is easy to check if the paragraphs support the argument advanced), checking to see the tool has access to live internet data, asking the tool not to provide an answer if it is not sure and asking the tool for information on the shortcomings of the case being advanced. Otherwise there is a significant danger that the use of an AI tool may lead to material being put before the court that serves no one well, since it raises the expectations of litigants and wastes the court's time and that of opposing parties.” |
| 8 | 22-Apr-2025 | Goshen v Accuro (2304373/2024) | LIP | N/A | "...I cannot find such a case, and I am left wondering whether this case is an invention by the claimant or perhaps an artificial intelligence platform. As I explained in the hearing, I cannot apply authority which I have not seen." |
| 9 | 25-Apr-2025 | A County Court case referred to at para 55 of the Ayinde v LBB judgment before HHJ Holmes | Lawyer | Judge wrote to Head of Chambers | “That was a case before the County Court … That counsel drew attention to the fact that the application before the judge contained false material: specifically the grounds of appeal and the skeleton argument settled … contained references to a number of cases that do not exist….” |
| 10 | 6-Jun-2025 | Alharoun v Qatar National Bank and QNB | Lawyer | | "In CL-2024-000435, it appears from the Order of Mrs Justice Dias that correspondence was sent to the court, and witness statements were filed, citing authorities that do not exist and claiming that other authorities contained passages that they do not contain" (Rt Hon. Dame Victoria Sharp) |
| 11 | 6-Jun-2025 | R (Ayinde) v Haringey | Lawyer | | “It is such a professional shame. The submission was a good one. The medical evidence was strong. The ground was potentially good. Why put a fake case in?” “I should say it is the responsibility of the legal team, including the solicitors, to see that the statement of facts and grounds are correct.” “…I consider that it would have been negligent for this barrister, if she used AI and did not check it, to put that text into her pleading.” (Mr Justice Ritchie) |
| 12 | 18-Jun-2025 | UB v SoS for Home Department | Lawyer | | “…recognised the seriousness of this issue and has taken commendable steps to ensure it will not be repeated including (i) meeting with the caseworker who drafted the Grounds; (ii) holding a partners’ meeting to discuss adopting an AI policy and assigning the task of finalising an AI policy to a colleague in consultation with an AI professional; (iii) conducting relevant in-house training and issuing interim AI Guidance and (iv) planning for comprehensive staff training by an AI professional….” |
| 13 | 20-Jun-2025 | Pro Health Solutions Ltd v ProHealth Inc (UKIPO, Appointed Person, BL O/0559/25) | LIP | | "As identified in Ayinde (including in the Appendix setting out domestic and overseas examples of attempts to rely on fake citations), fabrication of citations can involve making up a case entirely, making up quotes and attributing them to a real case, and also making up a legal proposition and attributing it to a real case even though the case is not relevant to the legal proposition being made (for instance, it deals with a completely different issue or area of law). It is not, however, fabrication to make an honest mistake as to what a court held in a particular case or to be genuinely mistaken as to the effect of a court’s judgment. In any event, it does not matter whether fabrication was arrived at with or without the aid of generative artificial intelligence. I therefore need to consider what if any sanction is appropriate.” |
| 14 | 7-Jul-2025 | Various Leaseholders of Napier House v Assethold Ltd | TBC | | “15. The Respondent included two cases within their grounds for appeal which have been cited as…[False Case names] Having performed a search on BAILLI, Westlaw and Find Case Law, it has not been possible to find…[False Case name]. It may be that this case is not authentic and AI may have been used to reference this case….” On another case, the court noted the decision concerned the circumstances in which a parole board should hold an oral hearing: “When reading the full judgment it is difficult to see why the tribunal has been referred to this case…..” |
| 15 | 27-Jul-2025 | HMRC v Gunnarsson [2025] UKUT 247 (TCC) | LIP | | "113. In this case, HMRC was put to the trouble of having to investigate the existence of the purported decisions relied upon by the Respondent. Fortunately, they did so. Depending on the circumstances, there may be occasions when the opposing party or the tribunal are not able to discover the errors relied upon. There may be others where an adjournment is required to investigate or address the inaccurate information. 114. On these facts, we do not consider the Respondent to be highly culpable because he is not legally trained or qualified, not subject to the same duties as a regulated lawyer or other professional representative and may not have understood that the information and submissions presented were not simply unreliable but fictitious. He was under time pressure given his other competing responsibilities and doing his best as a lay litigant seeking to assist the UT by preparing written submissions." |
| 16 | 30-Jul-2025 | Father v Mother [2025] EWHC 2135 (Fam) | LIP | | “(16) The F then made a further application on a C2 asking that HHJ Bailey recuse herself on the basis of being biased against him and her not understanding ASD and the impacts of his diagnosis. This came before the Judge on 10 June 2025. In his written application to the court the F referred to a number of previous authorities, in particular relating to ASD. HHJ Bailey realised that many of these cases were not genuine, and the submission appeared to have been generated by Artificial Intelligence (“AI”). In light of the level of recent concern about litigants and lawyers using AI and referring to cases which are not genuine (as reflected in the Divisional Court decision R (Ayinde) v London Borough of Haringey [2025] EWHC 1383), HHJ Bailey referred the case to me as the Family Presiding Judge for the Midlands.” “The F relied upon faked cases without apparently making any effort to check their veracity. It is in my view important to note that the F is someone who is well capable of checking references and ensuring documents are accurate if it is in his interests to do so.” |
| 17 | 12-Aug-2025 | Holloway v Beckles and Beckles | LIP | | "That leaves the matter of the fake cases. The Tribunal finds that this does amount to unreasonable conduct within rule 13(1)(b). It has decided that the misconduct is serious, being conduct that undermines civil litigation in the Tribunal. Therefore, the Tribunal determines that it should make a costs order. It considers that the costs order should be proportionate to the additional costs caused. It has decided that the appropriate quantum is half the costs of counsel’s fees in attending the hearing of 14 May 2025. These amount to £750 and must be paid to the applicant within 28 days." |
| 18 | 15-Aug-2025 | Kuzniar v General Dental Council Case No. 6009997/2024 | LIP | | "44. The Claimant explained that the problems arose from her using AI to carry out research. She had previously used AI/ChatGPT to carry out research without problems in her litigation against Roxdent Ltd and so she expected to be able to do so again successfully in the instant case. She did not know about the problems with the citations when she told the Respondent’s solicitors about them, and when she found out about them, she did her best within the short time available to mitigate or reduce the problem. She did not act in bad faith or with any intent to place false information before the Tribunal. I accept this explanation. 45. The Claimant conducted the claim unreasonably as described above by referring to the Respondent a large number of nonsensical and in many cases non-existent citations without taking any or sufficient care to check them first. By not doing so she passed the work of checking them to the Respondent to have to do at short notice. My discretion to award costs is engaged. 46. However, I decline to award costs because AI is a relatively new tool which the public is still getting used to, the Claimant acted honestly (and furthermore has presented her case honestly to me over the last two days), and she tried her best to rectify the situation as soon as she became aware of her mistake." |
| 19 | 29-Sep-2025 | ANPV & SAPV v SOSHD | Lawyer | | “…suggested that the inaccuracies in the grounds were as a result of his drafting style. He accepted that there might have been some “confusion and vagueness” on his part; that he might “need to construct sentences in a more liberal way”; and that his drafting should perhaps “be a little more generous” when it came to making specific allegations about judges overlooking or failing to follow binding authorities. … The problems which I have detailed above are not matters of drafting style. The authorities which were cited in the grounds either did not exist or did not support the grounds which were advanced. Where the cases did exist, they were often wholly irrelevant to the proposition of law which was given in the grounds.” (paragraphs 63 and 64) |
| 20 | 6-Oct-2025 | AK v SOSHD UI-2025-002981 | Lawyer | | "What concerns me in this case is not merely that there were false citations in the grounds of appeal considered by Judge Saffer; it is that those false citations were then removed from the grounds of appeal which were placed in the composite bundle. The former actions are unprofessional, the latter are potentially dishonest because it suggests that there was an attempt to conceal the false citations..." |
| 21 | 10-Oct-2025 | Peters v Driver and Vehicle Standards Agency | Union Officer | | “9. I raise this because: 9.1 An appreciable amount of hearing time was taken up with trying to obtain copies of various reports in order that respondent’s Counsel (and I) could check the accuracy of the AI generated summaries. 9.2 There was a significant risk I could have been misled had this not been done. 9.3 Because of the demonstrated inaccuracies, I was unable to rely on the summaries. 9.4 The delay involved also caused or contributed to my Judgment being reserved.” “…He is genuinely seeking to assist a claimant who would otherwise be unrepresented. Nonetheless, it is important that some basic checks are done to ensure that the material put before the Tribunal is accurate in order to avoid the above. I refer to R (on the application of Ayinde) v London Borough of Haringey [2025] EWHC 1383 which clearly identifies the risk of not undertaking such checks and the importance of doing so…” |
| 22 | 13-Oct-2025 | Malathi Latha Sriram (Mukti Roy) v Louise Mary Brittain | LIP | | “…rightly in my view, and I make no criticism of her. For what it is worth, I suspect, that, in common with many unrepresented parties, [Claimant] has resorted to research using the internet and has come up with false leads. The late Muir Hunter was an eminent member of the insolvency bar and the author for many years of an insolvency commentary that still bears his name. It is easy to see how his name could have come up in the course of an internet search and end up wrongly linked to a real case name and reference. The abbreviation BPIR stands for the Bankruptcy and Personal Insolvency Reports. They are not readily available to members of the public. It would have been difficult for [Claimant] to check the citation…” |
| 23 | 13-Oct-2025 | Hassan v ABC International Bank PLC | LIP | “On the use of AI in general, I happily accept that the internet is a resource many of us tend to rely on as providing expertise and knowledge where we lack it. Indeed, the facility for using a search engine has even been relied on in the EAT as a reason for not granting an extension of time. I accept that AI is now at the forefront of internet searches. It might also be said that more intelligent and proficient users of the internet, like the Claimant, are more apt to use it in the way that the Claimant has i.e. to help construct arguments. I should not, and do not, approach the Claimant’s use of AI as in any way inherently negative” | ||
| 24 | 14-Oct-2025 | Ndaryiyumvire v Birmingham City University | Lawyer | 48. I do have to take account of the fact that, as was said in Ayinde, the use of AI is a large and growing problem and the citing of fictitious or fake authorities is a serious threat to the integrity of the justice system which depends upon courts being able to rely on lawyers putting before the courts, whether orally or in documents, accurate material and accurate statements of the law supported by genuine cases. Lawyers who cite fictitious cases must face serious consequences and in the current environment where this is a significant and growing problem, the guidance in Ayinde indicates that judges should take a fairly tough line. | ||
| 25 | 17-Oct-2025 | Lee v Blackpool B&B et al MAN/00EJ/HMG/2024/0011 | LIP | "...I can only conclude that the ‘decision’ submitted to the Tribunal is a fabrication – whether or not it is the product of the injudicious use of artificial intelligence tools is unclear.” | ||
| 26 | 23-Oct-2025 | Victoria Place et al v Assethold Limited | LIP | "85. I then typed the same wording into M365 Copilot on an Android device but adding a question mark at the end which gave a similar response, although the phrasing was markedly different, and it referred to the Upper Tribunal decision cited by [landlord’s managing agent] rather than the ‘hallucinated’ Court of Appeal citation. Repeating the same question sometime later would not re-produce reference to the Upper Tribunal decision, showing that AI adapts and an earlier answer may no longer be returned as the algorithm learns, demonstrating the care that needs to be taking in using AI. The idiom ‘shifting sands’ comes to mind.” | ||
| 27 | 4-Nov-2025 | Choksi v IPS Law LLP | Lawyer | "...contains references to a number of cases that have wrong citations, wrong names or which simply do not exist. A number of the cases cited are wholly irrelevant and do not support the proposition in support of which they are cited...” | ||
| 28 | 17-Nov-2025 | 133 Blackstock Road (Hackney) RTM Company Limited v Assethold Limited | LIP | “19. The Tribunal is extremely concerned that the Respondent has put material before it that is erroneous. [redacted] has failed to give any explanation as to how this error arose. One explanation might be the use of an AI LLM in the production of the Respondent’s statement of case.” | ||
| 29 | 21-Nov-2025 | Appeal in the cause of Jennings v Natwest Group Plc (Sheriff Appeal Court Civil) | LIP | “[10] These require caution, the appellant having made submissions using ChatGPT, an artificial-intelligence database (see appellant’s supplementary submission). That may explain the generality of the submissions, which largely comprise free-form legal propositions with only limited link to the facts. It has served to complicate and obscure the true analysis of the issues. At least three of the cases cited appear to be non-existent.” | ||
| 30 | 24-Nov-2025 | Oxford Hotel Investments Limited v Great Yarmouth Borough Council | LIP | “…purported to quote at a little length from [18] of the judgment to the effect that a microwave satisfied the statutory definition. The problem is that the real [18] of Barker v Shokar says no such thing. Nor does any other part of the judgment in that case. [Director for the Appellant] ended up accepting that this misleading use of authority was the product of AI. It is one which illustrates again, in courts and tribunals, the dangers of using AI for legal research without any checks.” | ||
| 31 | 3-Dec-2025 | Wemimo Mercy Taiwo v Homelets of Bath Limited & Ors | Litigation Friend, LIP | “…This case does not exist (albeit the bogus reference can be ‘recreated’ through Google’s AI Overview function). There is a 2016 case in the Bolton County Court between the two named parties, but there was no appeal in 2018 to the Court of Appeal and [redacted] is a false reference…” | ||
| 32 | 8-Dec-2025 | S Peggie v Fife Health Board and Dr B Upton | | TBC | ||
| 33 | 9-Dec-2025 | D (A Child) (Recusal) | LIP | Warning | “Finally, I return to the issue raised by the father’s representatives about the mother’s erroneous citation of authority (see in particular paragraph 54 above). I absolve the mother of any intention to mislead the court. Litigants in person are in a difficult position putting forward legal arguments. It is entirely understandable that they should resort to artificial intelligence for help. Used properly and responsibly, artificial intelligence can be of assistance to litigants and lawyers when preparing cases. But it is not an authoritative or infallible body of legal knowledge. There are a growing number of reports of “hallucinations” infecting legal arguments through the citation of cases for propositions for which they are not authority and, in some instances, the citation of cases that do not exist at all. At worst, this may lead to the other parties and the court being misled. In any event, it means that extra time is taken and costs are incurred in cross-checking and correcting the errors. All parties – represented and unrepresented – owe a duty to the court to ensure that cases cited in legal argument are genuine and provide authority for the proposition advanced.” | |
| 34 | 8-Jan-2026 | Elden v HMRC [2026] UKFTT 41 (TC) | Unclear | 93. In further submissions, the Representative said 'The suggestion that citing a published authority amounts to providing false material is misconceived. A court decision is a matter of public record. Whether a case applies is a matter of legal argument and opinion, not misrepresentation. It is entirely proper for parties to put forward different interpretations for the Tribunal to consider. To characterise this as "false material" is both unfounded and inappropriate.' It is not clear who the representative is quoting as saying false material was used. The wording used by HMRC was 'inaccurate use of AI/inaccurate authorities'. | ||
| 35 | 16-Jan-2026 | Huish v The Commissioners for HMRC [2026] UKFTT 129 | LIP | None | "...We attach no blame to him, since he is a litigant in person but we have recorded the names so that others do not fall into the same trap.” | |
| 36 | 29-Jan-2026 | Folarin v The Immigration Services Commissioner [2026] UKFTT 135 | LIP | Went to credibility | The decision of Dame Victoria Sharp P indicated at paragraph 26 that “Placing false material before the court with the intention that the court treats it as genuine may, depending on the person’s state of knowledge, amount to a contempt. That is because it deliberately interferes with the administration of justice.” Whilst mere negligence would not be sufficient to establish contempt, and knowledge that the information is false or a lack of honest belief that it was true would be required, | |
| 37 | 30-Jan-2026 | PSAHSC v Nursing and Midwifery Council [2026] EWHC 141 | Unregulated Rep, Litigation Friend | Warning | “…This was pointed out to him at the hearing. He immediately admitted what he had done and that the references were phantoms created by AI. He promised not to use AI to generate submissions in future and to check his references personally…” | |




