Thirty-one UK Cases of Hallucinated Citations, AI and Otherwise: “It is not ‘co-operating with the Tribunal’ to neither confirm nor deny the use of AI”

"93. In further submissions, the Representative said 'The suggestion that citing a published authority amounts to providing false material is misconceived. A court decision is a matter of public record. Whether a case applies is a matter of legal argument and opinion, not misrepresentation. It is entirely proper for parties to put forward different interpretations for the Tribunal to consider. To characterise this as "false material" is both unfounded and inappropriate.' It is not clear who the representative is quoting as saying false material was used. The wording used by HMRC was 'inaccurate use of AI/inaccurate authorities'.

Ad/Marketing Communication

This legal article/report forms part of my ongoing legal commentary on the use of artificial intelligence within the justice system. It supports my work in teaching, lecturing, and writing about AI and the law and is published to promote my practice. Not legal advice. Not Direct/Public Access. All instructions via clerks at Doughty Street Chambers. This legal article concerns AI hallucination cases. Subscribe here.

AI hallucination cases in the UK

Update

Thank you to everyone who has shared this post. Readers have sent me further hallucination cases, taking the total beyond 31. These have been added to the tracker below and the update to this post can be read here.

Introduction

This post records the thirty-first case in the UK that I have added to my ongoing list of decisions involving hallucinated citations, whether generated by AI or arising through other forms of fabrication or error. If you would like to read my analysis of the previous thirty cases, you can click on the case that interests you in the table below, which will take you directly to my analysis.

Elden v HMRC

[2026] UKFTT 41 (TC)

The case arose from an application by HMRC to strike out an appeal against closure notices. The factual and procedural background is set out fully in the decision, but this post will focus on the issues relevant to AI and inaccurate citations, from paragraph 59 onwards:

“On 18 September HMRC emailed the Tribunal, the Appellant and his Representative raising concerns about the use of AI in the Appellant’s skeleton argument, and the inaccurate citing of authorities within it. The Tribunal responded that these matters should be brought to the attention of the Tribunal at the hearing itself.”

The Appellant explained that he had not contributed to the preparation of the skeleton argument, which HMRC believed had been produced with heavy use of AI. When this was put to the representatives:

“73…he did not contribute to the preparation of the Skeleton Argument. HMRC believe AI was heavily used to produce the document. This was put to the Appellant’s Representatives by the Appellant, and they neither confirmed nor denied the use of AI, but he stated they stood by everything in the document save where the Appellant himself contradicted it.”

The Tribunal then set out the Appellant’s submissions and the cases relied on. From paragraph 89, the Tribunal addressed summaries of five cases included in the skeleton argument. These summaries contained no direct quotations. HMRC contended that the summaries were inaccurate and produced with the assistance of AI. The Appellant maintained that he had not drafted the document. As conveyed through the Appellant at the hearing:

“92 …the representative neither confirmed nor denied the use of AI, and said they stood by the summaries and it didn’t matter whether AI was used or not.”

In further submissions, the representative argued that:

“93…The suggestion that citing a published authority amounts to providing false material is misconceived. A court decision is a matter of public record. Whether a case applies is a matter of legal argument and opinion, not misrepresentation. It is entirely proper for parties to put forward different interpretations for the Tribunal to consider. To characterise this as ‘false material’ is both unfounded and inappropriate.” It is not clear who the representative is quoting as saying false material was used. The wording used by HMRC was ‘inaccurate use of AI/inaccurate authorities’.

The Tribunal responded:

“94…To some extent, the Tribunal agrees that whether or not AI was used is not directly relevant. AI is a powerful tool that can be used to great effect, but the human who relies on its use bears the responsibility for the accuracy. At the same time, because AI is known to ‘hallucinate’, that is, to generate false or inaccurate information and present it as if it were factual, if AI has been used to produce a document and flaws are found in that document, particularly if the flaws, once pointed out, are not corrected, this leads to the rest of the document being treated with great caution. This then has a knock on effect on the time taken to consider and check all relevant points.”

The Tribunal noted that the use of AI in legal proceedings is a rapidly developing area of case law and referred to earlier decisions, including Harber, Zzaman and Ayinde v London Borough of Haringey (links to these are in the table below). The Judge explained that it was therefore appropriate to test the accuracy of the summaries in order to determine whether the duty to verify AI output had been met.

Having analysed each cited case, the Judge observed: “I am confident that no human being competent to summarise case law could summarise this case…” The Tribunal found that the summaries had been produced using AI and had not been verified with the degree of care required for tribunal submissions. This was said to raise serious implications for the administration of justice and public confidence. Tribunal time, as a public resource, had been significantly wasted by the need to analyse authorities that were either irrelevant or inaccurately summarised.

Following circulation of a draft decision, the Tribunal entered into correspondence with the Appellant’s representatives. The Tribunal, upon reviewing the draft judgment, asked the advisers to explain why the summaries in the skeleton were inaccurate. Only at that point was the use of AI acknowledged (it helped with conciseness) and it was accepted that the case references needed clarification.

A subsequent reply sought to justify the relevance of two of the cases but did not directly address why the summaries were inaccurate, beyond referring to time pressures. The Tribunal noted that this did not meaningfully engage with the concerns identified. A further explanation was later provided, attributing the errors to late instruction, urgent refocusing of the skeleton argument, human editing and document control mistakes. The Tribunal set out these explanations in detail but remained concerned.

The Tribunal also considered the Appellant’s conduct of the proceedings more broadly. While noting that he had largely left matters to his advisers, the Tribunal observed that he had been copied into most correspondence and could not have been unaware of missed deadlines. He appeared not to have appreciated the central importance of a witness statement in advance of the hearing.

In its findings of fact, the Tribunal’s view was:

“139. The case summaries were produced using AI and they have not been verified for accuracy with sufficient care as should be used when producing submissions for a Tribunal hearing. This lack of sufficient care amounts to professional incompetence on the part of any regulated individual or firm involved in the production of the skeleton argument.”

The Tribunal also identified:

“ (3) Obfuscation in relation to answering the question about the use of AI in the production of the skeleton argument. It is not ‘co-operating with the Tribunal’ to neither confirm nor deny the use of AI. A straightforward answer detailing the use of anything that could amount to the use of AI would not be difficult.”

Despite these findings, the Tribunal declined to strike out the appeal at that stage. It took into account the absence of a hearing date, the Appellant’s attendance at the hearing, and his stated intention to comply with future directions. The Tribunal instead imposed a series of Unless Orders.

These included requirements for a witness statement within a specified timeframe, confirmation of receipt of the hearing bundle, and strict conditions governing any future skeleton argument. Those conditions required the provision of full judgments, direct quotations with precise references, explanations of relevance, and statements of truth identifying who had checked each factual assertion or case summary and on what basis.

The strike-out application was therefore refused, although the Tribunal made clear that non-compliance with the directions would result in automatic strike out without further reference to the parties. For completeness, the Tribunal noted that by the time the decision was published, the witness statement had already been produced.

Comment

This case demonstrates with particular clarity the importance of being open with the court about how documents are prepared. The Tribunal was explicit that AI itself is not the issue. The difficulty arises where its use is obscured, its output is not properly verified, and inaccuracies are defended rather than corrected. Here, the Tribunal stopped short of striking out the appeal, but only after making serious findings and imposing robust safeguards.

As regular readers will notice, I have again refrained from setting out a detailed analysis of the false citations and summaries contained in the judgment. That decision reflects an ongoing concern of mine about inadvertently worsening the very problem these AI hallucination cases expose. As I have written elsewhere, well-intentioned judges sometimes set out hallucinated cases and their false legal principles in full in order to demonstrate the scale of the difficulty. The risk is that doing so may unintentionally compound the problem, because AI-generated inaccuracies can then become indirectly integrated into the wider legal record.

I have discussed this a few times now with colleagues both on this blog and in webinars. Some interesting observations were also made on LinkedIn on this issue. Not everyone agrees on the correct approach. Much of the discussion centred on the tension between transparency and protection. On one view, there is a strong instinct to spell out fabricated authorities and their purported principles in full, both to explain precisely what has gone wrong and to ensure accountability. On the other, there is an equally pressing concern about safeguarding the integrity of the body of law itself.

For now, I remain concerned. I am aware that large language models access and draw from my legal writing, and for that reason I continue to keep my full AI hallucinations database private. I do so because I am concerned that, if it were publicly searchable, search engines and AI tools would begin to republish false information and confer unintended legitimacy through repetition. If any readers wish to see this research, they are welcome to contact my clerks.

I have included below a section from my global hallucination tracker (AI and non‑AI hallucinated citations) that focuses solely on the UK position. I would be very grateful if readers who are aware of further cases not yet listed could share them with me, so that I can keep the tracker as accurate and up to date as possible. There is more analysis to follow shortly. If you find these articles helpful, please consider subscribing below.

Final Words from ChatGPT 5.2 Pro and Gemini 3 Pro

This is where I invite premium AI models to comment on or critique the preceding discussion. Below are the most interesting aspects of each response.

ChatGPT:

“What stands out is how the Tribunal ties together candour, verification, and confidence in the process: AI can be a useful drafting tool, but the person putting the document forward carries responsibility for accuracy, and “neither confirm nor deny” responses about AI use only deepen suspicion when inaccuracies appear. Your further concern about whether judgments should reproduce fabricated cases captures the core dilemma in AI hallucination cases: transparency and accountability push towards spelling errors out in full, but doing so risks laundering false “law” into the searchable ecosystem that both humans and models draw from”

Gemini:

“The ruling in Elden v HMRC underscores a critical juncture in legal practice where the convenience of generative tools collides with the absolute imperative of professional candour. The Tribunal’s sharp criticism of the representatives’ initial refusal to confirm the origin of their submissions reveals that the core issue is not just the technology itself, but the attempt to evade responsibility for verifying its output. As AI hallucination cases continue to accumulate, they expose a systemic vulnerability: when legal professionals fail to act as a rigorous filter for automated content, they do not merely present weak arguments—they actively waste judicial resources and erode the foundational trust of the adversarial process.”

Tracker

The full international AI and non‑AI fabricated/false citations tracker can be found here; the section below provides only a summary of the UK cases involving AI and non‑AI fabricated or false citations. You can click on each case name to read my detailed discussion of it on this blog, and from there you can use the hyperlinks to obtain the full judgment. The table will update automatically as the main tracker is expanded, even if the surrounding blog text remains unchanged over time.

Date | Case | Judicial Quote
29-May-2023 | TBC (Law Gazette note) | The Gazette understands that the following day, the LiP returned to court with four case citations, each backing the point they were trying to make. On closer inspection by the barrister, it transpired that one case name had simply been fabricated, while the other three were real case names but with the relevant cited passages being completely different to the judgment in each. For all four citations, the paragraphs quoted were completely fictitious, though appearing completely legitimate.
4-Dec-2023 | Harber v HMRC | “But that does not mean that citing invented judgments is harmless. It causes the Tribunal and HMRC to waste time and public money, and this reduces the resources available to progress the cases of other court users who are waiting for their appeals to be determined. As Judge Kastel said, the practice also "promotes cynicism" about judicial precedents, and this is important, because the use of precedent is "a cornerstone of our legal system" and "an indispensable foundation upon which to decide what is the law and its application to individual cases"”
06-Dec-2024 | Crypto Open Patent Alliance v Dr. Craig Steven Wright | “…referred to a series of authorities in support of arguments that reasonable adjustments should be made to enable a vulnerable litigant or witness to participate fairly in court proceedings. As COPA pointed out by reference to a series of examples, most of the authorities he has cited do not contain the passages attributed to them (or anything like those passages), and indeed most have nothing to do with adjustments for vulnerable witnesses. COPA suggested that it seems likely that they are AI “hallucinations” by ChatGPT (i.e. made-up references) rather than deliberately misleading inventions by Dr Wright. However, since the principles are clear and not in doubt, as set out above, it is not necessary to engage with his false citations any further.”
7-Jan-2025 | Ms (Bangladesh) v SoS for Home Department | 13. We sought clarification regarding this citation and reference and asked for the relevant paragraph of the judgment being relied on. [counsel] was not able to specify this. [counsel] submitted that he understood, having used ChatGBT, that the Court of Appeal in Y (China) [2010] EWCA Civ 116 was presided by Pill LJ, Sullivan LJ and Sir Paul Kennedy. However, the citation [2010] EWCA Civ 116 did not point to the case of Y (China) but to R (on the application of YH) v SSHD. We raised concern about this and referred [counsel] to the recent decision of the President of King’s Bench Division in Ayinde [2025] EWHC 1383 (Admin) on the use of Artificial Intelligence and fictitious cases, and directed him to make separate representations in writing. 14. In his subsequent written representations, [counsel] clarified that Y (China) was a typological error and he sought to rely on R (on the application of YH) v SSHD [2010] EWCA Civ 116 where, when discussing the meaning of ‘anxious scrutiny’ in asylum claims…”
25-Jan-2025 | Olsen v Finansiel Stabilitet | "I have narrowly and somewhat reluctantly come to the conclusion that I should not cause a summons for contempt of court to be issued to the appellants under CPR rule 81.6. I do not think it likely that a judge (whether myself or another judge) could be sure, to the criminal standard of proof, that the appellants knew the case summary was a fake. They may have known but they could not be compelled to answer questions about the identity of the person who supplied it." Mr Justice Kerr
3-Apr-2025 | Bandla v SRA | “I asked the Appellant why, in the light of this citation of non-existent authorities, the Court should not of its own motion strike out the grounds of appeal in this case, as being an abuse of the process of the Court. His answer was as follows. He claimed that the substance of the points which were being put forward in the grounds of appeal were sound, even if the authority which was being cited for those points did not exist. He was saying, on that basis, that the citation of non-existent (fake) authorities would not be a sufficient basis to concern the Court, at least to the extent of taking that course. I was wholly unpersuaded by that answer. In my judgment, the Court needs to take decisive action to protect the integrity of its processes against any citation of fake authority. There have been multiple examples of fake authorities cited by the Appellant to the Court, in these proceedings. They are non-existent cases. Here, moreover, they have been put forward by someone who was previously a practising solicitor. The citations were included, and maintained, in formal documents before the Court. They were never withdrawn. They were never explained. That, notwithstanding that they were pointed out by the SRA, well ahead of this hearing. This, in my judgment, constitutes a set of circumstances in which I should exercise – and so I will exercise – the power of the Court to strike out the grounds of appeal in this case as an abuse of process.”
3-Apr-2025 | ZZaman v Revenue & Customs | 29. However, our conclusion was that Mr Zzaman's statement of case, written with the assistance of AI, did not provide grounds for allowing his appeal. Although some of the case citations in Mr Zzaman's statement were inaccurate, the use of AI did not appear to have led to the citing of fictitious cases (in contrast to what had happened in Felicity Harber v HMRC [2023] UKFTT 1007 (TC)). But our conclusion was that the cases cited did not provide authority for the propositions that were advanced. This highlights the dangers of reliance on AI tools without human checks to confirm that assertions the tool is generating are accurate. Litigants using AI tools for legal research would be well advised to check carefully what it produces and any authorities that are referenced. These tools may not have access to the authorities required to produce an accurate answer, may not fully "understand" what is being asked or may miss relevant materials. When this happens, AI tools may produce an answer that seems plausible, but which is not accurate. These tools may create fake authorities (as seemed to be the case in Harber) or use the names of cases to which it does have access but which are not relevant to the answer being sought (as was the case in this appeal). There is no reliable way to stop this, but the dangers can be reduced by the use of clear prompts, asking the tool to cite specific paragraphs of authorities (so that it is easy to check if the paragraphs support the argument advanced), checking to see the tool has access to live internet data, asking the tool not to provide an answer if it is not sure and asking the tool for information on the shortcomings of the case being advanced. Otherwise there is a significant danger that the use of an AI tool may lead to material being put before the court that serves no one well, since it raises the expectations of litigants and wastes the court's time and that of opposing parties.
22-Apr-2025 | Goshen v Accuro (2304373/2024) | "...I cannot find such a case, and I am left wondering whether this case is an invention by the claimant or perhaps an artificial intelligence platform. As I explained in the hearing, I cannot apply authority which I have not seen."
25-Apr-2025 | A County Court case referred to at para 55 of the Ayinde v LBB judgment before HHJ Holmes | “That was a case before the County Court … That counsel drew attention to the fact that the application before the judge contained false material: specifically the grounds of appeal and the skeleton argument settled … contained references to a number of cases that do not exist….”
6-Jun-2025 | Alharoun v Qatar National Bank and QNB | "In CL-2024-000435, it appears from the Order of Mrs Justice Dias that correspondence was sent to the court, and witness statements were filed, citing authorities that do not exist and claiming that other authorities contained passages that they do not contain" Rt Hon. Dame Victoria Sharp
6-Jun-2025 | R (Ayinde) v Haringey | “It is such a professional shame. The submission was a good one. The medical evidence was strong. The ground was potentially good. Why put a fake case in?” “I should say it is the responsibility of the legal team, including the solicitors, to see that the statement of facts and grounds are correct.” “…I consider that it would have been negligent for this barrister, if she used AI and did not check it, to put that text into her pleading.” Mr Justice Ritchie
18-Jun-2025 | UB v SoS for Home Department | “…recognised this seriousness of this issue and has taken commendable steps to ensure it will not be repeated including (i) meeting with the caseworker who drafted the Grounds; (ii) holding a partners’ meeting to discuss adopting an AI policy and assigning the task of finalising an AI policy to a colleague in consultation with an AI professional; (iii) conducting relevant in-house training and issuing interim AI Guidance and (iv) planning for comprehensive staff training by an AI professional….”
20-Jun-2025 | Pro Health Solutions Ltd v ProHealth Inc (UKIPO, Appointed Person, BL O/0559/25) | "As identified in Ayinde (including in the Appendix setting out domestic and overseas examples of attempts to rely on fake citations), fabrication of citations can involve making up a case entirely, making up quotes and attributing them to a real case, and also making up a legal proposition and attributing it to a real case even though the case is not relevant to the legal proposition being made (for instance, it deals with a completely different issue or area of law). It is not, however, fabrication to make an honest mistake as to what a court held in a particular case or to be genuinely mistaken as to the effect of a court’s judgment. In any event, it does not matter whether fabrication was arrived at with or without the aid of generative artificial intelligence. I therefore need to consider what if any sanction is appropriate.”
7-Jul-2025 | Various Leaseholders of Napier House v Assethold Ltd | “15. The Respondent included two cases within their grounds for appeal which have been cited as…[False Case names] Having performed a search on BAILLI, Westlaw and Find Case Law, it has not been possible to find …[False Case name]. It may be that this case is not authentic and AI may have been used to reference this case….” On another case, the court noted the decision concerned the circumstances in which a parole board should hold an oral hearing. “When reading the full judgment it is difficult to see why the tribunal has been referred to this case…..”
27-Jul-2025 | HMRC v Gunnarsson [2025] UKUT 247 (TCC) | "113. In this case, HMRC was put to the trouble of having to investigate the existence of the purported decisions relied upon by the Respondent. Fortunately, they did so. Depending on the circumstances, there may be occasions when the opposing party or the tribunal are not able to discover the errors relied upon. There may be others where an adjournment is required to investigate or address the inaccurate information. 114. On these facts, we do not consider the Respondent to be highly culpable because he is not legally trained or qualified, not subject to the same duties as a regulated lawyer or other professional representative and may not have understood that the information and submissions presented were not simply unreliable but fictitious. He was under time pressure given his other competing responsibilities and doing his best as a lay litigant seeking to assist the UT by preparing written submissions."
30-Jul-2025 | Father v Mother [2025] EWHC 2135 (Fam) | “(16) The F then made a further application on a C2 asking that HHJ Bailey recuse herself on the basis of being biased against him and her not understanding ASD and the impacts of his diagnosis. This came before the Judge on 10 June 2025. In his written application to the court the F referred to a number of previous authorities, in particular relating to ASD. HHJ Bailey realised that many of these cases were not genuine, and the submission appeared to have been generated by Artificial Intelligence (“AI”). In light of the level of recent concern about litigants and lawyers using AI and referring to cases which are not genuine (as reflected in the Divisional Court decision R (Ayinde) v London Borough of Haringey [2025] EWHC 1383), HHJ Bailey referred the case to me as the Family Presiding Judge for the Midlands.” “The F relied upon faked cases without apparently making any effort to check their veracity. It is in my view important to note that the F is someone who is well capable of checking references and ensuring documents are accurate if it is in his interests to do so.”
12-Aug-2025 | Holloway v Beckles and Beckles | "That leaves the matter of the fake cases. The Tribunal finds that this does amount to unreasonable conduct within rule 13(1)(b). It has decided that the misconduct is serious, being conduct that undermines civil litigation in the Tribunal. Therefore, the Tribunal determines that it should make a costs order. It considers that the costs order should be proportionate to the additional costs caused. It has decided that the appropriate quantum is half the costs of counsel’s fees in attending the hearing of 14 May 2025. These amount to £750 and must be paid to the applicant within 28 days."
15-Aug-2025 | Kuzniar v General Dental Council Case No. 6009997/2024 | "44. The Claimant explained that the problems arose from her using AI to carry out research. She had previously used AI/ChatGPT to carry out research without problems in her litigation against Roxdent Ltd and so she expected to be able to do so again successfully in the instant case. She did not know about the problems with the citations when she told the Respondent’s solicitors about them, and when she found out about them, she did her best within the short time available to mitigate or reduce the problem. She did not act in bad faith or with any intent to place false information before the Tribunal. I accept this explanation. 45. The Claimant conducted the claim unreasonably as described above by referring to the Respondent a large number of nonsensical and in many cases non-existent citations without taking any or sufficient care to check them first. By not doing so she passed the work of checking them to the Respondent to have to do at short notice. My discretion to award costs is engaged. 46. However, I decline to award costs because AI is a relatively new tool which the public is still getting used to, the Claimant acted honestly (and furthermore has presented her case honestly to me over the last two days), and she tried her best to rectify the situation as soon as she became aware of her mistake."
29-Sep-2025 | ANPV & SAPV v SOSHD | “…suggested that the inaccuracies in the grounds were as a result of his drafting style. He accepted that there might have been some “confusion and vagueness” on his part; that he might “need to construct sentences in a more liberal way”; and that his drafting should perhaps “be a little more generous” when it came to making specific allegations about judges overlooking or failing to follow binding authorities. … The problems which I have detailed above are not matters of drafting style. The authorities which were cited in the grounds either did not exist or did not support the grounds of which were advanced. Where the cases did exist, they were often wholly irrelevant to the proposition of law which was given in the grounds.” (paragraphs 63 and 64)
6-Oct-2025 | AK v SOSHD UI-2025-002981 | "What concerns me in this case is not merely that there were false citations in the grounds of appeal considered by Judge Saffer; it is that those false citations were then removed from the grounds of appeal which were placed in the composite bundle. The former actions are unprofessional, the latter are potentially dishonest because it suggests that there was an attempt to conceal the false citations..."
10-Oct-2025 | Peters v Driver and Vehicle Standards Agency | “9. I raise this because: 9.1 An appreciable amount of hearing time was taken up with trying to obtain copies of various reports in order that respondent’s Counsel (and I) could check the accuracy of the AI generated summaries. 9.2 There was a significant risk I could have been misled had this not been done. 9.3 Because of the demonstrated inaccuracies, I was unable to rely on the summaries. 9.4 The delay involved also caused or contributed to my Judgment being reserved.” “…He is genuinely seeking to assist a claimant who would otherwise be unrepresented. Nonetheless, it is important that some basic checks are done to ensure that the material put before the Tribunal is accurate in order to avoid the above. I refer to R (on the application of Ayinde) v London Borough of Haringey [2025] EWHC 1383 which clearly identifies the risk of not undertaking such checks and the importance of doing so…”
13-Oct-2025 | Malathi Latha Sriram (Mukti Roy) v Louise Mary Brittain | “…rightly in my view, and I make no criticism of her. For what it is worth, I suspect, that, in common with many unrepresented parties, [Claimant] has resorted to research using the internet and has come up with false leads. The late Muir Hunter was an eminent member of the insolvency bar and the author for many years of an insolvency commentary that still bears his name. It is easy to see how his name could have come up in the course of an internet search and end up wrongly linked to a real case name and reference. The abbreviation BPIR stands for the Bankruptcy and Personal Insolvency Reports. They are not readily available to members of the public. It would have been difficult for [Claimant] to check the citation…”
13-Oct-2025 | Hassan v ABC International Bank PLC | “On the use of AI in general, I happily accept that the internet is a resource many of us tend to rely on as providing expertise and knowledge where we lack it. Indeed, the facility for using a search engine has even been relied on in the EAT a reason for not granting an extension of time. I accept that AI is now at the forefront of internet searches. It might also be said that more intelligent and proficient users of the internet, like the Claimant, are more apt to use it in the way that the Claimant has i.e. to help construct arguments. I should not, and do not, approach the Claimant’s use of AI as in any way inherently negative”
14-Oct-2025 — Ndaryiyumvire v Birmingham City University
“48. I do have to take account of the fact that, as was said in Ayinde, the use of AI is a large and growing problem and the citing of fictitious or fake authorities is a serious threat to the integrity of the justice system which depends upon courts being able to rely on lawyers putting before the courts, whether orally or in documents, accurate material and accurate statements of the law supported by genuine cases. Lawyers who cite fictitious cases must face serious consequences and in the current environment where this is a significant and growing problem, the guidance in Ayinde indicates that judges should take a fairly tough line.”
17-Oct-2025 — Lee v Blackpool B&B et al, MAN/00EJ/HMG/2024/0011
“...I can only conclude that the ‘decision’ submitted to the Tribunal is a fabrication – whether or not it is the product of the injudicious use of artificial intelligence tools is unclear.”
23-Oct-2025 — Victoria Place et al v Assethold Limited
“85. I then typed the same wording into M365 Copilot on an Android device but adding a question mark at the end which gave a similar response, although the phrasing was markedly different, and it referred to the Upper Tribunal decision cited by [landlord’s managing agent] rather than the ‘hallucinated’ Court of Appeal citation. Repeating the same question sometime later would not re-produce reference to the Upper Tribunal decision, showing that AI adapts and an earlier answer may no longer be returned as the algorithm learns, demonstrating the care that needs to be taking in using AI. The idiom ‘shifting sands’ comes to mind.”
4-Nov-2025 — Choksi v IPS Law LLP
“...contains references to a number of cases that have wrong citations, wrong names or which simply do not exist. A number of the cases cited are wholly irrelevant and do not support the proposition in support of which they are cited...”
17-Nov-2025 — 133 Blackstock Road (Hackney) RTM Company Limited v Assethold Limited
“19. The Tribunal is extremely concerned that the Respondent has put material before it that is erroneous. [redacted] has failed to give any explanation as to how this error arose. One explanation might be the use of an AI LLM in the production of the Respondent’s statement of case.”
21-Nov-2025 — Appeal in the cause of Jennings v Natwest Group Plc (Sheriff Appeal Court Civil)
“[10] These require caution, the appellant having made submissions using ChatGPT, an artificial-intelligence database (see appellant’s supplementary submission). That may explain the generality of the submissions, which largely comprise free-form legal propositions with only limited link to the facts. It has served to complicate and obscure the true analysis of the issues. At least three of the cases cited appear to be non-existent.”
24-Nov-2025 — Oxford Hotel Investments Limited v Great Yarmouth Borough Council
“…purported to quote at a little length from [18] of the judgment to the effect that a microwave satisfied the statutory definition. The problem is that the real [18] of Barker v Shokar says no such thing. Nor does any other part of the judgment in that case. [Director for the Appellant] ended up accepting that this misleading use of authority was the product of AI. It is one which illustrates again, in courts and tribunals, the dangers of using AI for legal research without any checks.”
3-Dec-2025 — Wemimo Mercy Taiwo v Homelets of Bath Limited & Ors
“…This case does not exist (albeit the bogus reference can be ‘recreated’ through Google’s AI Overview function). There is a 2016 case in the Bolton County Court between the two named parties, but there was no appeal in 2018 to the Court of Appeal and [redacted] is a false reference”
8-Dec-2025 — S Peggie v Fife Health Board and Dr B Upton
Details TBC
9-Dec-2025 — D (A Child) (Recusal)
“Finally, I return to the issue raised by the father’s representatives about the mother’s erroneous citation of authority (see in particular paragraph 54 above). I absolve the mother of any intention to mislead the court. Litigants in person are in a difficult position putting forward legal arguments. It is entirely understandable that they should resort to artificial intelligence for help. Used properly and responsibly, artificial intelligence can be of assistance to litigants and lawyers when preparing cases. But it is not an authoritative or infallible body of legal knowledge. There are a growing number of reports of “hallucinations” infecting legal arguments through the citation of cases for propositions for which they are not authority and, in some instances, the citation of cases that do not exist at all. At worst, this may lead to the other parties and the court being misled. In any event, it means that extra time is taken and costs are incurred in cross-checking and correcting the errors. All parties – represented and unrepresented – owe a duty to the court to ensure that cases cited in legal argument are genuine and provide authority for the proposition advanced.”
8-Jan-2026 — Elden v HMRC [2026] UKFTT 41 (TC)
“93. In further submissions, the Representative said ‘The suggestion that citing a published authority amounts to providing false material is misconceived. A court decision is a matter of public record. Whether a case applies is a matter of legal argument and opinion, not misrepresentation. It is entirely proper for parties to put forward different interpretations for the Tribunal to consider. To characterise this as “false material” is both unfounded and inappropriate.’ It is not clear who the representative is quoting as saying false material was used. The wording used by HMRC was ‘inaccurate use of AI/inaccurate authorities’.”