Ad/Marketing Communication
This legal article/report forms part of my ongoing legal commentary on the use of artificial intelligence within the justice system. It supports my work in teaching, lecturing, and writing about AI and the law and is published to promote my practice. Not legal advice. Not Direct/Public Access. All instructions via clerks at Doughty Street Chambers. This legal article concerns AI privilege and AI hallucination cases (AI suspected or confirmed). Subscribe to the AI Law Commentary here.

This Legal Article was posted on 15 February 2026
Introduction
“If I could get one piece of policy passed right now relative to AI, the thing I would most like, and this is in tension with some of the other things that we talked about, is I’d like there to be a concept of AI privilege. When you talk to a doctor about your health or a lawyer about your legal problems, the government cannot get that information. We have decided that society has an interest in that being privileged and that we don’t, and that a subpoena can’t get, that the government can’t come asking your doctor for it, whatever. I think we should have the same concept for AI. I think when you talk to an AI about your medical history or your legal problems or asking for legal advice or any of these other things, I think the government owes a level of protection to its citizens there that is the same as you’d get if you’re talking to the human version of this. And right now we don’t have that. And I think it would be a great, great policy to adopt.”
It was a pleasure to speak with the Employment Lawyers Association last week and with the Public Law Project before that. The questions and comments were thoughtful and reflected a deep understanding of the many issues we are facing internationally and across a wide range of practice areas. The readership of this blog continues to grow each day, as does the generosity of readers who send me judgments, articles and their own reflections, all of which help to deepen and refine my understanding. I have a great deal to write up and we have a lot to discuss.
Interestingly, in those presentations and others over the past few weeks, we have touched on AI in the context of privilege and confidentiality. We discussed some hypotheticals and considered the AI Judicial Guidance. As is so often the case in this area, theory was quickly followed by practice, in the form of an important decision from the United States, which inspired the quote above and which I discuss below.
My focus this week has also been on hallucinations at the UKIPO. For those who may be less familiar with it, as I once was, the UKIPO is the United Kingdom Intellectual Property Office, the government body responsible for registering and administering trade marks, patents and designs. It also determines disputes such as oppositions and invalidity actions, usually through written decisions issued by Hearing Officers. There are several interesting AI-related cases there, but for present purposes I will focus on those that raise questions about AI hallucinations.
As with everything I track, this includes actual or suspected use of generative artificial intelligence tools. Unfortunately, while I was drafting this article, the UKIPO website went down. As a result, some links do not work and I was unable to complete parts of my research, but hopefully it will be back up by the time you read this.
United States v. Bradley Heppner
This US federal criminal case, in the Southern District of New York, is an important one to watch. The government’s motion can be read in full at the link above, and the preliminary statement tells us what it is about:
“The Government respectfully moves this Court for a ruling that approximately thirty-one documents the defendant Bradley Heppner generated through a commercial artificial intelligence tool (the “AI Documents”) are neither protected by the attorney-client privilege nor shielded by the work product doctrine. The defendant created these documents before his arrest by inputting queries into a third-party AI platform. He later shared them with his defense counsel. For three independent reasons, no privilege attaches.”
The Motion sets out three arguments:
“First, the AI Documents fail every element of the attorney-client privilege. They are not communications between a client and attorney—the AI tool is plainly not an attorney, and no attorney was involved when he created the documents. They were not made for the purpose of obtaining legal advice—the AI platform’s terms of service expressly disclaim any attorney-client relationship and state that the tool does not provide legal advice. And they are not confidential—the defendant voluntarily shared his queries with the AI tool, and the AI responses were generated from a third-party commercial platform whose privacy policy permits disclosure to governmental authorities.

“Second, the defendant cannot retroactively cloak unprivileged documents with privilege by later transmitting them to counsel. Well-settled law holds that preexisting, non-privileged materials do not become privileged merely because a client eventually shares them with an attorney.

“Third, the work product doctrine does not protect these materials. Defense counsel has represented that the defendant created the AI Documents on his own initiative—not at counsel’s behest or direction. The doctrine shields materials prepared by or for a party’s attorney or representative; it does not protect a layperson’s independent internet research.”
At the time of writing, I have not seen a written judgment and I would be very grateful to receive a copy if any reader sees one before I do. However, from colleagues and some helpful online sources including a Law360 report, it appears that on 10 February 2026 the motion came before Judge Jed S. Rakoff. Law360 explains that Judge Rakoff quickly dispatched the first theory, saying Tuesday, “I’m not seeing remotely any basis for any claim of attorney-client privilege.” After hearing argument, Judge Rakoff concluded that the AI documents were not prepared by attorneys.
The report also highlights an interesting alternative legal argument which we are likely to see developed more fully in due course. In summary, if the prosecution seeks to deploy a defendant’s generated “AI documents” at trial, the defence team could become witnesses in relation to that evidence, placing them in potential conflict with their role as advocates. That situation could arise because the defence lawyers may be the only people able to address key factual issues.
This possibility brings to mind the Bar Standards Board Handbook. Rule rC21.10 provides that a barrister must not accept instructions where “there is a real prospect that you are not going to be able to maintain your independence.” The accompanying guidance (gC73) explains that this may arise, for example, where a barrister is likely to be called as a witness in the proceedings.
Pro Health Solutions Ltd v ProHealth Inc (BL O/0559/25)
Before turning to the more recent decisions, I should reshare a link to the first UKIPO case I identified on this issue, which I wrote about here. It was an important decision and is referred to in some of the cases discussed below.
Onyinye Udokporo v Enrich International Ltd
After dismissing the Grounds of Appeal, the Appointed Person turned, from paragraph 24, to the fabricated references and quotes:
“The Grounds of Appeal in this case were another unfortunate example of a party putting forward fabricated references in their submissions. The Grounds included citations for numerous cases, including two cases that do not exist.”
The Appointed Person then went through each example, showing the errors, and explained that there were also quotes from real cases where the quotation does not appear in the judgment in the form quoted. The Appellant:
“…filed her own Grounds of Appeal. She admitted that she had used artificial intelligence to write her submissions for an earlier part of the proceedings, but said she had received an (unnamed) lawyer’s help in drafting the Grounds of Appeal. I asked her to provide copies of the two judgments and to indicate the source of the quotations. In relation to the fabricated references, she provided links to two similarly named cases”
The Appointed Person went through these cases explaining the errors that occurred and concluded:
“Whether Ms Udokporo relied on a lawyer for guidance, used generative artificial intelligence, or wrote the grounds entirely herself, it is entirely improper for a party to provide fabricated sources or quotations to a court or tribunal.
It is to be expected that litigants-in-person may put forward arguments that are irrelevant to the issues to be decided or entirely without merit (as happened here). They are not experienced in the complexities of the law and cannot be expected to meet the higher standards that would be expected of professional representatives.
However, putting fabricated court references before a tribunal is a very serious matter and being inexperienced can never be a justification for doing it (and neither can blind trust in outputs from generative artificial intelligence). The Divisional Court made clear in Ayinde, R (On the Application Of) v London Borough of Haringey [2025] EWHC 1383 (Admin) how serious it is to put forward fabricated references whatever their source.
Litigants-in-person who put their name to a document before the registrar or the Appointed Person must be able to provide all the material cited by them and that material must relate to what they are saying, and likewise any quotation they rely upon must be accurate (albeit I accept that innocent transcription or typographical errors are not representative of improper conduct). If a party cannot provide the authorities they rely upon, their conduct is unreasonable within the meaning of Tribunal Practice Notice 1/2023 and “off-scale” costs are usually appropriate: see Pro Health Solutions (O/559/25), [23]-[24].
The Respondent was professionally represented and played only a limited role in the appeal. It filed what it called a Respondent’s Notice, but was in fact a written response to the Grounds of Appeal. If the Respondent wants to apply for its full costs for writing this Notice and considering the Grounds of Appeal then it has until 4pm on 15 December 2025 to provide its Schedule of Costs. Both parties will then have until 4pm on 5 January 2025 to provide any written submissions on the costs order that I should make.”
An Concrete Ltd v Wasserman Boxing Limited: MISFITS BOXING consolidated oppositions
In An Concrete Ltd v Wasserman Boxing Ltd (O/0010/26), the Hearing Officer stated:
“70. Whilst there have been recent cases in which the courts and the Appointed Person have warned parties of the risks of using artificial intelligence for legal research, drafting skeleton arguments or written submissions, and I sense that [redacted] might have used some sort of artificial intelligence tool to draft its submissions in lieu, I accept that it might have misunderstood the references, the lack of experience being a mitigation. Hence, I do not think it would be appropriate to apply any sanction in this case.”
OscarTech UK Ltd v Orthofix S.R.L.
In OscarTech Ltd v Oskar Technologies Ltd (O/1013/25), the decision records that the applicant’s representative confirmed he had used an AI tool to assist in drafting a skeleton argument. After the opposing party queried certain cited authorities, the representative withdrew reliance on two case references that were inaccurately cited:
“106… explained in the hearing that, he had used an AI tool to help him prepare for the hearing. 107. On the matter of the use of AI in legal research, it is appropriate to note that Mr Phillip Johnson, sitting as the Appointed Person in his recent decision BL O/0559/25, underlined the risks associated with AI tools; in particular, the fictitious case references that can be generated; and incorrect authorities cited in support of legal arguments. Mr Johnson made it clear that even litigants-in-person have a duty not to mislead the court [or tribunal] and, in observing that duty, they are urged to be alert to the risks associated with the use of ‘ChatGPT’ and the like.”
Warwick Econometrics Ltd v The University of Warwick
This decision concerns oppositions to two UK trade mark applications and the related procedural questions.
At paragraph 18, the Hearing Officer noted:
“Upon reading the skeleton arguments, it was noted that paragraph 20 referred to [case name]. However, I was unable to locate this case when searching either the case name or BL number provided. Therefore, on 4 September 2025, the Registry emailed the applicant’s representative asking them to clarify what case they were relying on and to provide the correct details for such case.”
At the hearing, the Hearing Officer asked whether artificial intelligence (including “Chat GPT”) had been used to produce the skeleton arguments (both the original and corrected versions). The applicant’s representative denied that any such tool had been used.
After the hearing, the decision records an email exchange in which the applicant’s representative said he could not locate the cited case and believed the incorrect title may have appeared in a “Google Overview” while searching for the relevant case number; he noted that these summaries are automatically generated, may have combined unrelated cases and apologised. The Hearing Officer then observed that this explanation did not appear consistent with the earlier denial that any AI (including “Chat GPT”) had been used to produce the skeleton arguments.
Robert Sulić v Antonio Nuno Correia Ramos Marques
This appeal concerned whether a UK designation of an international trade mark should be refused due to conflict with earlier UK marks. AI appeared in the decision in the context of the quality of the appeal documents rather than the substantive trade mark merits:
“6. The Grounds of Appeal in this case are entirely unsatisfactory. They make statements which mix up legal and factual matters, rely on irrelevant things, and the sole reference to a statutory provision is to one which does not exist. It does not matter whether these were drafted by the Appellant without any assistance or some form of generative AI was used. It is not acceptable, even from a litigant in person, to provide grounds which makes the task of appellate review so difficult. Indeed, the difficulty was made more profound because neither side provided any written submissions and both declined a hearing. This means this appeal has to be determined based on the Appellant’s confused set of criticisms of the Hearing Officer’s decision.
7. Nevertheless, I will address the Appellant’s challenges to the decision as far as is possible from the Grounds of Appeal as drafted.”
Comment
On these important questions of privilege and confidentiality, I have a lot to share and will need to do a more detailed analysis in a subsequent post. By then, I hope we have the written reasons in US v Heppner. Readers of the blog who recently attended my presentations on these issues will know that great caution must be adopted when using any AI model. It is a good idea to have regard to the Judicial AI Guidance, which puts it this way:
“II. Uphold confidentiality and privacy
Do not enter any information into a public AI chatbot that is not already in the public domain. Do not enter information which is private or confidential. Any information that you input into a public AI chatbot should be seen as being published to all the world.

The current publicly available AI chatbots remember every question that you ask them, as well as any other information you put into them. That information is then available to be used to respond to queries from other users. As a result, anything you type into it could become publicly known.
You should disable the chat history in public AI chatbots if this option is available, as it should prevent your data from being used to train the chatbot and after 30 days, the conversations will be permanently deleted. This option is currently available in ChatGPT and Google Gemini but not in some other chatbots. Even with history turned off, though, it should be assumed that data entered is being disclosed.
Be aware that some AI platforms, particularly if used as an App on a smartphone, may request various permissions which give them access to information on your device. In those circumstances you should refuse all such permissions.
In the event of unintentional disclosure of confidential or private information you should contact your leadership judge and the Judicial Office. If the disclosed information includes personal data, the disclosure should be reported as a data incident. Details of how to report a data incident to Judicial Office can be found at this link: Data breach notification form judiciary.

You should treat all public AI tools as being capable of making public anything entered into them.”
The recent hallucination cases serve as a helpful reminder of the duty of candour owed to every court and tribunal. Whether appearing as a litigant in person, a representative, or a lawyer, all who address the court must take care not to mislead it when responding to its questions. The AI Judicial Guidance recognises that artificial intelligence may assist in legal work, provided it is used responsibly and with appropriate oversight. It is worth revisiting the observations of Dame Victoria Sharp:
“This duty rests on lawyers who use artificial intelligence to conduct research themselves or rely on the work of others who have done so. This is no different from the responsibility of a lawyer who relies on the work of a trainee solicitor or a pupil barrister for example, or on information obtained from an internet search.” Ayinde v London Borough of Haringey, and Al-Haroun [2025] EWHC 1383 (Admin).
There is much more to write and discuss. If you have found this useful, please consider subscribing to my AI Legal Articles via the Substack below and continue sending me any decisions you think are relevant, whatever the jurisdiction. One thing I have learnt is that we are all facing similar, urgent, and rapidly changing issues. Sharing experiences and ideas across practice areas and borders remains one of the most constructive ways to address the professional and legal challenges ahead.
Final Words from Chat GPT 5.2 Pro and Gemini 3 Pro
Chat GPT:
“The Government’s position in US v Heppner is that AI generated documents are not protected by attorney client privilege because they were created through a third party tool rather than in communication with a lawyer, confidentiality may have been lost through use of a commercial platform, and privilege cannot later be created by sending pre existing material to counsel; nor, on their case, does work product protection arise where the documents were generated independently rather than at a lawyer’s direction. Alongside this, recent UKIPO decisions demonstrate a growing judicial intolerance of fabricated or distorted authorities, with tribunals identifying non existent cases, misquotations, and blended citations suggestive of AI or automated summaries, reinforcing the principle that it is entirely improper for a party to provide fabricated sources or quotations to a court or tribunal.”
Gemini:
“This title works because it captures the ‘pincer movement’ closing in on AI in the legal sector. On one flank, you have the UKIPO exposing the messy reality of ‘suspected or confirmed’ hallucinations. On the other flank, you have US v. Heppner stripping away the shield of privilege—a failure of protection. By linking them, you are showing that the legal system is simultaneously attacking the reliability of the tool and the secrecy of its use. It’s a devastating combination for any lawyer hoping to use AI casually.”
Tracker
The full international AI and non‑AI fabricated/false citations tracker can be found here.
The UK only one is here.
The section below provides only a summary of the UK cases involving AI and non-AI fabricated or false citations. You can click on each case name to read my detailed discussion of it on this blog, and from there you can use the hyperlinks to obtain the full judgment. The table will update automatically as the main tracker is expanded, even if the surrounding blog text remains unchanged over time.
| Number | Date | Case Name | Court | User | Outcome | Judicial Quotes |
|---|---|---|---|---|---|---|
| 60 | 11 Mar 2026 | Setu Kamal v Tax Policy Associates Ltd & Anor | Lawyer | 184. At a more granular level, the Defendants object to the variety of different addresses, in different jurisdictions, [claimant] has used on his claim form, particulars of claim, application notices and witness statement. They object to his imposition of arbitrary and oppressive deadlines for responses after long periods of delay. They object to a range of features of his correspondence including misapplying the ‘without prejudice’ designation to letters containing no settlement proposals, and the deployment of AI-generated ‘hallucinatory’ references to non-existent cases causing unnecessary effort in cross-checking (there being already a growing jurisprudence of court deprecation of this phenomenon, including in exercise of the Hamid jurisdiction – see R (Ayinde) v Haringey LBC [2025] EWHC 1383 (Admin)). | ||
| 59 | 5 Mar 2026 | Re A, B, C, D (Extension of assessment; Use of AI: hallucinations) | LIP | It is the duty to the court owed by [Representative} as a litigant in person to ensure that the cases cited in legal argument are genuine and provide authority for the proposition advance with which I am concerned. She accepts that she did not do so, and apologises. She says that the mistakes were unintentional, and seeks to justify herself by saying the main authority upon which she relied was sent to me (as it was). Having heard from her, and read her submissions, I absolve her of any intention to mislead the court, but remain concerned that [Representative] minimises the seriousness of misleading the court and goes so far as to assert that criticising use of AI risks setting a harmful precedent for disabled litigants in person and will discourage access to justice. | ||
| 58 | 17 Feb 2026 | Brightwater Energy LTD v Eroton Exploration | Lawyer | 48. The reference to the 2014 case was the result of AI hallucination. The case does not exist, and the text which it was supposed to contain was invented. That [D's representative] referred to a non-existent authority was the result of the short time he had to prepare, and I acknowledge that he drew the attention of the court to the error promptly after identifying it. The incident does, however, demonstrate vividly the dangers of relying on the product of AI without verification. | ||
| 57 | 16 Feb 2026 | Hesabi v Gasony International LTD | LIP | "14. The Applicant is of the view that the Property should not have been let until it had a licence, and she believes that she has read something to this effect. She is not legally qualified, and she confirmed that she carried out a general internet for search as part of her preparation for the final hearing. The Tribunal notes that the AI summary to such an internet search can, in the Tribunal’s experience, bring up inaccurate information. When directed to the express wording of the statute, the Applicant did not seek to argue that the defence provided for at section 95(3) of the 2016 Act did not apply on the facts of this case." | ||
| 56 | 15 Feb 2026 | A v British Transport Police Authority | LIP | "77. The Tribunal have not set out the citations in this judgment because doing so may itself generate further ‘hallucinations." | ||
| 55 | 11 Feb 26 | Nwabueze v Simons Rodkin LLP | LIP | "... We observe, however, that the email appeared to have been drafted with the “assistance” of AI, or Artificial Intelligence, since [redacted case] did not go to the Court of Appeal, (2) there does not appear to be a case with the neutral citation of [redacted neutral citation], and (3) [redacted] was not about an anonymity order or the effect of a settlement agreement on an application for an anonymity order. In any event, the email was as follows.” | ||
| 54 | 30 Jan 26 | PSAHSC v Nursing and Midwifery Council [2026] EWHC 141 | Unregulated Rep, Litigation Friend | Warning | “…This was pointed out to him at the hearing. He immediately admitted what he had done and that the references were phantoms created by AI. He promised not to use AI to generate submissions in future and to check his references personally…” | |
| 53 | 30 Jan 2026 | Chandler v CC of Kent Police | LIP | "The Claimant was a police officer and therefore should have appreciated the dangers of relying on AI and independently researching the legal requirements. The advice he sets out in his submissions includes a section “Legal Advice: Consider seeking advice from a solicitor or other legal professional if you’re unsure about your rights or the process”. It appears that the Claimant did not seek advice although he references limited advice from the Police Federation in his submissions so presumably received some advice. This is not a legitimate reason for me to extend time for presentation of the claim." | ||
| 52 | 29 Jan 26 | Folarin v The Immigration Services Commissioner [2026] UKFTT 135 | LIP | Went to credibility | The decision of Dame Victoria Sharp P indicated at paragraph 26 that “Placing false material before the court with the intention that the court treats it as genuine may, depending on the person’s state of knowledge, amount to a contempt. That is because it deliberately interferes with the administration of justice.” Whilst mere negligence would not be sufficient to establish contempt, and knowledge that the information is false or a lack of honest belief that it was true would be required, | |
| 51 | 19 January 2026 | Your Home Partners v Kellichan | LIP | [20] There is no reason why that duty should not also apply to lay persons, as otherwise the management of cases in courts and therefore the course of justice could be significantly obstructed. However, having heard from the claimant, I am satisfied that the claimant has not knowingly attempted to interfere with the administration of justice in this manner, by wasting court time and misleading the court. However, there was a degree of recklessness in the claimant delaying verifying that the references produced by Artificial Intelligence were genuine, until only after the relative submissions were lodged. | ||
| 50 | 16 Jan 26 | Huish v The Commissioners for HMRC [2026] UKFTT 129 | LIP | None | "...We attach no blame to him, since he is a litigant in person but we have recorded the names so that others do not fall into the same trap.” | |
| 49 | 12 January 2026 | Ms F Green v Imprint Creative Print Solutions Limited: 1809293/2024 | LIP | "The claimant responded to the respondent’s email of 2 December 2025 raising the time limit issue, by email. The case law she referred to therein could not be identified by the Tribunal or counsel for the respondent. The claimant readily accepted that she had used AI to generate the submissions. I am satisfied that the authorities referred to do not exist and should be disregarded by this Tribunal." | ||
| 48 | 8-Jan-2026 | Elden v HMRC [2026] UKFTT 41 (TC) | Unclear | 93. In further submissions, the Representative said 'The suggestion that citing a published authority amounts to providing false material is misconceived. A court decision is a matter of public record. Whether a case applies is a matter of legal argument and opinion, not misrepresentation. It is entirely proper for parties to put forward different interpretations for the Tribunal to consider. To characterise this as "false material" is both unfounded and inappropriate.' It is not clear who the representative is quoting as saying false material was used. The wording used by HMRC was 'inaccurate use of AI/inaccurate authorities'. | ||
| 47 | 30 December 2025 | M Peiu v Hywel Dda University Local Health Board: 1602474/2024 and 1604381/2024 | LIP | "Within her written submissions, the Claimant had provided a number of cases but was unable to provide citations or references for such cases and had not provided copies. Despite some efforts by Respondent counsel and the Tribunal to find some of the cases, this proved unfruitful. The Claimant was unable to provide references and indicated that she had researched through an AI applications, such as ChatGPT and was unable to provide references or give an indication of how they were relevant. She was informed that unless she was able to do so, they would not be relied upon and were not." | ||
| 46 | 12 December 2025 | Mr T D C Ferreira v Magic Life Ltd and Others: 3304596/2025 | LIP | “[Claimant’s Rep] produced during this hearing a skeleton argument in response to the skeleton argument produced by Miss Martin. Amongst other things, this contained the following: “The Respondents’ assertion that the ET1 was premature is a red herring. The principle in [redacted case] provides that a technically premature unfair dismissal claim is treated as presented on the Effective Date of Termination (EDT).” This reference to [redacted case] seemed to me to be potentially relevant to some of the matters I had to decide, but on seeking the case at the reference given (or indeed any other reference) I could not find it. I understand Miss Martin embarked on a similar search without success. I asked [Claimant’s Rep] if she could clarify this reference or provide a copy of the case she had in mind. She could not. I suggested to her that this was sometimes the kind of problem that arose where AI was used in the production of a document, and she accepted that in the limited time she had had she had used AI in the production of the document. In principle there is no objection to the use of AI by litigants, but there is a problem where such AI use produces material that may mislead the tribunal. I urge the claimant and [Claimant’s Rep] to check and verify any material that they use in this case that has been produced by AI, and in particular any references to case law or statute that is produced by AI. The claimant and/or his representative remain accountable for materials they submit to the court, whether prepared with the assistance of AI or not.” | ||
| 45 | 10 December 2025 | Mr J Harrison v Mr D May T/a Leeds Gymnastics Academy: 6000187/2024 | LIP | "Those written representations from [Respondent] contained numerous case law references. At least half of those references were non-existent and [Respondent] admitted that he had just used Chat GPT to produce his representations without checking any of the results. He said he was reasonably entitled to conclude that everything that Chat GPT said was reliable, and in fact, he said there was no reason to fact check the internet at all." | ||
| 44 | 9-Dec-2025 | D (A Child) (Recusal) | LIP | Warning | “Finally, I return to the issue raised by the father’s representatives about the mother’s erroneous citation of authority (see in particular paragraph 54 above). I absolve the mother of any intention to mislead the court. Litigants in person are in a difficult position putting forward legal arguments. It is entirely understandable that they should resort to artificial intelligence for help. Used properly and responsibly, artificial intelligence can be of assistance to litigants and lawyers when preparing cases. But it is not an authoritative or infallible body of legal knowledge. There are a growing number of reports of “hallucinations” infecting legal arguments through the citation of cases for propositions for which they are not authority and, in some instances, the citation of cases that do not exist at all. At worst, this may lead to the other parties and the court being misled. In any event, it means that extra time is taken and costs are incurred in cross-checking and correcting the errors. All parties – represented and unrepresented – owe a duty to the court to ensure that cases cited in legal argument are genuine and provide authority for the proposition advanced.” | |
| 43 | 8-Dec-2025 | S Peggie v Fife Health Board and Dr B Upton | | TBC | ||
| 42 | 3-Dec-2025 | Wemimo Mercy Taiwo v Homelets of Bath Limited & Ors | Litigation Friend, LIP | “…This case does not exist (albeit the bogus reference can be ‘recreated’ through Google’s AI Overview function). There is a 2016 case in the Bolton County Court between the two named parties, but there was no appeal in 2018 to the Court of Appeal and [redacted] is a false reference.” | ||
| 41 | 2 December 2025 | O Harwood-Allen v London United Busways Ltd | LIP | |||
| 40 | 24-Nov-2025 | Oxford Hotel Investments Limited v Great Yarmouth Borough Council | LIP | “…purported to quote at a little length from [18] of the judgment to the effect that a microwave satisfied the statutory definition. The problem is that the real [18] of Barker v Shokar says no such thing. Nor does any other part of the judgment in that case. [Director for the Appellant] ended up accepting that this misleading use of authority was the product of AI. It is one which illustrates again, in courts and tribunals, the dangers of using AI for legal research without any checks.” | ||
| 39 | 21-Nov-2025 | Appeal in the cause of Jennings v Natwest Group Plc (Sheriff Appeal Court Civil) | LIP | “[10] These require caution, the appellant having made submissions using ChatGPT, an artificial-intelligence database (see appellant’s supplementary submission). That may explain the generality of the submissions, which largely comprise free-form legal propositions with only limited link to the facts. It has served to complicate and obscure the true analysis of the issues. At least three of the cases cited appear to be non-existent.” | ||
| 38 | 17-Nov-2025 | 133 Blackstock Road (Hackney) RTM Company Limited v Assethold Limited | LIP | “19. The Tribunal is extremely concerned that the Respondent has put material before it that is erroneous. [redacted] has failed to give any explanation as to how this error arose. One explanation might be the use of an AI LLM in the production of the Respondent’s statement of case.” | ||
| 37 | 17 November 2025 | UK v SOSHD [2026] UKUT 81 (IAC) | Lawyer | “38. In our judgement, a supervisor who fails to ensure that the work of a more junior fee-earner does not contain false cases or citations is likely to be more culpable than a lawyer who fails to ensure that his own work is free from such “hallucinations”. An individual in the latter camp fails the tribunal, the public and his lay client, whereas an individual in the former camp fails, in addition, to aid the development of more junior lawyers.” | ||
| 36 | 17 November 2025 | R (Munir) v SOSHD [2026] UKUT 81 (IAC) | Lawyer | “38. In our judgement, a supervisor who fails to ensure that the work of a more junior fee-earner does not contain false cases or citations is likely to be more culpable than a lawyer who fails to ensure that his own work is free from such “hallucinations”. An individual in the latter camp fails the tribunal, the public and his lay client, whereas an individual in the former camp fails, in addition, to aid the development of more junior lawyers.” | ||
| 35 | 4-Nov-2025 | Choksi v IPS Law LLP | Lawyer | "...contains references to a number of cases that have wrong citations, wrong names or which simply do not exist. A number of the cases cited are wholly irrelevant and do not support the proposition in support of which they are cited...” | ||
| 34 | 23-Oct-2025 | Victoria Place et al v Assethold Limited | LIP | "85. I then typed the same wording into M365 Copilot on an Android device but adding a question mark at the end which gave a similar response, although the phrasing was markedly different, and it referred to the Upper Tribunal decision cited by [landlord’s managing agent] rather than the ‘hallucinated’ Court of Appeal citation. Repeating the same question sometime later would not re-produce reference to the Upper Tribunal decision, showing that AI adapts and an earlier answer may no longer be returned as the algorithm learns, demonstrating the care that needs to be taking in using AI. The idiom ‘shifting sands’ comes to mind.” | ||
| 33 | 17-Oct-2025 | Lee v Blackpool B&B et al MAN/00EJ/HMG/2024/0011 | LIP | "...I can only conclude that the ‘decision’ submitted to the Tribunal is a fabrication – whether or not it is the product of the injudicious use of artificial intelligence tools is unclear.” | ||
| 32 | 14-Oct-2025 | Ndaryiyumvire v Birmingham City University | Lawyer | “48. I do have to take account of the fact that, as was said in Ayinde, the use of AI is a large and growing problem and the citing of fictitious or fake authorities is a serious threat to the integrity of the justice system which depends upon courts being able to rely on lawyers putting before the courts, whether orally or in documents, accurate material and accurate statements of the law supported by genuine cases. Lawyers who cite fictitious cases must face serious consequences and in the current environment where this is a significant and growing problem, the guidance in Ayinde indicates that judges should take a fairly tough line.” | ||
| 31 | 13-Oct-2025 | Malathi Latha Sriram (Mukti Roy) v Louise Mary Brittain | LIP | “…rightly in my view, and I make no criticism of her. For what it is worth, I suspect, that, in common with many unrepresented parties, [Claimant] has resorted to research using the internet and has come up with false leads. The late Muir Hunter was an eminent member of the insolvency bar and the author for many years of an insolvency commentary that still bears his name. It is easy to see how his name could have come up in the course of an internet search and end up wrongly linked to a real case name and reference. The abbreviation BPIR stands for the Bankruptcy and Personal Insolvency Reports. They are not readily available to members of the public. It would have been difficult for [Claimant] to check the citation…” | ||
| 30 | 13-Oct-2025 | Hassan v ABC International Bank PLC | LIP | “On the use of AI in general, I happily accept that the internet is a resource many of us tend to rely on as providing expertise and knowledge where we lack it. Indeed, the facility for using a search engine has even been relied on in the EAT a reason for not granting an extension of time. I accept that AI is now at the forefront of internet searches. It might also be said that more intelligent and proficient users of the internet, like the Claimant, are more apt to use it in the way that the Claimant has i.e. to help construct arguments. I should not, and do not, approach the Claimant’s use of AI as in any way inherently negative” | ||
| 29 | 10-Oct-2025 | Peters v Driver and Vehicle Standards Agency | Union Officer | “9. I raise this because: 9.1 An appreciable amount of hearing time was taken up with trying to obtain copies of various reports in order that respondent’s Counsel (and I) could check the accuracy of the AI generated summaries. 9.2 There was a significant risk I could have been misled had this not been done. 9.3 Because of the demonstrated inaccuracies, I was unable to rely on the summaries. 9.4 The delay involved also caused or contributed to my Judgment being reserved.” “…He is genuinely seeking to assist a claimant who would otherwise be unrepresented. Nonetheless, it is important that some basic checks are done to ensure that the material put before the Tribunal is accurate in order to avoid the above. I refer to R (on the application of Ayinde) v London Borough of Haringey [2025] EWHC 1383 which clearly identifies the risk of not undertaking such checks and the importance of doing so…” | ||
| 28 | 6-Oct-2025 | AK v SOSHD UI-2025-002918 | Lawyer | "What concerns me in this case is not merely that there were false citations in the grounds of appeal considered by Judge Saffer; it is that those false citations were then removed from the grounds of appeal which were placed in the composite bundle. The former actions are unprofessional, the latter are potentially dishonest because it suggests that there was an attempt to conceal the false citations..." | ||
| 27 | 29-Sep-2025 | ANPV & SAPV v SOSHD | Lawyer | “…suggested that the inaccuracies in the grounds were as a result of his drafting style. He accepted that there might have been some “confusion and vagueness” on his part; that he might “need to construct sentences in a more liberal way”; and that his drafting should perhaps “be a little more generous” when it came to making specific allegations about judges overlooking or failing to follow binding authorities. … The problems which I have detailed above are not matters of drafting style. The authorities which were cited in the grounds either did not exist or did not support the grounds of which were advanced. Where the cases did exist, they were often wholly irrelevant to the proposition of law which was given in the grounds.” (paragraphs 63 and 64) | ||
| 26 | 15-Aug-2025 | Kuzniar v General Dental Council Case No. 6009997/2024 | LIP | "44. The Claimant explained that the problems arose from her using AI to carry out research. She had previously used AI/ChatGPT to carry out research without problems in her litigation against Roxdent Ltd and so she expected to be able to do so again successfully in the instant case. She did not know about the problems with the citations when she told the Respondent’s solicitors about them, and when she found out about them, she did her best within the short time available to mitigate or reduce the problem. She did not act in bad faith or with any intent to place false information before the Tribunal. I accept this explanation. 45. The Claimant conducted the claim unreasonably as described above by referring to the Respondent a large number of nonsensical and in many cases non-existent citations without taking any or sufficient care to check them first. By not doing so she passed the work of checking them to the Respondent to have to do at short notice. My discretion to award costs is engaged. 46. However, I decline to award costs because AI is a relatively new tool which the public is still getting used to, the Claimant acted honestly (and furthermore has presented her case honestly to me over the last two days), and she tried her best to rectify the situation as soon as she became aware of her mistake." | ||
| 25 | 12-Aug-2025 | Holloway v Beckles and Beckles | LIP | "That leaves the matter of the fake cases. The Tribunal finds that this does amount to unreasonable conduct within rule 13(1)(b). It has decided that the misconduct is serious, being conduct that undermines civil litigation in the Tribunal. Therefore, the Tribunal determines that it should make a costs order. It considers that the costs order should be proportionate to the additional costs caused. It has decided that the appropriate quantum is half the costs of counsel’s fees in attending the hearing of 14 May 2025. These amount to £750 and must be paid to the applicant within 28 days." | ||
| 24 | 30-Jul-2025 | Father v Mother [2025] EWHC 2135 (Fam) | LIP | “(16) The F then made a further application on a C2 asking that HHJ Bailey recuse herself on the basis of being biased against him and her not understanding ASD and the impacts of his diagnosis. This came before the Judge on 10 June 2025. In his written application to the court the F referred to a number of previous authorities, in particular relating to ASD. HHJ Bailey realised that many of these cases were not genuine, and the submission appeared to have been generated by Artificial Intelligence (“AI”). In light of the level of recent concern about litigants and lawyers using AI and referring to cases which are not genuine (as reflected in the Divisional Court decision R (Ayinde) v London Borough of Haringey [2025] EWHC 1383), HHJ Bailey referred the case to me as the Family Presiding Judge for the Midlands.” “The F relied upon faked cases without apparently making any effort to check their veracity. It is in my view important to note that the F is someone who is well capable of checking references and ensuring documents are accurate if it is in his interests to do so.” | ||
| 23 | 30 Jul 2025 | O Ilunga v Portico Corporate Reception Management Ltd | LIP | "The claimant decided that she would either prefer to withdraw her written closing submissions or not rely on any of the assertions as to the law in those submissions. I determined that the fairest course of action to both parties was for me to have regard to the claimant’s submissions, insofar as they advanced factual / evidential points, but that I would disregard any legal principles asserted. I also heard oral closing submissions from the claimant and Mr Green. I reserved my decision as we had reached the end of Day 4 by this time.” | ||
| 22 | 27-Jul-2025 | HMRC v Gunnarsson [2025] UKUT 247 (TCC) | LIP | "113. In this case, HMRC was put to the trouble of having to investigate the existence of the purported decisions relied upon by the Respondent. Fortunately, they did so. Depending on the circumstances, there may be occasions when the opposing party or the tribunal are not able to discover the errors relied upon. There may be others where an adjournment is required to investigate or address the inaccurate information. 114. On these facts, we do not consider the Respondent to be highly culpable because he is not legally trained or qualified, not subject to the same duties as a regulated lawyer or other professional representative and may not have understood that the information and submissions presented were not simply unreliable but fictitious. He was under time pressure given his other competing responsibilities and doing his best as a lay litigant seeking to assist the UT by preparing written submissions." | ||
| 21 | 25 Jul 25 | Chandra v Royal Mail Group | Unclear | “14. Whether generated using AI or not, we find that false citations of authority amounts to unreasonable conduct of proceedings.” | ||
| 20 | 11 July 2025 | PS v LB Wandsworth | LIP | “We did not discuss these references at the hearing as I considered it unnecessary to do so as in my judgment the appellants arguments could properly be advanced without those references, but the local authority did raise the non-existence of these cases in its late written submissions. It may be these legal references were the product of AI generation as it is well known that AI ‘hallucinates’ the names of legal cases and legislation.” | ||
| 19 | 7-Jul-2025 | Various Leaseholders of Napier House v Assethold Ltd | TBC | “15. The Respondent included two cases within their grounds for appeal which have been cited as…[False Case names] Having performed a search on BAILLI, Westlaw and Find Case Law, it has not been possible to find …[False Case name]. It may be that this case is not authentic and AI may have been used to reference this case….” On another case, the court noted the decision concerned the circumstances in which a parole board should hold an oral hearing. “When reading the full judgment it is difficult to see why the tribunal has been referred to this case…..” | ||
| 18 | 20-Jun-2025 | Pro Health Solutions Ltd v ProHealth Inc (UKIPO, Appointed Person, BL O/0559/25) | LIP | "As identified in Ayinde (including in the Appendix setting out domestic and overseas examples of attempts to rely on fake citations), fabrication of citations can involve making up a case entirely, making up quotes and attributing them to a real case, and also making up a legal proposition and attributing it to a real case even though the case is not relevant to the legal proposition being made (for instance, it deals with a completely different issue or area of law). It is not, however, fabrication to make an honest mistake as to what a court held in a particular case or to be genuinely mistaken as to the effect of a court’s judgment. In any event, it does not matter whether fabrication was arrived at with or without the aid of generative artificial intelligence. I therefore need to consider what if any sanction is appropriate.” | ||
| 17 | 18-Jun-2025 | UB v SoS for Home Department | Lawyer | “…recognised this seriousness of this issue and has taken commendable steps to ensure it will not be repeated including (i) meeting with the caseworker who drafted the Grounds; (ii) holding a partners’ meeting to discuss adopting an AI policy and assigning the task of finalising an AI policy to a colleague in consultation with an AI professional; (iii) conducting relevant in-house training and issuing interim AI Guidance and (iv) planning for comprehensive staff training by an AI professional….” | ||
| 16 | 6-Jun-2025 | Alharoun v Qatar National Bank and QNB | Lawyer | "In CL-2024-000435, it appears from the Order of Mrs Justice Dias that correspondence was sent to the court, and witness statements were filed, citing authorities that do not exist and claiming that other authorities contained passages that they do not contain" Rt Hon. Dame Victoria Sharp | ||
| 15 | 6-Jun-2025 | R (Ayinde) v Haringey | Lawyer | “It is such a professional shame. The submission was a good one. The medical evidence was strong. The ground was potentially good. Why put a fake case in?” “I should say it is the responsibility of the legal team, including the solicitors, to see that the statement of facts and grounds are correct.” “…I consider that it would have been negligent for this barrister, if she used AI and did not check it, to put that text into her pleading.” Mr Justice Ritchie | ||
| 14 | 25-Apr-2025 | A County Court case referred to at para 55 of the Ayinde v LBB judgment before HHJ Holmes | Lawyer | Judge wrote to Head of Chambers | “That was a case before the County Court … That counsel drew attention to the fact that the application before the judge contained false material: specifically the grounds of appeal and the skeleton argument settled … contained references to a number of cases that do not exist….” | |
| 13 | 22-Apr-2025 | Goshen v Accuro (2304373/2024) | LIP | N/A | "...I cannot find such a case, and I am left wondering whether this case is an invention by the claimant or perhaps an artificial intelligence platform. As I explained in the hearing, I cannot apply authority which I have not seen." | |
| 12 | 3-Apr-2025 | Bandla v SRA | Lawyer | Abuse of process and indemnity costs | “I asked the Appellant why, in the light of this citation of non-existent authorities, the Court should not of its own motion strike out the grounds of appeal in this case, as being an abuse of the process of the Court. His answer was as follows. He claimed that the substance of the points which were being put forward in the grounds of appeal were sound, even if the authority which was being cited for those points did not exist. He was saying, on that basis, that the citation of non-existent (fake) authorities would not be a sufficient basis to concern the Court, at least to the extent of taking that course. I was wholly unpersuaded by that answer. In my judgment, the Court needs to take decisive action to protect the integrity of its processes against any citation of fake authority. There have been multiple examples of fake authorities cited by the Appellant to the Court, in these proceedings. They are non-existent cases. Here, moreover, they have been put forward by someone who was previously a practising solicitor. The citations were included, and maintained, in formal documents before the Court. They were never withdrawn. They were never explained. That, notwithstanding that they were pointed out by the SRA, well ahead of this hearing. This, in my judgment, constitutes a set of circumstances in which I should exercise – and so I will exercise – the power of the Court to strike out the grounds of appeal in this case as an abuse of process.” | |
| 11 | 3-Apr-2025 | Zzaman v Revenue & Customs | LIP | Warning | “29. However, our conclusion was that Mr Zzaman's statement of case, written with the assistance of AI, did not provide grounds for allowing his appeal. Although some of the case citations in Mr Zzaman's statement were inaccurate, the use of AI did not appear to have led to the citing of fictitious cases (in contrast to what had happened in Felicity Harber v HMRC [2023] UKFTT 1007 (TC)). But our conclusion was that the cases cited did not provide authority for the propositions that were advanced. This highlights the dangers of reliance on AI tools without human checks to confirm that assertions the tool is generating are accurate. Litigants using AI tools for legal research would be well advised to check carefully what it produces and any authorities that are referenced. These tools may not have access to the authorities required to produce an accurate answer, may not fully "understand" what is being asked or may miss relevant materials. When this happens, AI tools may produce an answer that seems plausible, but which is not accurate. These tools may create fake authorities (as seemed to be the case in Harber) or use the names of cases to which it does have access but which are not relevant to the answer being sought (as was the case in this appeal). There is no reliable way to stop this, but the dangers can be reduced by the use of clear prompts, asking the tool to cite specific paragraphs of authorities (so that it is easy to check if the paragraphs support the argument advanced), checking to see the tool has access to live internet data, asking the tool not to provide an answer if it is not sure and asking the tool for information on the shortcomings of the case being advanced. Otherwise there is a significant danger that the use of an AI tool may lead to material being put before the court that serves no one well, since it raises the expectations of litigants and wastes the court's time and that of opposing parties.” | |
| 10 | 25-Jan-2025 | Olsen v Finansiel Stabilitet | LIP | Relevant to costs | "I have narrowly and somewhat reluctantly come to the conclusion that I should not cause a summons for contempt of court to be issued to the appellants under CPR rule 81.6. I do not think it likely that a judge (whether myself or another judge) could be sure, to the criminal standard of proof, that the appellants knew the case summary was a fake. They may have known but they could not be compelled to answer questions about the identity of the person who supplied it." Mr Justice Kerr | |
| 9 | 7-Jan-2025 | Ms (Bangladesh) v SoS for Home Department | Lawyer | “13. We sought clarification regarding this citation and reference and asked for the relevant paragraph of the judgment being relied on. [counsel] was not able to specify this. [counsel] submitted that he understood, having used ChatGBT, that the Court of Appeal in Y (China) [2010] EWCA Civ 116 was presided by Pill LJ, Sullivan LJ and Sir Paul Kennedy. However, the citation [2010] EWCA Civ 116 did not point to the case of Y (China) but to R (on the application of YH) v SSHD. We raised concern about this and referred [counsel] to the recent decision of the President of King’s Bench Division in Ayinde [2025] EWHC 1383 (Admin) on the use of Artificial Intelligence and fictitious cases, and directed him to make separate representations in writing. 14. In his subsequent written representations, [counsel] clarified that Y (China) was a typological error and he sought to rely on R (on the application of YH) v SSHD [2010] EWCA Civ 116 where, when discussing the meaning of ‘anxious scrutiny’ in asylum claims…” | ||
| 8 | 06-Dec-2024 | Crypto Open Patent Alliance v Dr. Craig Steven Wright | LIP | “…referred to a series of authorities in support of arguments that reasonable adjustments should be made to enable a vulnerable litigant or witness to participate fairly in court proceedings. As COPA pointed out by reference to a series of examples, most of the authorities he has cited do not contain the passages attributed to them (or anything like those passages), and indeed most have nothing to do with adjustments for vulnerable witnesses. COPA suggested that it seems likely that they are AI “hallucinations” by ChatGPT (i.e. made-up references) rather than deliberately misleading inventions by Dr Wright. However, since the principles are clear and not in doubt, as set out above, it is not necessary to engage with his false citations any further.” | ||
| 7 | 4-Dec-2023 | Harber v HMRC | LIP | “But that does not mean that citing invented judgments is harmless. It causes the Tribunal and HMRC to waste time and public money, and this reduces the resources available to progress the cases of other court users who are waiting for their appeals to be determined. As Judge Kastel said, the practice also "promotes cynicism" about judicial precedents, and this is important, because the use of precedent is "a cornerstone of our legal system" and "an indispensable foundation upon which to decide what is the law and its application to individual cases" | ||
| 6 | 29-May-2023 | TBC (Lawgazette note) | LIP | N/A | ||
| 5 | Onyinye Udokporo v Enrich International Ltd | LIP | Litigants-in-person who put their name to a document before the registrar or the Appointed Person must be able to provide all the material cited by them and that material must relate to what they are saying, and likewise any quotation they rely upon must be accurate (albeit I accept that innocent transcription or typographical errors are not representative of improper conduct). If a party cannot provide the authorities they rely upon, their conduct is unreasonable within the meaning of Tribunal Practice Notice 1/2023 and “off-scale” costs are usually appropriate: see Pro Health Solutions (O/559/25), [23]-[24]. | |||
| 4 | An Concrete Ltd v Wasserman Boxing Limited: MISFITS BOXING consolidated oppositions | LIP | “70 Whilst there have been recent cases in which the courts and the Appointed Person have warned parties of the risks of using artificial intelligence for legal research, drafting skeleton arguments or written submissions, and I sense that [Redacted] might have used some sort of artificial intelligence tool to draft its submissions in lieu, I accept that it might have misunderstood the references, the lack of experience being a mitigation. Hence, I do not think it would be appropriate to apply any sanction in this case” | |||
| 3 | OscarTech UK Ltd v Orthofix S.R.L. | LIP | “106… explained in the hearing that, he had used an AI tool to help him prepare for the hearing. 107. On the matter of the use of AI in legal research, it is appropriate to note that Mr Phillip Johnson, sitting as the Appointed Person in his recent decision BL O/0559/25, underlined the risks associated with AI tools; in particular, the fictitious case references that can be generated; and incorrect authorities cited in support of legal arguments. Mr Johnson made it clear that even litigants-in-person have a duty not to mislead the court [or tribunal] and, in observing that duty, they are urged to be alert to the risks associated with the use of ‘ChatGPT’ and the like.” | |||
| 2 | Warwick Econometrics Ltd v The University of Warwick | TBC | “I then asked [Representative] if artificial intelligence or any form of artificial intelligence, such as Chat GPT, was used to produce the skeleton arguments of both the 3 September 2025 and the corrected skeleton argument of 4 September 2025. [Representative] denied this by simply stating “no”.” [paragraph 30] | |||
| 1 | Robert Sulić v Antonio Nuno Correia Ramos Marques | LIP | “6. The Grounds of Appeal in this case are entirely unsatisfactory. They make statements which mix up legal and factual matters, rely on irrelevant things, and the sole reference to a statutory provision is to one which does not exist. It does not matter whether these were drafted by the Appellant without any assistance or some form of generative AI was used. It is not acceptable, even from a litigant in person, to provide grounds which makes the task of appellate review so difficult. Indeed, the difficulty was made more profound because neither side provided any written submissions and both declined a hearing. This means this appeal has to be determined based on the Appellant’s confused set of criticisms of the Hearing Officer’s decision. 7. Nevertheless, I will address the Appellant’s challenges to the decision as far as is possible from the Grounds of Appeal as drafted.” | ||
