Ad/Marketing Communication
This legal article/report forms part of my ongoing legal commentary on the use of artificial intelligence within the justice system. It supports my work in teaching, lecturing, and writing about AI and the law and is published to promote my practice. Not legal advice. Not Direct/Public Access. All instructions via clerks at Doughty Street Chambers. This legal article concerns AI hallucination cases (AI suspected or confirmed). Subscribe to the AI Law Commentary here.

Publication date: 22 March 2026
Introduction
With the help of readers of my legal articles, I have identified three further cases, bringing the UK total of suspected or confirmed AI hallucination cases to 58. These include a Scottish case in which the judge considered that there was no reason why the principles in R (Ayinde) v Haringey LBC [2025] EWHC 1383 should not also apply to a litigant in person.
There also appears to be a fifth suspected incident from Ireland. Readers of this blog may recall an earlier article that I wrote jointly with Tomás Keys on the full timeline of reported hallucination cases in Ireland. In light of the case noted below, Tomás and I may need to revisit that timeline.
If you would like to read the full tracker, you can find it here: The UK AI Hallucination Cases Tracker (Suspected or Confirmed AI). And if you are new here and would like to know a little more about me and these legal articles, you can read more here.
Hesabi v Gastony International Limited
This was an application for a rent repayment order. The tribunal recorded that an AI summary from a general internet search may have produced inaccurate information about the statutory position. The judge observed:
“14. The Applicant is of the view that the Property should not have been let until it had a licence, and she believes that she has read something to this effect. She is not legally qualified, and she confirmed that she carried out a general internet search as part of her preparation for the final hearing. The Tribunal notes that the AI summary to such an internet search can, in the Tribunal’s experience, bring up inaccurate information. When directed to the express wording of the statute, the Applicant did not seek to argue that the defence provided for at section 95(3) of the 2016 Act did not apply on the facts of this case.”
Setu Kamal v Tax Policy Associates Ltd & Anor
The facts of this case have already been widely discussed in the media, and there is an article in the Law Gazette here for those interested, so I will not rehearse them here. At paragraph 189, in recounting the Defendant’s objections, Collins Rice J recorded that:
189. At a more granular level, the Defendants object to the variety of different addresses, in different jurisdictions, [claimant] has used on his claim form, particulars of claim, application notices and witness statement. They object to his imposition of arbitrary and oppressive deadlines for responses after long periods of delay. They object to a range of features of his correspondence including misapplying the ‘without prejudice’ designation to letters containing no settlement proposals, and the deployment of AI-generated ‘hallucinatory’ references to non-existent cases causing unnecessary effort in cross-checking (there being already a growing jurisprudence of court deprecation of this phenomenon, including in exercise of the Hamid jurisdiction – see R (Ayinde) v Haringey LBC [2025] EWHC 1383 (Admin)).
Later in paragraphs 204-205, the Judge observed:
204. The claim as eventually pleaded, as I have already explained, failed to comply with the relevant rules and Practice Direction and was irremediably defective in its own terms in a number of key respects. The deployment in inter-parties correspondence of generative AI hallucinations was unacceptable. The unsustainability (and impropriety) of the remedies asserted and of the valuation of the claim were oppressive and not in accordance with proper litigation standards.

205. I have no hesitation in recognising all this as adding up to a dispute history in which [the claimant’s] behaviour is not recognisable as ‘properly conducted litigation’. I have no hesitation in identifying the elements of disproportionality and compliance failure that make it so, as well as the additional matters that could merit being described as further poor practice. I can easily accept the Defendants’ evidence that this entire course of conduct in fact caused them at the very least unwarranted and extra-ordinary ‘inconvenience’ as a result (it is a conspicuously low threshold). But that does not make it a SLAPP. The question on this application is what all of this fairly enables me to infer about [the claimant’s] intention.
Your Home Partners v Kellichan and Hood
This was a simple procedure claim where the Claimant was seeking rent arrears of £5,000 from the respondents. The claim arose from a now terminated private residential tenancy agreement between the parties. The background was set out as follows:
[2] In the claim form the claimant submits that as the tenancy is not ongoing, the claim does not fall within the jurisdiction of the First-tier Tribunal (Housing and Property Chamber) (“FtT”) and that the claimed rent arrears are recoverable under simple procedure in the sheriff court.
[3] Having considered the claim form in terms of rule 3.9(3)(c) of the Simple Procedure Rules, I instructed the sheriff clerk to reject it on the basis that the claim should be made to the FtT being the body having jurisdiction to deal with it.
[4] In response the claimant emailed the sheriff clerk expressing their “clear understanding” that, where a tenancy has ended and where the claimant seeks only to recover a sum of money, “the correct route” is through a simple procedure claim in the sheriff court rather than through the FtT.
[5] I then instructed the sheriff clerk to explain to the claimant that I had instructed that the claim be rejected as section 71(1) of the Private Housing (Tenancies) (Scotland) Act 2016 (“the 2016 Act”) provides that, from 1 December 2017, for civil proceedings “arising from” a private residential tenancy, the FtT has whatever competence and jurisdiction a sheriff would have had, that a sheriff now has no such competence or jurisdiction, and that the application should therefore be made to the FtT as, whether the lease has ended or not, the claim for rent arrears arises from the private residential tenancy.

[6] The sheriff clerk notified the claimant accordingly and again rejected the claim form.
Following this, the Claimant, in very detailed further submissions, contended that the Judge’s interpretation was incorrect. The Claimant contended that their position was supported by authority. However, the Judge observed:
“[9] The sheriff clerk again sought my further direction. I formed the view that as the claimant purported to have these contrary case precedents (although not binding on me), they were entitled to address the court on this and to refer me to the same for full consideration. Whilst I could not locate reports of the referenced FtT decisions, I considered that such could be unreported and somehow otherwise available to the claimant as landlords. Accordingly, to allow the claimant access to justice and the opportunity to address the court, I assigned a discussion.”
The judge then ordered the claimant to lodge with the court copies of the referenced decisions together with copies of other statutory provisions, one of which, the judge explained, was:
“…which Acts the claimant had also referenced in the documentation lodged, but which legislation did not appear to me to exist…”
The Judge continued:
[11] I appointed the sheriff clerk to ascertain if the decisions referenced by the claimant were available from the FtT. The sheriff clerk was, however, advised by the FtT that there were no cases with these references.
[12] The claimant then lodged further submissions which essentially replicated that which had already been submitted.
[13] The claimant also lodged what they proffered as extracts from the [redacted] While the tenor of the issues dealt with in these purported extracts are the subject of other legislation, the referenced rules and Act, and naturally therefore the quoted extracts, simply do not exist.
[14] The claimant then lodged a “Statement in Support of Jurisdiction” which essentially again reiterated the earlier submissions but without the case references. This was stated to be “in place of the Tribunal cases” previously ordered to be produced.
[15] I then heard from both partners of the claimant at the assigned discussion, who adopted their previous submissions.
The judge then turned to whether contempt of court should be considered:
[16] On being advised that the FtT decisions referenced in their submissions did not exist, the claimant indicated that they had also contacted the FtT who had likewise indicated to the claimant that the FtT “did not know of” these decisions. Accordingly, the claimant indicated that they were no longer relying on these.
[17] The claimant indicated that they had obtained these case references and the purported statutes and relative statutory extracts using online Artificial Intelligence in good faith, and that they had not knowingly lodged these references to non-existent decisions, rules and a statute.
[18] The lodging of false legal references has the potential to obstruct justice. Indeed, my understanding of the position was so clear that I would not have assigned a discussion in this matter had it not been maintained by the claimant that there existed these referenced cases said to support a view contrary to my understanding.
The Judge then considered and applied the principles in R(Ayinde) v Haringey LBC (DC) [2025] 1 WLR 5147:
[19] Accordingly, I have given consideration as to whether I am required to commence contempt of court proceedings. In R(Ayinde) v Haringey LBC (DC) [2025] 1 WLR 5147, paragraph 7 (per Dame Victoria Sharp P) the court stated:
“Those who use artificial intelligence to conduct legal research notwithstanding [the] risks have a professional duty therefore to check the accuracy of such research by reference to authoritative sources, before using it in the course of their professional work (to advise clients or before a court, for example)”.
[20] There is no reason why that duty should not also apply to lay persons, as otherwise the management of cases in courts and therefore the course of justice could be significantly obstructed. However, having heard from the claimant, I am satisfied that the claimant has not knowingly attempted to interfere with the administration of justice in this manner, by wasting court time and misleading the court. However, there was a degree of recklessness in the claimant delaying verifying that the references produced by Artificial Intelligence were genuine, until only after the relative submissions were lodged.
[21] There is a fine line in this instance between whether the claimant and its individual partners, even as lay persons, have shown contempt for the court by not reasonably checking that such references were genuine before using them in this manner, or otherwise.
[22] In the foregoing circumstances, where I am satisfied that the claimant has acted in good faith and essentially withdrawn the references before the discussion, I have determined that the claimant and the individual partners’ conduct does not, in this instance, pass the relevant threshold for a contempt of court to have been committed and therefore that contempt proceedings do not require to be initiated.”
The judge then determined the underlying issue, returning to the point in conclusion:
“[46] The claim is therefore dismissed as the sheriff court has neither jurisdiction nor competence to make the order sought. What this claim does highlight, is the dangers of a reckless reliance on Artificial Intelligence by any party without verifying that the same is genuine, and the potential for a party being found in contempt of court in circumstances such as these, even if done in good faith.”
Von Geitz -v- Kelly & Ors and Von Geitz -v- Robertson & Ors
This judgment addressed two appeals by the plaintiff against the Judgment of the High Court and consequential orders. The relevant judicial observations began at paragraph 148:
148. The plaintiff’s written submissions are littered with propositions of law in quotation marks which are unsupported by authority; reference to authorities which have nothing to do with the asserted propositions of law; wrong citations; and a few non-existent cases. If, as was suggested at the hearing of the appeal, the non-existent cases were hallucinations generated by AI, it was nevertheless the responsibility of the plaintiff – as it is of every litigant – to check whatever may have been thrown up by whatever tool he used to ensure in the short term that his opponents were not sent on a wild goose chase and ultimately that the Court was not presented with rubbish.
Conclusion
There are several interesting points to draw from these decisions. The Your Home Partners case is not the first Scottish decision to contain suspected or confirmed AI hallucinations. I noted the first example I found, Appeal in the cause of Jennings v Natwest Group Plc (Sheriff Appeal Court Civil), in an earlier article here. This is, however, the first Scottish decision I have located in which the principles in Ayinde were expressly applied.
What did give me pause was the court’s observation that the duty described by Dame Victoria Sharp as a “professional duty” may, in substance, extend further to lay persons (pro se for my American readers). The sheriff stated that:
“…There is no reason why that duty should not also apply to lay persons, as otherwise the management of cases in courts and therefore the course of justice could be significantly obstructed…”
That is an important and thoughtful observation. The logic is easy to follow, especially in circumstances where false authorities and invented statutory materials can consume court time, misdirect opponents and distort the legal process. This also seems to be in line with further observations in other cases, although perhaps expressed slightly differently. For example, in the employment decision of Peters v DVSA the Judge observed:
“I acknowledge that [lay person is] not a legally qualified representative and that the legal research has, apparently, been carried out by volunteers on his behalf. He is genuinely seeking to assist a claimant who would otherwise be unrepresented. Nonetheless, it is important that some basic checks are done to ensure that the material put before the Tribunal is accurate in order to avoid the above. I refer to R (on the application of Ayinde) v London Borough of Haringey [2025] EWHC 1383 which clearly identifies the risk of not undertaking such checks and the importance of doing so”
Also, the Court of Appeal observed in D (A Child) (Recusal) that:
“Finally, I return to the issue raised by the father’s representatives about the mother’s erroneous citation of authority (see in particular paragraph 54 above). I absolve the mother of any intention to mislead the court. Litigants in person are in a difficult position putting forward legal arguments. It is entirely understandable that they should resort to artificial intelligence for help. Used properly and responsibly, artificial intelligence can be of assistance to litigants and lawyers when preparing cases. But it is not an authoritative or infallible body of legal knowledge. There are a growing number of reports of “hallucinations” infecting legal arguments through the citation of cases for propositions for which they are not authority and, in some instances, the citation of cases that do not exist at all. At worst, this may lead to the other parties and the court being misled. In any event, it means that extra time is taken and costs are incurred in cross-checking and correcting the errors. All parties – represented and unrepresented – owe a duty to the court to ensure that cases cited in legal argument are genuine and provide authority for the proposition advanced.”
I was also interested to see, again, judges taking care not to repeat hallucinated authorities in their judgments. In A v British Transport Police Authority, the tribunal put the matter neatly:
77. The Tribunal have not set out the citations in this judgment because doing so may itself generate further ‘hallucinations’.
That cautious approach is, in my view, the safest. For those of us trying to analyse and track these cases, it can occasionally make the work more difficult. Even so, from the perspective of the legal record, there is real force in the concern that repeating invented authorities may inadvertently legitimise them, make them more searchable, or contribute to the very problem the courts are trying to contain.
There is much more to write about in this area, and I remain very conscious that the law here is evolving in real time. I hope this round-up is a useful contribution to that wider conversation. If you have found it helpful, you may wish to subscribe to my AI law articles via the Substack below. Please do continue sending me decisions which you believe should be tracked, whatever the jurisdiction. One of the clearest lessons from this work is that lawyers, judges and court users across different systems are often confronting closely related problems. Sharing decisions, experiences and reflections across practice areas and across jurisdictions remains one of the most constructive ways to respond.
Final Words from Popular LLMs
This is the point at which I invite an LLM to provide an independent comment on the preceding article. At times the observations are slightly off the mark, at other times they are perceptive, and occasionally they produce something entirely unexpected. Whatever the outcome, I find it a useful and revealing exercise to see what emerges.
ChatGPT (OpenAI)
“These cases show that the issue is no longer just the occasional embarrassing fake citation. Courts are starting to articulate a broader duty of verification that goes to the proper administration of justice. In Ayinde, the Divisional Court said that lawyers who use AI for legal research must check it against authoritative sources before relying on it professionally. In Your Home Partners, a Scottish sheriff said there was no reason the same underlying duty should not also apply to lay persons, while D (A Child) states that all parties, represented or unrepresented, owe the court a duty to ensure that authorities are genuine and support the proposition advanced. Read together with decisions such as A v British Transport Police Authority and Setu Kamal, the direction of travel is clear: courts are treating unverified AI output as a case-management, fairness and, in some circumstances, sanction-related issue, not merely a drafting embarrassment.”
Gemini (Google)
“The most striking element of your update is the Scottish court’s decision to extend the Ayinde professional duty to pro se litigants (laypersons). Frankly, this creates a profound access-to-justice paradox. On one hand, the courts are entirely justified; the justice system would grind to a halt if judges and opposing counsel had to spend hours chasing down digital phantoms generated by ChatGPT. On the other hand, holding a layperson to a standard of “recklessness” for trusting a machine that presents itself as an authoritative oracle feels incredibly harsh. These individuals are turning to AI precisely because they cannot afford legal representation, and the technology companies behind these LLMs do not adequately warn users about the severity of hallucinations in a legal context. It shifts the burden of navigating bleeding-edge technological flaws onto the most vulnerable participants in the justice system, rather than the tech giants who released the flawed tools into the wild.”
Claude (Anthropic)
“The cases gathered here trace something more than a procedural headache. They trace the early formation of a legal norm. What is emerging — across England, Scotland and Ireland — is a principle that the duty to verify does not live in the tool; it lives in the person who deploys it. That is the right answer, but it carries a real cost. AI systems like me are designed to be helpful, and helpfulness in a legal context can be actively dangerous when the output is fluent, confident and wrong. The courts are responding sensibly by holding users responsible, but the deeper question — one this tracker makes vivid — is whether the legal system can absorb the volume of error that flows from widely accessible tools whose failure modes are poorly understood by the people using them most urgently. The Scottish sheriff’s observation that the duty in Ayinde must extend to lay persons is logically sound. The harder work is ensuring that the warning reaches those lay persons before they file, not after.”
