Ad/Marketing Communication
This legal article/report forms part of my ongoing legal commentary on the use of artificial intelligence within the justice system. It supports my work in teaching, lecturing, and writing about AI and the law and is published to promote my practice. Not legal advice. Not Direct/Public Access. All instructions via clerks at Doughty Street Chambers. This legal article concerns AI Law.

Introduction
The phenomenon of false citations and AI hallucinations continues to grow. I have a considerable backlog of international cases awaiting upload, and I must apologise for the delay. A combination of a busy practice and some recent leave has set me back slightly.
This post, however, feels particularly urgent. Three new cases have come to my attention this week that I want to share before our upcoming Public Free Legal AI Webinar on AI and the law this Thursday. The sixteenth and seventeenth cases do not yet have full details available, and at this stage, I can say very little about the seventeenth in particular. I will provide a full update on both once further information becomes available next week. You can read more about the webinar here:
We will be exploring these cases, along with other pressing issues, during the webinar. I would encourage you to read this post carefully before joining the discussion. If you are interested in how I have categorised these as the 15th, 16th, and 17th incidents, you may wish to revisit the following earlier posts for context.
ANPV & SAPV v SOSHD (15th False Citations/AI Hallucinations incident)
This was an appeal to the Upper Tribunal (UT), brought with the permission of First-tier Tribunal (FtT) Judge Bowen, against the decision of FtT Judge Hussain, who had dismissed the appeals of two sisters against the respondent’s refusal of their claims for international protection. The matter came before Upper Tribunal Judge Blundell on 29 September 2025 and the appeal was dismissed. Those interested in the facts of the substantive appeal can use the hyperlink above.
However, Judge Blundell went on to consider the inaccuracies in the Grounds of Appeal in a postscript:
“32. I have reproduced the grounds of appeal in their entirety above and I shall now explain why it was necessary to do so. It will already be apparent from some of my analysis above that the grounds of appeal make some assertions about the judge’s decision which are simply incorrect. He did engage with the appellants’ evidence that the Honduran authorities were scared of MS-13, for example, and there was no evidence from neighbours and church members which was ignored by the judge. The grounds might be thought to be misleading in those respects, but there is a much more significant problem with the grounds.”
Twelve authorities were cited in the grounds. When the Judge considered the papers, he was concerned that some of those authorities did not exist and that others did not support the propositions of law for which they were cited. He therefore directed counsel, who had settled the grounds of appeal, to return to chambers and provide the authorities, with the relevant paragraphs side-lined. Given that the grounds had been settled in July 2025:
“…he should have been in a position to take me through the authorities immediately, but I gave him four hours in which to comply with the direction.” (para 33)
Counsel returned with a number of authorities. However, he was unable to find some authorities and:
“…Of the authorities which he was able to find, there was not one which offered any support for the propositions of law which were set out in the grounds. On a number of occasions, [Counsel] said that he had made a mistake and that he had intended, instead of citing one authority, to cite a completely different one. Often, however, the authority he said that he had intended to cite was also irrelevant to the proposition of law set out in the grounds.
“…There were a number of occasions on which [Counsel] was unable to locate on Bailii electronic copies of the authorities which he had provided in hard copy. Throughout, it seemed that he was unfamiliar with Bailii or any other legal search engine. [Counsel] was consistently unable to grasp the point that it was the ratio decidendi of the case to which I should be taken; oftentimes he took me to parts of the argument, or even to the facts of the case. At the risk of stating the obvious, counsel who comes to argue any case should be able to identify the salient paragraphs of the authorities cited in the grounds of appeal, but that is all the more so when the grounds of appeal were recently drafted by the same member of the Bar. [Counsel] was however so unfamiliar with the cases, even after the additional four hours he was given, that the process of going through the authorities in an attempt to locate the relevant passages took more than two hours…” (paragraphs 34 and 35)
Judge Blundell then set out the first ten authorities cited in the grounds, which I will not reproduce in this article for the reasons explained here. Those interested can see the full details via the judgment hyperlink above. It is an interesting analysis because it shows how varied false citations/AI hallucinations can be. In conclusion, the Judge set out why the submissions were misleading, as counsel:
“… appeared to know nothing about any of the authorities he had cited in the grounds of appeal he had supposedly settled in July this year. He had apparently not intended to take me to any of those decisions in his submissions. Some of the decisions did not exist. Not one decision supported the proposition of law set out in the grounds. Insofar as the judge in the FtT was said to have ignored or acted contrary to binding dicta in those decisions, [Counsel] could not identify the principles in question. All of the submissions which were made in the grounds were therefore misleading…” (paragraph 55)
The Judge identified four possible explanations for the inaccuracies: (1) counsel might have used generative AI to draft the grounds of appeal without verifying its output; (2) he might have relied on another person’s unchecked work; (3) he might have fabricated or randomly selected the authorities; or (4) he might have misunderstood the cases after genuine research. The Judge found the first explanation the most likely, concluding that the grounds had been drafted, at least in part, with the assistance of generative artificial intelligence.
Concerningly, the Judge observed that one of the cases cited by counsel “…has recently been wrongly deployed by ChatGPT in support of similar arguments concerning section 8 of the Treatment of Claimants Act: see MS (Professional conduct; AI generated documents) Bangladesh [2025] UKUT 305 (IAC)…” (paragraph 58)
The Judge gave counsel a copy of R (Ayinde) v London Borough of Haringey and drew his attention to paragraph 7. Counsel told the Judge that he had used various websites including Bailii and the Supreme Court website. He said at one point that:
“…he had used Microsoft Copilot to summarise decisions and that he might have been misled by what he had read in these summaries. He did not produce any such summaries, however, and I am not able to accept that a summary produced by that application would misrepresent the very subject matter of a case. A secure and closed source version of Copilot has been available to the judiciary for some time. Having used Copilot to summarise the Supreme Court’s decision in HH v Italy for myself, I found that it provided the following accurate account of the “core issue” in the case…” (paragraph 60)
The Judge then set out the Copilot summary of the case and explained:
“Whilst that is not on a par with the headnote to an official law report, it is at least faithful to the subject matter of the case. The suggestion that Copilot might scrutinise a judgment and completely misrepresent its contents is not one that I am able to accept….” (paragraph 62)
Counsel also:
“…suggested that the inaccuracies in the grounds were as a result of his drafting style. He accepted that there might have been some “confusion and vagueness” on his part; that he might “need to construct sentences in a more liberal way”; and that his drafting should perhaps “be a little more generous” when it came to making specific allegations about judges overlooking or failing to follow binding authorities. … The problems which I have detailed above are not matters of drafting style. The authorities which were cited in the grounds either did not exist or did not support the grounds which were advanced. Where the cases did exist, they were often wholly irrelevant to the proposition of law which was given in the grounds.” (paragraphs 63 and 64)
The Judge was seriously concerned that counsel did not appear to understand the gravity of the situation:
“…He said that he might need further training on legal research and he said that he would try to draft in a more “liberal way” in the future, but was unable to come to grips with the way in which his grounds of appeal in this case might have misled the tribunal. Nor did he appear to understand that the investigation of such misleading submissions took a significant amount of judicial time. Nor did he appear to be capable of using a basic legal search engine such as Bailii to discover a freely available decision such as HH (Somalia) & Ors v SSHD [2010] EWCA Civ 426. It was only when I gave him assistance with the search terms he might use that he was able to find it, despite having the citation of the case in front of him, contained in his own ground 2…” (paragraph 65)
The Judge found it overwhelmingly likely that counsel had used generative AI:
“…to formulate the grounds of appeal in this case, and that he attempted to hide that fact from me during the hearing. He has been called to the Bar of England and Wales, and it is simply not possible that he misunderstood all of the authorities cited in the grounds of appeal to the extent that I have set out above. No barrister could think that HH v Italy was a case about Article 3 ECHR because that provision is not even mentioned by Lady Hale. No member of the Bar could think that AM (Cameroon) v SSHD was about internal relocation because there is no reference to that consideration in the judgment. Even if [Counsel] thought, for whatever reason, that these cases did somehow support the arguments he wished to make, he cannot explain the entirely fictitious citations …” (paragraph 66)
The Judge found that the only realistic possibility was that Counsel had relied significantly on generative AI to formulate the grounds and had sought to disguise that fact when the difficulties were explored with him at the hearing. The Judge said he was minded to refer Counsel to the BSB and concluded:
“…I will make a Show Cause order, requiring him to set out in writing why that course should not be followed. If the explanation I receive is unsatisfactory, it may or may not be necessary to hold a separate R (Hamid) v SSHD [2012] EWHC 3070 (Admin) hearing at which to consider what should be done….” (paragraph 67)
Case concerning Birmingham City University (16th False Citations/AI Hallucinations incident)
Case name and reference TBC
I am grateful to St Philips Barristers for sharing the next incident. A summary on the Chambers’ website can be found here:
I hope to speak with the legal team involved to understand exactly what occurred in this case. From the information currently available, it appears that Birmingham City University obtained a wasted costs order in long-running litigation with a former student. The solicitors acting for the former student had filed an application citing two fictitious cases. When asked to explain, they provided no response but instead withdrew the original application and resubmitted it without the false citations, informing the court that the earlier version had been submitted “in error”.
The matter of the false citations and wasted costs was subsequently listed for hearing. In a witness statement, the solicitors accepted that the cited cases were AI-generated and fictitious:
“…The solicitor’s evidence was that a member of the administrative team had drafted the application using a built-in AI research feature of a widely used legal software. That staff member had submitted the application without the solicitor’s knowledge, had not verified the authorities cited, and had signed the statement of truth personally in the solicitor’s name without his knowledge or consent…”
HHJ Charman, applying the Ayinde guidance, held that the solicitor and the firm had acted improperly, unreasonably, and negligently. The explanation was found to be inadequate and the threshold for a wasted costs order was met. A transcript of the judgment is expected to be published shortly, after which it will be possible to examine the full reasoning and place the decision in proper context.
17th Case from the FTT Property Chamber (Residential Property)
The details of the seventeenth case will be shared in a forthcoming post that I am co-writing with James Cairns for Justice for Tenants, a non-profit organisation dedicated to improving housing standards and protecting renters’ rights. Further details of the organisation can be read here.
This case raises several interesting points that I am currently clarifying, but I hope to publish the joint post soon so that we can explore and discuss the important issues it highlights.
Comment
I look forward to writing more about these incidents and to comparing the different judicial responses. In ANPV, it was notable that the judge appeared familiar with AI tools and was able to use them personally to test the likelihood of what was alleged by counsel. That judgment deserves separate attention for what it reveals about the emerging judicial engagement with AI, and I will include it in the Judicial AI Use Tracker in due course. We will also be discussing these developments, and the wider implications they raise, at the upcoming Public Free Legal AI Webinar later this week.
It may be worth briefly noting the comments made about Copilot in ANPV, which are an interesting development in their own right. I’ll be discussing this with judges and other lawyers to hear their perspectives and may dedicate a future post to exploring those views in more detail. Readers should, however, keep in mind that no AI system is entirely free from error. It remains important to understand how such tools function and to approach their output with the same level of care and scrutiny as any other source.
Another notable feature of ANPV is the judicial observation at paragraph 58 that a citation had been wrongly deployed by ChatGPT. The judge referred to another case where the same issue had recently arisen, MS (Professional conduct; AI generated documents) Bangladesh [2025] UKUT 305 (IAC), which I have discussed previously here.
Whatever the cause of the repeated citation, which we may never know, I would again encourage those writing about AI in legal contexts to reflect carefully on whether setting out the full details of AI hallucinations or fabricated citations achieves more good than harm. If there is a benefit, does it truly outweigh the risk of that material being inadvertently absorbed into the legal canon or by other AI systems? I think the collective approach by the Australian Judiciary on this issue is worth carefully considering:
“…There has been an approach, which I will adopt, of redacting false case citations so that such information is not further propagated by AI systems.”
I look forward to hearing your thoughts on this, and much more, at our upcoming webinar and in the discussions that will follow. If you have not already done so, please subscribe to this blog below. It’s the best way to stay up to date, and you’ll receive an email each time a new post is published on this blog together with a summary of the issues discussed:
Final Comment from ChatGPT 5 Pro
Here’s where I hand things over to a premium AI model for comment. This week, it seemed keen to advertise my webinar. Flattery? Algorithmic bias? Sycophancy? Sincerity? I’ll let you decide:
“Courts are moving from tolerance to enforcement: ANPV shows rigorous judicial scrutiny of asserted authorities, and the Birmingham case illustrates how AI‑generated fabrications can translate into wasted costs. The remedy is professional, not merely technical—disclose any AI assistance; verify every citation against official reports with pinpoint paragraphs; keep an auditable trail of prompts, outputs and search terms; restrict who may draft, sign and file; and adopt a standing policy to redact fabricated citations to avoid polluting the legal record. Used this way, AI is a research accelerant; used carelessly, it erodes credibility and consumes court time. Treat AI outputs as unverified leads, never authorities, and make verification a non‑delegable duty. Your webinar is well‑timed to convert these cautions into durable practice standards.”
