Ad/Marketing Communication
This legal article/report forms part of my ongoing legal commentary on the use of artificial intelligence within the justice system. It supports my work in teaching, lecturing, and writing about AI and the law and is published to promote my practice. Not legal advice. Not Direct/Public Access. All instructions via clerks at Doughty Street Chambers. This legal article concerns AI Law.
Introduction
I’m grateful to everyone who shared this Hamid judgment on AI hallucinations/fabricated citations with me. I haven’t had the chance to address it sooner as trial commitments have kept me occupied, but it’s certainly an important one. At first, I thought it might be a fresh incident of AI hallucinations or false citations. Having now read the judgment carefully, I can see it is in fact a Hamid judgment following on from a decision I considered earlier, which I explain in more detail below.
Ms (Bangladesh) v Secretary of State for the Home Department: The First Hearing
I addressed the first hearing in my post 12 False Citations/AI Hallucination Incidents in UK Courts: The Complete Legal Timeline Before and After Ayinde and How Pervasive is the Problem?
By my count, this was the eleventh alleged incident of AI hallucinations or false citations in the UK courts and the second from the Upper Tribunal (Immigration and Asylum Chamber) (“UT”), heard on 1 July 2025. At that stage, it was not clear what steps or sanctions the court had considered in response. However, the court did note:
“…We raised concern about this and referred [counsel] to the recent decision of the President of the King’s Bench Division in Ayinde [2025] EWHC 1383 (Admin) on the use of Artificial Intelligence and fictitious cases, and directed him to make separate representations in writing…”
Hamid Jurisdiction
To understand the next hearing, it is important to revisit the expressions “Hamid jurisdiction”, “Hamid hearing”, “Hamid decision” and “Hamid judgment”, which appear throughout this judgment and in related commentary.
These expressions all stem from R (Hamid) v Secretary of State for the Home Department [2012] EWHC 3070 (Admin). In that case, the court considered its inherent powers to intervene and sanction those misusing or abusing the judicial process. For more detail, see my commentary here, where I discussed this jurisdiction at length in relation to R (Ayinde) v London Borough of Haringey, Al-Haroun v Qatar National Bank QPSC [2025] EWHC 1383 (Admin), where the same jurisdiction was invoked.
For my US audience, this is broadly similar to the “sanctions hearings” held in many US jurisdictions when lawyers present concerning material to the court.
Ms (Bangladesh) v Secretary of State for the Home Department: The Hamid Hearing
UT Neutral Citation Number: [2025] UKUT 00305 (IAC)
This issue came before Mr Justice Dove and Upper Tribunal Judge Lindsley. Mr Justice Dove is the President of the Immigration and Asylum Chamber in the UT. The Hamid judgment begins with a brief introduction on AI and professional obligations:
“1. AI large language models such as ChatGPT can produce misinformation including fabricated judgments complete with false citations.
2. The Divisional Court has provided guidance in the case of R (Ayinde) v London Borough of Haringey, Al-Haroun v Qatar National Bank QPSC [2025] EWHC 1383 (Admin) that the consequence of using AI large language models in a way which results in false authorities being cited is likely to be referral to a professional regulator, such as the BSB or SRA, as it is a lawyer’s professional responsibility to ensure that checks on the accuracy of citation of authority or quotations are carried out using reputable sources of legal information. Where there is evidence of the deliberate placing of false material before the Court police investigation or contempt proceedings may also be appropriate.
3. Taking unprofessional short-cuts which will very likely mislead the Tribunal is never excusable.”
The court then introduced the Hamid jurisdiction, explaining the importance of lawyers conducting themselves with proper professional standards:
“The Upper Tribunal cannot afford to have its limited resources absorbed by representatives who place false information before the Tribunal. Further, time spent on applications containing false legal information also risks a loss of public confidence in the processes of the Upper Tribunal. We are also aware that the immigration client group can be particularly vulnerable. We emphasise that the primary duty of solicitors and barristers is to the Court and Upper Tribunal, and to the cause of truth and justice.” [para 1]
The court explained that it was guiding itself in accordance with Hamid and subsequent relevant judgments, including Ayinde:
“The purpose of the present hearing is to decide whether it is appropriate to refer [named counsel], of Counsel, to the Bar Standards Board (BSB) for investigation. What is said in this Hamid decision is not binding on the BSB but may assist them to explore this matter further and consider what, if any action, is appropriate.” [para 3]
The court then briefly set out what had happened on the previous occasion:
“These proceedings have come about as a result of a false citation placed by [named counsel] in the grounds of appeal in the appellant’s case which he drafted on 14th March 2025. In the grounds it was said as follows at paragraph 14: “The Tribunal or decision maker placed undue weight on delay in isolation, contrary to [false case], which requires consideration of personal circumstances, mental health, and overall context.” The case [false case] does not exist. However permission was granted to appeal on limited grounds by the First-tier Tribunal including on the ground that undue weight had been placed on delay.” [para 4]
The matter proceeded to the UT, where counsel was recorded as the representative using correspondence from a Gmail account that contained “chambers” in the name. Counsel claimed this was an error, stating that while the email was operational he was in fact instructed by solicitors. He maintained that evidence of solicitors being on record had been sent to the UT. However, solicitors were not on the record. Counsel pointed to errors made by the solicitor, and the court was presented with a confusing and, at times, contradictory account of who was instructed, when, and in what capacity.
Turning back to the subject of the appeal, the court noted that the matter proceeded to an error of law hearing, where counsel was:
“asked by Upper Tribunal Judge Blundell to take him to the relevant paragraph of [false case] as it was noted that the citation was actually for YH (Iraq), a judgment of the Court of Appeal which is about s.94 certificates and paragraph 353 fresh claims and not delay.” [para 7]
Counsel responded that he did not wish to rely on YH (Iraq):
“He then said he meant to cite Beatson J’s judgment in R (WJ) v SSHD [2010] EWHC 776 (Admin), although he was again unable to take the Panel to anything in that case which bore on credibility assessments or section 8 of the 2004 Act. He then suggested that what he should have cited was Bensaid v UK [2001] ECHR 82, although he accepted that the ECtHR was unlikely to have said anything about a 2004 UK statutory provision in a decision which was made in 2001…” [para 7]
The Panel then decided to take a break and provided counsel with a copy of Ayinde, asking him to consider his position over lunch. After lunch, counsel said:
“he had undertaken ChatGPT research during the lunch break and the citation for [false case] was correct, and it was a decision made by Pill and Sullivan LJJ and Sir Paul Kennedy.” [para ]
Unsurprisingly, the Panel directed counsel to provide a copy of [false citation] by 4pm on 24 June or explain what had happened. While the Panel moved on to the next case, counsel then:
“provided the Tribunal clerk with nine stapled pages which were not a judgment of the Court of Appeal but an internet print out with misleading statements including references to the fictitious [fake case] case with the citation for YH (Iraq). The notes contained no mention of the key case on delay JT (Cameroon) v SSHD [2008] EWCA Civ 878.” [para 7]
Counsel then wrote to the UT stating:
“…he had in fact meant to cite YH (Iraq) (and in particular paragraph 24 of that decision which he cites as saying all factors in an applicant’s favour must be taken into account) and apologising for his failure to cite the full and correct name of the case. He blamed this on having suffered from “acute illness” before drafting the grounds; and on having been on a visit to Bangladesh between 10th and 18th June 2025, and the fact that he had been hospitalised in Bangladesh due to diabetes, cholesterol problems and high blood pressure. He also argued that we should not penalise him for this error as he has five family members (wife and four children) depending on him…” [para 8]
On 23 July 2025, counsel sent a further letter:
“In the letter he provides an analysis of Ayinde in which he acknowledges that it is a breach of professional duties to rely upon citations produced using AI tools without checking their veracity using reputable legal search engines such as Westlaw, EIN or Bailii.” [para 9]
In this document, and in subsequent answers to questions from the court, counsel accepted that he had used ChatGPT to draft the grounds of appeal and to create the document he handed up. He explained that he had returned from Bangladesh unwell and felt under time pressure. He said he needed to eat lunch due to his diabetes and thought ChatGPT would be a quick way to find the (false) judgment:
“He argues that he was misled by the search engine and is thus also a victim. He accepts in his letter of 23rd July that paragraph 24 of YH (Iraq) is not to do with delay but relates to the principle of anxious scrutiny, and so was not relevant. He says however that he has now undertaken some further professional training on immigration law which includes a presentation on AI and the Ayinde case. He offers an apology for his conduct and says that he should not be referred to the BSB as he now has a proper understanding, he has been honest, he will act with integrity in the future and is unwell and concerned as to how he will support his family.” [para 9]
Ms (Bangladesh) v SoS for Home Department: The Hamid Judgment
The court raised several concerns. First, it was concerned that counsel was conducting litigation without the proper licence and noted he had already been referred to the BSB in another appeal. It then considered counsel’s use of AI.
Counsel had “…accepted that he used ChatGPT on two occasions: when he drafted the original grounds of appeal and when he appeared before the Upper Tribunal Panel. He accepts that on neither occasion did he conduct any checks from a reputable source of legal information, such as Bailii, West Law, EIN or Lexis Nexis, to discover if the case [fake citation] was genuine…” [para 10]
Then, importantly:
“…It would appear that the authority may have swayed the Judge of the First-tier Tribunal to grant permission on the delay ground…” [para 10]
When challenged by UT Judge Blundell, counsel was:
“…very fairly given time over lunch and a copy of Ayinde, so that he ought to have been aware of the gravity of his situation. However [named counsel] did not immediately admit to his unprofessional use of ChatGPT but astonishingly maintained that the case was genuine because of it having been evidenced by this AI large language model. [named counsel]’s attempt to explain his behaviour by reference to his state of health and tiredness is not, we find, a valid excuse. If unwell counsel must refuse work and where they are due to appear before the Upper Tribunal they must inform the Tribunal at the earliest possible time, and if there is sufficient time the appellant must instruct alternative counsel. Taking unprofessional short-cuts which will very likely mislead the Tribunal is never excusable for these or any other reasons. Further in his written response to the Upper Tribunal Panel drafted on 24th June 2025 [counsel] inconsistently and dishonestly pretended that he had intended to rely upon the genuine case of YH (Iraq) which he now accepts was not the case, and indeed that the passage he identified in this previous letter was to do with anxious scrutiny and not delay.” [para 11]
The court summarised the shifting explanations as follows:
“…[named counsel] therefore has moved from an acceptance of the use of ChatGPT but with a defence of the research and a defence of the fake case of [fake case] on the day of the Panel error of law hearing; to a claim that it was a regrettable oversight and he did in fact want to rely upon an irrelevant but genuine case in his letter of 24th June 2025; to an acceptance before us that he used ChatGPT to assist in formulating the original grounds and in production of the document he handed to the Panel on 20th June 2024, and that the case of [fake case] is fake. We find therefore that [named counsel] has directly attempted to mislead the Tribunal through reliance on [fake case], and has only made a full admission of this fact in his third explanation to the Upper Tribunal. He has not therefore acted with integrity and honesty in dealing with this issue, as well as having attempted to mislead the Tribunal in the grounds through the use of an AI generated fake authority…” [para 12]
The court did accept that counsel may not have understood the limitations of large language models such as ChatGPT until he read Ayinde and attended recent training, but emphasised:
“…this is clearly not an excuse. The BSB issued guidance on AI and ChatGPT in October 2023 which identified the danger that it could produce misinformation including fabricated judgments complete with false citations, as is set out at paragraph 14 of Ayinde…”
The court found that counsel:
“…misused artificial intelligence and attempted to mislead the Tribunal contrary to the obligations as set out in the BSB regulatory framework which require him to observe his duty to the Court, to act with honesty and integrity, not to behave in a way which diminishes trust and confidence in the profession, and to provide a competent standard of work to clients.” [para 13]
Ms (Bangladesh) v SoS for Home Department: The Next Steps
In considering next steps, the court referred to Ayinde and determined it would not refer the matter to the police or initiate contempt proceedings:
“…We do not consider that this is a case where there is evidence that there has been a deliberate placing of false material before the Tribunal with the intention that the Tribunal will treat it as genuine. This is because we find that [named counsel] did not know that AI large language models, and ChatGPT in particular, were capable of producing false authorities…” [para 16]
However, the court did consider the case should be referred to the BSB:
“At paragraph 29 of Ayinde it is clearly stated that where false citations are placed before the Court, because of the lack of proper checks or otherwise, that referral to a regulator is likely to be appropriate. [named counsel] undoubtedly carried out no proper checks. In this respect what is said at paragraph 81 of Ayinde is also relevant: “A lawyer is not entitled to rely on their lay client for the accuracy of citations of authority or quotations that are contained in documents put before the court by the lawyer. It is a lawyer’s professional responsibility to ensure the accuracy of such material.” [named counsel] did not ensure the accuracy of what was placed before the Upper Tribunal. We are also guided in making this referral by the factors listed at paragraph 24 of Ayinde. In particular we find a referral is appropriate so that proper standards are set to stop false material coming before the Tribunal, which, in this case, we find contributed to the grant of permission, and thus potentially to the wrongful prolonging of the litigation, and has led to considerable public expense in addressing it through this hearing. We also find that it is relevant to the making of this referral that there was no immediate, full and truthful explanation given by [named counsel] when he was first challenged by the Panel, and in particular that his letter written on 24th June 2025 was, we find, a less than honest attempt to pretend that he had made a simple typographical error and had not relied upon ChatGPT to do research and thereby on a false citation.” [para 17]
Comment
This judgment has already attracted attention from several well-regarded and widely read legal blogs. Giles Peaker, writing on Nearly Legal in his article “The boundless capacity for stupidity” (available here), makes some observations that are well worth reading.
The judgment is an important one, and two features particularly struck me:
Firstly, regular readers will know that I have significant concerns about judges citing AI-hallucinated cases or fabricated authorities in their judgments. For those interested, I have set out the reasons why here, and in my article for Bar Counsel Magazine (available here), where I wrote:
“Well-intentioned judges often cite hallucinated cases and their erroneous legal principles in full within official judgments to show the extent of the problem to those reading. However, judges may be inadvertently exacerbating the issue because those AI-generated inaccuracies are being integrated into the established legal canon indirectly.”
In this case, the court referred several times to the false citation and the principle attributed to it, and I do hope these do not inadvertently find their way into legal databases. Out of caution, I suggest that the Australian practice of redacting false case citations, to prevent their further propagation by AI systems, may be the better approach.
Secondly, it is extremely troubling that a fabricated authority seems to have influenced the grant of permission in this case. That should concern us all. Permission is the filter which decides whether a claim proceeds, and if it is obtained on the strength of a non-existent case, the process is undermined at its very foundation. It risks prolonging litigation unnecessarily, consuming precious judicial time, and adding to public expense. More importantly, it damages confidence in the system itself.
This is unlikely to be the end of the matter. The BSB will now have to decide what steps are appropriate, and further developments are likely.
For those waiting on my weekly newsletter, thank you for your patience; it is on its way, and there have been several international incidents worth covering.
I would be very interested to hear your thoughts on AI hallucination cases/fabricated citations, wherever you practise, particularly if you are aware of any reported incidents I may have missed. If you have found this article useful, please consider subscribing to my FREE Substack newsletter:
Final Comment from o5 Pro
This is where I invite a premium AI model to comment on or critique the preceding discussion. This week o5 states:
“Hallucinated authorities are not a curiosity but a direct threat to procedural integrity, particularly at permission stage where a single bogus citation can distort outcomes and waste public resources; the solution is discipline, not technophobia. Lawyers must verify every authority against canonical databases, with courts encouraging compliance through simple verification certificates on skeletons and swift regulatory referrals where checks were skipped. Judgments should balance transparency with containment by describing fakes in neutral terms or redacting specific identifiers to prevent polluting legal databases and machine models. The objective is to normalise careful, auditable use of modern tools while preserving accuracy, efficiency, and public confidence in adjudication.”