Is Your Firm or Chambers Ready? Could Your Heads of Chambers, Partners, or Others Face Sanctions? How Far Does AI Liability Reach? 10 Crucial AI Supervision Questions from Landmark AI Hallucinations Case.

"We would go further however. There are serious implications for the administration of justice and public confidence in the justice system if artificial intelligence is misused. In those circumstances, practical and effective measures must now be taken by those within the legal profession with individual leadership responsibilities (such as heads of chambers and managing partners) and by those with the responsibility for regulating the provision of legal services. Those measures must ensure that every individual currently providing legal services within this jurisdiction (whenever and wherever they were qualified to do so) understands and complies with their professional and ethical obligations and their duties to the court if using artificial intelligence. For the future, in Hamid hearings such as these, the profession can expect the court to inquire whether those leadership responsibilities have been fulfilled." Dame Victoria Sharp

Ad/Marketing Communication

This legal article/report forms part of my ongoing legal commentary on the use of artificial intelligence within the justice system. It supports my work in teaching, lecturing, and writing about AI and the law and is published to promote my practice. Not legal advice. Not Direct/Public Access. All instructions via clerks at Doughty Street Chambers. This legal article concerns UK AI Hallucination cases.

AI Hallucinations

Introduction

The case of Ayinde v Haringey and Al-Haroun v Qatar [2025] EWHC 1383 marks a seminal moment in the evolution of AI law within our courts. I would have liked to discuss it sooner, but balancing a demanding practice alongside developing this blog has been challenging, though infinitely rewarding. I’m deeply grateful to everyone who has supported this project by sending judgments and insightful articles on AI hallucinations and other issues, as well as those who’ve contributed pieces that I look forward to sharing.

Due to its complexity and significance, this judgment warrants consideration across several posts. In this initial post, rather than delving deeply into the facts of the case, I will focus on a clear and serious warning issued by the court. I’ll outline ten critical questions that all legal organisations should urgently consider in order to address their supervisory responsibilities regarding AI use effectively. This judgment goes well beyond AI hallucinations alone.

As a pupil supervisor myself, I will need to carefully reflect on whether any changes to my approach to supervision are required, in light of these clear and important messages from Rt Hon. Dame Victoria Sharp and Mr Justice Johnson. Please take the time to read the judgment in full in addition to my comments below.

The Hamid Jurisdiction

These cases appear to be the first instances where the Hamid jurisdiction has been invoked specifically to scrutinise the alleged use of AI. I discussed the Hamid jurisdiction in more detail here and linked to the case for those interested. In brief, the Hamid jurisdiction refers to the High Court’s inherent power to regulate its own procedures and uphold the professional standards expected of lawyers appearing before it, particularly addressing concerns about competence or conduct when professional obligations to the court may have been breached.

This is important because:

“The court has a range of powers to ensure that lawyers comply with their duties to the court. Where those duties are not complied with, the court’s powers include public admonition of the lawyer, the imposition of a costs order, the imposition of a wasted costs order, striking out a case, referral to a regulator, the initiation of contempt proceedings, and referral to the police.”

Old Rules, New Tools

In a particularly notable analogy, the judgment states:

“This duty rests on lawyers who use artificial intelligence to conduct research themselves or rely on the work of others who have done so. This is no different from the responsibility of a lawyer who relies on the work of a trainee solicitor or a pupil barrister for example, or on information obtained from an internet search”

Observations like these are extremely helpful for those of us navigating the complexities of new technologies. It is easy to lose sight of well-established principles and seek entirely new guidance when, in fact, the relevant principles already exist.

That said, I’ve been reflecting on why issues surrounding AI hallucinations cross jurisdictions and international boundaries. The problem is clearly not limited to isolated incidents. The AI Hallucinations Cases Tracker captures reported cases, but how many incidents remain unreported? Internationally, how often have lawyers unwittingly relied upon incorrect research provided by those they supervise? I don’t think there is any reliable data on this point.

The central question I am grappling with is this: is the issue arising because AI-generated work is fundamentally different from work produced by a trainee solicitor or pupil barrister, or are some supervisors failing to perform their supervisory duties adequately, with those failings now being highlighted by these AI-related cases? I don’t know the answer, but I do wonder whether these questions are on the minds of the courts dealing with these issues.

Clear Warning to Legal Leaders Including Heads of Chambers and Managing Partners

Arguably, this is one of the most important sections of the judgment and should be shared as widely as possible. If further AI hallucinations, or other forms of AI misuse, come to the court’s attention, I anticipate this passage will be quoted verbatim:

“We would go further however. There are serious implications for the administration of justice and public confidence in the justice system if artificial intelligence is misused. In those circumstances, practical and effective measures must now be taken by those within the legal profession with individual leadership responsibilities (such as heads of chambers and managing partners) and by those with the responsibility for regulating the provision of legal services. Those measures must ensure that every individual currently providing legal services within this jurisdiction (whenever and wherever they were qualified to do so) understands and complies with their professional and ethical obligations and their duties to the court if using artificial intelligence. For the future, in Hamid hearings such as these, the profession can expect the court to inquire whether those leadership responsibilities have been fulfilled.”

The 10 Questions

In light of the significant implications of this judgment, I have drafted some questions that might help Chambers and Law Firms think clearly about their approach to AI supervision. These questions are designed to prompt reflection on how your organisation currently manages AI use and to help ensure you’re prepared if the court ever reviews your supervisory practices.

  1. What exactly constitutes AI misuse in legal practice? (note: I suggest broader considerations than just AI hallucinations)
  2. Who in your organisation specifically falls within the category of those in the “legal profession with individual leadership responsibilities”? (note: it is not just heads of chambers and managing partners)
  3. Who in your organisation provides “legal services” within this jurisdiction?
  4. What responsibilities do regulators have in relation to supervising AI use by lawyers, what guidance have they produced and how has this guidance been provided to all persons providing legal services in your organisation?
  5. How can you effectively ensure that all persons providing legal services in your organisation, regardless of their qualifications or experience, “understand” their professional and ethical obligations and their duties to the court if using AI?
  6. How can you effectively ensure that all persons providing legal services in your organisation, regardless of their qualifications or experience, “comply” with their professional and ethical obligations and their duties to the court if using AI?
  7. What practical and effective measures “must” be taken to ensure appropriate AI use?
  8. How would your organisation demonstrate to a court that those with leadership responsibilities have adequately fulfilled their supervisory duties regarding AI usage?
  9. What could be the consequences for leadership if they fail in their duties regarding AI oversight, and are they aware of those consequences? (note: the range of powers set out above)
  10. What processes does your organisation have in place to clearly document and record decisions, considerations, and compliance measures relating to AI supervision, so they can be presented effectively if scrutinised by a court?

The above questions are not intended to be exhaustive, but rather a starting point for considering your organisation’s approach to AI use in legal practice. As you engage thoughtfully with them, it is likely that additional, more specific questions will arise. I will be revisiting these questions as I speak with Chambers and Law Firms that are considering these points and the broader points in the judgment.

Conclusion

As I mentioned at the outset, this is just the first part of my analysis of this important judgment. There are many further issues to explore, which I’ll address in future posts. It is always worth remembering that AI hallucinations do not always arise from obvious mistakes in case citations. They may also arise from more subtle distortions, such as the invention of plausible-sounding legal principles, misrepresentation of procedural context, or the conflation of multiple authorities into a single, fictional source.

In the meantime, I’d like to hear what steps your organisation has already taken in response to these critical issues. Are there additional challenges or perspectives you think deserve more attention? The discussion continues on LinkedIn and my Substack, and don’t forget to subscribe to my newsletter here for more insights and updates on emerging AI legal challenges.