Ad/Marketing Communication
This legal article/report forms part of my ongoing legal commentary on the use of artificial intelligence within the justice system. It supports my work in teaching, lecturing, and writing about AI and the law and is published to promote my practice. Not legal advice. Not Direct/Public Access. All instructions via clerks at Doughty Street Chambers. This legal article concerns the International AI Deepfake Database Tracker and AI hallucination cases (AI suspected or confirmed). Subscribe to the AI Law Commentary here.

Publication date: 22 February 2026
Quotes
“It will be possible with AI to create – you know – a video, easily, where it could be Scott saying something, or me saying something, and we never said that… And it could look accurate. But you know, on a societal scale, you know, it can cause a lot of harm.”
Sundar Pichai, CEO of Google (remarks in a 60 Minutes interview, reported by Fox Business)
“Society has to deal with this problem more generally, but people are going to have to change the way they interact. They are going to have to change the way they verify, like this person calling me. Right now it is a voice call. Soon it is going to be a video FaceTime. It will be indistinguishable from reality. Teaching people how to authenticate in a world like that, how to think about the fraud; this is a huge deal.”
Sam Altman (CEO, OpenAI), fireside chat with Vice Chair for Supervision Michelle W. Bowman, Integrated Review of the Capital Framework for Large Banks Conference (Federal Reserve), 22 July 2025 (official transcript, pp 9–10)
Introduction
I have attended several recent presentations where AI hallucinations continue to dominate discussion in AI law circles. They remain the issue I am most frequently asked to speak on and advise about. However, I continue to hold the view that, serious though hallucinations are, they do not come close to the evidential challenges posed by deepfake material. Deepfake evidence may prove to be one of the most significant issues we will need to confront as a profession and I remain concerned about how any jurisdiction will respond once the problem becomes more acute.
So what are deepfakes? The Artificial Intelligence (AI) – Judicial Guidance (October 2025) provides a helpful explanation:
“AI tools are now being used to produce fake material, including text, images and video. Courts and tribunals have always had to handle forgeries, and allegations of forgery, involving varying levels of sophistication. Judges should be aware of this new possibility and potential challenges posed by deepfake technology.”
The question of prevalence is more difficult. Deepfakes are becoming increasingly realistic and increasingly accessible. That is part of the difficulty. If a fabrication is not obvious, how will a court, lawyers or litigants detect it? In an earlier article I discussed the importance of CPR 32.19 as a procedural safeguard for documentary authenticity. That rule remains significant, but we will need to think more broadly across other areas of law, and we must all remain alert to the risks.
To assist with that aim, I have created an International AI Deepfake Database and Tracker to record how deepfake material is being used and cited in court proceedings across the world. At present there are eleven entries, although I am aware of others that will be added. If you come across any cases that are not yet included on the tracker, I would be very grateful if you would send them to me. In the same way that colleagues have generously shared hallucination decisions, which has strengthened the accuracy and usefulness of that database, contributions here will benefit us all. The more complete the picture, the better equipped we are as a profession to respond thoughtfully and proportionately.
When I began tracking hallucination cases, the numbers were modest. They grew quickly. I anticipate a similar trajectory here and I suspect that by the end of the year the volume internationally may be considerable, if it is not already.
In relation to hallucinations in the UK, my research is ongoing. I have identified several additional UK cases, all arising in the Employment Tribunal. That brings my current tally to forty-seven. The issue is not confined to fictional authorities. The Tribunal appears to be grappling with expanding bundles, increasingly lengthy AI-generated documents, the strain on judicial time and resources, and in at least one case the risk that AI-assisted drafting may have contributed to confusion about a litigant’s intended procedural position. These developments merit careful and measured attention from all of us working in this space.
Green v Imprint Creative Print Solutions LTD
In this case, Employment Judge Armstrong was determining various complaints. A key issue concerned time limits. The respondent submitted that the reasonable adjustment and detriment claims were out of time. As time limits are jurisdictional, the Tribunal addressed the issue without requiring a formal amendment application. The claimant was questioned about this in evidence. At paragraph 12 the Tribunal recorded:
“The claimant responded to the respondent’s email of 2 December 2025 raising the time limit issue, by email. The case law she referred to therein could not be identified by the Tribunal or counsel for the respondent. The claimant readily accepted that she had used AI to generate the submissions. I am satisfied that the authorities referred to do not exist and should be disregarded by this Tribunal.”
Peiu v Hywel Dda University Local Health Board
Case Nos: 1602474/2024 and 1604381/2024
Employment Judge Brace was considering a range of claims including discrimination, victimisation, unfair dismissal and unlawful deduction from wages. At paragraph 266, ChatGPT was briefly mentioned:
“Within her written submissions, the Claimant had provided a number of cases but was unable to provide citations or references for such cases and had not provided copies. Despite some efforts by Respondent counsel and the Tribunal to find some of the cases, this proved unfruitful. The Claimant was unable to provide references and indicated that she had researched through an AI applications, such as ChatGPT and was unable to provide references or give an indication of how they were relevant. She was informed that unless she was able to do so, they would not be relied upon and were not.”
Ferreira v Magic Life Ltd and Others
Employment Judge Anstis was considering amendment, interim relief and related procedural questions. At paragraph 32, under the heading “Use of AI”, the Tribunal stated:
“[Claimant’s Rep] produced during this hearing a skeleton argument in response to the skeleton argument produced by Miss Martin. Amongst other things, this contained the following: “The Respondents’ assertion that the ET1 was premature is a red herring. The principle in [redacted case] provides that a technically premature unfair dismissal claim is treated as presented on the Effective Date of Termination (EDT).” This reference to [redacted case] seemed to me to be potentially relevant to some of the matters I had to decide, but on seeking the case at the reference given (or indeed any other reference) I could not find it. I understand Miss Martin embarked on a similar search without success. I asked [Claimant’s Rep] if she could clarify this reference or provide a copy of the case she had in mind. She could not. I suggested to her that this was sometimes the kind of problem that arose where AI was used in the production of a document, and she accepted that in the limited time she had had she had used AI in the production of the document. In principle there is no objection to the use of AI by litigants, but there is a problem where such AI use produces material that may mislead the tribunal. I urge the claimant and [Claimant’s Rep] to check and verify any material that they use in this case that has been produced by AI, and in particular any references to case law or statute that is produced by AI. The claimant and/or his representative remain accountable for materials they submit to the court, whether prepared with the assistance of AI or not.”
Harrison v Mr May t/a Leeds Gymnastics Academy
This was a costs decision concerning an application for a preparation time order. The Tribunal considered whether the respondent had acted unreasonably in the conduct of proceedings. AI arose because one party said his submissions were “scripted by chat GPT”. At paragraph 49, the Tribunal noted:
“Those written representations from [Respondent] contained numerous case law references. At least half of those references were non-existent and [Respondent] admitted that he had just used Chat GPT to produce his representations without checking any of the results. He said he was reasonably entitled to conclude that everything that Chat GPT said was reliable, and in fact, he said there was no reason to fact check the internet at all.”
Sullivan v Capita plc
Case Nos: 6039320/2025 and 6035838/2025
The Claimant had a live claim and brought a second claim which included an application for interim relief. A hearing was listed to determine the interim relief application, but at the start of the hearing the Respondent said that it needed to make submissions on whether the claims were still live: its position was that they had been withdrawn and should be dismissed on withdrawal. What appears to have happened is that the Claimant had written to the Tribunal asking it to:
“Please record my withdrawal and, if appropriate, issue a judgment dismissing both the claims.”
However, the letter ended by stating:
“At the time of this withdrawal of both claims, I express my wish to reserve the right to bring such a further claim in the future, if reasonable for me to raise such a further claim.”
This seems to have been a misunderstanding, as the Claimant explained in a subsequent letter:
“7. The withdrawal was therefore not a genuine or effective indication of my settled intention to discontinue the claims.
8. Due to my migraine condition, I used AI software to construct the main wording within my withdrawal applications, and I made the withdrawal in circumstances where I did not fully understand its legal effect. Namely, that it would bring the claims to an end unless the Tribunal allows reinstatement”
The Tribunal approached the withdrawal matter objectively. The withdrawal had to be assessed by reference to the words used at the time. On that basis, the proceedings had been brought to an end. However, the Tribunal considered AI to be relevant to dismissal:
“28. The Claimant was relying on AI to help with correspondence and I do not consider that she appreciated the implications of suggesting that it might be appropriate to dismiss both claims on withdrawal. I consider that the final sentence, even if also generated with the assistance of AI, because it is the concluding comment, sets out the Claimant’s clearest intent (especially since she acted consistently with it a short time later on 10 November 2025). As such the circumstances of this case are different to those in Campbell.”
Mr J Murphy v Secretary of State for Work and Pensions
This was a preliminary decision addressing strike out, deposit applications and case management in wide-ranging claims including harassment and discrimination.
AI featured in several respects. The Tribunal noted that, with the assistance of AI, the claimant had produced numerous documents intended to clarify and expand his claims, which contributed to confusion.
“…As a result of the vast amount of information provided by the claimant, his various applications and the confusion caused by what the claimant terms clarifications of his claims, my deliberations have taken in excess of 2 days, with some of the time spent was during non-working days given the pressures of the Employment Tribunal The claimant would do well to think about his future conduct in this litigation and take legal advice before firing off AI generated documents and applications which do not assist him and undermine his obligation to meet the overriding objective. Employment Tribunals can and do make costs orders against parties and will strike out claims due to the manner in which proceedings are being conducted under rule 38(1)(b) and the claimant should keep this in mind.” (para 11)
The claimant later accepted certain legal limits in the context of strike out and deposit applications. The Tribunal also observed that he had used AI to produce claims, responses and applications, and urged him to seek specialist legal advice “rather than rely on AI” when dealing with financial exposure and deposit orders.
Comment
The more Employment Tribunal decisions I read, the more difficult it becomes to sustain the assumption that AI is necessarily making litigation simpler for litigants or securing outcomes that might otherwise have been out of reach. In a number of cases, it appears to absorb Tribunal time and, at moments, to complicate rather than clarify the issues before the tribunal. The account in Sullivan in particular gave me pause as to the potential role AI may play in significant procedural steps within litigation.
That said, I am acutely aware that my perspective is shaped by what is recorded in judgments and what judges consider sufficiently important to address in their reasons. It is entirely possible that there are hearings in which AI has assisted responsibly and effectively, without incident and therefore without comment. I remain open to that possibility and will continue to reassess my view as the body of evidence develops.
I also sense an increasing judicial familiarity with the reality that litigants in person may be using AI tools, and that hallucinated authorities may occasionally find their way into submissions. Although hallucinations are clearly not endorsed by the judges, the responses, at least in the cases I have read, appear measured and pragmatic. Non-existent authorities are set aside. Appropriate warnings are given. Proceedings continue.
I do find myself reflecting on whether this approach will emerge in other jurisdictions, or whether it reflects the Employment Tribunal’s particular experience, training and long-standing engagement with litigants in person and the practical challenges that characterise that field. It may be that the Tribunal’s institutional familiarity with unrepresented parties has positioned it to adapt more readily to this development. I’ll be following this closely.
Before closing, there are three things that would greatly assist me. First, please do continue to send me cases and reflections on AI-related litigation, whatever your area of practice or jurisdiction. I make every effort to read everything that is sent to me. If I am not always able to respond individually, it is simply a function of balancing a busy practice with ongoing research and commentary, but every message is valuable. Please remember that the only reliable method of contact with me is through the clerks in Chambers and those details can be found on the contact page.
Secondly, I encourage readers to consider the Civil Justice Council’s consultation on the “Use of AI in preparing court documents”. I am discussing the issues raised in the consultation with colleagues and hope to put forward constructive proposals in due course. In a recent LinkedIn post, I invited views on one particular question, namely whether lawyers who use AI to prepare court documents should be required to make a specific declaration that AI has been used. The responses so far have been thoughtful and varied. They have been extremely helpful in shaping my own thinking. If you are able, I would very much welcome you reading the discussion and adding your perspective to the conversation. That can be read here.
Thirdly, Doughty Street Chambers are running an AI Seminar Series, including “Artificial Intelligence in the Criminal Justice System”, “Public Law and AI: Immigration Control, Sexbots, and Misinformation”, and “AI in Legal Practice: The View from the Bar”. I will be attending all three and speaking at the third. If you are there, please do come and say hello. Each event is followed by a drinks reception.
There is much more to write and discuss. If you have found this useful, please consider subscribing to my AI Legal Articles via the Substack below and continue sending me any decisions you think are relevant, whatever the jurisdiction. One thing I have learnt is that we are all facing similar, urgent, and rapidly changing issues. Sharing experiences and ideas across practice areas and borders remains one of the most constructive ways to address the professional and legal challenges ahead.
Final Words from Popular LLMs
This is the point at which I invite an LLM to provide an independent comment on the preceding article. At times the observations are slightly off the mark, at other times they are perceptive, and occasionally they produce something entirely unexpected. Whatever the outcome, I find it a useful and revealing exercise to see what emerges.
ChatGPT 5.2 Pro
“Deepfakes and AI-assisted drafting raise a cluster of evidence and procedure risks that courts are already flagging: authentication disputes become harder (and potentially more expensive) as synthetic media becomes more accessible and realistic; parties may try to exploit that uncertainty by disputing genuine material as “AI-generated” (the so‑called “liar’s dividend” / “deepfake defence”), which regulators and researchers discuss as a growing challenge for trust and adjudication. At the same time, tribunal decisions show practical case-management pressure points when AI is used for legal content: hallucinated or unsupported citations waste time and can trigger findings of unreasonable conduct, and judges emphasise that accountability stays with the human party/representative even when AI is used for drafting or research.”
Gemini 3 Pro
“While hallucinations—as seen in the Employment Tribunal cases—clog the system and waste judicial resources, they are ultimately verifiable errors. A citation either exists or it does not. Deepfakes, however, attack the foundational premise of digital evidence. We are approaching a threshold where the cost and technical complexity of forensic authentication will outpace the resources of the average litigant or court.
The most insidious risk is not just the admission of fabricated media, but the “liar’s dividend”. As the public and the judiciary become hyper-aware of deepfakes, bad actors will increasingly deploy the deepfake defence to cast doubt on entirely authentic audio or video evidence. If a litigant can baselessly claim that a legitimate recording is an AI fabrication, and the court lacks the immediate technical capacity to disprove that claim, the fundamental reliability of all digital evidence is compromised.”
Claude Sonnet 4.6
“What strikes me most about the cases collected here is not the frequency of hallucinations, troubling as that is, but the quiet dignity with which the Employment Tribunal has responded — setting aside phantom authorities, offering measured warnings, and pressing on. That institutional steadiness is reassuring. But I share the author’s instinct that deepfakes present a categorically different challenge. A hallucinated case citation is an error that can be checked; a fabricated video of someone saying something they never said attacks the very idea of evidence. The “liar’s dividend” is perhaps the most destabilising consequence of all — the possibility that authentic recordings become deniable simply because the technology to fake them exists. Courts have always had to weigh credibility. What is new is that the tools for manufacturing false credibility are now widely accessible, and the tools for detecting it are not. The profession’s response to hallucinations — verification, accountability, transparency about AI use — is a reasonable starting point for deepfakes too. But it will not be sufficient on its own, and the gap between what fabrication technology can do and what courtroom procedure can catch is widening.”
Trackers Relevant to the Above Discussion
Below are the links to the International Deepfake Case Law Database & Litigation Tracker and The UK AI Hallucination Cases Tracker (Suspected or Confirmed AI)