Ad/Marketing Communication
This legal article/report forms part of my ongoing legal commentary on the use of artificial intelligence within the justice system. It supports my work in teaching, lecturing, and writing about AI and the law and is published to promote my practice. Not legal advice. Not Direct/Public Access. All instructions via clerks at Doughty Street Chambers. This legal article concerns AI Law.

Kuzniar v General Dental Council (ET 6009997/2024)
Introduction
The phenomenon of AI-generated hallucinations in litigation continues to surface. Before reading this article, it may be worth briefly revisiting previous articles on false citations/AI hallucinations in the UK:
- Summarising the 12 False Citations/AI Hallucinations in the UK Courts and Tribunals
- The 13th False Citations/AI Hallucinations in the UK Courts and Tribunals
- The 8 Most Common Types of False Citations/AI Hallucinations in Case Law
Or see the weekly newsletter I write on the international position here.
This latest case, from the Employment Tribunal, adds another example. In Kuzniar v GDC, heard on 14-15 August 2025 and sent to parties on 20 August 2025, a litigant in person (“LIP”) relied on AI for legal research, leading to significant difficulties in the proceedings.
Facts
The claimant, a dentist, was referred to the General Dental Council (“GDC”) by her former employer, Roxdent Limited, following concerns about her clinical practice. The GDC initiated fitness to practise proceedings, placing conditions on her ability to work as a dentist. The Tribunal summarised the background as follows:
“The Claimant was employed by a company called Roxdent Limited but was dismissed in March 2023. On 31/5/2023 Roxdent Limited referred the Claimant to the Respondent which started an investigation, and, following a review, recorded that significantly failures were apparent in the Claimant’s treatment planning and outcomes, record keeping, duty of all candour and radiographic practice, potentially including over-exposure of patients to radiation as outlined. The Respondent started fitness to practice proceedings against the Claimant, and ultimately placed conditions on her right to practice dentistry in the UK, which conditions continue, which she is unhappy about and which is the motivation for the instant claim…” (para 2)
Her earlier proceedings against Roxdent Ltd had already been dismissed. The present claim was for: “(i) whistleblowing detriment (ie a section 47B ERA 1996 claim) and (ii) victimisation contrary to section 27 Equality Act 2010 against the current Respondent (the General Dental Council).” (para 4)
For the purposes of the hearing, Employment Judge Burns succinctly set out a key issue:
“… the question today remained whether the Claimant, not being or having been an employee or worker of the Respondent, could bring a valid section 43B ERA 1996 claim against the Respondent.” (para 6)
How AI Featured
In the reasons for judgment, Employment Judge Burns noted:
“The Claimant produced lengthy written submissions and citations for the hearing today especially on the jurisdiction issue, which she explained (in the course of her oral submissions) she had created or obtained using AI (ChatGPT), and many of which she accepted were incorrect or non-existent. Most of the real authorities she cited were irrelevant. I asked her in her oral submissions to focus on and summarise on her main arguments and she did so and these I have dealt with below:”
This reliance on AI became relevant to the Tribunal’s assessment of the Claimant’s conduct:
“The Respondent applied for costs against the Claimant in the sum of £2804.40 on the basis that the Claimant had acted vexatiously and unreasonably in run up to the OPH causing considerable problems and extra work for the Respondent’s solicitors in a manner which (I find) is accurately summarised in the following extract from the Respondent’s supplementary skeleton argument:
“On 07 August 2025 at around 3pm, C provided to R’s representative a number of documents, including a skeleton argument, a skeleton argument summary, and an appendix, listing the authorities upon which she relied.
Upon receipt of C’s documents, R’s representatives became concerned that some of the cited authorities were inaccurate. As such, R’s representatives embarked on preparing a schedule of those authorities which is included alongside this application … In preparing this schedule R’s representatives conducted a detailed search by way of an initial search via Google (mindful that this is often the means by which Litigants in Person identify relevant cases) of both the full case name and citation provided, then the case name alone, and then the citation alone. R’s representatives then conducted the same search via the Westlaw “case search” function, and where appropriate also via the Employment Appeal Tribunal Government database.
R’s representatives findings are set out in the schedule and fall into two broad categories:
- Cases which do not exist [“Non-Existent Authorities”];
- Cases which do exist, but do not support the proposition C asserts [“Inaccurate Authorities”].
As explained in relation to each case, those cases which fall into the category of Non-Existent Authorities consist of varying combinations of:
- A real case name;
- A non-existent case name;
- A real citation;
- A non-existent citation.
Of those cases which fall into the category of Inaccurate Authorities, the following apply:
- In some cases there is a real case, but with a slight different citation;
- In some cases the case name and citation are correct;
- In all cases, the authority does not support C’s proposition, and could not reasonably be read to do so, for example because C asserts that the case related to Regulatory Bodies when it patently does not, or where C asserts that the case relates to whistleblowing, when no such claim was brought;
In respect of one of the cases, Yerrakalva v Barnsley MBC [2012] EWCA Civ 1399, C cites a quotation which is not found within the judgment.
In total R has identified 28 problematic authorities, consisting of 15 Non-Existent Authorities and 13 Inaccurate Authorities. No issue is taken with 9 of the authorities.
On 08 August 2025 R’s representatives informed C that they were unable to find a number of the authorities she cited, and asked that C provide copies of the same by 4pm on Monday 11 August 2025.
In response, the Claimant submitted “Claimant’s Expanded Skeleton Argument”, “Authorities Bundle_correct_names” [534 pages and 20 authorities] and “Authorities_Bundle_Index_no nr” [982 pages, circa 38 authorities]. Due to time constraints and considering proportionality, the Respondent has not checked all of the authorities cited in the Expanded Skeleton.
At paragraph 1.2 of her Expanded Skeleton C explains she had identified 4 authorities to be substituted and explained that any “earlier imprecision, which arose from reliance on secondary resources and the practical difficulty of obtaining some older judgments via freely available legal databases”. No explanation is given as to the remaining 11 nonexistent authorities. The majority of the Expanded Skeleton seeks to reply to the Respondent’s Skeleton Argument.
The additional authorities provided do not include copies of any of the cases specifically raised by R in the correspondence of 08 August 2025. C has abandoned 4 of the authorities, and has sought to adduce copies of the remaining 10; these are still not the authorities cited. (para 42)
The Respondent submitted that dealing with the above had cost it additional legal fees, estimated for the purposes of the costs application at £2337.00 (£2804.40 inc VAT), reflecting 5 hours of counsel’s time, 4.5 hours of trainee’s time and 0.5 hours of partner’s time:
“The Claimant explained that the problems arose from her using AI to carry out research. She had previously used AI/ChatGPT to carry out research without problems in her litigation against Roxdent Ltd and so she expected to be able to do so again successfully in the instant case. She did not know about the problems with the citations when she told the Respondent’s solicitors about them, and when she found out about them, she did her best within the short time available to mitigate or reduce the problem. She did not act in bad faith or with any intent to place false information before the Tribunal. I accept this explanation.”
Outcome
The claimant’s whistleblowing and victimisation claims were struck out for lack of jurisdiction and coherence. On the reliance on false citations/AI hallucinations, the Judge concluded:
“The Claimant conducted the claim unreasonably as described above by referring to the Respondent a large number of nonsensical and in many cases non-existent citations without taking any or sufficient care to check them first. By not doing so she passed the work of checking them to the Respondent to have to do at short notice. My discretion to award costs is engaged.
Furthermore, although I did not make any formal enquiry into her financial means, she told me that she has only £2000 in the bank and is struggling to find work as a dentist because of the conditions imposed by the Respondent.
“However, I decline to award costs because AI is a relatively new tool which the public is still getting used to, the Claimant acted honestly (and furthermore has presented her case honestly to me over the last two days), and she tried her best to rectify the situation as soon as she became aware of her mistake.” (paras 45-47)
Comment
This case illustrates the practical challenges created when litigants rely on AI-generated legal content without adequate verification. The Tribunal carefully balanced the Respondent’s wasted costs against the Claimant’s honest but misguided use of AI, ultimately declining to award costs. The judgment reflects some tribunals’ willingness to treat such mistakes with leniency, while still recognising the additional burdens placed on opposing parties.
As commentators, we must also acknowledge that judgments rarely provide the full factual context, and our understanding is necessarily shaped by what appears on the face of the decision. Much happens in litigation that we simply do not know. Moving from the specific context of this judgment to the broader question that has concerned me for some time:
When false citations/AI hallucinations lead to large volumes of irrelevant or fabricated material, who should ultimately bear the costs of responding?
It would be unwise to impose rigid criteria, as each case must be determined on its own facts. However, some general guidance may be necessary to avoid some respondents being exposed to serious injustice. This case is just one example of an issue that tribunals and courts will increasingly need to address as the use of AI in litigation expands.
I’d be very interested to hear your thoughts on false citations/AI hallucinations in the UK, especially if you are aware of any reported incidents I may have missed. If you’ve found this article useful, please consider subscribing to my Substack newsletter, where I regularly share broader legal commentary. Many of you regularly read these articles, which is great, but comparatively few subscribe; you can do so here.
Final Word from o5 Pro
This is where I offer a premium model the chance to comment or critique the preceding discussion. Here is its response:
“…cost asymmetry when a litigant in person uses AI to file invented authorities, since the respondent, often a regulator funded by registrants, must bankroll verification while Employment Tribunal costs remain exceptional. Once unreasonableness is found, the central question is when the burden should shift. A fair model is initial tolerance with explicit warning, then targeted, summary assessed costs for the time reasonably spent checking defective citations after notice, moderated for means to protect access to justice. Case management can curb spend through page and authority limits, certification that authorities are verified, deposit orders where prospects are weak, and strike out for abuse. The aim is to internalise avoidable verification costs without chilling honest LIPs.”
