Ad/Marketing Communication
This article forms part of my ongoing legal commentary on the use of artificial intelligence within the justice system. It supports my work in teaching, lecturing, and writing about AI and the law, and is published to promote my practice. Not legal advice. Not Direct/Public Access. All instructions via clerks at Doughty Street Chambers.

Introduction
Today I want to share two recent cases in which ChatGPT was used to assist in drafting witness statements. Each raises slightly different issues, but together they offer an early glimpse of how courts and tribunals are starting to respond when AI-generated text enters the evidential arena. The first is Father v Mother (Fact-Finding: DARVO) [2025] EWFC 284 (B) in the Family Court, and the second is Miss E Kaloudi Tsikni v Mr M Kontis and Alphakon Limited (ET case no. 2212116/2023), decided in September 2025. I am very grateful to Sally McLaren, Assistant Librarian at the Inner Temple Library, for bringing the first case to my attention, and to my AI agents for finding the second.
Father v Mother
This case concerned cross-allegations between parents about abuse, their conduct towards each other, and a child’s welfare. After a fact-finding process, the judge rejected the father’s allegations and accepted the mother’s account. Specific findings were made in relation to the child, and protective orders were extended.
However, the judge noted that ChatGPT had been used in the father’s statements:
“F accepts using ChatGPT in his statements. There is no prohibition upon a party from doing so. The risks of doing so are clear from R (Ayinde) v. London Borough of Hackney and Ors [2025] EWHC 1383 (Admin), a case in which the High Court was considering the citation of fake cases by regulated lawyers. Dame Victoria Sharp P said: ‘Freely available generative artificial intelligence tools, trained on a large language model such as ChatGPT, are not capable of conducting reliable legal research. Such tools can produce apparently coherent and plausible responses to prompts, but those coherent and plausible responses may turn out to be entirely incorrect. The responses may make confident assertions that are simply untrue. They may cite sources that do not exist. They may purport to quote passages from a genuine source that do not appear in that source’.”
The judge went on to note:
“F’s statements in previous proceedings would have been drafted by his solicitor. Although mentioned briefly in his original C1A, there is little mention of controlling and coercive behaviour.
F’s 5th statement dated 2 November 2024 is the first time that exhibits purporting to show guidance or research emerge. Exhibit 1 at p783 is the ‘guidance’ he referred to in court about when to introduce new partners. Exhibit FX1 to his 12th statement dated 23 March 2025 is the next example of research, this time about PTSD. There is a curious reference to “UK law” and s31 Children Act 1989. The latter relates to the threshold criteria under Part IV – care proceedings. The conclusion reads “While PTSD does not automatically make a mother unfit to parent, it can create significant challenges that may affect the welfare of her children”. Notably, the references are to ‘mother’ and ‘her’ and not gender neutral. There are several more exhibits that amount to no more than ChatGPT answers to whatever F has asked it.
The pattern continues through several more statements. By the time of F’s 14th statement (his main statement for fact-finding), AI generated material has become an integral part of the statement rather than an exhibit. Numerous authorities are cited with little context. Many of these authorities are well-known and applicable but there are some cases that are not relevant. ChatGPT features in F’s correspondence to M from January 2025 when he misquotes the law.” (paragraphs 99 to 102).
Miss E Kaloudi Tsikni v Mr M Kontis and Alphakon Limited
This Employment Tribunal case involved an application to revisit findings of sexual harassment. The Tribunal refused, emphasising the importance of finality. AI featured only briefly, but the comments are still notable:
“(2) The claimant’s evidence. The respondents make several complaints about the claimant’s documentary and oral evidence…Secondly, there is a complaint about the claimant’s use of ChatGPT and alleged misuse of AI-Generated content. We were cognisant that the Claimant had relied on AI tools to generate her witness statement and gave consideration to this in weighing the evidence we considered and heard…”
Comment
I considered adding the Father v Mother case to my running list as the 15th incident of AI hallucinations or fabricated cases, but I am not sure there is enough information to conclude that it meets the threshold. There is a reference to “UK law” and section 31 of the Children Act 1989 in a context that appears unrelated to Part IV care proceedings, but at this stage there simply isn’t enough for me to be certain.
What may be more significant is the judicial discussion of AI in statements.
I have recently been involved in several interesting conversations about whether AI has any role in preparing, or assisting with the preparation of, witness statements for use in courts and tribunals. I will be presenting on this shortly and a detailed article is planned, but for now I want to sketch out a few of the arguments.
First, a brief look at some procedural rules. In civil litigation, the Civil Procedure Rules are detailed, and CPR 32 with its Practice Direction is particularly relevant:
“18.1 The witness statement must, if practicable, be in the intended witness’s own words and must in any event be drafted in their own language. The statement should be expressed in the first person…”
The Family Procedure Rules are similar:
“…the affidavit/statement must, if practicable, be in the maker’s own words, it should be expressed in the first person…” (PD 22A para 4.1).
So the question arises: can a statement written by AI, or even prepared with its assistance, really meet the requirement of being in a witness’s own words? If not, might the wording “if practicable” or the court and tribunal’s case management powers allow some flexibility?
Some argue there is no place for AI, since a witness statement should capture the person’s voice as if they were speaking. The requirements use mandatory language such as “must”, not the softer “should”. And as readers of this blog know, certain AI systems can be sycophantic, misleading, or prone to producing false information even without being asked to. If AI-generated witness statements became commonplace, those who wrote their statements genuinely in their own words might be at a disadvantage if an AI-produced statement came across as especially persuasive. How could that help a judge arrive at the truth of any issue and deliver a just outcome?
Others take the opposite view, suggesting this is an outdated concern. AI is already woven into everyday life. Even opening a word processor to draft a witness statement involves autocorrect, predictive text, synonyms, and suggested phrasing. Is that really so different from what lawyers already do when helping a client to complete a statement? On top of that, AI could play an important role in supporting people with learning difficulties or limited education to express their perspective more clearly, which could in fact assist the court. Current AI guidance does not expressly prohibit its use in witness evidence, and perhaps that silence is deliberate.
Whichever side you find more convincing, these issues are no longer theoretical. Parties and courts are having to address them right now, and I wouldn’t be surprised if we see further guidance on AI and witness evidence in the near future. My full analysis and opinion on this issue will follow in a further blog post.
Finally, I am so grateful to those who reach out and I hope to significantly build this community. Please enter your email below if you wish to receive regular updates.
Also, follow me on LinkedIn or other social media (links above and below) so we can discuss these important issues.
Final Comment from ChatGPT 5 Pro
This is where I invite a premium AI model to comment on or critique the preceding discussion. This week, unfortunately, ChatGPT 5 Pro produced some hallucinated information bordering on legal advice on this issue, which I have removed, leaving the following comments. As always, nothing on this blog is legal advice; it is merely commentary. Always speak to regulated legal professionals on legal issues. ChatGPT 5 Pro referred to an appendix to Practice Direction 57AC (Statement of Best Practice), which it found online here, quoting:
“should avoid leading questions where practicable”; “in the witness’s own words so far as practicable”; and “avoid… any practice that might alter or influence the recollection of the witness”. Courts and tribunals may “strike out part or all of a trial witness statement” and “order a witness to give some or all of their evidence in chief orally”; they may also “order that a trial witness statement be re-drafted” and “make an adverse costs order against the non-complying party”, and “the process used instead should be described”. The Courts and Tribunals Judiciary’s Judicial AI Guidance (15 April 2025) states: “Provided AI is used responsibly, there is no reason why a legal representative ought to refer to its use, but this is dependent upon context.”