Can ChatGPT Discussions be Evidence? Observations from the Employment Tribunal

“The claimant sought to rely on a conversation with ChatGPT as expert evidence that the respondent’s explanation regarding the difficulty and time involved in extracting the data was false. He had been informed by REJ Davies on 11 June 2024 that he did not have permission to adduce AI generated documents… I would add that even if there had been, a record of a ChatGPT discussion would not in my judgment be evidence that could sensibly be described as expert evidence nor could it be deemed reliable.” Employment Judge S Moore

Ad/Marketing Communication

This legal article/report forms part of my ongoing legal commentary on the use of artificial intelligence within the justice system. It supports my work in teaching, lecturing, and writing about AI and the law and is published to promote my practice. Not legal advice. Not Direct/Public Access. All instructions via clerks at Doughty Street Chambers. This legal article concerns AI Law.

Introduction

In Mr D Rollo v Marstons Trading Ltd: 1600833/2022, Employment Judge S Moore considered whether a COT3 agreement from June 2023 should be set aside on grounds of alleged fraudulent misrepresentation by the Respondent’s legal representatives. The Claimant, representing himself, challenged the integrity of certain documents and sought to rely on statements generated by artificial intelligence as evidence.

What the Case Was About

The dispute centred on the Claimant’s attempt to overturn a COT3 settlement. He contended that fraudulent misrepresentations by the Respondent’s legal team induced him to accept terms he otherwise would not have agreed to. Specifically, Mr Rollo alleged:

  1. That a key disclosure schedule was falsified or non-compliant with Article 28 of the GDPR.
  2. That the Respondent’s solicitors incorrectly referred to the Respondent as a ‘data processor’ rather than a ‘data controller’.
  3. That the Respondent failed to recognise Flooid as the correct data processor, thereby withholding relevant documents.

The Tribunal ultimately found no grounds to set aside the COT3. The judge determined that there was no persuasive evidence of fraud, nor had any proven misrepresentation materially induced Mr Rollo to enter the settlement.

How AI Featured

One standout aspect of the case was the Claimant’s reliance on ChatGPT statements. He attempted to present ChatGPT’s output as ‘expert evidence’ to demonstrate that extracting data from the Respondent’s systems should not have been as complex or time-consuming as alleged. Employment Judge S Moore declined to admit these AI-generated statements, holding that they lacked reliability and did not constitute valid expert evidence.

Commentary

This judgment underscores how courts and tribunals remain cautious about claims that rely on novel arguments, particularly those involving technical processes, unless supported by robust evidence. Where allegations of fraud are made, the evidential threshold in civil proceedings is a high one, reflecting the seriousness of such claims.

The judgment also demonstrates a clear distinction between using AI-generated content purely as a reference point and presenting it as expert evidence. Employment Judge S Moore explained:

“25 …The claimant sought to rely on a conversation with ChatGPT as expert evidence that the respondent’s explanation regarding the difficulty and time involved in extracting the data was false. He had been informed by REJ Davies on 11 June 2024 that he did not have permission to adduce AI generated documents… I would add that even if there had been, a record of a ChatGPT discussion would not in my judgment be evidence that could sensibly be described as expert evidence nor could it be deemed reliable.”

Followers of the blog may notice that this approach differs from one I discussed in an earlier post. In that earlier consideration, the court appeared more open to the idea that AI tools, such as ChatGPT, could be comparable to expert witnesses, provided the methodology and sources were fully disclosed and subjected to scrutiny. However, the Employment Judge’s stance here was unambiguous.

These two approaches underscore an ongoing tension: while AI can assist in gleaning insights or simplifying processes, UK courts require transparency and robust foundations before attributing any significant weight to such material. We can expect UK judges and practitioners to proceed with caution, demanding rigorous documentation of how AI outputs are generated before according them status akin to expert evidence. It is likely that, over time, guidelines will crystallise around evidential standards for AI tools. Nonetheless, the overarching message is clear: until AI’s underlying reasoning and sources can be verified, its direct impact in courtrooms will remain limited.

Have we now reached a time for the courts to develop formal guidelines for admitting AI-assisted evidence?