Ad/Marketing Communication
This legal article/report forms part of my ongoing legal commentary on the use of artificial intelligence within the justice system. It supports my work in teaching, lecturing, and writing about AI and the law and is published to promote my practice. Not legal advice. Not Direct/Public Access. All instructions via clerks at Doughty Street Chambers. This legal article concerns AI Law.

Oakley v Information Commissioner [2024] UKFTT 315 (GRC)
Summary
This is an interesting case that I should have spotted sooner. Trevor Lee Oakley v Information Commissioner puts AI in the spotlight, albeit somewhat awkwardly…
Trevor Lee Oakley wanted clarity from the Department for Work and Pensions (DWP) about their policies regarding filming evidence at Jobcentres, specifically in situations where force might be used by security staff to prevent this filming. Although the DWP provided some policy documents, Oakley was convinced they had more detailed internal guidance. Dissatisfied with the Information Commissioner’s agreement with the DWP, Oakley took his case to the First-tier Tribunal.
To strengthen his argument that the DWP’s document searches had been inadequate, Oakley enlisted ChatGPT’s assistance. He asked ChatGPT to suggest better search terms that DWP could have used, and ChatGPT produced several relevant keywords, including “Evidential collection policy” and “Use of force guidelines.”
However, the Tribunal panel was cautious about relying on AI-generated evidence. They gave limited weight to ChatGPT’s contribution, expressing concerns about transparency in AI methodology and sources. They were essentially asking for a more robust explanation of how ChatGPT arrived at its suggestions.
Comment
This case provides insights into two significant areas:
- The reliability of ChatGPT evidence. AI-generated suggestions, although potentially valuable, need clearer standards for admissibility in court. Tribunals and courts are understandably cautious and require clarity about how AI systems generate their outputs.
- Transparency from Public Bodies. Oakley’s appeal raises broader questions about the transparency obligations of public authorities. It highlights the need for clear, comprehensive policies especially regarding sensitive matters like security and evidence collection.
I’m not surprised the tribunal did not explicitly set rules or standards for assessing AI evidence moving forward. However, it is interesting, and perhaps concerning, that it arguably did provide indirect guidance on handling future FOI requests involving AI evidence. The tribunal underscored the necessity of transparency and clear methodologies behind such evidence, conditions that could prove challenging, if not impossible, to satisfy given the often opaque nature of large language models (LLMs) and the proprietary restrictions surrounding the data used to train them. That is a subject which deserves its own post.
Admitting AI evidence, even under such transparency conditions, could present other challenges. One concern is that reliance on AI-generated evidence might encourage litigants to introduce AI outputs without fully understanding their reliability or limitations, potentially confusing legal arguments. Well-documented AI biases could also enter proceedings, with differing LLMs presenting conflicting viewpoints, making it extremely difficult for courts to determine credibility or reliability.
Additionally, over-reliance on AI-generated inputs could undermine the rigorous evidential standards traditionally upheld by courts, inadvertently allowing biased or less credible evidence to influence judicial outcomes.
We certainly haven’t seen the last of these issues. As AI systems grow more capable, litigants will likely attempt to introduce their outputs as expert or supporting evidence more frequently. This emerging area may soon require clearer judicial guidance, or possibly legislative intervention, to maintain evidential integrity in legal proceedings.
