Ad/Marketing Communication
This legal article/report forms part of my ongoing legal commentary on the use of artificial intelligence within the justice system. It supports my work in teaching, lecturing, and writing about AI and the law and is published to promote my practice. Not legal advice. Not Direct/Public Access. All instructions via clerks at Doughty Street Chambers. This legal article concerns AI Law.

Introduction
I am concerned by how quickly artificial intelligence is reshaping expert evidence in the legal profession, introducing both significant opportunities and serious challenges. Among these, AI hallucinations (instances where AI generates seemingly credible yet entirely fabricated information) are particularly troubling. However, hallucinations are not the only AI-related issue already making its way into the courts in relation to expert evidence.
In this article, I will examine the role of AI in expert evidence more broadly, considering whether large language models (LLMs) such as ChatGPT could themselves qualify as expert evidence or reliably support traditional expert opinions. I will also explore how AI hallucinations specifically have, or might, manifest within expert evidence in litigation. Finally, I will link these developments to recent regulatory concerns raised in housing disrepair litigation, an area currently under significant scrutiny.
Hallucinations: A Deeper Problem?
Hallucinations are currently one of the most debated AI-related issues among legal professionals and followers of this blog. While blatant AI-generated inaccuracies are generally easy to identify on careful inspection, subtler hallucinations embedded within expert documents pose, in my opinion, a greater challenge. Expert reports, typically regarded as reliable sources, could unwittingly become vehicles for misinformation, as some of the cases below demonstrate. But could an LLM itself be expert evidence?
ChatGPT as Expert Evidence
Previously, I discussed various ways courts have approached the use of ChatGPT as evidence, especially as expert evidence. One notable case demonstrated judicial openness towards AI-generated expert evidence, provided that the underlying methodology and sources were transparent and rigorously examined. However, Employment Judge S Moore took a different approach.
AI is also being utilised in other ways which, some may argue, come quite close to expert evidence or other evidence in the courtroom. For example, in Ross v USA (discussed in my post here), judges consulted LLMs such as ChatGPT to assess common knowledge about extreme heat conditions, and in Ferlito v Harbor, an expert consulted ChatGPT to validate his findings on securing a maul’s head to its handle. The court explained:
“Here, there is little risk that [expert’s] use of ChatGPT impaired his judgment regarding proper methods for securing the maul’s head to its handle. The record from the hearing reflects that [expert] used ChatGPT after he had written his report to confirm his findings…which were based on his decades of experience joining dissimilar materials, …During the hearing, [expert] professed to being “quite amazed” that the “ChatGPT search confirmed what [he] had already opined.”..[Expert] reiterated that he did not rely on ChatGPT…
“…There is no indication that [Expert] used ChatGPT to generate a report with false authority or that his use of AI would render his testimony less reliable. Accordingly, the Court finds no issue with [Expert’s] use of ChatGPT in this instance.”
The court concluded the expert’s use of AI did not undermine his credibility. For my part, I question the added value of AI corroborating conclusions already reached by human experts. If an expert with “extensive experience” gives his view, why would ChatGPT confirming that view add anything to the evidential weight?
Kohls v. Ellison
Kohls v Ellison concerns a Minnesota statute criminalising electoral deepfake content, which has sparked significant debate regarding free speech protections. The claimants, Mr Reagan, an online political commentator, and Representative Mary Franson, argue that the law violates fundamental free speech rights broadly equivalent to those protected by Articles 10 and 11 of the European Convention on Human Rights.
Attorney General Keith Ellison’s defence featured expert evidence from a renowned expert who discussed deepfake threats and psychological impacts. However, the judgment revealed an important flaw:
“[Expert] included citations to two non-existent academic articles and incorrectly cited the authors of a third… admits that he used GPT-4o to assist in drafting… failed to discern that GPT-4o generated fake citations to academic articles… The irony. [Expert], a credentialed expert on the dangers of AI and misinformation, has fallen victim to the siren call of relying too heavily on AI—in a case that revolves around the dangers of AI, no less…”
The court struck out the expert’s declaration, highlighted the Attorney General’s non-delegable responsibility to validate submissions, and emphasised that reliance on AI-generated inaccuracies severely undermined credibility.
Deep Fake Experts and Journalism
It has been reported that “Journalists are adding extra checks to keep ahead of the fake experts” after one of the UK’s most widely quoted psychologists was found not to exist. I would suggest reading the full article to see what happened and how it may affect litigation. Fortunately, journalists now appear to be implementing stricter verification measures, a development legal professionals must equally consider when sourcing or citing expert evidence or, dare I say, articles written by journalists (purported or otherwise).
Housing Disrepair Expert Evidence
Although not directly related to AI, recent concerns highlighted in the RICS Practice Alert on Expert Witnesses in Housing Disrepair (April 2025) bear significantly on the credibility of expert evidence. The alert gives the following examples of poor behaviours:
- Claims managers forcing experts into using pre-populated templates and unverified copy/paste reports.
- Solicitors repeatedly instructing the same experts, creating financial dependencies and conflicts of interest.
- Misrepresentation of expert qualifications or expertise due to templating, inappropriate RICS logo usage, or altered reports.
According to the alert:
“These behaviours do not comply with our standards for members and are likely to lead to serious consequences for RICS members, including regulatory sanctions and legal consequences. The contents of an expert witness report, including information about the qualifications and expertise of the witness may be subject to robust cross-examination in any hearing, and an expert may be found to be in contempt of court if they make a false statement in their report.”
The introduction of AI in expert evidence may exacerbate these existing concerns and introduce additional complexities. For instance, AI-generated reports risk creating further opacity concerning the source and validity of information, given the ease with which plausible yet inaccurate data can be produced. Unless reports are scrutinised carefully, we now potentially face:
- increased reliance on AI-generated templates or reports;
- hallucinated (fabricated or erroneous) information;
- deepfakes or biased content embedded within reports; and/or
- delegation of important expert functions to AI.
Moreover, I fear the increased use of AI in expert evidence will amplify questions of accountability, as it may become increasingly challenging to pinpoint responsibility for inaccuracies or misrepresentations within AI-generated expert evidence.
Comment
In light of these developments, I’ve started approaching expert reports with a new level of caution.
Admittedly, I am not an expert in the technical fields underlying many of these reports, so verifying their accuracy can be quite challenging for me. It can genuinely be difficult, sometimes even impossible, to fully and independently verify all the content. Additionally, I have noticed that some sources used by experts themselves rely on further sources, making it even more difficult to determine whether AI was involved at any stage. I think we will, to some extent, need to consider carefully where we draw the line.
Despite these complexities, my initial checks now routinely include at least three basic steps: (1) confirming whether the report was generated wholly or partially by AI; (2) ensuring that any use of AI has been transparently disclosed; and (3) verifying that the expert has checked the report thoroughly to prevent any AI-generated inaccuracies or fabricated information from going unnoticed.
As illustrated by Kohls v Ellison, it’s clear that nobody, not even the most respected experts, is immune to the pitfalls of AI-generated hallucinations. The AI Hallucination Case Tracker further emphasises this unsettling trend, revealing that even highly experienced legal professionals regularly fall victim to convincingly false AI-generated information.
For those who have suggested to me that we should stop using AI altogether because the risks are too great, I’m afraid that is highly unlikely, especially here in the UK. The senior judiciary has consistently recognised AI as an integral part of the legal profession’s future. Even in a landmark case setting out the associated risks, the High Court confirmed (subject to important caveats):
“…Artificial intelligence is a powerful technology. It can be a useful tool in litigation, both civil and criminal. It is used for example to assist in the management of large disclosure exercises in the Business and Property Courts…Artificial intelligence is likely to have a continuing and important role in the conduct of litigation in the future.” See further discussion here.
Ultimately, while the integration of AI in expert evidence raises genuine and pressing concerns, outright rejection of AI technology is neither realistic nor beneficial. These issues will ultimately need to be litigated, and I suspect considerable satellite litigation is inevitable as courts grapple with the boundaries, standards, and accountability of AI in expert evidence.
What are your thoughts on this issue? The discussion continues on LinkedIn and my Substack, and don’t forget to subscribe to my newsletter here for more insights and updates on emerging AI legal challenges.