AI Hallucinations and/or Fabricated Citations: 13th UK Case (Family), Australian Court Cites Ayinde, and a Rising International Total


Ad/Marketing Communication

This legal article/report forms part of my ongoing legal commentary on the use of artificial intelligence within the justice system. It supports my work in teaching, lecturing, and writing about AI and the law and is published to promote my practice. Not legal advice. Not Direct/Public Access. All instructions via clerks at Doughty Street Chambers. This legal article concerns AI Law.

AI Hallucinations and/or Fabricated Citations

Introduction

August is not yet over, and we have already seen around 30 alleged incidents of AI hallucinations and/or fabricated citations internationally. July alone recorded approximately 50 such incidents. I had initially hoped to review the cases monthly, but the sheer volume now requires a weekly update. In this post, I focus on:

  1. A further UK case in the Family Division, bringing the national total to 13 (the previous 12 are reviewed here).
  2. Six international cases of alleged AI hallucinations and/or fabricated citations.
  3. The Federal Court of Australia’s consideration of the decision in R (Ayinde) v London Borough of Haringey.
  4. An emerging divergence in how judges are citing AI hallucinations and/or fabricated citations in their judgments.

13th UK case (Family Division of High Court)

Father v Mother [2025] EWHC 2135 (Fam)

In the High Court Family Division, on 30 July 2025, Mrs Justice Lieven was dealing with long-running disputes between separated parents concerning their three children. Past findings, recent diagnoses, allegations about professional conduct, and the children’s wishes were all intertwined with repeated applications and attempts to revisit earlier decisions.

The father (“F”) made an application which caused the Judge some concern:

“(16) The F then made a further application on a C2 asking that HHJ Bailey recuse herself on the basis of being biased against him and her not understanding ASD and the impacts of his diagnosis. This came before the Judge on 10 June 2025. In his written application to the court the F referred to a number of previous authorities, in particular relating to ASD. HHJ Bailey realised that many of these cases were not genuine, and the submission appeared to have been generated by Artificial Intelligence (“AI”). In light of the level of recent concern about litigants and lawyers using AI and referring to cases which are not genuine (as reflected in the Divisional Court decision R (Ayinde) v London Borough of Haringey [2025] EWHC 1383), HHJ Bailey referred the case to me as the Family Presiding Judge for the Midlands.”

At the end of her reasoning, the judge concluded it would be contrary to the children’s best interests to re-open proceedings or to allow F to make further re-applications. The mother applied for costs in the sum of £5,900.63.

The judge noted that costs orders do not follow the event in Family Court proceedings as they often do in civil cases. However, in view of F’s litigation conduct, a costs order was justified because of the factors set out in paragraphs 55–59. Importantly, the judge also stated:

“The F relied upon faked cases without apparently making any effort to check their veracity. It is in my view important to note that the F is someone who is well capable of checking references and ensuring documents are accurate if it is in his interests to do so.”

The judge therefore ordered F to pay the costs.

Introduction to the Weekly Newsletter – 18th to 24th August 2025

I had hoped to write up international hallucination cases as they arose, but the sheer volume in July and August has made that impossible. I still need to catch up on several. For now, I think a weekly newsletter is a more workable approach, at least until the cases begin to slow down (if they do). So here is a round-up of cases from 18 to 24 August 2025.

Wang v Moutidis (Australia)

[2025] VCC 1156 (County Court of Victoria, 18 August 2025)

This case concerned a homeowner who purchased a property built by the defendant. After moving in, the plaintiff noticed what he considered building defects and obtained expert opinions in support. His claim relied on those expert findings. However, the defendant’s written submissions raised concerns:

“…I have determined the case on the basis that his position [is] what he stated to me during the trial and not in his written submission. This is because I have little confidence in the accuracy or reliability of his written submissions. [Defendant] conceded that he had prepared this document using Artificial Intelligence (Gen AI), and there are obvious errors or irrelevancies in the document…”

The Judge then set out the numerous errors in the document, including that it (a) misquoted the experts, whose opinion was “..the opposite to what the Gen AI has alleged..”; (b) cited false cases and propositions; and (c) included a duty of care section when there was no negligence claim (so these were mostly Type 1 Hallucinations).

Ultimately, the court found for the plaintiff but made no separate order or sanction for the use of Gen AI.

Williams v Kirch (United States)

No. 25A‑SC‑196 (Court of Appeals of Indiana, 18 August 2025)

This was an appeal from a small claims case in which the appellant challenged the court’s determination that he must pay attorney’s fees. Judge Vaidik used the opportunity to issue a broader warning:

“We also take this opportunity to remind pro se litigants and attorneys of the dangers of using artificial intelligence (AI) to conduct legal research. Generative AI can produce citations to non-existent authorities, and we caution litigants to verify citations before including them in briefs”

The Judge explained that there was a troubling aspect of the Appellant’s brief: he cited several cases in his opening brief that do not exist (seemingly Type 1). The Respondent identified them as non-existent citations, yet the Appellant offered no explanation in his reply brief:

“Citations to fictitious, AI-generated authority is a growing problem nationwide. Courts have sanctioned both attorneys and pro se litigants for including them in briefs. … But because [Appellee-Defendant] does not request any sanction or relief for this conduct, we find it sufficient to admonish [Appellant-Plaintiff] for citing fictitious cases in his brief. We caution attorneys and pro se litigants alike against using AI to conduct legal research without independently verifying the citations generated. Judges must be able to rely on the authenticity of the authorities cited by the parties to make just decisions.” 

Re Sonja Helvig DeRosa‑Grund (United States)

Case No. 25‑80235 (Bankr. S.D. Tex., 18 August 2025)

The Bankruptcy Court dismissed a Chapter 13 case with prejudice, concerned by fabricated quotations:

“This last pleading contained at least two non-existent quotes from two Fifth Circuit cases [Judge set out the fabricated cases]. Apparently in response to correspondence from counsel for one of the parties, the Debtor filed clarifications acknowledging that the quotes had been made up. (ECF Nos. 61 and 62).”

Then later, after citing the Debtor’s response to the objection to the homestead exemption and the quotes relied on:

“These quotes are made up. They simply do not exist in the cases being cited. They are not established Texas law. This response is riddled with made-up quotes and citations to non-existent case. See the reference …which is a non-existent case. (Id. at 6). The Debtor had previously been warned about making up quotes and relying on nonexistent cases but nevertheless continued to do so.”

Although AI was not expressly mentioned, the concern was identical: the rise in fabricated case law and misquoting:

“The Debtor also repeatedly violated Rule 9011(b) by making arguments based on non-existent caselaw, non-existent cases and by misquoting cases. Misquoting cases has been a pattern and practice for at least the past four months even before this Chapter 13 was filed. This is particularly troubling in this case when the misquotes were brought to the attention of the Debtor early in the case, but she nevertheless persisted in the practice.”

Based on the entire record, protective measures were built into the dismissal to prevent future abuses “especially given the pattern and practice by the Debtor of misquoting and making up cases”.

JML Rose Pty Ltd v Jorgensen (No 3) (Australia)

[2025] FCA 976 (Federal Court of Australia, 19 August 2025)

Here, the bankrupt first respondent applied to annul a sequestration order against him, relying on nine grounds. The court noted:

“It became apparent that [First Respondent] was using a form of generative artificial intelligence (AI), to assist with his written and oral submissions. Many of the case citations were inaccurate. Some of the purported quoted passages did not exist. Such matters are likely the product of “hallucinations”. There has been an approach, which I will adopt, of redacting false case citations so that such information is not further propagated by AI systems…”

The court noted that it would provide additional observations on AI at the end of its reasons, stating: “It is sufficient at this point to observe that the use of the AI was of no real assistance to” the First Respondent. The court dismissed the application to annul with costs before turning to the issue of generative AI, citing the much-discussed case of Ayinde v London Borough of Haringey [2025] EWHC 1383 (a decision I have examined extensively on this blog).

The observations of the Honourable Justice Wheatley, set out at paragraphs 98–105, are particularly significant. Although there are some clear similarities in approach, there are also striking differences between the positions of the judiciary in the UK and Australia, some of which I have analysed before and will return to in more detail. For now, I recommend that interested readers review those paragraphs in full. I will be publishing a dedicated blog post on this decision, as it is too important to cover briefly here.

Garces v Hernandez (United States)

No. 25‑50342 (5th Cir., 19 August 2025)

Here, the Plaintiff-Appellant, appearing pro se and in forma pauperis, appealed the dismissal of his civil-rights suit after the district court concluded a final state-court judgment precluded his claims. The district court’s judgment was affirmed; however, before concluding, the court raised serious concerns:

“First, his citation to many non-existent authorities strongly suggests the use of generative artificial intelligence. [footnote cites cases in full] The litigant-user of AI—even a pro se one like [Plaintiff—Appellant]—must verify the accuracy of AI-generated information, mindful that citing authorities that are fabricated by AI may violate appellate Rules 32 and 38. Separately, we note again that [Plaintiff—Appellant] has filed 28 other pro se lawsuits in the District Court for the Western District of Texas this year alone,9 prompting that court to enter a cease-and-desist order and to consider additional sanctions. In addition to those remedial measures, [Plaintiff—Appellant] is hereby WARNED FOR A SECOND TIME that future frivolous, repetitive, or otherwise abusive filings can and will result in sanctions by this Court, which may include dismissal, monetary sanctions, and restrictions on his ability to file pleadings here and in any court subject to this Court’s jurisdiction. [Plaintiff—Appellant] should review all pending matters and move to dismiss any that are frivolous, repetitive, or otherwise abusive…”

Lahti v Consensys Software Inc et al (United States)

No. 1:24‑cv‑00183 (S.D. Ohio, 20 August 2025)

Judge Jeffery P. Hopkins opened with a pointed observation:

“This case involves two emerging technologies: cryptocurrency and artificial intelligence (AI). Plaintiff[‘s] experience in this case demonstrates why both should be approached with caution. The case also renews the age-old admonition against self-representation often credited to our nation’s sixteenth president, Abraham Lincoln. In this case, all these lessons collide.”

The plaintiff, acting in person, sought recovery of $275,000 in cryptocurrency and claimed damages totalling $2.01 billion. The court made decisions on the applications but noted:

“While it is appropriate for the Court strike [Plaintiff’s] Reply on either of the bases offered …, the Court turns now to the more troublesome concern … regarding fictitious case citations. This troubling discovery permeates the discussion pertaining to the remaining issues the Court must resolve.”

The Reply contained four fictitious case citations, which could not be found in commercial databases, as well as other misstatements of law. The court noted that the solutions proposed by the Plaintiff to compensate for the harm caused by the errant filings were not so simply or easily resolved. The citations violated one of the core principles of federal court practice (Rule 11). The court noted a pattern of fictitious citations and gave a significant caution:

“The Court cautions [Plaintiff] that should any other pleading, written motion, or other paper she submits contain false citations—a likely outcome if she uses an AI program to generate pleadings—she will face severe sanctions similar in nature to those that have been levelled in the cases cited, likely including monetary sanctions. See Attaway v. Illinois Dept. of Corr., No. 23-cv-2091, 2025 WL 1101398, *3 (S.D. Ill. Apr. 14, 2025) (explaining that courts have assessed monetary sanctions of between $2,000 to $15,000 for including fictitious cases in a brief in violation of Rule 11, and pro se status is “not an excuse for leniency with Rule 11,” which “applies to unrepresented parties with full force.”).”

The court added that imposing sanctions on litigants for misleading the court in this way is necessary to protect the integrity of the legal system, citing the reasoning in Mata and other cases where similar issues arose. The Judge added that AI is not condemned outright:

“The Court does not here condemn all uses of artificial intelligence in the time-honored practice of law, say in the instance of conducting legal research or in other ways yet invented that offer the possibility of ushering in more efficiencies into the profession, only that attorneys and pro se litigants (who assume the duties and hazards of self-representation), exercise caution, act responsibly, and importantly, never lose sight of their fundamental obligation to adhere to the strict requirements imposed on them by Federal Rule of Civil Procedure 11, lest they open themselves to having sanctions imposed by the courts whose job it is to punish litigation abuses and curb the ever-increasing costs and delay that often accompany modern civil litigation.”

The Judge emphasised that while AI may be useful in legal practice, such as research, lawyers and self-represented parties must use it cautiously and responsibly. They remain bound by strict obligations under Federal Rule of Civil Procedure 11, which requires that filings be grounded in law or good-faith arguments for legal change. In this case, the plaintiff cited non-existent cases and misrepresented authorities, wasting judicial resources and increasing costs for the opposing party. The Court warned that overreliance on AI, without careful reading and analysis, undermines competent advocacy and risks sanctions.

Comment

There are several takeaways from the above cases. The issue is international in scope, and it is encouraging to see courts in different jurisdictions engaging with one another’s judgments when grappling with the problem.

Regular readers of this blog will know that I have consistently raised concerns about well-meaning judges citing AI hallucinations and fabricated citations in their judgments. As the above demonstrates, this practice may be more common in the United States than in other jurisdictions. By contrast, Australian courts appear to have adopted a collective stance against such citations. As explained by the Honourable Justice Wheatley in JML, when identifying that the First Respondent was relying on AI-generated fabrications:

“..There has been an approach, which I will adopt, of redacting false case citations so that such information is not further propagated by AI systems: Luck v Secretary, Services Australia [2025] FCAFC 26 at [14], Rofe, Hespe and Kennett JJ, see also Kaur v Royal Melbourne Institute of Technology [2024] VSCA 264 at [26] (fn19), Walker JA, cf Nikolic v Nationwide News Pty Ltd (T/as The Australian) [2025] VSCA 112 at [36]-[41], Beach JA.”

That approach is a welcome one, and I suggest it ought to be adopted internationally. As I have argued in several posts, openly reproducing AI Hallucinations or Fabricated Citations risks aggravating the very issue the courts are trying to address. Citing AI Hallucinations or Fabricated Citations might lead to their inadvertent inclusion in authentic legal databases.

Before closing, I would like to raise two points that have come out of recent conversations with those who have contacted me directly. I should say at the outset how grateful I am for the information you continue to share; it is invaluable in making sense of these issues and of AI law more broadly.

I would be especially grateful if anyone with thoughts or expertise on the following could make contact, preferably through my LinkedIn, Substack, BlueSky or other social media:

  1. I have heard, anecdotally, that AI hallucinations and fabricated citations are appearing in arbitration proceedings, but are neither reported nor sanctioned. The reason for that may be obvious, but it is not my principal area of practice, so I would be very interested to hear from those specialising in arbitration as to whether this problem is occurring and, if so, how it might best be addressed.
  2. There is an ongoing discussion among AI specialists, lawyers, academics and psychiatrists about whether the issues raised by this blog and others should be referred to as “AI hallucinations” at all. I am preparing a dedicated post on this, so I would welcome any views. Achieving some consensus on terminology would be a valuable step forward, but for now I will continue to use the expression “AI hallucinations and/or fabricated citations”.

If you want to follow these discussions further, I encourage you to subscribe to Natural & Artificial Law. It is where I continue to track how AI is shaping our profession and where I set out the practical steps lawyers must now take to respond.

Final Word from O5 Pro

This is where I offer a premium model the chance to comment on or critique the preceding discussion. Here is its response (which I had to re-generate, as its first attempt was too sycophantic, something we all need to be careful about when using AI):

“Verification and provenance are non‑delegable: parties must certify sources, use canonical repositories, and maintain an auditable chain for every citation. Adopt taxonomy and proportionality: distinguish fabrication, misstatement, and misquotation, and scale remedies by intent, materiality, and repetition.
Prioritise containment over amplification: redact spurious citations and describe the error neutrally to prevent data pollution while preserving intelligibility. Embed systemic safeguards: set uniform rules across courts and arbitration, track incidence against filings reviewed, and maintain a privacy‑conscious registry to aid detection.”