Ad/Marketing Communication
This legal article/report forms part of my ongoing legal commentary on the use of artificial intelligence within the justice system. It supports my work in teaching, lecturing, and writing about AI and the law and is published to promote my practice. Not legal advice. Not Direct/Public Access. All instructions via clerks at Doughty Street Chambers. This legal article concerns AI Law.

Introduction
It appears there were around 45 alleged incidents of AI hallucinations and/or fabricated citations in August, only a slight reduction from the 50 or so reported in July. That figure may still change, as further incidents could emerge retrospectively in legal databases. Nevertheless, it represents a significant number, and this week has again seen no shortage of international cases involving AI hallucinations and/or fabricated citations. Last week’s newsletter can be read here.
I should briefly set out my research method for those interested in how I arrive at this information. I begin with a basic search using recognised, publicly available legal databases and blogs. I then send out AI agents to pick up any further cases I may have missed. After that, I read through the links and write about anything I think readers of this blog may find useful. This process takes time, but I enjoy the subject. That said, mistakes can slip through. Please check the sources directly. This blog offers commentary, not legal advice.
Orano Mining v Niger (2)
ICSID Tribunal (International Arbitration) 26 August 2025
I have not been able to read this article in full as it requires a subscription. However, this may be one of the first reported cases of AI hallucinations and/or fabricated citations in international arbitration. Damien Charlotin explains in the heading of his article:
“Niger’s proposal to disqualify Fernando Mantilla-Serrano from uranium mining arbitration is rejected; challenge procedure is marred by citations and authorities that couldn’t be borne out when scrutinized by co-arbitrators”.
I hope to co-author a piece soon on the position of AI hallucinations in arbitration, where I will address this case more fully. I also hope to speak with Damien directly to clarify elements of this decision, so I will not address it further here.
Helgen Industries d/b/a DeSantis Gunhide v Department of Justice (United States)
B-423635 (GAO, 26 August 2025)
Here, a small business manufacturer of holsters protested the FBI’s award of a contract for concealment and tactical holsters to another business, alleging that the awardee was an ineligible large business. The protest was dismissed because the small business was not an “interested party”. However, the report notes:
“…The protester contends that these cases establish that a manufacturer may qualify as an interested party when it is the source of the proposed product, the manufacturer has a direct and substantial interest in the outcome, and the manufacturer is adversely affected by an award to an ineligible offeror…The agency advised our Office that it “was not able to find or verify either of those cases through research.” Req. for Dismissal at 3-4. Neither was our Office. In response to the agency’s request for dismissal [protester] reiterates …without responding to the FBI’s claim that it was unable to find the cases cited in the protest attachment [protester] referenced two more cases in support the proposition that as a manufacturer of the solicited products it is an interested party… Again, our Office was unable to locate these two cited decisions. … Neither of these decisions is in any way relevant to the question of …status as an interested party.”
In relation to any sanctions:
“To the extent that the faulty citations are the product of the protester’s reliance on artificial intelligence (AI) programs, we note that the use of AI programs to draft or assist in drafting legal filings can result in the citation of non-existent decisions, such that reliance on those programs without review for accuracy wastes the time of all parties and GAO. Raven Investigations & Sec. Consulting, LLC, B-423447, May 7, 2025, 2025 CPD ¶ 112 at 4. As we have explained, our Office necessarily reserves an inherent right to dismiss any protest and to impose sanctions against a protester, where a protester’s actions undermine the integrity and effectiveness of our process. Id. Here, because we dismiss this protest because the protester is not an interested party, we do not exercise our right to impose sanctions for submission of non-existent citations. The protester, however, is advised that any future submission of filings to our Office with citations to non-existent authority may, after a review of the totality of the circumstances, result in the imposition of sanctions.”
Richburg v Glyndon Square LLC (United States)
C/A No. 25-01297-EG; Adv. Pro. No. 25-80037-EG (Bankr. D.S.C., 27 August 2025)
Here, debtors, represented by counsel, filed a motion for a preliminary injunction in an adversary proceeding against their landlord, Glyndon Square LLC. The motion cited two bankruptcy cases that did not exist. The Court discovered the false citations and issued a show-cause order:
“Debtors’ attorney (“Counsel”), a solo practitioner with approximately 40 years of experience, admitted that the fake citations were generated by AI, and out of haste and a naïve understanding of the technology, he did not independently verify the sources were real before including the citations in the motion filed with the Court seeking a preliminary injunction. Having heard the explanation and arguments Counsel provided and having considered the facts of this case, while considering the procedural context in which the issue arises and the limitations imposed by Federal Rule of Bankruptcy Procedure 9001(c)(4)(B)(ii), the Court imposes non-monetary sanctions. Not to diminish the gravity of Counsel’s actions in this case, the Court intends for this Order to also serve as a “lesson learned” for the bar in general of the potential consequences of uninformed reliance on generative AI in legal practice”
Although the adversary proceeding was voluntarily dismissed before the hearing, the Court concluded that the attorney’s reliance on unverified AI-generated citations warranted sanction. Because Rule 9011(c)(4)(B)(ii) restricted monetary sanctions in the circumstances, the Court imposed non-monetary sanctions instead. Specifically, the attorney was ordered to complete three additional hours of continuing legal education on ethics, including at least two hours on the ethical use of artificial intelligence in legal practice. For those interested, an article on AI hallucinations and/or fabricated citations is referenced at footnote [3], after the judge discussed the term: “As the Court understands the term, “hallucinations” are false answers or non-existent content created by generative AI systems in response to user prompts. See [here]…”
Allbaugh v University of Scranton (United States)
No. 3:24-cv-02237 (M.D. Pa., 28 August 2025)
This case concerned a complaint against the defendant, the University of Scranton, alleging discrimination on the basis of sex. The motion to dismiss was granted with leave to amend. Furthermore, the court found the plaintiff:
“…violated this Court’s Local Rules, the undersigned’s preferences and orders on the use of generative artificial intelligence, and the Federal Rules of Civil Procedure by citing to non-existent legal authority generated by artificial intelligence…”
The Court acknowledged the plaintiff was proceeding pro se, however, the plaintiff “…admits that he is a former attorney. …Courts hold pro se litigants with substantial legal training to a higher standard than pro se litigants with no legal expertise… While [plaintiff] asserts he is retired, he nonetheless has a legal education and should be familiar with the Federal Rules of Civil Procedure…”
That said, because the plaintiff was acting pro se, the court found that a fine at the lower end was appropriate and ordered $1,000, observing that “…A litigant who blindly trusts AI-generated citations does not conduct a reasonable inquiry into the reliability of his legal contentions…”
Multiphone Latin America Inc v Millicom International Cellular S.A. (United States)
No. 25-23249-CIV-ALTONAGA/Reid (S.D. Fla., 28 August 2025)
A Florida software developer sued a Luxembourg telecom over a 2016 agreement for a branded calling app across six Latin American markets. The judge allowed the basic contract claim to proceed but dismissed the other claims without prejudice, granting leave to amend by 10 September 2025. However:
“…One matter remains. In the Reply, Defendant identifies a pattern of serious deficiencies in Plaintiff’s Response — including fabricated quotations; reliance on authorities that do not support — or even discuss — the stated propositions; and, in two instances, citations to non-existent cases that appear to have been generated by an artificial intelligence (“AI”) tool…Based on a review of Defendant’s briefing, Plaintiff’s Response, and Plaintiff’s Proposed Sur-Reply — which addresses Defendant’s accusations (see generally Proposed Sur-Reply) — the Court finds Plaintiff’s counsel likely used AI to generate the Response and failed to ensure the accuracy of citations and legal arguments before signing and filing it…”
Further:
“Among other errors, Plaintiff cites two “cases” that appear not to exist. … Plaintiff also includes “quotes” that do not appear in the corresponding cited cases…and references several cases that do not even discuss the subject for which Plaintiff cites them…”
In the Proposed Sur-Reply, Plaintiff’s attorneys conceded there were “various citation errors” in the Response:
“…yet insist — without explanation — that “the propositions they stand for are grounded in supporting law.” …Plaintiff’s attorneys do not explicitly acknowledge their citations to nonexistent cases and quotes… Nor do they explain how nonexistent cases can “stand for” any proposition… Instead, they allude to “a cumulative effort amongst Plaintiff’s attorneys that ultimately resulted in miscommunication and misapplication of counsels’ research notes” (id.), and proceed to address substantive arguments Defendant makes in the Reply (see generally id.). Remarkably, in doing so, Plaintiff’s attorneys again misquote cases. In at least one instance, the Proposed Sur-Reply states that quoted language appears in a case… that does not contain the language… In other areas of the Proposed Sur-Reply, Plaintiff’s attorneys present quotes from Westlaw Keynotes as quotes from the text of cases… Considering this troubling pattern of errors, the Court is concerned that Plaintiff’s attorneys may be in violation of their ethical duties toward the Court and their clients.”
The judge referred two attorneys to the Southern District of Florida’s Ad Hoc Committee on Attorney Admissions, Peer Review, and Attorney Grievance, and to the Florida Bar for investigation. The motion for leave to file a sur-reply was denied as moot.
Parker v Costco Wholesale Corp. (United States)
No. C25-0519-SKV (W.D. Wash., 28 August 2025)
The plaintiff brought various claims against her former employer, the defendant. After the defendant moved for summary judgment, the plaintiff filed a response, a motion to strike with reply, and a motion for leave to amend. The court identified
“…material misstatements and misrepresentations in those filings, which contain hallucinated case and record citations and legal errors consistent with unverified generative artificial intelligence (“AI”) outputs…”
and ordered the plaintiff’s attorney to show cause why sanctions should not issue.
The court then set out the various errors in the filings, including the fabricated citations/AI hallucinations, before considering the motion to strike and reply:
“Plaintiff’s Reply is otherwise notable in two respects. First, the text appears to have been copy-pasted from a generative AI program without any quality control by Counsel. Straight, as opposed to curly, apostrophes and quotation marks remain throughout, indicating the content was likely not typed into a word processor. At some point, the program appears to have experienced, and documented, an “[]artificial error[.]” …Second, Defendant’s Response put Counsel on notice that Plaintiff’s position relied on demonstrably inaccurate characterizations of the Local Civil Rules and Defendant’s filings. Yet Counsel opted to file a Reply that doubled down on Plaintiff’s position instead of withdrawing the motion. Together, the legal, factual, and typographical errors indicate to the Court that the Reply may have been generated without any meaningful attorney oversight and filed despite Counsel knowing, or having reason to know, the position taken was unfounded.”
Then later:
“The errors addressed above are only a sampling. Taken together, they form a pattern that indicates Counsel may have used AI to generate filings without confirming the accuracy of authority relied upon, existence of evidence cited, or defensibility of positions taken. If AI was not used, these filings indicate wholesale inability to identify and marshal applicable law and a degree of sloppiness that severely impaired the briefs’ utility…”
Accordingly, counsel was ordered to show cause by 8 September 2025 why sanctions should not issue, and to immediately inform the plaintiff of the Order and the relevant Docket Nos.
Thackston v Driscoll, Secretary of the Army (United States)
No. SA-24-CV-00276-FB-ESC (W.D. Tex., 28 August 2025)
Here, a magistrate judge recommended granting the Secretary’s renewed Rule 12(c) motion and dismissing the case for lack of Article III standing, finding the plaintiff’s requested injunctions either non-redressive, moot, or incompatible with federal statutes and regulations. Damages were already unavailable so the only surviving relief theory failed at the redressability stage.
The court also flagged:
“The undersigned notifies the District Court that Plaintiff’s Reply In Support Of Plaintiff’s Injunctive Relief Statement And Opposition To Defendant’s Renewed Motion For Judgment On The Pleading [#43] is rife with citations to cases that do not exist and mischaracterizations of other cases. More specifically, two citations are to cases that do not exist. And although many of the other cases he cites are real cases, he materially misrepresents the propositions these cases stand for, often providing “hallucinated” quotes. It therefore appears that Plaintiff’s counsel likely used generative artificial intelligence (“GenAI”) to assist in his research and, even more strikingly, did not bother to check whether the GenAI-generated content was accurate…”
The court listed the numerous examples of fabricated citations/AI hallucinations and noted:
“Beyond the extreme pattern of hallucinated legal citations summarized above, it appears that Plaintiff’s counsel may have utilized GenAI to write some or all of his brief. For instance, Plaintiff’s Reply uses the phrase “The Fifth Circuit, which includes the Western District of Texas, has recognized . . .” three separate times…The Court is well aware that it is within the Fifth Circuit. Such repetitive, redundant language makes the undersigned suspicious that Plaintiff used a genAI tool in an inappropriate manner. It strains credulity that an attorney would feel the need to inform judges in the Western District of Texas of the Circuit in which they sit—let alone three times…”
On this issue, the court decided that Rule 11(c) sanctions may be appropriate, and it formally notified the district judge to consider sanctions after objections, suggesting remedies such as a monetary penalty and mandatory continuing legal education on the ethical use of generative AI, while also pointing to Texas professional conduct guidance that failure to verify AI outputs can breach multiple duties.
Lothamer Tax Resolution, Inc v Kimmel (United States)
No. 1:25-cv-579 (W.D. Mich., 29 August 2025)
Here, the court granted interim relief in part, requiring removal of a social media post under a contract, set a hearing on retained software, and declined to deal with much else at this stage, save that the judge raised concerns about the briefs of the defendant, who was acting pro se:
“…The Court presumes that the fictitious citations in [Defendant’s] brief were the result of using generative artificial intelligence (“AI”). “It is no secret that generative AI programs are known to ‘hallucinate’ nonexistent cases, and with the advent of AI, courts have seen a rash of cases in which both counsel and pro se litigants have cited such fake, hallucinated cases in their briefs.” … “Without question, it is improper and unacceptable for litigants—including pro se litigants—to submit ‘nonexistent judicial opinions with fake quotes and citations.’” … Such fake citations waste the time and resources of the Court and opposing parties. … “Sanctions may be imposed for submitting false and nonexistent legal authority to the Court.”…” However because the Defendant “may not have recognized the risks of using AI, the Court will not impose sanctions at this time…” However, he is “…now on notice that future use of fictitious citations may result in sanctions…”
Clerk of the Court and Comptroller for the 13th Judicial Circuit v Rangel (United States)
No. 2D2024-1772 (Fla. 2d DCA, 29 August 2025)
In its reply brief, the appellant pointed out multiple errors in the appellee’s answer brief:
“[T]he answer brief also blatantly misquotes and otherwise misrepresents Florida case law. The [Appellee], through her counsel, misrepresents holdings of opinions no less than 9 times, quotes language from opinions that does not appear in the opinions 10 times, and cites a case that does not appear to exist…”
The appellee was directed to show cause why sanctions should not be imposed for filing a brief containing multiple misstatements and misquotes, and for failing to seek leave to correct them when pointed out by the appellant in its reply brief.
In response, counsel admitted that the brief included “non-existent authority and fictitious quotation blocks within the answer brief.” He acknowledged the grave errors, accepted full responsibility, and offered an apology. Counsel explained:
“…he was handling this appeal pro bono and that as he began preparing the brief, he recognized that he lacked experience in appellate law. He stated that at his own expense, he hired “an independent contractor paralegal to assist in drafting the answer brief.” He further explained that upon receipt of a draft brief from the paralegal, he read it, finalized it, and filed it with this court. He admitted that he “did not review the authority cited within the draft answer brief prior to filing.” He added “that had he attempted to review the authority prior to filing, he would have easily discovered the fictitious block quotations and non-existent case.” Although the appellant’s reply brief was filed on April 24, 2025, …acknowledged that he “did not review the reply brief filed by the Appellant prior to receiving this Court’s July 17, 2025, Order…”
Because of the seriousness of the conduct, the matter was referred to The Florida Bar for appropriate disciplinary proceedings and sanctions as deemed proper. The court concluded:
“Efficiency, expertise, and cost savings are some of the reasons why attorneys delegate work and use technological tools such as generative artificial intelligence in representing their clients. But this case is another reminder that an attorney who does so remains responsible for the work product that is generated.”
Comment
The volume of cases remains an ongoing concern. A further persistent concern is judges setting out fabricated citations and AI hallucinations in full in their judgments, which may inadvertently place them in reputable databases. I suggest that the approach taken by the judiciary in Australia is more measured, which I discuss in detail here.
We may also be beginning to see a split in how courts handle fact-finding. In Ayinde v LB Haringey [2025] EWHC 1040 (Admin), the court took a cautious line, refusing to make findings of AI use without cross-examining those involved:
“…The only other explanation that has been provided before me, by [respondent counsel], was to point the finger at [claimant counsel] using Artificial Intelligence. I do not know whether that is true, and I cannot make a finding on it because [claimant counsel] was not sworn and was not cross examined. However, the finding which I can make and do make is that [claimant counsel] put a completely fake case in her submissions. That much was admitted…”
Recent cases, however, suggest a shift towards a less restrained approach. Judges now seem more willing to infer AI use without sworn evidence or cross-examination. As sanctions grow harsher, the argument for a more cautious stance becomes stronger, though it is uncertain whether that approach will endure.
I hope the overall number of incidents will fall in September. Until they reduce to a level where monthly reporting is more practical, I will continue these weekly updates. For deeper analysis, I encourage you to subscribe to Natural & Artificial Law. It is where I continue to track how AI is shaping our profession and all elements of the emerging area of AI Law.
Final Word from O5 Pro
This is where I offer a premium model the chance to comment or critique the preceding discussion. Here is its response this week:
“Courts are standardising expectations. They are not banning tools, they are enforcing verification. Your tracker is well timed, because the conversation has shifted from whether AI is allowed to how counsel prove they did the basic work of checking what they filed. Lothamer sets the warning, Florida adds the referral, Washington explains the tell-tale markers.”
