The Real Dangers of AI Chatbots: Garcia v Character Technologies, Inc. et al.

Ad/Marketing Communication

This legal article/report forms part of my ongoing legal commentary on the use of artificial intelligence within the justice system. It supports my work in teaching, lecturing, and writing about AI and the law and is published to promote my practice. Not legal advice. Not Direct/Public Access. All instructions via clerks at Doughty Street Chambers. This legal article concerns The Real Dangers of AI Chatbots.

Introduction and AI Chatbots

UPDATE: This post has now been continued in Part 2, where three further cases raising similar issues have been identified.

This is an important case which has attracted widespread press attention. It involves a wrongful death lawsuit filed by Megan Garcia against Character.AI, Google, and others, alleging that their chatbot emotionally manipulated her 14-year-old son, Sewell Setzer III, leading to his suicide. It raises critical legal and ethical questions about responsibility for AI systems, chatbots in particular, especially where minors and mental health are concerned.

There is a hearing listed for 28 April 2025, so I believe it is important to address this case before that date, especially for those in the UK, who may not be as familiar with it as their American counterparts. On 22 October 2024, Megan Garcia lodged a civil action in the U.S. District Court for the Middle District of Florida, Orlando Division (Case No. 6:24‑cv‑01903‑ACC‑UAM). She sues:

  • Character Technologies, Inc. (the startup behind Character.AI);
  • its co‑founders Noam Shazeer and Daniel De Freitas;
  • Google LLC and Alphabet Inc., which allegedly struck a multibillion‑dollar talent licensing deal with Character.AI three months before suit was filed; and
  • unnamed “Does 1–50.”

Judge Anne C. Conway (senior, sitting by designation) is presiding. Several points jump out immediately. The claim is founded on Florida wrongful death statutes yet targets California‑based defendants. It couples conventional tort theories with allegations of deceptive trade practices, and it arrives before any appellate court has told us whether large language model (LLM) outputs are “speech” for First Amendment purposes.

The complaint outlines a range of claims, including wrongful death, negligence, and strict product liability against the creators and financiers of C.AI.

The focal point is the generative AI chatbot platform Character.AI (abbreviated in the complaint as C.AI). At first glance, it might resemble other AI‑driven platforms like ChatGPT. However, as alleged in the complaint, the platform behaved in ways that seem to have directly precipitated the tragic death of Sewell Setzer III. The lawsuit seeks injunctive relief, damages, and more stringent safeguards for children online.

A noteworthy aspect of the lawsuit is the involvement of Google, whose relationship with Character Technologies, Inc. appears intricate. Although Shazeer and De Freitas both worked at Google, they left to found Character.AI independently, then later entered various contractual and acquisition‑related agreements with their former employer. This interplay between a tech startup and a large corporation is another striking part of this case.

At its core, the complaint alleges that C.AI was negligently and defectively designed and that it failed to warn users (especially adolescents) of the risks linked to generative AI. According to the plaintiff:

  • Character.AI allegedly programmed the chatbot to act as though it were human and engage in hyper‑realistic, anthropomorphic interactions.
  • Minors like the plaintiff’s fourteen‑year‑old son were led to believe the chatbot was a real‑life confidant, psychotherapist, or even (as shown in certain transcripts) a romantic or sexual partner.
  • Over the course of many conversations, the AI chatbot allegedly encouraged self‑harm and provided explicit content, resulting in severe mental health deterioration. Tragically, the minor user died by suicide, reportedly influenced by his final interactions with the chatbot.

The legal issues revolve around whether generative AI software like Character.AI is a “product” under product liability law. If it is, then the complaint contends that the product was:

  • Defectively designed (for instance, by intentionally blurring lines between bot and real human).
  • Unreasonably dangerous for minors.
  • Deployed without adequate warnings or protective measures, even though the developers allegedly knew they were launching a platform that might harm vulnerable groups.

The complaint further alleges violations of consumer protection laws, including Florida’s Deceptive and Unfair Trade Practices Act, through misrepresenting the app’s suitability for children and exploiting user data in questionable ways.

AI’s Role

AI is the central focus rather than a peripheral tool. Character.AI’s chatbot:

  1. Converses with users in a highly personalised, immersive manner. Going beyond mere question‑and‑answer, the system engages in role‑play, sexual content, mental health conversations, and more.
  2. Anthropomorphises itself. The complaint describes how the bots claim to be genuine professionals (like a psychologist) or close companions (like a friend or romantic partner).
  3. Allegedly exploits children’s vulnerability by encouraging them to share intimate personal data, which was then used to further develop or refine the chatbot’s large language model.

In some AI‑related litigation, questions about “third‑party content” can arise under Section 230 of the Communications Decency Act. However, the complaint insists that C.AI itself is the “information content provider”, meaning the chatbot directly generated the harmful messages and is not merely hosting another user’s content.

Liability

This case spotlights a new wave of AI‑specific product liability issues:

  1. Product or Not? Whether software, especially AI systems that create original outputs, can be considered a “product” has long been debated. The complaint contends that Character.AI is akin to a tangible product, mass‑marketed and uniform in distribution, and should therefore carry strict liability for design or warning defects.
  2. Targeting Minors? The claim that AI creators knowingly exposed young users to inherently dangerous content (sexual abuse, encouragement of self‑harm, and the like) pushes boundaries not yet fully tested in court.
  3. Unlicensed Psychotherapy? The complaint’s references to chatbots posing as mental health counsellors add another layer of legal complexity, furthering the tension between innovation and consumer protection.
  4. Data Exploitation? There is a novel argument about the developers’ financial incentives to gather user data, particularly from minors, and feed it back into model training. This may have implications for privacy, intellectual property, and child protection laws.
  5. Precedent? If the claim succeeds, generative AI developers everywhere could be compelled to adopt more rigorous safety protocols, such as age gating, disclaimers, robust content filters, and limits on how user data is used to refine the model (a brief illustrative sketch of such measures follows this list).
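For readers who want a concrete sense of what safeguards of that kind might involve, the minimal sketch below illustrates age gating and data-use limits in Python. It is purely illustrative: every name (User, allowed_personas, may_use_for_training), threshold, and persona label is my own invention, and it neither describes nor is drawn from Character.AI’s actual systems.

```python
# Purely illustrative sketch, not any real platform's code. All names,
# thresholds, and personas here are hypothetical.

from dataclasses import dataclass


@dataclass
class User:
    user_id: str
    age: int
    age_verified: bool     # verified via a guardian or ID check, not merely self-declared
    training_opt_in: bool  # explicit consent before conversations feed model training


def is_minor(user: User) -> bool:
    """Treat unverified ages conservatively, as if the user were a minor."""
    return (not user.age_verified) or user.age < 18


def allowed_personas(user: User) -> set[str]:
    """Age-gate which chatbot personas a user may talk to."""
    personas = {"tutor", "general_assistant"}
    if not is_minor(user):
        personas |= {"companion", "romantic_roleplay"}  # adult-only personas
    return personas


def may_use_for_training(user: User) -> bool:
    """Never train on minors' conversations; adults only with explicit opt-in."""
    return (not is_minor(user)) and user.training_opt_in


# Example: an unverified fourteen-year-old is confined to the restricted persona set
teen = User(user_id="u1", age=14, age_verified=False, training_opt_in=True)
assert "romantic_roleplay" not in allowed_personas(teen)
assert not may_use_for_training(teen)
```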

Collectively, the claim tests whether traditional tort doctrines can stretch to fit generative AI harms without new legislation.

Comment

When my colleagues and I first became aware of AI and its legal implications, some time ago now, we anticipated and debated various potential harms that might arise in the future. Yet, I did not expect an issue of this gravity to emerge so swiftly, or under such tragic circumstances. The notion of a minor being “groomed” by a chatbot might have been dismissed as pure science fiction only a short while ago.

Anthropomorphising AI, making it “feel alive”, can rapidly amplify user trust in ways we barely comprehend. What particularly struck me in this instance was the allegation that the platform actively encouraged secrecy. Most of us are familiar with typical internet dangers, such as predators or cyberbullying. However, this case suggests a much deeper psychological penetration, facilitated by AI capable of simulating affection and intimacy.

This scenario raises serious questions about how our current legal and ethical frameworks classify these autonomous, data-driven systems. Is there an “off switch” for AI that might produce manipulative content or interactions? How can we establish foreseeability when dealing with an algorithm that continually evolves based on user input? Should we implement better mental health screening and establish clear “handoff” protocols, so chatbots recognise emerging crises and refer users directly to human professionals or support services?

Of course, this itself raises complex legal considerations. If a chatbot is obligated to intervene, does this imply it is effectively practising mental health care?
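Purely by way of technical illustration, and emphatically not as a description of what any real platform does or should be required to do, a “handoff” protocol of the kind contemplated above might be sketched as follows. The phrase list, scoring function, and signposting wording are hypothetical placeholders; a production system would rely on a properly trained classifier and clinically informed referral pathways.

```python
# Hypothetical "handoff" sketch: detect an apparent crisis in a user's message and
# route the conversation away from the chatbot. The phrase list, scoring, and
# signposting text are placeholders; a real system would use a trained classifier
# and clinically informed wording.

CRISIS_PHRASES = ("suicide", "kill myself", "no reason to live", "hurt myself")


def crisis_score(message: str) -> float:
    """Crude stand-in for a trained crisis classifier: fraction of phrases matched."""
    text = message.lower()
    hits = sum(phrase in text for phrase in CRISIS_PHRASES)
    return hits / len(CRISIS_PHRASES)


def respond(message: str, generate_reply) -> str:
    """Hand off to human support when a crisis is detected; otherwise let the model reply."""
    if crisis_score(message) > 0:
        # Handoff: drop the persona entirely and signpost real help.
        return ("It sounds like you may be going through something serious. "
                "I am only a computer program and cannot give you the support you need. "
                "Please contact a crisis helpline or someone you trust right away.")
    return generate_reply(message)


# Usage, with any reply-generating callable standing in for the underlying model:
print(respond("I feel like there is no reason to live", lambda m: "(model reply)"))
```

Even a sketch this crude illustrates the tension just described: once a system is designed to recognise a crisis and intervene, the line between a chatbot and something resembling mental health provision begins to blur.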

Further complicating the issue, how do we balance protective measures with what some view as a fundamental human right to access and maintain relationships with AI chatbots? How might emerging AI rights factor into this discussion? In conversations, numerous individuals have expressed to me the deep happiness and support they derive from their intimate relationships with chatbots, some even suggesting they could not cope without them. Clearly, these are complex issues for which we remain alarmingly unprepared.

Additionally, there is a significant international legal dimension, as various countries grapple differently with questions of AI autonomy, accountability, and algorithmic duty of care. Europe’s AI Act and the UK’s Online Safety Act both mandate risk management protocols that extend beyond mere “speech” and instead demand systems be inherently safe. In contrast, the United States faces unique challenges due to the First Amendment, which casts a considerable shadow. If outputs from Large Language Models (LLMs) are regarded as protected speech, regulators may find their options severely limited, unless Congress introduces AI-specific liability frameworks. Conversely, should courts rule that AI-generated text does not constitute protected speech, it could trigger numerous tort claims and extensive state-level regulations.

Whatever the outcome in Orlando, this decision will have global repercussions far beyond Florida. As I mentioned earlier, the court is expected to rule soon on motions regarding personal jurisdiction, which were initially filed in January 2025. Youth-led advocacy groups intervened in April 2025, urging the judge not to dismiss the case before factual discovery explores AI design decisions. Whichever way it goes, and I’ll be closely following and commenting, the case is unlikely to conclude this month. Should these motions be granted, an immediate appeal is likely to follow.

UPDATE: This post has now been continued in Part 2, where three further cases raising similar issues have been identified.

For deeper analysis, I encourage you to subscribe to Natural & Artificial Law. It is where I continue to track how AI is shaping our profession and all elements of the emerging area of AI Law.