Ad/Marketing Communication
This legal article/report forms part of my ongoing legal commentary on the use of artificial intelligence within the justice system. It supports my work in teaching, lecturing, and writing about AI and the law and is published to promote my practice. Not legal advice. Not Direct/Public Access. All instructions via clerks at Doughty Street Chambers. This legal article concerns AI Law.

I recently came across an intriguing decision from the District of Columbia Court of Appeals, Ross v. United States (No. 23-CM-1067), decided on 20 February 2025. It concerns a conviction for animal cruelty (leaving a dog in a hot car) that ended in an acquittal on appeal. What I found particularly interesting was how the court used generative AI in its decision-making process.
The Details of the Case
On a scorching day in early September, the Appellant left her dog, Cinnamon, inside her car. The outside temperature that day was recorded at around 98°F, and witnesses at the scene described it as the hottest 4 September on record in Washington, D.C. The car windows were cracked open a few inches, and the vehicle was parked near a tree that offered light shade. Still, the dog was locked inside for over an hour.
Alarmed by the dog’s barking, a passer-by called for assistance. Firefighters arrived first and forced the window open so Cinnamon could be rescued. Shortly afterwards, an animal control officer arrived and noted that the dog was panting but not in obvious or severe medical distress. When the Appellant returned, about an hour and twenty minutes after leaving her pet, she was arrested for animal cruelty.
At trial, the government focused on the risk posed by a hot car, arguing that leaving a dog in these conditions caused (or certainly could have caused) the dog to suffer. The Appellant countered that her dog was in fact safe: the windows were partly down, there was some shade overhead, and no harm was proven. On that basis she argued the government had failed to prove beyond a reasonable doubt that the dog suffered or was likely to suffer.
The trial court found the Appellant guilty, relying heavily on the notion that common sense makes it clear how dangerous hot cars are for animals. But on appeal, the Court of Appeals reversed. The majority concluded the government had not presented the specific evidence necessary, such as the precise temperature inside the car or clear signs of heat distress, to prove actual cruelty or harm beyond a reasonable doubt. Consequently, the Appellant’s conviction was overturned.
How AI Played a Role
Curiously, AI made a cameo in the opinions. The judgments reveal that the judges discussed, and even experimented with, large language models such as ChatGPT, both to outline how a dog would likely respond to severe heat and to test “common knowledge” about how hot cars can get. Some judges used AI to illustrate the difference between unverified assumptions and admitted evidence.
Comment
What struck me about this case was how the court utilised ChatGPT in discussions about the appeal.
In previous blogs, I have discussed the reluctance in the UK to spend much time on ChatGPT in the courtroom. Here, by contrast, the court seemed quite confident with the model and was happy to discuss its generated answers amongst themselves. I suggest reading the full opinions for context, but below is a summary of the key points.
In dissent, one judge asked ChatGPT: “Is it harmful to leave a dog in a car, with the windows down a few inches, for an hour and twenty minutes when it’s 98 degrees outside?” ChatGPT’s answer was unequivocally “Yes,” highlighting that dogs are extremely vulnerable to heat, that car interiors can become hotter than the ambient temperature, and that prolonged exposure can cause heatstroke or even death. This answer bolstered the notion that “common sense” about a dog’s suffering in severe heat is hardly open to debate.
However, the majority took a different approach and tested ChatGPT on an unrelated question, referencing a previous case where the key issue was the market value of a 10-year-old Dodge Intrepid, specifically, whether it exceeded $1,000. They asked ChatGPT about how much such a car might be worth, then drew an analogy: just as “common knowledge” about a car’s value would need real-world data (like mileage or maintenance records), so here the prosecution needed more concrete evidence (like actual internal temperature readings or veterinary testimony) to secure a conviction.
The majority used ChatGPT’s uncertain, caveat-filled answer about the car’s value to illustrate how “common sense” alone cannot fill evidentiary gaps, particularly when crucial facts can vary widely.
This case shows how courts are starting to experiment with AI not simply as a research tool but as a rhetorical device. The opinions invoke ChatGPT to probe the limits of everyday knowledge: how we generally recognise that dogs suffer in extreme heat, or that certain valuations are not straightforward.
From my perspective, it is fascinating to see both the majority and the dissent of an appellate court deploy ChatGPT in a legal argument. It signals a gradual (and sometimes experimental) embrace of the technology. Whether courts in the UK or elsewhere will follow suit depends on comfort levels, privacy and security concerns, and the broader question of how reliable and neutral AI truly is.




