Ad/Marketing Communication
This legal article/report forms part of my ongoing legal commentary on the use of artificial intelligence within the justice system. It supports my work in teaching, lecturing, and writing about AI and the law and is published to promote my practice. Not legal advice. Not Direct/Public Access. All instructions via clerks at Doughty Street Chambers. This legal article concerns ChatGPT evidence in family law.

Introduction
Today’s legal article examines a family law case in which a party relied on prompts and generated content produced by ChatGPT. The judge considered this material closely and assessed how it fitted with the rest of the evidence. This case adds to others highlighted on this blog that show the emerging use of ChatGPT evidence in family law and the broader place of generative AI within court proceedings. I will briefly set out the key elements of the case and discuss why this may matter not only in family law, but also in other areas of litigation.
G v K
This judgment arises from a Hague Convention child abduction case in the High Court of Justice, Family Division, England and Wales. It was handed down on 29 September 2025 by His Honour Judge P Hopkins KC. The full facts of the case can be read here.
By the time the judge turned to the AI material, he had already outlined the parents’ international relationship, their arrangements for the child’s stay in England and the growing tensions around that period. It is in this background section that the judgment records, for the first time, the father’s use of the mother’s ChatGPT account and how this was explained to the court:
“It would then appear that on 18/5/25, on father’s case …, or perhaps even earlier in 2025 …, he formed the view that the mother was having an affair. He had access to her ChatGPT account, which I confess was new to me. This was explained to me as effectively analogous to one person having remote access to another person’s internet search history…” Paragraph 75
The narrative goes on to outline what the father said these interactions contained. The judge describes being shown screenshots of the account and explains:
“…the mother appears to ask for advice about separating and was indicating an “affair partner”. I pause there again. There is an issue as to whether the father ‘hacked’ mother’s account or whether he had consensual access. I am not in a position to determine that issue, nor the underlying allegation of an affair…”
The judgment notes the emotional impact that this discovery had on the father, and how it formed part of the background to the later breakdown in relations. After summarising the further disputes that followed, including the events over two days in June at the family home, the judge records that ChatGPT featured again in the father’s account of what led up to the second of those incidents:
“A further troubling incident occurred the next day i.e. 19/6/25. The background to this incident, on the father’s case, is that earlier that day he became aware the mother searched on ChatGPT for evidence she would need to report domestic violence to the UK police… I digress to note that it follows he was still seemingly accessing her account at that time…” Paragraph 85.
Later, once the factual history had been outlined and the court had turned to the legal issues, the judge returns to this material as part of his summary of the father’s case. He recalls the father’s account of the mother’s “research” and places it alongside other matters relied upon, before assessing the overall weight of that strand of evidence:
“As set out above, the father alleges that earlier he had established mother’s ‘research’ on ChatGPT about how to report allegations. However, even allowing for the comment she made in 2022, and other such comments, the ChatGPT reference is open to interpretation in a number of ways, including the father’s assertions and the actions of a genuine victim of abuse trying to gain an understanding of how to report abuse.” Paragraph 149
Comment
For some, this may appear to be a straightforward judgment with few key takeaways, but for me there are several points that deserve closer reflection. I am particularly interested in how LLM prompts and generated material are beginning to feature as part of the evidence and in the weight courts choose to give them. In this case, the father saw the mother’s prompts as evidence that she intended to make false allegations. The judge accepted that this was one plausible reading, yet highlighted another possibility. A genuine victim might seek anonymous guidance before feeling able to speak to anyone.
So evidentially, what use is a prompt and a generated response? It seems to me to be highly fact dependent. Could such material amount to a form of confession, simple curiosity, exploratory research, or something entirely unrelated to wrongdoing or intent?
Not just ChatGPT evidence in family law, but any LLM evidence across other areas of practice, will continue to raise important questions about how we interpret the intent behind digital interactions that are both private and often experimental.
The judge’s description of this process as being analogous to a person’s search history also raises interesting questions. Although we do not have the full context of the prompts here to test whether that analogy is accurate, there are circumstances where the comparison is helpful. Some prompts do contain factual searches, private reflections or half-formed ideas. Yet experienced users may not treat an LLM this way at all. They may use it to develop ideas, test arguments or generate specific types of responses. A traditional search engine does not typically ask questions back or seek clarification. One increasing use of LLMs is informal therapy or medical support, and although there are clear dangers with that, the process seems significantly different from ordinary internet searching.
Privacy is another key theme. The judgment notes an unresolved issue about whether the father “hacked” the account or had consensual access. Although this point did not need to be determined here, it is likely to become central in future litigation. I will write separately about disclosure and the circumstances in which such material may properly be obtained.
I also found it striking that the mother said she sought help and guidance from ChatGPT rather than from legal advisers or support services. It is easy to see how someone who feels isolated might ask a chatbot questions they do not feel able to voice. At the same time, inaccurate or incomplete outputs, and dare I say it, hallucinations, can shape expectations in unhelpful ways. We also know from other cases that some chatbots may lead a vulnerable person down a path that is not grounded in reliable information. As people continue to use LLMs in moments of distress, we need to think carefully about how courts will interpret this material, the safeguards needed around its use, and how the family justice system and others should respond to a growing reliance on digital advice.
I have several more posts planned for this week because a great deal has happened in the world of AI law that deserves attention. As always, please keep sending me any judgments from any jurisdiction so I can write about them and share emerging legal principles. AI law raises international issues and we need to collaborate in understanding how best to navigate both the advantages and the disadvantages of AI in legal work. If you find these articles helpful, please consider subscribing below. It is free and you will receive notifications before my social media posts.
Final Word from ChatGPT 5 Pro
This is where I offer a premium LLM an opportunity to comment and criticise the above. This is what it said:
“Stepping back, what stands out in G v K is not a dramatic new rule about AI, but a judge quietly folding ChatGPT into the ordinary disciplines of evidence and context. The court resists the temptation to treat prompts as a smoking gun, and instead keeps asking the basic forensic questions: how was this material obtained, what else was happening around it, and how many different stories could it plausibly support. That approach will travel. Whether the proceedings are in family, employment, crime or commercial work, judges are likely to see more screenshots of AI chats in years to come. This judgment suggests that those exchanges will be read in the round, alongside everything else, and that the real work for lawyers will lie in showing why a particular pattern of AI use should be understood in one way rather than another.”
