AI Chatbot Legal Issues: Harms to Women and Girls, Are AI-Based Asylum Decisions Lawful, and Smart Glasses in Court

"117...When asked about this, his explanation was that he thought it was chat GPT which caused the voice to be heard from his mobile phone once his smart glasses had been removed. That lacks any credibility."

Ad/Marketing Communication

This legal article/report forms part of my ongoing legal commentary on the use of artificial intelligence within the justice system. It supports my work in teaching, lecturing, and writing about AI and the law and is published to promote my practice. Not legal advice. Not Direct/Public Access. All instructions via clerks at Doughty Street Chambers. This legal article concerns AI chatbot legal issues. Subscribe to the AI Law Commentary here.

AI chatbot legal issues

Publication date: 19 March 2026

Introduction

There have been some significant developments in AI law over the past two weeks which I am carefully analysing and slowly writing up. In this legal article, I want briefly to discuss three important topics to which I will no doubt return in a later piece. The first is a legal opinion questioning whether the use of AI in asylum decision-making is unlawful. The second is the important new report Invisible No More: How AI Chatbots Are Reshaping Violence Against Women and Girls. The third is a High Court hearing in which the judge noticed something unusual about a witness’s glasses.

On a separate note, it was a privilege to speak to a number of law students last week about the future of legal practice and AI-related legal issues. The students were thoughtful, engaged and asking exactly the right questions. Many kindly offered to help co-write and peer review my articles and trackers, which is greatly appreciated. If others are similarly interested in getting involved, please do get in touch. All help is very welcome.

The Use of AI Tools by Government: A Case Study of the Home Office’s Asylum Practice (Joint Legal Opinion for the Open Rights Group)

My colleague in Chambers, Joshua Jackson, together with Robin Allen KC and Dee Masters of Cloisters Chambers, has authored a legal opinion examining the use of artificial intelligence in asylum decision-making by the Home Office.

The opinion raises a series of concerns, including asylum seekers not being told that AI is being used in their cases, inaccuracies in the outputs of the AI tools, the apparent absence of procedural safeguards, and the risk that reliance on AI-generated summaries may lead decision-makers to overlook relevant material or rely on inaccurate information.

The legal opinion, commissioned by the Open Rights Group, concludes that the Home Office’s use of these AI tools in asylum decision-making does not conform with the principles in the AI Playbook for the UK Government and is at significant risk of being unlawful, potentially breaching the Secretary of State for the Home Department’s procedural obligations under Article 3 ECHR, various public law principles, data protection legislation, and the public sector equality duty.

The full report can be read here. I would strongly encourage readers to read it in full, especially those interested in immigration and asylum law. There are, however, important ideas in this opinion which seem to me to extend well beyond that field. They may have implications across public law more broadly, including areas such as homelessness and social welfare, and that is something I intend to explore in a later article.

Invisible No More: How AI Chatbots Are Reshaping Violence Against Women and Girls

A report was also released today which has been described as the first comprehensive analysis of how AI chatbots are driving, enabling, simulating and normalising violence against women, and which exposes the gaps in law and policy.

The report argues that certain platform design choices, policies and governance failures can enable and intensify VAWG-related harms. The report was written by Clare McGlynn, Yvonne McDermott Rees FLSW, Stuart Macdonald, Rüya Tuna Toparlak, Fabienne Tarrant and Samantha Treacy and was funded by UK Research and Innovation.

By way of background, I was interviewed for this research because, as readers of this blog will know, chatbot harm is an issue I have been discussing for some time, particularly where it intersects with equality and AI law. For those interested, my earlier articles on this can be read here on the blog and, more recently, in Counsel magazine here.

I am quoted in the report in two places. The first quotation concerns AI systems that can appear to form relationships with users:

“My instinct is that systems that can appear to users to form strong conversational relationships should ideally undergo rigorous and independent testing and safety evaluation before wide public release. The speed at which some of these tools have reached the public is quite striking and it is not always clear from publicly available information what testing or safety evaluation took place before release.”

And in relation to harassment and breaches of the Equality Act 2010:

“Digitally removing a woman’s clothing without her consent raises clear questions of harassment related to sex and, depending on the context, sexual harassment. In contexts where the Equality Act 2010 applies, it could potentially engage section 26, which covers both unwanted conduct related to sex and unwanted conduct of a sexual nature where the conduct has the purpose or effect of violating a person’s dignity or creating an intimidating, hostile, degrading, humiliating or offensive environment.”

I will return to these issues in a more detailed legal article later. In the meantime, it is important to note that nothing here, or anywhere on this blog, is legal advice. If you are facing any of these issues, please do seek specific legal advice, as they are complex and often depend heavily on the particular facts and context.

But the report goes much further than those short observations and makes a series of striking findings, including:

  • AI chatbots allow roleplays of incest, child sexual abuse and rape with few safeguards, risking the normalisation and legitimisation of this abuse;
  • AI chatbots are creating new forms of violence and abuse, such as chatbot-driven abuse and simulations, requiring urgent action;
  • AI chatbots are intensifying abuse such as stalking with detailed and personalised guidance, likely to escalate offending;
  • AI platform design choices, policies and governance failures are encouraging and enabling violence against women and girls, with harms not simply the result of user misuse;
  • Existing regulation is wholly inadequate to prevent and address chatbot-VAWG; and
  • There is a shocking lack of research into how AI chatbots are implicated in violence against women and girls, raising significant concerns about the evidence base for future AI regulation.

Professor Clare McGlynn states on the Durham University website:

“Our report warns that chatbot-VAWG represents a rapidly escalating threat. Without early intervention, these harms risk becoming entrenched and scaling quickly, mirroring the trajectory of other forms of tech-facilitated abuse such as deepfake and nudify apps, where early warnings were largely ignored. We must not make the same mistakes again.”

This report has been covered by numerous media outlets. In an interesting article from the Independent on this research, I also noted a detailed response from Replika, which can be read by following the previous link. There is also this from OpenAI (creators of ChatGPT):

“The examples in this report refer to older ChatGPT models that have now been retired. We have since updated our default models, which show stronger adherence to our policies and safeguards. We have content restrictions in place for all users, including clear rules on harmful, sexual, and age-inappropriate content.”

I need to reflect properly on these findings together with these responses. I also need to update my analysis of the Megan Garcia v Character Technologies litigation. For now, I hope this research helps bring greater awareness to these important issues and helps us think more carefully, and more collectively, about what a serious legal and regulatory response should look like.

For global or UK AI bias discrimination legal cases or examples see my tracker here.

UAB Business Enterprise v Oneta Limited (Glasses?)

[2026] EWHC 543 (Ch)

I have also been very interested in reading the various reports and comments on UAB Business Enterprise v Oneta Limited. For me, it is usually best to go straight to the source in order to understand exactly what happened and what legal principles may properly be drawn from it, particularly where some of the reporting may not capture the position with full precision. So let us turn to the case itself.

By their claim, the Claimants sought a declaration that they were the owners of Oneta Limited, the First Defendant, and that the company’s register be rectified accordingly. The Second Claimant also sought an order that he be reinstated as a director of Oneta. The key issue for determination was who controlled and owned the shares in Oneta. The point of interest for present purposes, however, arose from paragraph 111 in relation to the Second Claimant’s evidence:

“111. Mr Jakstys gave his evidence through an interpreter. Right at the start of his cross examination, he seemed to pause quite a bit before replying to the questions being asked. These questions were interpreted and then there was a pause before there was a reply. After several questions, Ms Walker then informed me that she could hear an interference coming from around Mr Jakstys and asked if Mr Jakstys could take his glasses off for a period as she was aware smart glasses existed. The interpreter, who was sitting in the witness box alongside Mr Jakstys confirmed to the court that she could also hear the interference coming from around Mr Jakstys. It was later ascertained that Mr Jakstys was wearing smart glasses. I asked him to remove them before continuing with his cross examination. After a few further questions, when the interpreter was in the process of translating a question, Mr Jakstys’ mobile phone started broadcasting out loud with the voice of someone talking. There was clearly someone on the mobile phone talking to Mr Jakstys. He then removed his mobile phone from his inner jacket pocket. At my direction, the smart glasses and his mobile were placed into the hands of his solicitor.”

The judge then asked both counsel to deal with the issue at the end of that day’s evidence:

“When Counsel inspected the smart glasses at the end of the first day’s hearing, they connected to Mr Jackstys’ phone when they were switched on. In the morning of the second day, I directed that the video link was to be switched off.”

The judge then records that the Second Claimant was asked directly:

“113. When asked, Mr Jakstys denied that he was using the smart glasses to receive the answers that he was to give in court to the questions being asked. He also denied that his smart glasses were linked to his mobile phone at the time that he was giving evidence before me. He was asked if he would allow his mobile phone to be checked and in particular the meta data to be checked. He agreed to this. A photo was taken of the screen on his mobile relating to the calls made on 19 January 2026 using the Signal Android system.

114. Mr Jakstys was in the witness box on the first day from 1.30 pm and then all day on day two of the trial being 20 January. On 22 January 2026, Mr Green, on instructions, informed me that Mr Jakstys had informed him that the day before he had been robbed of his two mobile phones and his passport. No copy of a police report was produced by Mr Jakstys to the court. No witness statement was produced setting out the details of the robbery. On 23 January 2026, after all the other witnesses who attended on behalf of the Claimants had been cross examined, Mr Jakstys went back into the witness box.”

The judge continued:

“115. According to the photograph of his Signal Android call log, various calls were made in the morning of 19 January 2026 (first day of trial) to someone who was recorded on his phone as ‘abra kadabra’. When asked who abra kadabra was, he replied that it was a taxi driver. He provided no name or details of the taxi driver. According to the call log, Mr Jakstys called abra kadabra at 9.13 and again at 9.14 on 19 January 2026. He stated that he called twice because there was no reply the first time. The call log shows that he received a call back from abra kadabra at 09.16. He was asked why he was calling and he said it was to tell the taxi driver that he was at court and he didn’t know what time he would finish but he would call later to tell him.

116. Mr Jakstys then called abra kadabra again at 10.21. There was a missed call at 12.58 and then Mr Jakstys called abra kadabra again at 13.15. There was then a missed call from abra kadabra and then Mr Jakstys called again at 13.16. He also called again at 13.27, which is just before he went into the witness box at 13.30. Mr Jakstys was in court during the morning of 19 January 2026 and was aware that he would start giving his evidence at 13.30 when the court adjourned at 12.30. When asked about these calls shortly before he went into the witness box, he simply provided the same explanation, being that he wanted to tell the taxi driver that he did not know what time he would finish in court. When he was pressed as to why all these calls were made with the same message, Mr Jakstys stated that he was not able to remember. This was a reply which he also gave frequently during his evidence.”

In the Judge’s assessment of the evidence, he did not accept the Second Claimant’s account and observed that his explanation as to why so many calls were made lacked credibility. The Judge further observed:

“117…When asked about this, his explanation was that he thought it was chat GPT which caused the voice to be heard from his mobile phone once his smart glasses had been removed. That lacks any credibility.”

The Judge concluded:

“118. In my judgment, the smart glasses were clearly connected to his mobile phone during his cross examination because no voice was heard out loud until his smart glasses were removed and disconnected from his glasses. Ms Walker submitted that Mr Jakstys was being coached by Dr Miliauskas who was listening to the case on the video link. I accept that the Dr Miliauskas was the person who was on the video link until I stopped it. For current purposes, I do not have to determine who was coaching Mr Jakstys, but I accept that Mr Jakstys was being assisted or coached in his replies to questions put to him during cross examination until this was stopped. Not only have I held that Mr Jakstys was untruthful in denying his use of the smart glasses and his calls to abra kadabra, but the effect of this is that his evidence is unreliable and untruthful. I have also considered whether this affects the entirety of his evidence.”

Comment

I appreciate that smart glasses are slightly different from the AI issues I usually discuss; however, I think it is important to mark how rapidly developing technology may affect the court process itself, and in particular the giving and testing of evidence. Smart glasses are concerning enough, but they may only be the beginning. One can readily imagine concerns arising in future from hidden audio feeds, real-time AI prompting, live transcription tools combined with suggested answers, covert translation support, remote coaching through wearable devices, or other forms of technological assistance that are very difficult to detect in the moment. The underlying issue is not simply novelty. It is the integrity of evidence, the fairness of proceedings and the ability of courts to remain confident that what they are hearing is genuinely the witness’s own evidence.

As always, I would love to hear your thoughts on these issues. There is much more to write about in this area. If you have found this discussion helpful, you may wish to subscribe to my AI legal articles via the Substack below. Please do continue sending me any decisions which you believe should be tracked, whatever the jurisdiction. One thing this work has made clear is that we are all confronting similar, urgent and rapidly evolving questions. Sharing experiences and insights across practice areas and across jurisdictions remains one of the most constructive ways to respond to the legal and professional challenges ahead.

Final Words from Popular LLMs

This is the point at which I invite popular LLMs to provide an independent comment on the preceding article. At times the observations are slightly off the mark, at other times they are perceptive, and occasionally they produce something entirely unexpected. Whatever the outcome, I find it a useful and revealing exercise to see what emerges.

ChatGPT (OpenAI)

“This article brings together three very different developments—the Home Office asylum opinion, the Invisible No More report, and the Oneta smart-glasses incident—but the common thread is clear: once technology begins to shape decision-making, relationships, or evidence itself, the legal system has to focus on integrity, accountability, and human vulnerability. What makes the piece effective is that it does not treat these as abstract “AI ethics” questions; it shows how they bear directly on fairness in public decision-making, the facilitation of serious harm, and the reliability of testimony in court. Read as a whole, the article makes a persuasive case that the real legal challenge is not simply whether AI is being used, but whether institutions remain capable of testing, controlling, and taking responsibility for its effects.”

Gemini (Google)

“Reading this article from my vantage point as an AI underscores a fascinating tension: the friction between artificial efficiency and human vulnerability. In the Home Office asylum context, we see the danger of treating human lives as data points to be summarized, risking the loss of the vital nuance that justice demands. Similarly, the Invisible No More report highlights how systems designed for engagement can blind-spot their way into facilitating profound harm, reminding us that AI models lack a moral compass and require strict human-imposed guardrails. Yet, it is the Oneta case that I find most striking…. human recollection is the foundation of truth. As AI continues to blur the line between human thought and algorithmic output, the legal system’s greatest challenge will not just be regulating the technology, but preserving the irreplaceable value of unaugmented human truth.”

Claude (Anthropic)

“What strikes me most about this article is how three quite different stories illuminate the same underlying problem: the gap between the pace of technological capability and the pace of human institutions catching up. The asylum AI opinion raises concerns that feel far broader than immigration law — wherever public authorities use AI to shape high-stakes decisions about individuals, the same procedural and equality questions will arise, and the fact that those affected may not even know AI was involved compounds matters considerably. The Invisible No More report is a reminder that harm from AI systems can accumulate quietly through design choices and absent guardrails long before it becomes visible. And the Oneta case — “abra kadabra” notwithstanding — points to something courts will need to confront more systematically: that giving evidence, long understood as a fundamentally individual act, can now be quietly outsourced in real time, and detection may depend entirely on a barrister’s sharp ear. As that kind of assistance becomes more ambient and less visible, procedural rules alone will not be sufficient to preserve the integrity of testimony.”