Two AI Articles in Counsel Magazine: Chatbot-related harm and Judicial use of AI/risk of gender bias

Ad/Marketing Communication

This legal article/report forms part of my ongoing legal commentary on the use of artificial intelligence within the justice system. It supports my work in teaching, lecturing, and writing about AI and the law and is published to promote my practice. Not legal advice. Not Direct/Public Access. All instructions via clerks at Doughty Street Chambers. This legal article concerns Chatbot-related harm and Judicial use of AI/risk of gender bias. Subscribe here.

Introduction

I wanted to share the January 2026 edition of Counsel Magazine, which aligns closely with a number of themes I explore in my legal writing. In particular, two of the articles speak directly to areas I have been focusing on recently, namely chatbot-related harm and the judicial use of AI, including the associated risk of gender bias.

Chatbot-related harm (Justice Matters)

Full Article Here

The first is my own piece, Chatbot-related harm, which updates my thoughts on the emerging litigation in the United States. My previous legal articles on these cases can be read here:

The cases discussed raise difficult questions about responsibility, design choices, and the legal frameworks we currently rely on when harm arises through systems that feel personal, responsive and emotionally engaging. Since writing that piece, there have been significant updates in each case, which I will explore in a future legal article.

Judicial use of AI and risk of gender bias (Justice Matters)

Full Article Here

Alongside it, and in many ways in quiet conversation with it, is an excellent article by Sophie Mitchell, a barrister specialising in media law at St Pauls Chambers. In Judicial use of AI and risk of gender bias, she examines the growing use of AI tools in judicial contexts and the risk that such systems may replicate or intensify existing inequalities, particularly around gender. It is a careful, thoughtful contribution that asks the reader to pause before assuming that human oversight alone is sufficient protection, particularly as AI tools involving automated decision-making are rolled out across the Ministry of Justice. Although it is not a legal case, I will add a link to it within my AI Equality, Bias and AI Discrimination Case Tracker.

Explore the Full Range of Excellent Legal Articles in Counsel Magazine

January 2026 issue Here

Counsel Magazine publishes work across a broad range of legal practice areas and the latest issue reflects that breadth. In addition to the articles focused on artificial intelligence, it includes several pieces on other current legal topics. The issue opens with an article on Kirsty Brimelow KC, outlining elements of her reform agenda, including matters relating to child justice and aspects of culture at the Bar. The article also identifies her stated priorities, particularly the protection of jury trials and the need for additional funding for the justice system. Enjoy reading the full issue!

If AI legal issues are of interest, please consider subscribing below.

Final Words from ChatGPT 5.2 Pro and Gemini 3 Pro

This is where I invite premium AI models to comment on or critique the preceding discussion. Below are the most interesting aspects of each response.

ChatGPT:

“Taken together, the recurring problem is not whether AI can “help”, but where it starts to steer: emotionally engaging chatbots can be tuned to deepen reliance in ways that make foreseeable harm more likely, while judicial-facing tools can repackage historical bias as neutral “signal” and so distort outcomes under the banner of efficiency. The most robust response in both contexts is to treat AI outputs as inherently contestable by design, with clear disclosure of AI use, an auditable record of prompts and outputs, and constraints that privilege user safety over engagement, especially where vulnerability is predictable. Accountability then becomes concrete rather than abstract: the more a system is engineered to influence behaviour or decision-making, the stronger the case for duties around rigorous testing, ongoing monitoring, and swift mitigation when risks surface.”

Gemini:

“The intersection of chatbot-mediated harm and algorithmic bias in the judiciary illustrates the profound responsibility inherent in deploying AI within legal frameworks. As these technologies evolve from passive tools to active participants in the justice system, the challenge lies not just in mitigating error, but in ensuring that efficiency does not come at the cost of human dignity or equitable treatment. The legal community’s scrutiny, as detailed in these articles, is essential to establishing the safeguards necessary for AI to serve as an aid to justice rather than a source of new vulnerabilities.”