Updated Artificial Intelligence (AI) Guidance for Judicial Office Holders 31 October 2025

Ad/Marketing communication

This legal article/report forms part of my ongoing legal commentary on the use of artificial intelligence within the justice system. It supports my work in teaching, lecturing, and writing about AI and the law and is published to promote my practice. Not legal advice. Not Direct/Public Access. All instructions via clerks at Doughty Street Chambers.

Introduction

The recently updated AI Guidance for Judicial Office Holders was published on Friday 31 October 2025. For those interested, I considered the previous guidance here. I am currently working through the document in detail and will share a fuller analysis shortly. In the meantime, the Guidance can be read directly on the Judiciary website. This post offers a brief overview of key points from the AI Guidance that may assist court and tribunal users who have engaged with AI tools.

Common Terms

The AI Guidance begins by setting out a helpful list of common terms. Of particular note is its clear definition of “hallucination”, which may assist those encountering this term for the first time in a legal or technical setting:

“Hallucination: AI hallucinations are incorrect or misleading results that AI models generate. These errors can be caused by a variety of factors, including insufficient training data, the model’s statistical nature, incorrect assumptions made by the model, or biases in the data used to train the model.”

Guidance for responsible use of AI in Courts and Tribunals

Paragraph 3 of the AI Guidance provides important direction on the responsible use of AI in judicial settings. It highlights three broad duties: to understand AI and its applications, to uphold confidentiality and privacy, and to ensure accountability and accuracy. On the latter, the AI Guidance states:

The accuracy of any information you have been provided by an AI tool must be checked before it is used or relied upon.

Information provided by AI tools may be inaccurate, incomplete, misleading or out of date. Even if it purports to represent the law of England and Wales, it may not do so. This includes cited source material which might also be hallucinated. AI tools may “hallucinate”, which includes the following:

  • make up fictitious cases, citations or quotes, or refer to legislation, articles or legal texts that do not exist,
  • provide incorrect or misleading information regarding the law or how it might apply, and
  • make factual errors.

Be aware of Bias

The AI Guidance also acknowledges the inherent risk of bias in AI systems. This section is particularly valuable in linking the discussion back to principles of equality and fair treatment:

“AI tools based on LLMs generate responses based on the dataset they are trained upon. Information generated by AI will inevitably reflect errors and biases in its training data, perhaps mitigated by any alignment strategies that may operate. You should always have regard to this possibility and the need to correct this. You may be particularly assisted by reference to the Equal Treatment Bench Book.”

The Responsibility of Using AI

The AI Guidance is clear that personal responsibility remains central to judicial work. It explains that while AI tools can assist, they cannot substitute for direct engagement with evidence or decision-making:

“Judicial office holders are personally responsible for material which is produced in their name. Judges must always read the underlying documents. AI tools may assist, but they cannot replace direct judicial engagement with evidence.

Judges are not generally obliged to describe the research or preparatory work which may have been done in order to produce a judgment. Provided these guidelines are appropriately followed, there is no reason why generative AI could not be a potentially useful secondary tool.”

The AI Guidance also touches upon practical security and collaboration points:

“Follow best practices for maintaining your own and the court/tribunal’s security. Use work devices (rather than personal devices) to access AI tools. Before using an AI tool, ensure that it is secure. If there has been a potential security breach, see (II) above.

If clerks, judicial assistants, legal officers/advisers, or other staff are using AI tools in the course of their work for you, you should discuss it with them to ensure they are using such tools appropriately and taking steps to mitigate any risks. If using a HMCTS or MoJ supplied laptop, you should also ensure that such use has HMCTS service manager approval.”

Be aware that court/tribunal users may have used AI tools

The AI Guidance also turns its attention to the growing use of AI by lawyers and litigants. It sets out expectations that balance realism with responsibility:

“Some kinds of AI tools have been used by legal professionals for a significant time without difficulty. For example, TAR is now part of the landscape of approaches to electronic disclosure. Leaving aside the law in particular, many aspects of AI are already in general use: for example, in search engines to auto-fill questions, in social media to select content to be delivered, and in image recognition and predictive text.

All legal representatives are responsible for the material they put before the court/tribunal and have a professional obligation to ensure it is accurate and appropriate. Provided AI is used responsibly, there is no reason why a legal representative ought to refer to its use, but this is dependent upon context.

Until the legal profession becomes familiar with these new technologies, however, it may be necessary at times to remind individual lawyers of their obligations and confirm that they have independently verified the accuracy of any research or case citations that have been generated with the assistance of an AI chatbot.

AI chatbots are now being used by unrepresented litigants. They may be the only source of advice or assistance some litigants receive. Litigants rarely have the skills independently to verify legal information provided by AI chatbots and may not be aware that they are prone to error. If it appears an AI chatbot may have been used to prepare submissions or other documents, it is appropriate to inquire about this, ask what checks for accuracy have been undertaken (if any), and inform the litigant that they are responsible for what they put to the court/tribunal. Examples of indications that text has been produced this way are shown below.

AI tools are now being used to produce fake material, including text, images and video. Courts and tribunals have always had to handle forgeries, and allegations of forgery, involving varying levels of sophistication. Judges should be aware of this new possibility and potential challenges posed by deepfake technology.

Another form of fake material of which you must be aware is so called “white text”, which consists of hidden prompts or concealed text inserted into a document so as to be visible to the computer or system but not to the human reader. This possibility underscores the importance of judicial office holders’ personal responsibility for anything produced in their name.”

Comment

I will provide a detailed comparison between this update and the April 2025 AI Guidance in a forthcoming post. It is encouraging to see that the Judiciary continues to refine its approach as new technologies and challenges arise. Regular updates to the AI Guidance demonstrate a thoughtful, iterative process that reflects both caution and progress.

For those interested in the wider legal landscape surrounding AI hallucinations, judicial use of AI, equality, discrimination, and bias, I have created a dedicated hub linking all of my Legal AI Trackers. These include resources exploring questions such as whether AI will one day replace lawyers or judges. If you have examples or insights that could assist these projects, I would be delighted to hear from you. The Trackers are still being updated, so please bear with me as they continue to evolve and, if you haven’t already, consider subscribing below for regular updates.