Judicial Guidance on Artificial Intelligence: England & Wales v New South Wales (Australia)

"…All legal representatives are responsible for the material they present before the court or tribunal and have a professional obligation to ensure its accuracy and appropriateness. Provided AI is used responsibly, there is no requirement for legal representatives to disclose its use, although this depends on the context.
However, until the legal profession becomes fully acquainted with these new technologies, it may occasionally be necessary to remind lawyers of their obligations and to confirm that they have independently verified the accuracy of any research or case citations generated with the assistance of an AI chatbot…"

Ad/Marketing Communication

This legal article/report forms part of my ongoing legal commentary on the use of artificial intelligence within the justice system. It supports my work in teaching, lecturing, and writing about AI and the law and is published to promote my practice. Not legal advice. Not Direct/Public Access. All instructions via clerks at Doughty Street Chambers. This legal article concerns AI Law.

1. Introduction

There is an update to the Judicial Guidance on Artificial Intelligence, which can be read in full here. It explains how judges, clerks, and support staff should (and should not) use artificial intelligence tools in their daily work.

2. Before the Update

Readers of this blog will know that on 5 February 2025 the Master of the Rolls discussed the use of AI in courts and legal services and compared the position with that in New South Wales, Australia. He observed:

“AI tools are not inherently problematic, so long as we understand what they are doing, and use them appropriately. For that reason, we published our Judicial Guidance for the use of AI last year. There are 3 simple messages in that guidance that apply as much to lawyers as to judges.”

In brief, those 3 simple messages are:

  • Generative AI (GenAI) predicts likely word combinations from vast datasets without verifying accuracy against authoritative sources.
  • Avoid inputting confidential data into public Large Language Models (LLMs) to protect information privacy.
  • Always carefully review any GenAI-generated content, as users remain fully responsible for the output.

He then explained the rules in New South Wales, Australia, which state:

  • GenAI must not be used for affidavits, witness statements, or evidential material.
  • Users must verify accuracy of citations when using GenAI for written submissions.
  • GenAI may only be used to prepare expert reports with prior court permission.

He continued:

“It will be interesting to see how that more restrictive approach in New South Wales works out as compared to our approach. I would comment, though, that AI is already being used in many jurisdictions for some of the purposes that the NSW guidance says it should not be. I doubt we will be able to turn back the tide. Our guidance is within the grain of current usage, making clear that the lawyers are 100% responsible for all their output, AI generated or not.

So, to summarise, there are three excellent reasons why all lawyers and judges should embrace AI: those we serve are using it. It will make what we do available to more people, more cheaply, and allow us to do necessary things more quickly, and it will be at the centre of the future work of lawyers, when claims are all about when AI has been used for the wrong things, and AI ought to have been used but was not used.”

3. The Updated Guidance on Court Users Who May Have Used AI Tools

Following this, the Guidance was updated. I will not set out all the differences here, but I will briefly comment on one section that caught my eye, relating to court users “who may have used AI tools”:

Lawyers:

“…All legal representatives are responsible for the material they present before the court or tribunal and have a professional obligation to ensure its accuracy and appropriateness. Provided AI is used responsibly, there is no requirement for legal representatives to disclose its use, although this depends on the context.
However, until the legal profession becomes fully acquainted with these new technologies, it may occasionally be necessary to remind lawyers of their obligations and to confirm that they have independently verified the accuracy of any research or case citations generated with the assistance of an AI chatbot…”


Litigants in Person:

“…AI chatbots are increasingly being used by unrepresented litigants and may represent the only source of legal advice or assistance for some individuals. Litigants rarely have the necessary skills to independently verify legal information provided by AI chatbots and may not realise these tools are prone to error. If it appears that an AI chatbot has been used to prepare submissions or other documents, it is appropriate for judges to inquire about this, ask what accuracy checks (if any) have been performed, and remind litigants that they remain responsible for all material presented to the court or tribunal. Examples of indications that text has been generated by AI are provided below.

AI tools are also being used to create fake materials, including text, images, and videos. Courts and tribunals have always dealt with forgeries and allegations of forgery of varying sophistication levels. Judges should be aware of this emerging possibility and the potential challenges posed by deepfake technology…”

4. Comment

The updated guidance demonstrates a more flexible stance in England and Wales, emphasising that AI usage is allowed so long as lawyers remain entirely responsible for verifying accuracy and ensuring confidentiality. By contrast, the Australian approach, particularly in New South Wales, adopts a stricter, more conservative stance, disallowing GenAI for affidavits and witness statements, and requiring explicit permission for expert reports.

In these early days of AI, I am not sure which jurisdiction has this right. I understand the potential issues with adopting LLMs. Users may become too dependent on AI-generated text and overlook the need for independent verification. This risk is heightened when unrepresented litigants attempt to rely solely on AI tools.

There is also the danger of inputting sensitive or private information into public LLMs, which may not guarantee confidentiality. Further, while AI can streamline research, inaccuracies or embedded biases in the underlying data can lead to flawed legal arguments.

That said, encouraging AI use could foster innovation and keep the profession in step with evolving technology. The stricter approach may guard against misuse or error, but it could also slow the adoption of beneficial AI processes and reduce efficiency.

There is also a serious issue concerning vulnerable litigants and discrimination. On this blog, I have discussed cases where LLMs were used as reasonable adjustments for those with learning difficulties. Do the rules in New South Wales, Australia really assist the court in obtaining the best evidence, or do they leave vulnerable individuals without the benefit of legal advice in significant difficulty? We’ll have to see.

In England and Wales, there is no ban, but caution remains essential when incorporating AI into legal work: verify all AI-generated content, protect sensitive data, and remain accountable for every submission made to the court or tribunal.

This article is part of my broader legal commentary available through my Substack newsletter.
Subscribing ensures you receive immediate updates, in-depth analysis, and exclusive legal insights as they are published.
➔ [Subscribe here to stay informed].