AI Judicial Assistants and AI Judges? Observations by Lord Justice Birss

“Looking further into the future, one could imagine that AI may very well be able to assimilate much larger quantities of data than a normal human judge. One could then be faced with the situation in which an AI system might be a better decision maker than a human being in those circumstances...”

Ad/Marketing Communication

This legal article/report forms part of my ongoing legal commentary on the use of artificial intelligence within the justice system. It supports my work in teaching, lecturing, and writing about AI and the law and is published to promote my practice. Not legal advice. Not Direct/Public Access. All instructions via clerks at Doughty Street Chambers. This legal article concerns AI Law.

I was very interested to read Lord Justice Birss’s speech at the Life Sciences Patent Network European Conference. His topic, intriguingly titled “The impact and value of AI for IP and the courts”, caught my attention, especially in relation to AI judges. The full speech can be read here.

Lord Justice Birss began by reflecting on AI’s rapid evolution, notably since the rise of ChatGPT two years ago. Yet he rightly reminded us that AI’s presence in specialised fields significantly predates this mainstream recognition. He highlighted DeepMind’s AlphaFold system from 2020 as a game-changer, dramatically improving protein-folding prediction accuracy (a breakthrough particularly impactful for the life sciences community).

What is AI?

One intriguing point Lord Justice Birss raised concerned the very definition of AI. Too often, AI is simplistically described as “machines performing tasks previously done by humans.” However, as he illustrated through the story of Annie Jump Cannon, one of Harvard’s pioneering “computers”, the tasks people and machines perform evolve continuously. A definition reliant on comparing humans and machines thus becomes inherently flawed. Lord Justice Birss instead suggests a practical and contemporary approach, defining AI today as sophisticated machine-learning systems capable of digesting immense datasets to produce predictive models and probabilistic conclusions.

AI’s Practical Role in the Judiciary (AI Judges)

Firstly, Lord Justice Birss considered the impact of AI on the justice system. He explained that AI could significantly help with case consideration by summarising voluminous text into a single page setting out what the case is about. For obvious reasons, that could save substantial court time.

Presently, Court of Appeal judges receive invaluable assistance from judicial assistants who summarise case material, and I know many colleagues who benefitted greatly from holding that position before coming to the Bar. However, extending such support throughout the justice system to district judges, circuit judges, recorders, and deputy DJs is financially impractical. AI systems, Lord Justice Birss speculates, might democratise judicial support, making justice quicker and perhaps fairer without compromising the primacy of human decision-making.

Furthermore, Lord Justice Birss discussed Technology Assisted Review (TAR), already utilised in managing disclosures in complex litigation, demonstrating how AI-assisted reviews offer significant practical value.

Can and Will AI Judge Cases?

My particular interest was piqued by this section of the speech. Lord Justice Birss observed:

“Looking further into the future, one could imagine that AI may very well be able to assimilate much larger quantities of data than a normal human judge. One could then be faced with the situation in which an AI system might be a better decision maker than a human being in those circumstances. I should make clear that I do not believe we’re anywhere near that yet, but from what I read in the literature of the capabilities of AI, I would not like to bet against the idea that this capability will arrive in a not-too-distant future.

The question will then become an important ethical and human rights based one – in which we need to decide whether there are decisions we are prepared to have made by AI, and which decisions should remain the preserve of human beings. One could imagine for example that a decision relating to children and whether someone had committed a crime might be one where we wish to maintain human decision making. On the other hand, one might imagine that a large number of small money claims or some other similar kinds of case, might be more efficiently done by AI, in the first instance. There could then be a right of appeal to human judges after the event.

The question whether a decision which could be made by AI should be, will be determined by ethics and human rights considerations, as I have said. At the moment, the capability is not there, but I rather think that might change in future.”

Accordingly, certain categories of decisions, particularly those involving fundamental rights, ethics, and moral judgements, such as child welfare, might forever remain within human jurisdiction. In contrast, routine, procedural matters might benefit significantly from initial AI adjudication, reserving human judicial review for appeals.

He observes, rightly in my view, that as AI capabilities grow, society will inevitably face complex ethical and human-rights-based deliberations.

Interestingly, he envisions a future in which AI systems not only adjudicate but also justify decisions comprehensively, potentially employing two distinct AIs: one making decisions, another generating coherent written judgements to accompany them. This concept introduces an intriguing layer of accountability, which may help tackle issues of transparency and explainability often cited as barriers to AI adoption in judicial decision-making.

AI and Intellectual Property: Key Legal Questions

Lord Justice Birss touches crucially upon the intellectual property dimension of AI:

  • AI and Inventorship: Citing the landmark Supreme Court decision in Thaler v Comptroller-General of Patents, Designs and Trade Marks [2023] UKSC 49, he confirms UK law currently prohibits recognising AI as a patent inventor. However, Lord Justice Birss insightfully anticipates future complications as AI becomes capable of genuinely independent invention. If there is no credible human inventor involved, how can intellectual property law fairly respond? Lord Justice Birss argues convincingly that merely designating a token human inventor will soon be inadequate, raising complex legal questions that require forward-looking, nuanced solutions.
  • Patentability and AI-driven Obviousness: Another intriguing concern is whether powerful AI systems might render all future inventions legally ‘obvious’ due to their immense data-processing capabilities. Lord Justice Birss points out the potential risk to patent incentives, suggesting we might need a thorough reassessment of the current patent system if this eventuality materialises.
  • Transparency and Disclosure: Lord Justice Birss intriguingly proposes an innovative concept akin to the Budapest Treaty’s microbe-deposit system—a dedicated “AI deposit system” to preserve training data or models for patent transparency, reproducibility, and sufficiency. This approach would offer a balanced, practical solution to emerging disclosure challenges within AI patent applications.

“Garfield”: Democratising Justice Through AI?

“Garfield” is a significant development in legal services, which merits its own blog post, so I will not go into the full details here. Lord Justice Birss succinctly explains what it is:

“It is in effect an AI law firm, in which a single solicitor is in overall charge of what this Garfield system does. It interacts with a litigant using natural language, spoken or written, and guides them through the process of bringing a debt claim in the county court. It prepares the Claim Form and drafts the Particulars of Claim for them. It will help them comply with the pre action protocol and, assuming the litigant wishes to bring proceedings, it will file the Claim Form and Particulars automatically at court, using the API system already available to bring claims electronically. If the defendant responds by e-mail, Garfield will manage that process. It can advise on the implications of what the defendant says. If the claim is not defended, it will obtain default judgment for you. Interestingly Garfield has insurance, and the startup behind it is in close contact with the Solicitors Regulatory Authority about regulation.”

Comment

Lord Justice Birss raised several interesting points in his speech, which I hope to consider in greater detail in later posts. It seems clear to me that the integration of AI into legal practice will occur rapidly and organically. As trainees and pupils enter the profession, they are likely to bring with them a stronger grasp of LLMs and technology generally, as these increasingly form a significant part of both their education and their social lives.

They will likely continue utilising these tools in practice, ensuring such technologies quickly become commonplace. More established practitioners are equally likely to adopt LLMs across various aspects of legal practice. Those who resist this shift may find themselves increasingly disadvantaged, particularly as the reasoning and analytical capabilities of LLMs continue to improve. Indeed, we have not yet begun to witness the full potential of these technologies when effectively integrated with quantum computing.

Perhaps controversially, I believe one of the most valuable skills a young lawyer can possess is the ability to safely and accurately prompt LLMs to produce well-reasoned analysis and argumentation. In time, lawyers adept at harnessing the capabilities of LLMs may prove more effective in serving their clients and assisting the court than those relying solely on traditional methods of legal training.

What I perhaps didn’t fully appreciate before reflecting on Lord Justice Birss’s remarks was the extent to which judges already appear to be utilising LLMs in their deliberations. This practice may not necessarily be problematic, provided judges remain acutely aware of potential risks and rigorously maintain judicial oversight. However, there is undoubtedly a risk that reliance on LLMs could inadvertently diminish independent judicial reasoning, introduce unnoticed biases, or encourage passive acceptance of machine-generated outputs. Ensuring transparency, accountability, and the continuous cultivation of critical legal thought will therefore be essential as the profession navigates this transformative period.

In that regard, we may see increasing requests from advocates, following judgment, for an explanation of which AI tools were used in judicial consideration. Requests may even be made for the exact prompts and generated responses. It will be interesting to see how any such requests are met.