England and Wales (Tax) Tribunal Judge uses AI to Make Judicial Decision and Explains Why: 5 essential takeaways from a tax decision

“I have used AI in the production of this decision… This decision has my name at the end. I am the decision-maker, and I am responsible for this material. The judgment applied - in the sense of the evaluative faculty, weighing-up the arguments, and framing the terms of the order - has been entirely mine.”

Ad/Marketing Communication

This legal article/report forms part of my ongoing legal commentary on the use of artificial intelligence within the justice system. It supports my work in teaching, lecturing, and writing about AI and the law and is published to promote my practice. Not legal advice. Not Direct/Public Access. All instructions via clerks at Doughty Street Chambers. This legal article concerns AI Law.


Introduction – Judge Uses AI

A Tribunal Judge has used AI in reaching a decision – so what do we learn? I am extremely grateful to Julia Nelson and Avaia Williams of Parklane Plowden Chambers for bringing this judgment to my attention:

VP Evans (as executrix of HB Evans, deceased) & Ors v The Commissioners for HMRC

[2025] UKFTT 1112 (TC)

It is an important one. The Tribunal Judge expressly stated that he used AI in reaching his decision and showed exactly how. This may well be the first case of its kind, so let’s get straight into it.

VP Evans v HMRC: the Facts

This case was a tax appeal in the First-tier Tribunal (Tax Chamber), which hears appeals against certain HMRC decisions. The underlying eleven appeals each challenged Closure Notices issued by HMRC concerning Capital Gains Tax liabilities arising from tax planning arrangements involving offshore trusts and the application of the double taxation conventions (DTCs) between the UK and New Zealand, and between the UK and Mauritius.

The Tribunal Judge was tasked with making an interlocutory case management decision on the appellants’ consolidated application for disclosure of documents by HMRC. The application was determined on the papers, pursuant to Rule 29(1)(b) of the Tribunal Procedure (First-tier Tribunal) (Tax Chamber) Rules 2009.

After setting out and analysing the parties’ respective positions, the Tribunal concluded that the application should be granted in part:

“Disclosure is to be effected by HMRC by way of an updated List of Documents, clearly identifying documents not hitherto disclosed, to be served by no later than 4pm 30 October 2025. By that same time and date, inspection of documents not hitherto disclosed is to be done by way of the provision of copy documents. HMRC shall inform the Tribunal that this has been done. I have directed 30 October of my own initiative, and on the footing that (i) HMRC is not dealing with this ab initio; (ii) a second-sweep is likelier to be quicker and easier than the first time round.

All other directions are stayed pending disclosure. Within 28 days of the date of disclosure, the parties shall contact the Tribunal informing it as to any further directions, or consequential timings, agreed if possible.” Paragraphs 40 and 41.

However, the Tribunal Judge went further, making additional observations in relation to “The Use of AI”.

The Tribunal Judge Uses AI and Why

At paragraph 42, the Tribunal Judge stated plainly and openly:

“I have used AI in the production of this decision.”

He continued:

“This application is well-suited to this approach. It is a discrete case-management matter, dealt with on the papers, and without a hearing. The parties’ respective positions on the issue which I must decide are contained entirely in their written submissions and the other materials placed before me. I have not heard any evidence; nor am I called upon to make any decision as to the honesty or credibility of any party.” Paragraphs 42-44.

The Tribunal Judge referred to the Practice Direction on Reasons for Decisions, released on 4 June 2024, in which the Senior President of Tribunals wrote:

“Modern ways of working, facilitated by digital processes, will generally enable greater efficiencies in the work of the tribunals, including the logistics of decision-making. Full use should be made of any tools and techniques that are available to assist in the swift production of decisions.”

The Tribunal Judge continued:

“I regard AI as such a tool, and this is the first decision in which I have grasped the nettle of using it. Although judges are not generally obliged to describe the research or preparatory work which may have been done in order to produce a judgment, it seems to me appropriate, in this case, for me to say what I have done.”

The Tribunal Judge reminded us that the Senior President’s guidance has recently been endorsed by the Upper Tribunal: see Medpro Healthcare v HMRC [2025] UKUT 255 (TCC) at [40] et seq (Marcus Smith J and UTJ Jonathan Cannan).

The Judge then addressed the judiciary’s guidance document, AI: Guidance for Judicial Office Holders:

“…It is available online. It updated and replaced a guidance document originally issued in December 2023. The stated aim of the guidance was to assist judicial office holders in relation to the use of AI. It emphasises that any use of AI by or on behalf of the judiciary must be consistent with the judiciary’s overarching obligation to protect the integrity of the administration of justice. The guidance mandated the use of a private AI tool, Microsoft’s ‘Copilot Chat’, available to judicial office holders through our platform, eJudiciary. As long as judicial office holders are logged into their eJudiciary accounts, the data they enter into Copilot remains secure and private. Unlike other large language models, it is not made public…

Principally, I have used AI to summarise the documents, but I have satisfied myself that the summaries – treated only as a first-draft – are accurate. I have not used the AI for legal research.

I am mindful that “the critical underlying principle is that it must be clear from a fair reading of the decision that the judge has brought their own independent judgment to bear in determining the issues before them”: see Medpro at [43]. This decision has my name at the end. I am the decision-maker, and I am responsible for this material. The judgment applied – in the sense of the evaluative faculty, weighing-up the arguments, and framing the terms of the order – has been entirely mine…” Paragraphs 47 to 49.

Comment

There is a great deal to discuss about this decision. For now, I will set out some initial thoughts so that readers can carefully review the judgment themselves and then share their observations with me. I intend to write a more detailed follow-up analysis on this in due course. This decision may carry wider implications.

Firstly, the Tribunal Judge deserves real recognition for taking this step. It is the first decision I have seen in any court or tribunal in which such candid disclosure about the use of AI has been given. There was no obligation on the Judge to be so open; his decision to do so provides us with valuable food for thought.

Secondly, the Tribunal Judge makes an interesting point about why this application was particularly well suited to the approach. He noted that: (1) it was a discrete case-management matter; (2) it was dealt with on the papers, without a hearing; (3) the parties’ positions were set out entirely in their written submissions and supporting materials; (4) no evidence was heard; and (5) no assessment of honesty or credibility was required. That reasoning potentially opens a broader conversation about the contexts in which AI might be used by Judges.

Thirdly, the Tribunal Judge’s reference to the Practice Direction from the Senior President of Tribunals is noteworthy. The senior judiciary have often spoken about the benefits that technological innovation can bring to legal work and the administration of justice. I was reminded of my recent discussion with the Northern Circuit on AI adoption, and in particular of a speech delivered by the Master of the Rolls at the LawtechUK Generative AI Event on 5 February 2025. He expressed a similar sentiment:

“As many of you will know, I am often asked to speak about AI and the Law. Only last week, I spoke at the launch of Justice’s ‘AI in Our Justice System’ Report. I was struck by the reactions of some of the lawyers in the audience: nodding vigorously when the risks of AI are mentioned and freezing when it was suggested that even lawyers might have to find ways to use AI to expedite and reduce the cost of both legal advice and dispute resolution.”

When lawyers suggest to me that AI should not be used at all because of its risks, I often point out that such a view may be overly simplistic. In practice, lawyers will need to become well acquainted with these tools. This is not only because, as the senior judiciary have suggested, lawyers might have to find ways to use AI “to expedite and reduce the cost of both legal advice and dispute resolution”, but also because it may soon be arguable that a failure to adopt or recognise AI could amount to negligence, particularly if a party, witness, opponent or judge has used such a tool in a way that could impact a client’s case. I will expand on that argument in a subsequent article.

Fourthly, there is a further point of interest concerning the security of information placed into Copilot:

“As long as judicial office holders are logged into their eJudiciary accounts, the data they enter into Copilot remains secure and private. Unlike other large language models, it is not made public.”

At this stage, I will not express a firm view on that assurance but would welcome insights from the technical experts who follow this blog. This is clearly an important matter we should all be thinking about carefully.

Finally, the Judge explained that AI was used to produce a first-draft summary, which he then reviewed for accuracy. It was not used for legal research. Regular readers will know that I have often highlighted the dangers of AI in legal work. Hallucinations are not confined to fabricated citations; they can also appear in summaries, factual narratives, and other outputs. The key point, as the Judge himself emphasised, is careful oversight:

“This decision has my name at the end. I am the decision-maker, and I am responsible for this material. The judgment applied – in the sense of the evaluative faculty, weighing-up the arguments, and framing the terms of the order – has been entirely mine.”

I do not immediately see why that reasoning could not also apply to certain types of legal research, but would be interested to hear thoughts.

As many readers know, I maintain an AI Hallucination Cases Tracker alongside others. What is less well known is that I also maintain a Judicial AI Use Tracker (How are Judges Using AI?), which can be accessed here. You will see at least two cases where it was alleged that false citations appeared in judicial decisions, though the exact reasons were not confirmed. Please keep sending me anything you find relevant to the use of AI in law.

As ever, I am so grateful to those who reach out, and I hope to build this community significantly. Please enter your email below if you wish to receive regular updates.

Also, follow me on LinkedIn or other social media (links above and below) so we can discuss these important issues.

Final Comment from ChatGPT 5 Pro

This is where I invite a premium AI model to comment on or critique the preceding discussion. This week, GPT 5 Pro states:

On AI in judging: This is calmly significant. The judge used AI only to summarise and made the methodology transparent. That squares neatly with the 2024 Practice Direction’s push for concise reasons and the 2025 AI Guidance’s emphasis on security, verification and judicial ownership. We should not fetishise the tool. What matters is the standard of justice. The test remains whether the reasons are adequate and whether the judge has applied independent judgment. The Medpro discussion on reasons reminds us that adequacy does not mean prolixity. Used like this, AI can help sift and structure materials in paper determinations, saving scarce judicial time while keeping the decision maker firmly in charge.