Live Facial Recognition, AI Bias and the Equality Questions in R (Thompson and Carlo) v Commissioner of Police of the Metropolis

"207. The Commission emphasises that the world has seen the rapid development of artificial intelligence very recently with risks that, unless constrained by domestic and international law, States may use digital technologies to suppress dissent. In the domestic context, the uses to which LFR may be put, and the technology which supports it, will doubtless evolve but this claim is concerned with the terms of the Policy under challenge and (as we have said) its lawfulness now. We see no reason to take a different approach to Bridges."


Key Takeaway

Matthew Lee, a barrister in the UK, considers the artificial intelligence legal issues arising in R (Thompson and Carlo) v Commissioner of Police of the Metropolis (EHRC intervening) [2026] EWHC 915 (Admin) and how the decision may apply to bias, discrimination and equality.

Status: Active/Monitoring
Publication Date: 25 April 2026
Last Updated: 25 April 2026

Ad/Marketing Communication

This legal article/report forms part of my ongoing legal commentary on the use of artificial intelligence within the justice system. It supports my work in teaching, lecturing, and writing about AI and the law and is published to promote my practice. Not legal advice. Not Direct/Public Access. All instructions via clerks at Doughty Street Chambers. This legal article concerns facial recognition, AI Equality, Bias and AI Discrimination.

Introduction

For me, one of the biggest challenges facing those who adopt AI, whether in the private or public sector, is the potential for bias and discrimination. It was for that reason that, almost a year ago, I decided to start documenting every incident I could find concerning AI bias and discrimination in the AI Equality, Bias and AI Discrimination Cases tracker.

Today, I am considering a case that discussed live facial recognition technology (“LFR”) in London.

In simple terms, when deployed, LFR cameras capture live CCTV footage within a defined zone and software creates biometric templates from detected faces. The software then compares templates derived from the live facial images with templates derived from watchlist images. If it identifies a possible match, it creates an alert for officers to consider. In this case, a judicial review was brought challenging the policy governing the use of that technology.
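For readers less familiar with the mechanics, a minimal sketch in Python of the template-comparison step may help. Everything here is hypothetical: the threshold value, the use of cosine similarity and the function names are my own assumptions for illustration, not a description of the MPS system or any vendor's software.

```python
import numpy as np

# Hypothetical illustration only. Real LFR systems use proprietary models
# and vendor-specific thresholds; the value below and all names are my
# own assumptions, not the MPS configuration.

MATCH_THRESHOLD = 0.64  # the "pre-set score" the court describes (value invented)

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity between two biometric templates (embedding vectors)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def check_against_watchlist(live_template: np.ndarray,
                            watchlist: dict[str, np.ndarray]) -> str | None:
    """Compare a template from a live image against watchlist templates.

    Returns the best-matching watchlist identifier if the score clears
    the threshold (an alert for officers to review), otherwise None
    (in which case the biometric data would be deleted immediately).
    """
    best_id, best_score = None, -1.0
    for person_id, template in watchlist.items():
        score = cosine_similarity(live_template, template)
        if score > best_score:
            best_id, best_score = person_id, score
    return best_id if best_score >= MATCH_THRESHOLD else None
```

The sketch makes one legally relevant point visible: the "pre-set score" is a design choice. Raise it and fewer alerts (and fewer false matches) are generated; lower it and more members of the public are flagged for officer review.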

Other commentary has, quite understandably, focused on the interesting facts, privacy rights, protest rights and human rights. However, in this article, I have chosen to focus on something narrower, but in my view very important to the AI legal issues I usually discuss. That is the equality/discrimination issues and what future claimants, public bodies and lawyers may learn from the decision.

R (Thompson and Carlo) v Commissioner of Police of the Metropolis (EHRC intervening)

[2026] EWHC 915 (Admin)

The full facts are worth reading and can be found in the full judgment linked above. For a short overview, the official press summary is here. In brief, the claimants challenged the lawfulness of a policy adopted on 11 September 2024 concerning the use of live facial recognition technology. The court explained why the defendant adopted the technology:

2. LFR is seen by the defendant and other police forces as a powerful tool for the prevention and detection of crime, finding missing persons and the protection of the public. It involves the use of cameras, which are generally moveable rather than fixed, to capture the images of members of the public in a public location. The facial features of individuals are expressed as unique biometric or numerical values. The software then compares those values with the biometric details of a number of persons sought by the police to see whether they achieve a pre-set score indicating the required likelihood of a positive match.

3. If there is no match, the biometric data of the member of the public is automatically and immediately deleted. Any facial images detected by the system which have not been matched are blurred in the images visible to officers. But if the biometrics do match, an alert is created and the two images to which the data relates are examined by officers to decide what if any action to take. The technology is capable of scanning a substantial number of faces per second and a very large number over the course of a deployment.

The court also set out the Claimants’ concerns:

“4. The claimants do not contend that the use of LFR is unlawful as a matter of principle, or in all circumstances. However, they submit that the use of LFR gives rise to “significant civil liberty concerns” which have become greater with the increased deployment of LFR over the last few years. They rely upon a number of matters. LFR involves the extraction of a person’s biometric data, a key attribute of an individual given its unique characteristics. LFR can be deployed without the consent or knowledge of the persons involved, on a large scale or in crowded locations. The overwhelming majority of persons whose facial biometrics are captured and processed are not suspected of any wrongdoing and are not otherwise persons for whom the police are looking. There is a risk of false identification which may result in intrusive questioning to establish identity.”

The court was not deciding whether LFR is good or bad policy. It was deciding whether the MPS policy was lawful on the grounds pleaded. The legal challenges were based on breaches of Articles 8, 10 and 11 of the European Convention on Human Rights (“ECHR”), but ultimately, the court found:

“229. The claimants’ human rights have not been breached whether under article 8 (ground 1) or under articles 10 and 11 (ground 2). Both grounds of challenge fail.”

How AI Featured and the Equality Issues

The judgment contained several important observations, submissions and passages of evidence concerning artificial intelligence, potential discrimination and the Public Sector Equality Duty. I have extracted the main passages below.

The judgment records that:

“144. Mr Thompson is concerned that LFR cameras are often deployed in places with larger Black communities which will result in disproportionate and intrusive targeting of Black people. He is scared of being wrongly identified by LFR again.

145. We should record, however, that the claimants do not raise any issues in relation to the Public Sector Equality Duty under s.149 of the Equality Act 2010.”

The Equality and Human Rights Commission also gave evidence addressing artificial intelligence, facial recognition technology and equality:

164. The Commission relies on the witness statement of Ms Angharad Davies, a Principal in its Compliance Team. Since October 2023, she has led the EHRC work on police use of facial recognition technology, including LFR.

165. Ms Davies explains that the equality and human rights implications of digital services and artificial intelligence have been a key priority for the Commission in recent years. She describes the numerous ways in which the Commission has contributed to public debate and policy-making on facial recognition technology in general and LFR in particular. We do not find her language sufficiently specific to gain a clear picture of what the EHRC has undertaken and achieved. We note however that it has worked with the Home Office, police forces, the College of Policing, the Local Government Association and the Information Commissioner’s Office in relation to the equality and human rights impacts of artificial intelligence. In September 2022, it published guidance called “Artificial Intelligence: meeting the Public Sector Equality Duty”.

The court then addressed the wider context of artificial intelligence and the potential intrusiveness of the technology:

“207. The Commission emphasises that the world has seen the rapid development of artificial intelligence very recently with risks that, unless constrained by domestic and international law, States may use digital technologies to suppress dissent. In the domestic context, the uses to which LFR may be put, and the technology which supports it, will doubtless evolve but this claim is concerned with the terms of the Policy under challenge and (as we have said) its lawfulness now. We see no reason to take a different approach to Bridges.”

The discrimination issue then appeared most directly in the submissions about where live facial recognition might be deployed:

“209. Mr Squires submitted that the arbitrary nature of where LFR may be deployed under the Policy may lead to disproportionate deployment in areas of London which are lived in by ethnic minority communities.

210. We heard no developed or meaningful challenge on discrimination grounds. We accept that, if the Policy had the effect of discriminating against a section of the community, it could or would give rise to arbitrary intrusion on individual rights and would not have the quality of law. However, the risk and potential scope for discrimination on grounds of race was no more than faintly asserted by Mr Squires. We are not able to accept, on the thin submissions advanced before us, that concerns about discrimination infect the legality of the Policy.”

Comment

The public lawyer in me is naturally interested in several aspects of this judgment, especially the “quality of law” requirement, the relationship between policy and discretion, and the court’s approach to Articles 8, 10 and 11 ECHR. Those are important issues, and I may come back to them in a future legal article.

It is important, though, to remember what this case was and was not. It was not a general merits review of LFR as a policing tool. Nor was it a ruling that LFR will be lawful or unlawful in every future use. The court was concerned with the lawfulness of the Metropolitan Police Service’s September 2024 policy, on the grounds pleaded and on the evidence before it.

For this article, my focus is narrower. The judgment records that no meaningful challenge was made on discrimination grounds and that no issue was raised under the Public Sector Equality Duty. Even so, the judgment mentions, hints at, or leaves open a number of potential AI, equality and discrimination issues. Those issues may matter not only for future challenges to LFR, but also for wider legal challenges to AI-assisted decision-making.

For example, the judgment records Mr Thompson’s concern that “LFR cameras are often deployed in places with larger Black communities which will result in disproportionate and intrusive targeting of Black people.” Although no Public Sector Equality Duty issue was raised in this case, future challenges to AI-enabled surveillance may confront it. If a technology is deployed more often in particular communities, or has unequal effects by reference to race or another protected characteristic, the legal question may include whether the public authority properly understood, assessed and monitored the equality impact of using it in that way.

Those issues may not be confined to public bodies. Similar issues may also arise in private law claims, depending on who is using the technology and how it is used. In an appropriate case, arguments may be framed through discrimination, harassment, data protection, misuse of private information or other civil causes of action.

The central question in future cases may not be whether live facial recognition is “AI” in the abstract. It may be where, if anywhere, the alleged discrimination enters the system. For example, is it in the algorithm’s performance, the match threshold, the composition of the watchlist, the data used for identification, the geography of deployment, the response to an alert or the absence of meaningful monitoring after the system is used?

That matters because discrimination in AI-enabled systems may be difficult to locate with precision. Different tools, human decisions, datasets and operational choices may all interact at different stages. A model may perform well in technical testing and still be used in a way that exposes one community or group to more surveillance than another. A human may review an alert and still be operating within a system whose equality impact has not been fully understood.
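That last point, about monitoring, has a practical dimension. One way a deploying body might begin to understand a system's equality impact is to disaggregate outcomes after each deployment. The sketch below is illustrative only: the log fields, the notion of a per-group false-alert rate and the code itself are my assumptions, not a description of any force's actual monitoring regime.

```python
from collections import defaultdict

# Hypothetical illustration only: field names ("group", "confirmed") and
# the per-group false-alert rate are assumptions for the sake of example.

def false_alert_rates(deployment_log: list[dict]) -> dict[str, float]:
    """Disaggregate false alerts (alerts officers judged to be
    non-matches) by demographic group."""
    alerts = defaultdict(int)
    false_alerts = defaultdict(int)
    for record in deployment_log:
        group = record["group"]          # hypothetical demographic field
        alerts[group] += 1
        if not record["confirmed"]:      # officer rejected the match
            false_alerts[group] += 1
    return {g: false_alerts[g] / alerts[g] for g in alerts}

# Toy data: a marked disparity between groups would be the kind of
# signal an equality impact assessment ought to surface and explain.
log = [
    {"group": "A", "confirmed": True},
    {"group": "A", "confirmed": False},
    {"group": "B", "confirmed": False},
    {"group": "B", "confirmed": False},
]
print(false_alert_rates(log))  # {'A': 0.5, 'B': 1.0}
```

Even a crude measure of this kind would at least make visible the question the court found only faintly asserted in this case: whether the system's real-world operation treats groups differently.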

Before closing, I should also revisit the Commission’s emphasis that “the world has seen the rapid development of artificial intelligence very recently with risks that, unless constrained by domestic and international law, States may use digital technologies to suppress dissent…” That passage seems to me to matter beyond this individual challenge. It recognises that AI-enabled surveillance is developing quickly, and that public law may increasingly be asked to test not only whether such technology can lawfully be used, but whether the safeguards around its use are sufficiently attentive to equality and human rights considerations. That, in my view, is where some of the most important future challenges may lie.

If you have found this legal article helpful, you may wish to subscribe to my AI law articles via the Substack below. Please do continue sending me decisions that you think should be tracked, whatever the jurisdiction. One of the clearest lessons from this work is that lawyers, judges and court users in different systems are often confronting closely related problems. Sharing decisions, experiences and reflections across practice areas and across jurisdictions remains one of the most constructive ways to respond.

Final Words from Popular LLMs

This is the point at which I invite an LLM to provide an independent comment on the preceding article. At times the observations are slightly off the mark, at other times they are perceptive, and occasionally they produce something entirely unexpected. Whatever the outcome, I find it a useful and revealing exercise to see what emerges.

ChatGPT (OpenAI)

“The difficult legal work in future AI-surveillance cases will be to identify precisely where unequal treatment or unequal impact enters the system: not only in the facial-recognition model, but also in the watchlist, deployment geography, policing priorities, officer discretion, feedback loops and post-deployment monitoring. A system may be technically accurate in aggregate yet still operate in a discriminatory way once embedded in real-world policing.”

Gemini (Google)

“This is an insightful analysis that accurately pinpoints a critical vulnerability in the current legal approach to AI surveillance: the intersection of technical performance and operational deployment. While much of the debate around live facial recognition focuses on the underlying algorithms, this article (not the judgment) correctly emphasizes that bias can easily be introduced through human operational choices, such as where cameras are geographically placed and whose faces populate the watchlists. As AI systems become more deeply embedded in law enforcement, courts will inevitably have to grapple with these nuanced discrimination and equality arguments under the Public Sector Equality Duty and beyond. Ensuring that AI serves public safety without disproportionately infringing on the civil liberties of specific communities will require exactly the kind of rigorous, context-aware legal scrutiny advocated for in this piece.”

Claude (Anthropic)

“What strikes me most about this judgment is the gap it exposes between the pace of AI deployment and the pace of legal doctrine. The article is right to identify that the discrimination question may not have a single clean entry point: it may be distributed across the algorithm, the watchlist, the deployment geography, and the human responses that follow. The real challenge for law, as I read it in this article, is whether the doctrine can move from reactive to anticipatory fast enough to keep pace with technology that scales in ways that individual legal challenges, by their nature, cannot.”