Ad/Marketing Communication
This article forms part of my ongoing legal commentary on the use of artificial intelligence within the justice system. It supports my work in teaching, lecturing, and writing about AI and the law and is published to promote my practice. Not legal advice. Not Direct/Public Access. All instructions via clerks at Doughty Street Chambers. This article concerns the AI Equality, Bias and AI Discrimination Case Tracker.
Tracker Status: Active/Monitoring
Last Verified: 30 January 2026
Latest Case: Johnstone and National Disability Insurance Agency

This tracker offers UK‑based legal commentary and comparative analysis of international case law on AI‑related legal issues. It forms part of lecturing/teaching law and writing/editing legal articles and reports and is communicated solely in connection with promoting or advertising Matthew Lee’s practice.
Subscribe and Trackers Generally
If these issues interest you, please subscribe below and consider my other trackers.
The AI Equality, Bias and AI Discrimination Case Tracker
This tracker focuses on court and tribunal cases involving equality, bias, and discrimination linked to the use of AI. It highlights both the challenges and opportunities that AI brings to equality law, capturing examples where AI systems have contributed to unfair outcomes as well as where they have been used to promote fairness, accessibility and inclusion.
What you can see
You will see two things below: the AI Equality, Bias and AI Discrimination Case Tracker and a set of Charts/Diagrams. The tracker lists Case Name, Date and a Brief Description of the case, each with a link to the source where available.
What the Charts Will Show (when research is complete)
The charts provide an early picture of the data currently being analysed on equality, bias and discrimination, both AI‑related and, where appropriate, non‑AI‑related. They explore the risks and benefits of AI within equality law, offering insight into areas where AI may contribute to bias or inequality as well as areas where it enhances fairness, accessibility, and consistency in decision-making. These are complicated issues, and the charts offer early insights into questions such as:
- What negative outcomes are observed from bias and discrimination related to AI and algorithms?
- What type of discrimination is alleged, admitted or found? (e.g., direct, indirect, harassment, victimisation)
- Which protected characteristics/classes are most frequently engaged?
- Where are issues arising?
- Is AI involved, and how? (e.g. fully automated decision-making, assisted decision, profiling, facial recognition/biometrics, scoring/risk assessment etc)
- What positive outcomes are observed? (e.g. improved consistency, increased access, reduced human bias, fairer resource allocation)
- What outcomes are recorded overall?
- What forms of evidence appear?
- What governance themes recur?
What’s in the Vault
These visuals are powered by private research that is not included in the public tracker. The private materials may include deeper analysis, richer tagging, broader categories, timelines, quotations, primary‑source links, and structured coding for technology, legal bases, outcomes, and remedies. I do not publish the underlying databases or research publicly. If you wish to discuss access to the private research, please contact my clerks in Chambers.
Important Note
This tracker is part of ongoing research. Entries are added and revised as I review judgments, orders, official reports, and other primary materials. The tracker is informational only, not legal advice, and inclusion does not imply wrongdoing or any conclusion beyond what is stated in the cited sources. Terminology varies across jurisdictions; categories are standardised for comparison and may not match the exact wording used in a judgment or decision. Always refer to the official source linked from the entry for authoritative detail. If you spot a factual issue, please let me know and I’ll review it.
The Aim of the AI Equality, Bias and AI Discrimination Case Tracker
The purpose is to gather broad, cross‑jurisdictional data about how discrimination and equality issues, especially those involving automated systems and AI, are surfacing in courts, tribunals, regulator reports, ombuds complaints, settlements, and other credible documents.
Charts
AI Equality, Bias and AI Discrimination Case Tracker (Database)
| Date | Case | Country | Brief Details |
|---|---|---|---|
| 22-Jun-2020 | | UK | Paragraph 2.10 of the report explains: "In early 2020, the Joint Council for the Welfare of Immigrants (JCWI) and the digital rights group Foxglove brought a legal challenge against the Home Office’s use of the Streaming Tool on the basis that: • it amounted to unlawful discrimination based on race contrary to the Equality Act 2010 (EA 2010); • it contained ‘feedback loops’ which could drive further discriminatory decisions within the system. As part of its response to Judicial Review proceedings, the Home Office withdrew the use of the Streaming Tool across all entry clearance operations." |
| 3-Oct-2018 | | UK | The appeal concerned the lawfulness of the use of live automated facial recognition technology (“AFR”) by the South Wales Police Force (“SWP”) in an ongoing trial using a system called AFR Locate. AFR Locate involves the deployment of surveillance cameras to capture digital images of members of the public, which are then processed and compared with digital images of persons on a watchlist compiled by SWP for the purpose of the deployment. |
| 4-Sep-2020 | | USA | The issue was whether a false arrest was caused by an inaccurate facial recognition system. |
| 1-Apr-2022 | | USA | The issue was whether a property management firm’s software and practices discriminated against renters receiving government housing subsidies. |
| 1-May-2022 | | USA | The issue was whether a company’s hiring software unlawfully discriminated against older job applicants. |
| 1-Jun-2022 | US v Meta | USA | The issue was whether a company’s advertising system discriminated against users in housing advertisements. |
| 1-Aug-2022 | FTC v Kochava | USA | The issue is whether a data broker sold sensitive geolocation data from millions of devices, potentially revealing private visits and exposing individuals to serious risks. |
| 1-Oct-2022 | | USA | The issue is whether a bank unlawfully discriminated by race in home mortgage refinancing. |
| 1-Jul-2023 | | USA | Issue: did a tenant screening company’s scoring cause racial discrimination? |
| 1-Apr-2023 | | USA | Issue: does a workplace applicant screening algorithm discriminate against African Americans, people over 40, and disabled individuals? |
| 25-Sep-2023 | | USA | Issue: did AI tools lead to housing discrimination? Did the tools reject internet rental applications from those receiving housing assistance? |
| 2024 | | UK | Issue: whether a Real Time ID Check system in the UK, which relied on Microsoft Real Time ID Check software and required drivers to upload a real-time selfie when using the app, checked against the driver’s profile photo, caused indirect race discrimination, harassment and victimisation. |
| 1-Jan-2025 | | UK | Issue: whether Grammarly Pro was a reasonable adjustment. |
| 1-Feb-2025 | | Australia | Issue: whether the paid version of ChatGPT (among other supports) met the statutory criteria of “reasonable and necessary”. |
| 12-Jun-2023 | | Netherlands | Issue: whether Meta Ireland’s Facebook job‑ad delivery skewed by gender, based on Global Witness tests. |
| 11-Sep-2023 | | USA | Issue: whether a machine-learning algorithm subjected Black policyholders to additional administrative hurdles and delays in processing their claims. |
| 23-Aug-2024 | | Australia | Issue: the court considered trans rights and equality and how gender‑identity discrimination law applies to AI‑mediated identity checks. |
