AI and Disability: Administrative Review Tribunal (ART) of Australia Decision on Disability Supports

Ad/Marketing Communication

This legal article/report forms part of my ongoing legal commentary on the use of artificial intelligence within the justice system. It supports my work in teaching, lecturing, and writing about AI and the law and is published to promote my practice. Not legal advice. Not Direct/Public Access. All instructions via clerks at Doughty Street Chambers. This legal article concerns AI Law.

Introduction & Background

Of all the Australian cases I have considered, this one has resonated strongly as I foresee similar issues emerging in UK litigation concerning reasonable adjustments under the Equality Act 2010. AI and disability have been brought into sharp focus by the recent tribunal decision in Johnstone and National Disability Insurance Agency (NDIS) [2025] ARTA 106. Legal practitioners, disability advocates, and interested readers should carefully consider that decision.

The detailed judgment examines various supports requested by a participant with complex disabilities. Significantly, it explores the role of “ChatGPT” and other AI-driven technologies in determining “reasonable and necessary” supports under Australian law.

Below, I summarise the core case details, the tribunal’s decision, the specific role of AI technology, and how similar issues might be addressed under UK equality legislation.

Case Summary

The applicant had multiple impairments, including chronic cardiac disease, spinal injuries, Autism Spectrum Disorder (ASD), and Attention Deficit Hyperactivity Disorder (ADHD), resulting in high support needs at home and within the community.

The National Disability Insurance Agency (NDIA) manages Australia’s National Disability Insurance Scheme (NDIS). The central legal issue was whether the requested supports, including respite accommodation, specialised wheelchairs, home automation, assistive technologies, and AI-based services, met the statutory criteria of “reasonable and necessary” under the National Disability Insurance Scheme Act 2013 (Cth), as amended by the National Disability Insurance Scheme Amendment (Getting the NDIS Back on Track No. 1) Act 2024 (Cth).

The tribunal approved increased respite days, additional support worker hours, two-way radios, cooling aids, and supplements. However, it refused other requests, such as an advanced off-road wheelchair, dedicated meal-preparation shifts, and various technology subscriptions, finding that they were not value-for-money or otherwise appropriate.

How AI and Disability Featured in This Case

A particularly novel request concerned funding for a premium subscription to ChatGPT. The tribunal’s reasoning is set out in paragraphs 168 to 173, which I summarise below.

  1. The Applicant argued that, while a free version of ChatGPT exists, it was insufficient. The advantages cited for the subscription version included: enhanced buffer capacity for complex requests; greater consistency and reliability; adaptation to the Applicant’s communication style; the ability to process external documents; and compatibility with additional AI tools or ‘plug-ins’.
  2. The Agency argued that the subscription posed potential risks due to inaccuracies and did not offer value-for-money compared with the free version. The tribunal rejected the argument regarding harm, finding the Applicant was aware of, and capable of managing, those risks.
  3. Oral evidence illustrated the subscription’s practical benefits, reducing caregiver burden through improved communication assistance and document management.
  4. The tribunal distinguished the decision in Gelzinnis and National Disability Insurance Agency [2021] AATA 3970, where internet access was critical to reducing reliance on support workers because of specific environmental and personal circumstances.
  5. Ultimately, the tribunal held that the paid ChatGPT version lacked sufficiently distinct advantages to constitute value-for-money relative to the free option. Despite acknowledging some merit in the Applicant’s experience, it found the evidence insufficient to justify funding the subscription.

Comment – How AI and Disability Intersect under the Equality Act

This is the first tribunal decision I’ve seen that explicitly evaluates the differences between the free and paid versions of ChatGPT. Given my own extensive use of large language models (LLMs), I find it difficult not to sympathise with the Applicant’s position. The difference between premium and free versions is significant enough to raise genuine concerns about creating a two-tier AI system. Those who can afford premium versions could gain much greater assistance than those limited to free alternatives, potentially leading to inequality.

There are numerous LLMs, each with distinct strengths and weaknesses. In my experience, I’ve found a mixture beneficial, although I certainly have preferences. Over time, I’ve learned which prompts work best, and the benefit of advanced memory retention offered by certain models has become evident. Below is a concise summary of some key differences I’ve noticed. Please note, these observations are my personal views, and others may have different experiences:

  1. Advanced versions offer superior memory retention, essential for individuals who rely on consistent interactions and personalised experiences.
  2. Premium models are typically more accurate and reliable, which may prove vital for users with disabilities, where even minor misunderstandings could significantly affect quality of life and safety.
  3. More expensive AI versions usually have better integration features, supporting tailored solutions beyond simple text interactions.

Although the Tribunal considered similar issues, it applied a different legal framework from the Equality Act 2010, which would govern in the UK context. I have been reflecting on how such an issue might unfold in a claim for reasonable adjustments under section 20 of the Equality Act. Different factors, including those related to cost, might carry less weight, and when all the circumstances are considered, this could lead to different outcomes.

There is a notable tension between measurable ‘value-for-money’ assessments and the intangible yet significant user experience. In this case, although the Applicant’s evidence effectively highlighted the day-to-day impact of premium AI features, the tribunal placed greater emphasis on the absence of clear expert evidence articulating these differences. Under an Equality Act 2010 claim, courts might face similar challenges, weighing expert evidence against personal experience. As previously noted, personal experiences with LLMs vary significantly, so there may be some difficulty in obtaining cogent and consistent evidence on the issue.

To be clear, it is unlikely that a court would readily dismiss requests for advanced AI support when reasonable adjustments are required. Representatives would likely bring the decision in Heskett v Secretary of State for Justice [2020] EWCA Civ 1487 to the tribunal’s attention. In that case, the Court of Appeal held that while cost can form part of the justification, it cannot be the sole reason for refusing necessary adjustments.

Decision-makers would therefore need to carefully assess whether declining to provide premium AI tools represents a proportionate means of achieving a legitimate aim, such as financial sustainability. They would also need to consider whether alternative, less costly measures could effectively meet an individual’s needs, a determination that will be highly fact-sensitive.

Tribunals and courts will benefit significantly from a clearer understanding of the distinctions between AI models. The subtle technical differences between free and premium versions can have considerable real-world impact and will shape future cases. My aim in this blog has been partly to highlight these nuanced considerations, which decision-makers should carefully evaluate. This case is significant as it explicitly addresses issues at the intersection of AI and disability.

Greater legal clarity on funding advanced AI tools could encourage public agencies to develop fairer, clearer, and more accessible guidelines on technology support.

Those with duties under equality legislation should approach decisions regarding AI support cautiously. There is a risk that restrictive funding policies for emerging technologies might unintentionally discourage innovative and genuinely beneficial solutions. This highlights a gap between current evaluation frameworks and rapidly evolving technologies, which needs careful attention.

Ultimately, this case highlights the importance of tribunals developing a thorough understanding of AI technologies within the specific legal frameworks they operate under, especially when dealing with vulnerable individuals and equality considerations. Recognising and properly evaluating nuanced differences in AI capabilities will be essential to accurately assessing their true value and ensuring fairness in future decisions.