AI and Equity: Impacts on Access, Justice, and Inclusion
David J. Cox PhD MSB BCBA-D, Ryan L. O'Donnell MS BCBA

In ABA, the biggest “clinical” problems are not always clinical. They are operational. Who gets deemed eligible. Who gets scheduled first. Who receives the full dosage of services versus a trimmed plan that fits a utilization percentage.
AI now sits inside many of those bottlenecks, sometimes loudly (“AI-powered authorizations”), more often quietly: an “intelligent” scheduling optimizer, a predictive no-show score, a triage tool for waitlists, or a note summarizer that becomes the template everyone copies.
For BCBAs and RBTs, the equity risk is simple: when AI systems touch access decisions, small model errors produce unequal treatment as well as bad data. And unequal treatment becomes self-reinforcing, because the system learns from the world it helped shape.
Equity is not a slogan. In a pragmatic, contextualistic frame, equity is an outcome pattern. Do access, intensity, and results differ by subgroup in ways that cannot be justified by clinically relevant variables? The truth criterion is “successful working,” meaning you judge your approach by whether it produces better outcomes, not by whether it wins an argument (Hayes et al., 1988). 
If you want a behavior analytic definition you can actually use, try this:
Equity is the fair, transparent, and reviewable distribution of opportunities and supports to access effective ABA services, proportional to clinically relevant need and contextually relevant barriers.
Not a vibe. Not a slogan. An observable pattern. If the pattern is off, you change the system.
Below is a clinic-ready way to think about the problem, and a set of guardrails you can stand up without becoming an AI engineer.
Where AI shapes equity in ABA clinics (and the guardrails to install)
1) Eligibility and intake triage
Problem: Many organizations use automation to prioritize intake calls, score likelihood of enrollment, or route cases to clinicians and locations. These systems tend to “like” families who are easy to reach, quick to respond, and fluent in the organization’s preferred communication channel.
The equity problem is that those traits are not clinical variables. They are context variables. A family working two jobs, managing transportation constraints, navigating limited data plans, or communicating in a non-dominant language can look like "low engagement" to a system that equates fast response, or few constraints, with high readiness. The model is not malicious; it is simply blind to the difference between barrier and choice.
Guardrails (Intake):
- Language access by default: offer interpreter-supported intake when preferred, and do not treat auto-translation as consent-ready. Ironically, AI makes this much easier than it used to be, even if it’s not perfect.
- A fast human review lane: any “low priority” intake score triggers a quick BCBA or lead intake review before closure or deprioritization.
- Closure protections: require a minimum outreach standard (number of attempts, multi-channel contact) before “unable to contact” is allowed.
- Subgroup monitoring: track and stratify time-to-contact, time-to-assessment, and closure reasons by primary language, payer, and region.
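The last two guardrails can be sketched as a small audit script. This is a minimal illustration, not a standard schema: the column names (`primary_language`, `days_to_contact`, `closure_reason`, `outreach_attempts`), the sample data, and the `MIN_ATTEMPTS` threshold are all assumptions a clinic would replace with its own intake log and policy.

```python
# Minimal subgroup-monitoring sketch, assuming a toy intake log.
# All column names, data, and thresholds here are illustrative.
import pandas as pd

MIN_ATTEMPTS = 3  # assumed minimum outreach standard before closure

intake = pd.DataFrame({
    "primary_language":  ["English", "English", "Spanish", "Spanish", "English", "Spanish"],
    "days_to_contact":   [2, 3, 9, 11, 1, 8],
    "closure_reason":    [None, None, "unable to contact", None, None, "unable to contact"],
    "outreach_attempts": [1, 2, 1, 4, 3, 3],
})

# 1) Stratify time-to-contact by primary language.
median_days = intake.groupby("primary_language")["days_to_contact"].median()

# 2) Stratify "unable to contact" closure rates by primary language.
utc_rate = (
    intake.assign(utc=intake["closure_reason"].eq("unable to contact"))
          .groupby("primary_language")["utc"]
          .mean()
)

# 3) Flag closures that did not meet the minimum outreach standard.
premature = intake[
    intake["closure_reason"].eq("unable to contact")
    & (intake["outreach_attempts"] < MIN_ATTEMPTS)
]
```

In this toy data, the stratified view surfaces exactly the pattern the guardrail targets: a longer median time-to-contact and a higher "unable to contact" rate for one language group, plus one closure that happened before the minimum outreach standard was met. The point is not the code; it is that each disparity becomes a reviewable number instead of an impression.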
Chiron: The AI Literacy Series for ABA Professionals
A weekly newsletter exploring how ABA professionals can develop essential AI literacy skills to ensure ethical and effective practice in a rapidly changing field.