Hidden Algorithms, Visible Consequences: Applied AI Use Cases
David J. Cox, PhD, MSB, BCBA-D, and Ryan L. O'Donnell, MS, BCBA

At the annual ABA conference, the exhibit hall was louder than usual. Booths no longer promised only cleaner graphs or better data collection. They promised “intelligent triage,” “authorization optimization,” and “predictive insights.” Words like smart, adaptive, and automated stretched across banners in clean sans-serif fonts, repeated often enough to feel inevitable.
Jackie, a BCBA attending her first regional conference, had circled a supervision workshop on her schedule. She had come to refine feedback skills, not to evaluate software. But she slowed as she passed a booth labeled “Precision Insights for Modern ABA,” skimming the table for signs of swag and treats.
Behind the counter stood Dr. Lucas, BCBA-D. Crisp badge. New suit. The kind of welcoming smile that suggests both competence and momentum.
“We’re using AI to reduce administrative friction and improve clinical efficiency,” he said.
Jackie stepped closer.
AI in Applied Behavior Analysis: Where Algorithms Actually Show Up
When behavior analysts hear “AI,” they often imagine generative tools like large language models or automated note writers. In reality, the most consequential AI systems in ABA are quieter. They are not new platforms so much as invisible layers inside the platforms we already use.
They sit inside goal libraries, intake portals, authorization builders, scheduling systems, and productivity dashboards. They influence decisions long before a BCBA signs off on a treatment plan. They are not replacing clinical judgment outright. They are shaping what appears in a system before judgment is even exercised.
Lucas began with the goal recommender.
“It analyzes intake data and suggests targets aligned with client profiles,” he explained. “It saves so many hours.”
Jackie nodded. “What was it trained on?”
He hesitated just long enough.
Goal Recommenders in ABA Practice
Most goal recommendation engines are trained on historical treatment plans, diagnostic codes, age bands, and payer approval data. In other words, they learn from what clinicians have previously prioritized, how those goals were written for other clients, what payers have historically approved, and what agencies have successfully billed.
If expressive language goals have historically been approved at higher rates, the system will surface them more often. If certain adaptive or community-based goals were frequently reduced during review, those goals may appear less frequently in suggestions over time.
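To make that mechanism concrete, here is a minimal sketch of an approval-driven ranker. Everything in it is hypothetical: the field names, the data, and the scoring rule. Real recommenders are far more elaborate, but the incentive is the same: goals that historically cleared payer review surface first.

```python
# Hypothetical training data: prior plans tagged with payer approval.
# A real system would use far richer features than this.
historical_plans = [
    {"goal_domain": "expressive_language", "approved": True},
    {"goal_domain": "expressive_language", "approved": True},
    {"goal_domain": "community_participation", "approved": True},
    {"goal_domain": "community_participation", "approved": False},
    {"goal_domain": "adaptive_skills", "approved": False},
]

def approval_rate(domain: str) -> float:
    outcomes = [p["approved"] for p in historical_plans
                if p["goal_domain"] == domain]
    return sum(outcomes) / len(outcomes) if outcomes else 0.0

def rank_suggestions(candidates: list[str]) -> list[str]:
    # Higher historical approval rate means earlier in the list.
    # Nothing about the individual client enters this score.
    return sorted(candidates, key=approval_rate, reverse=True)

print(rank_suggestions(
    ["adaptive_skills", "community_participation", "expressive_language"]))
# ['expressive_language', 'community_participation', 'adaptive_skills']
```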
Not because the algorithm understands quality of life or the environment surrounding each client better than BCBAs do. But because it optimizes for the patterns in the textual data it was given.
That distinction matters.
When clinicians repeatedly accept algorithmic suggestions without modification, subtle shifts begin to occur. Goal distributions across a clinic may narrow. Certain domains may become dominant simply because they move smoothly through administrative channels. Others may gradually decline in frequency, not through intentional clinical reasoning, but through reinforcement embedded in payer history.
Over months and years, those small shifts accumulate. New treatment plans become training data. The model grows more confident in its narrowed pattern. No formal meeting is ever held to reduce scope. It becomes efficient.
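That compounding is easy to simulate. The toy loop below assumes clinicians accept the top suggestion 90 percent of the time and that every accepted plan feeds back into the training data. Both numbers are invented; the direction of the drift is the point.

```python
import random

random.seed(0)

# Invented starting counts: a slight historical edge for one domain.
counts = {"expressive_language": 55, "adaptive_skills": 45}
ACCEPT_TOP = 0.9  # assumed rate at which the top suggestion is accepted as-is

for month in range(24):           # two simulated years
    for _ in range(100):          # 100 new treatment plans per month
        top = max(counts, key=counts.get)
        other = min(counts, key=counts.get)
        chosen = top if random.random() < ACCEPT_TOP else other
        counts[chosen] += 1       # the new plan becomes training data

total = sum(counts.values())
for domain, n in counts.items():
    print(f"{domain}: {n / total:.0%}")
# The 55/45 split drifts toward roughly 90/10. No one decided to narrow scope.
```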
In cultures that optimize for what moves quickly, what moves thoughtfully and slowly often disappears first.
Installing guardrails at this stage does not require rejecting the technology. It requires making its influence visible and optimizing for quality. Disclosure that recommendations are algorithmically generated ensures clinicians know what they are reviewing. Logging overrides lets leadership see where clinical reasoning diverges from automated suggestions. Periodic review of goal domain distributions can reveal whether certain targets are quietly fading. Monitoring how model updates alter suggestion patterns keeps drift from becoming invisible.
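None of this requires vendor cooperation to prototype. Below is a sketch of two of those guardrails, an override log and a periodic domain-distribution check. The function and field names are ours for illustration, not any product's API.

```python
from collections import Counter
from datetime import datetime, timezone

override_log: list[dict] = []

def record_decision(suggested: str, chosen: str, clinician: str) -> None:
    # Keep a record whenever clinical judgment diverges from the suggestion,
    # so leadership can see where and how often that happens.
    if suggested != chosen:
        override_log.append({
            "when": datetime.now(timezone.utc).isoformat(),
            "clinician": clinician,
            "suggested": suggested,
            "chosen": chosen,
        })

def domain_shift(previous: Counter, current: Counter,
                 threshold: float = 0.10) -> dict[str, float]:
    # Flag goal domains whose share of all plans moved more than `threshold`
    # between review periods: a cheap check for quiet drift.
    prev_total, curr_total = sum(previous.values()), sum(current.values())
    flags = {}
    for domain in previous.keys() | current.keys():
        delta = current[domain] / curr_total - previous[domain] / prev_total
        if abs(delta) >= threshold:
            flags[domain] = round(delta, 3)
    return flags

q1 = Counter(expressive_language=60, adaptive_skills=25, community=15)
q2 = Counter(expressive_language=75, adaptive_skills=18, community=7)
print(domain_shift(q1, q2))  # {'expressive_language': 0.15}, worth a meeting
```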
Behavior analysts understand selection by consequences. AI systems optimize loss functions. The outcomes can look similar. The processes are not the same. That difference deserves attention.
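One way to see the difference: the only “consequence” shaping the toy model below is its squared error against past payer decisions (1 = approved, 0 = denied). Client benefit never enters the objective. The numbers and the one-weight-per-domain setup are ours, chosen for transparency rather than realism.

```python
# Gradient descent on squared error against historical payer decisions.
# This loss is the system's entire world; client outcomes appear nowhere.
weights = {"expressive_language": 0.5, "adaptive_skills": 0.5}
history = [("expressive_language", 1), ("adaptive_skills", 0)]
LEARNING_RATE = 0.1

for _ in range(50):
    for domain, approved in history:
        prediction = weights[domain]
        gradient = 2 * (prediction - approved)  # d/dw of (w - approved)^2
        weights[domain] -= LEARNING_RATE * gradient

print({d: round(w, 3) for d, w in weights.items()})
# {'expressive_language': 1.0, 'adaptive_skills': 0.0}
```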
Intake Triage and Algorithmic Prioritization
Lucas shifted to intake scoring.
“Our triage model flags high-risk referrals and helps agencies prioritize ethically.”