Behavior Analysts as AI Critics: A Professional Responsibility
David J. Cox, PhD, MSB, BCBA-D, and Ryan L. O'Donnell, MS, BCBA

On a Friday morning that felt indistinguishable from most others, Maya noticed something odd.
Maya is a Registered Behavior Technician (RBT): sharp, conscientious, and six months into a caseload that had taught her more about real-world variability than any textbook ever could. She was reviewing a session summary generated by a new "AI-assisted documentation" feature her clinic had recently enabled. The note looked clean, confident, and fluent. Too fluent.
She toggled back to her raw data. A discrepancy surfaced, small but unmistakable. The AI summary described "consistent engagement during instructional trials." Maya's event-based data told a different story: engagement was variable, context-sensitive, and dipped reliably after schedule changes. Nothing catastrophic, but it was not "nothing," either.
She flagged it and walked down the hall to Alexâs office.
Alex is the supervising Board Certified Behavior Analyst (BCBA). Fifteen years in the field, calm under pressure, deeply allergic to hype. Alex had approved the AI tool because, on paper, it lived in what the vendor called a "low-risk efficiency zone." It summarized notes; it didn't make clinical decisions. At least, that was the claim.
This moment marks the beginning of the story, not because anything had gone wrong yet, but because this is how responsibility begins in applied behavior analysis. Not with catastrophe, but with noticing.
The Call to Critique, Not Compliance
Applied Behavior Analysis (ABA) has always been a field of critics. Not critics in the cultural sense, but in the functional sense. We interrogate claims. We trace effects to contingencies. We ask whether something works, under what conditions, and at what cost.
AI does not change that responsibility. It intensifies it.
Being "in the loop" with AI systems is often framed as oversight. Review the output. Approve the recommendation. Click confirm. But that framing is passive, and passivity is not a behavior-analytic virtue.
In Project Chiron terms, being in the loop is not about supervision; it is about critique.
Alex looked at Maya's flagged note and said something deceptively simple:
"Let's slow down and figure out what this thing is doing."
That sentence contains the entire orientation shift this moment requires.
The Criticâs Toolkit: Four Skills That Matter
Alex didn't start by disabling the tool or defending it. Instead, Alex turned the moment into supervision, not just of Maya, but of the system.
What follows is the criticâs toolkit Alex walked Maya through, one skill at a time.
1. Claim Parsing: What Is the System Actually Saying?
The vendor's website claimed the tool "accurately summarizes session engagement patterns."
Alex paused there.
"Accurately compared to what?" Alex asked.
"Over what time window?"
"Using which signals?"
"And what does 'engagement' even mean in their model?"
Claim parsing is the skill of breaking vendor language into testable components. Not debating intentions, not inferring ethics, but isolating assertions.
Maya learned to translate marketing language into behavioral questions:
- What inputs does the model use?
- What outputs does it generate?
- What dimension of behavior is it compressing?
- What variability does it discard to do that?
In other words: what is reinforced in the system's loss function, even if no one names it that way.
The AI was not lying. It was optimizing.
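The discrepancy Maya caught has a simple statistical shape: any summary that compresses trial-by-trial data into one number can make variable engagement look "consistent." A minimal sketch, using hypothetical engagement values (the numbers and the schedule-change position are invented for illustration), shows how the mean discards exactly the dip her event-based data revealed:

```python
# Hypothetical per-block engagement proportions. Blocks 3-5 follow a
# schedule change and dip reliably, as in Maya's raw data.
engagement = [0.90, 0.85, 0.90, 0.55, 0.50, 0.60, 0.90, 0.88]

# A summary that compresses the session into a single statistic.
mean_engagement = sum(engagement) / len(engagement)

# The compressed view: one "consistently high" number.
print(f"mean engagement: {mean_engagement:.2f}")

# The variability the compression discards: a context-sensitive dip.
spread = max(engagement) - min(engagement)
print(f"lowest block: {min(engagement):.2f}, spread: {spread:.2f}")
```

Nothing here is an error in the arithmetic; the mean is computed correctly. The loss is in what the statistic was never asked to preserve, which is the behavior-analytic point: the question is not whether the summary is "accurate" but which dimensions of behavior it was optimized to keep.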
Chiron: The AI Literacy Series for ABA Professionals
A weekly newsletter exploring how ABA professionals can develop essential AI literacy skills to ensure ethical and effective practice in a rapidly changing field.