When AI Gets It Wrong: Case Examples of Harm and How to Prevent It
Dr. David J. Cox PhD MSB BCBA-D, Ryan L. O'Donnell MS BCBA

Prologue: “What If This Saves Us?”
Marissa, a Board Certified Behavior Analyst, sat at her desk after another 10-hour day. Her clinic served 87 clients, with three new referrals this week alone and a growing waitlist. Medicaid audits were on the rise, the payers barely reimbursed enough to cover staff, and turnover was…constant.
So when the AI vendor said their new platform could write progress notes, summarize behavior data, flag anomalies, and even recommend next steps—Marissa didn’t just nod. She signed. “This might save us,” she told her team. “If it buys us time, we’ll use that time wisely.”
But as Morpheus warned Neo in The Matrix (1999), "Fate, it seems, is not without a sense of irony."
The First Glitch: The Clean-Up Rewrite
Two weeks after launching the AI-powered note assistant, things seemed smooth. RBTs loved it: “I just hit record and it writes it up. It even uses technical terms better than I do!”
Then came the insurance review.
A parent reported their child’s data said he “tolerated” transitions, but she’d just filmed a meltdown that same day. The AI had rewritten the original staff notes for clarity—without flagging the change or linking to the original raw data.
Risk Alert #1: Model vs. Data Drift
- Root cause: The AI optimized for brevity and coherence, not fidelity.
- Prevention strategy: Always log original data entries, ensure AI-generated summaries link back to the raw input, and disable default overwriting unless supervised.
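That prevention strategy can be made concrete in software. A minimal sketch of an append-only note log, in Python: raw staff notes are never edited in place, and every AI summary must reference the raw entry it was generated from, so reviewers can always trace a polished note back to the original data. All names here (`NoteLog`, `add_raw_note`, `add_ai_summary`, `original_for`) are hypothetical, not from any particular vendor's platform.

```python
import hashlib
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class NoteLog:
    """Append-only session-note log (illustrative sketch).

    Raw staff notes are never overwritten; AI summaries are stored as
    separate entries that must link back to the raw note they rewrote.
    """
    entries: list = field(default_factory=list)

    def _new_id(self, text: str) -> str:
        # Content- and time-based ID; good enough for an illustration.
        stamp = datetime.now(timezone.utc).isoformat()
        return hashlib.sha256(f"{stamp}:{text}".encode()).hexdigest()[:12]

    def add_raw_note(self, author: str, text: str) -> str:
        """Log the original staff note exactly as written."""
        entry_id = self._new_id(text)
        self.entries.append({"id": entry_id, "type": "raw",
                             "author": author, "text": text})
        return entry_id

    def add_ai_summary(self, source_id: str, text: str) -> str:
        """Store an AI rewrite as a new entry linked to its source."""
        # Refuse summaries that don't point at a logged raw note.
        if not any(e["id"] == source_id and e["type"] == "raw"
                   for e in self.entries):
            raise ValueError(f"unknown raw entry: {source_id}")
        summary_id = self._new_id(text)
        self.entries.append({"id": summary_id, "type": "ai_summary",
                             "source_id": source_id, "text": text})
        return summary_id

    def original_for(self, summary_id: str) -> str:
        """Trace an AI summary back to the raw staff note it rewrote."""
        summary = next(e for e in self.entries
                       if e["id"] == summary_id
                       and e["type"] == "ai_summary")
        raw = next(e for e in self.entries
                   if e["id"] == summary["source_id"])
        return raw["text"]
```

In a design like this, the parent's complaint above becomes answerable in minutes: the "tolerated transitions" summary would carry a `source_id` pointing straight to the staff member's original description of the meltdown.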
The Triage Bias: A Queue That Wasn’t Fair
Later that month, the clinic piloted an AI tool to triage incoming clients based on urgency. The algorithm promised to optimize caseload matching. But soon, Marissa noticed something odd.
Chiron: The AI Literacy Series for ABA Professionals
A weekly newsletter exploring how ABA professionals can develop essential AI literacy skills to ensure ethical and effective practice in a rapidly changing field.