The Cost of Carelessness: Real Risks from Misused AI Tools
David J. Cox PhD MSB BCBA-D, Ryan L. O'Donnell MS BCBA

When the System Fails
In Minority Report, the most dangerous failures were procedural.
A name entered incorrectly. A vision misinterpreted. A safeguard bypassed because it slowed things down. No one intended harm. No one acted maliciously. But the system, under pressure to move fast and look certain, stopped checking itself.
AI failures in real organizations look exactly like this.
They rarely arrive as headline-grabbing catastrophes. They show up as small shortcuts, thin wrappers, unchecked defaults, and quiet assumptions that “someone else has verified this.” And because AI systems scale, those small errors don’t stay small for long.
This issue is about recognizing predictable failure modes before they become reportable events.
The Myth of the “One-Off Mistake”
When AI causes harm, the first reaction is often to blame:
- a careless staff member,
- a poorly trained user,
- or a “bad model.”
That explanation is comforting. It’s also usually wrong.
In practice, AI failures are systemic, not individual. They emerge from workflows that reward speed over verification, volume over quality, and compliance optics over actual oversight.
From a behavior-analytic perspective, these outcomes are not surprising. They are part and parcel of poor contingency design.
Four Predictable Failure Modes
Across healthcare, education, and human services, the same failure patterns appear again and again. ABA organizations are not immune.
1. Thin Wrappers: When “AI-Powered” Means “Uncontrolled”
A thin wrapper (Read Our Dedicated Blog Post on Wrappers) is an AI feature layered on top of a workflow without real safeguards:
- No task specification
- No limits on scope
- No required human review
- No audit trail
- No concern for data privacy
Examples:
- Auto-drafted notes that overwrite originals after hitting an LLM’s API
- Recommendation engines with no explanation or provenance
- Dashboards that flag “risk” without defining cost-of-error
These tools often feel polished, but they lack friction where friction is necessary.
In Minority Report terms: the system shows you a vision, but doesn’t tell you how it was produced, how confident it is, or when not to trust it. That works in science fiction. But not in reality.
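To make the contrast concrete, here is a minimal Python sketch. Everything in it is hypothetical: the note-drafting scenario, the function names (thin_wrapper, guarded_wrapper, call_llm), and the placeholder AuditLog are illustrations, not any vendor's API. The point is only that the guarded version carries a task specification, a scope limit, an audit trail, and a required human review step, while the thin wrapper carries none of them.

```python
# Hypothetical sketch only; names and scenario are illustrative, not a real product API.
from dataclasses import dataclass, field
from datetime import datetime, timezone


def call_llm(prompt: str, max_tokens: int = 500) -> str:
    # Stand-in for any LLM API call; returns a placeholder draft.
    return f"[draft generated from a prompt of {len(prompt)} characters]"


# Thin wrapper: one call, no task specification, no scope limit, no review, no record.
def thin_wrapper(session_data: str) -> str:
    return call_llm(f"Write a session note: {session_data}")


@dataclass
class AuditLog:
    entries: list = field(default_factory=list)

    def record(self, **event) -> None:
        # Audit trail: every prompt and draft is timestamped and kept.
        self.entries.append({"at": datetime.now(timezone.utc).isoformat(), **event})


# Guarded wrapper: task specification, scope limit, audit trail, required human review.
def guarded_wrapper(session_data: str, log: AuditLog, human_review) -> str:
    prompt = (
        "Draft a session note from the de-identified summary below. "
        "Do not add events that are not in the summary.\n" + session_data
    )
    draft = call_llm(prompt, max_tokens=500)   # explicit limit on output scope
    log.record(prompt=prompt, draft=draft)     # nothing happens off the record
    return human_review(draft)                 # a person approves or edits before anything is saved
```

In the guarded version, nothing reaches the record until human_review (whoever is responsible for the note) approves or edits the draft, and the AuditLog keeps a timestamped trail of what the model was asked and what it produced. The thin wrapper does the same job faster, and that is exactly the problem.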