AI Can Help, but It Can Also Harm: Spotting Risks Before They Happen
David J. Cox PhD MSB BCBA-D, Ryan L. O'Donnell MS BCBA
The screen glitches. Warning sirens hum. A new level of the quest renders, but this time the colors are off—shadows linger at the edges of the terrain. A pop-up appears: “Proceed with caution: Risk assessment required.”

Welcome to the Eighth Issue of Chiron
So far, we’ve built literacy in what AI is, how it learns, how to evaluate outputs, where it’s already embedded, and what unique ethical dilemmas it raises. Now we arrive at a crucial checkpoint: risk assessment.
Because here’s the truth: AI can help, but it can also harm. Your ability to spot risks before they shape practice is what separates a fluent user of AI systems from a passive one clicking “accept.”
Why Risk Assessment Matters in an AI World
In behavior analysis, we already anticipate risk:
- Will a new intervention plan generalize?
- Could a reinforcement strategy inadvertently strengthen problem behavior?
- What happens if a family or school misapplies a procedure?
AI tools introduce parallel risks—but at scale, with speed, and sometimes invisibly. A single misclassification, hallucinated report, or biased recommendation can impact hundreds of clients in seconds. The resulting macrocontingencies can multiply errors faster than human review can catch them.
In short, what you don’t evaluate up front can become the ruleset of your practice later.
The Behavioral Lens on Risk
Risk assessment in AI isn’t guesswork—it’s contingency analysis applied to a new class of tools. We ask:
- What outputs is the AI most likely to emit, and under what conditions? Think: “When does it get it wrong—and what happens when it does?”
- What reinforcement history shaped the model? Someone made a lot of decisions to guide the AI system to what it is today. Training data, feedback loops, and vendor incentives are all part of the learning history that shaped that AI product.
- What are the likely consequences of adopting this system—both for your clients and for your practice? Efficiency gains can come at the cost of skill degradation, bias amplification, or misplaced accountability.
Four Risk Domains to Watch
Like the four-term contingency, risk in AI tends to cluster. Here are the four big categories where behavior analysts should stay sharp:
1. Bias Amplification
- What it might look like: A scheduling system consistently assigns fewer hours to families who receive Medicaid funding because historical data reflects their lower reimbursement rates.
- Why it matters: ABA already struggles with equitable access. AI can calcify those inequities if left unchecked.
- Red flag cue: If a vendor cannot show you an equity audit, assume bias is present. (A minimal example of such a check follows this list.)
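To make the “equity audit” idea concrete, here is a minimal sketch, in Python with pandas, of the kind of descriptive check you could run on a scheduling tool’s export before trusting its recommendations. The file name, the column names (funding_source, recommended_hours_per_week), and the 10% flag threshold are hypothetical placeholders rather than any vendor’s actual schema; a full audit would add statistical tests, intersectional breakdowns, and review over time.

```python
# Minimal equity spot-check (sketch): compare average recommended service
# hours across funding sources in a hypothetical scheduling-tool export.
import pandas as pd


def equity_spot_check(csv_path: str, flag_gap_pct: float = 10.0) -> pd.DataFrame:
    """Flag funding sources whose mean recommended hours fall well below the overall mean."""
    # Assumed columns: funding_source, recommended_hours_per_week
    df = pd.read_csv(csv_path)
    overall_mean = df["recommended_hours_per_week"].mean()

    summary = (
        df.groupby("funding_source")["recommended_hours_per_week"]
        .agg(["count", "mean"])
        .rename(columns={"count": "n_clients", "mean": "mean_hours"})
    )
    # Percent gap relative to the overall mean; negative = fewer hours than average.
    summary["gap_pct"] = (summary["mean_hours"] - overall_mean) / overall_mean * 100
    summary["flagged"] = summary["gap_pct"] < -flag_gap_pct
    return summary.sort_values("gap_pct")


if __name__ == "__main__":
    # Hypothetical export file; replace with your vendor's actual report.
    print(equity_spot_check("scheduling_export.csv"))
```

A gap flagged by a simple check like this is not proof of bias, but it is more than enough to warrant a pointed question to the vendor and a closer look at how hours are being recommended.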
2. Privacy and Security Breaches
- What it might look like: Session notes are auto-summarized by an LLM running on a server outside HIPAA compliance or through a thin LLM wrapper.
- Why it matters: Protected health information (PHI) becomes training fodder for commercial models, exposing clients and your organization to liability.
- Red flag cue: Vague vendor answers about data retention, storage, and model training practices.