Avoiding Harm: Red Flags in AI Tools for Behavior Analysts
David J. Cox, PhD, MSB, BCBA-D, and Ryan L. O'Donnell, MS, BCBA

The Dinner After the Conference
Note: All names used in Chiron are fictitious.
The conference center lights faded behind them as Jordan and Morgan (both BCBAs, whom readers met in last week's issue) crossed the street toward a small restaurant packed with other attendees unwinding after a long day.
The room hummed with conversation about reimbursement updates, new research, and the familiar question every practitioner seemed to be asking: What exactly is happening to our field?
Jordan slid into the booth and set the conference program down beside the menu.
“My brain is still spinning from that AI panel.”
Morgan laughed.
“Same. Half the vendors sounded like they built miracle machines. The other half sounded like they built… something else entirely.”
The server poured water. Jordan leaned back.
“You know what nobody talked about?”
“What?”
“How to tell when one of these tools is a terrible idea.”
Morgan smiled.
“That’s because most people don’t have a framework for evaluating risk yet.”
Jordan nodded slowly.
“So what would one actually look like?”
Morgan took a sip of water.
“Well… if we were serious about using AI safely, the first step wouldn’t be buying software.”
Jordan raised an eyebrow.
“It would be building the right room around the decision.”
The Risk Problem in AI Adoption
Why AI Tools Require Structured Oversight
Artificial intelligence tools are entering clinical practice faster than most professional guidelines can adapt.
In ABA practice today, vendors already market tools that promise to:
- generate session notes automatically
- summarize assessment data
- predict treatment outcomes
- optimize scheduling and billing workflows
Some of these will prove to be valuable tools. Others will introduce risks that become visible only months or years later. The challenge is not that AI tools exist. The challenge is that many organizations adopt them without any structured risk assessment.
Jordan opened the menu.
“So how do you even start evaluating risk?”
Morgan shrugged.
“You build the kind of team that asks the right questions before anything gets purchased.”
The Board You Didn’t Know You Needed
A Practical Governance Model for AI in Clinical Practice
Chiron: The AI Literacy Series for ABA Professionals
A weekly newsletter exploring how ABA professionals can develop essential AI literacy skills to ensure ethical and effective practice in a rapidly changing field.