Ethics at the Edge — Navigating AI Dilemmas in a Changing World
David J. Cox, PhD, MSB, BCBA-D, and Ryan O'Donnell, MS, BCBA
The portal flickers again. This time, the terrain ahead isn’t just unknown—it’s unstable. Warnings blink on the edges of the screen. You’ve entered a level where the map isn’t finished yet. Welcome to the world of emerging ethical dilemmas.

"Just Trying to Get Through the Week"
Sam, a mid-career BCBA, is under pressure. Two team members are out, the regional director has reminded her she’s behind on billable hours, and parents keep calling about treatment plan updates. Out of desperation, she signs up for an “AI-powered ABA assistant” that promises to “cut documentation time in half” and “generate behavior plans automatically.”
Sam uploads raw data from her last five client sessions into the tool, skips the tool's optional walkthrough, and lets the software auto-generate SOAP notes and treatment recommendations. She reads none of the notes but signs and uploads them to the client file to meet her documentation deadlines.
One output contains language like “oppositional behavior” and “noncompliant tendencies” and suggests a punishment-based intervention, none of which align with her previous functional assessment or the client's actual behavior.
A supervisor flags the notes as “out of character” for Sam, and a parent later complains that the plan seems punitive. Sam explains, “The system generated it. They said all recommendations are based on best practices and grounded in ABA.”
Welcome to the Sixth Issue of the Ongoing Series.
Behavior analysts pride themselves on seeking ethical clarity. But AI introduces something different: situations where the rules haven’t been written yet, and many consequences emerge only after deployment.
You won’t find clear precedents for many AI-related cases. There’s no section in the BACB Code titled "Automated Session Summarizers" or "Reinforcement Algorithms Trained on Biased Data."
That means we need something more than compliance. We need ethical reasoning applied to unfamiliar territory.
This week, we surface the unique ethical challenges AI creates: where our current frameworks succeed, where they fall short, and how to approach ethical analysis when the terrain is still shifting.
What Makes AI Ethically Different?
AI tools don’t just automate—they amplify. And that amplification comes with new ethical tensions:
- Speed vs. Scrutiny: When AI delivers instant suggestions, are you more likely to skip functional analysis?
- Scale vs. Individualization: When models apply insights from thousands of clients, do they flatten the nuance of the one in front of you?
- Delegation vs. Accountability: When documentation is auto-written, who owns the final note?
- Performance vs. Transparency: When the most “accurate” models are black boxes, can you justify using them clinically?
The dilemma isn’t that these tools are inherently unethical. It’s that their use can outpace our ability to analyze their consequences. And in behavior analysis, consequences matter.
Let’s Apply the Code (And See Where It Stretches)
The BACB Ethics Code (2022) still gives us guardrails. But many AI dilemmas push those guardrails outward. Here’s how.
Chiron: The AI Literacy Series for ABA Professionals
A weekly newsletter exploring how ABA professionals can develop essential AI literacy skills to ensure ethical and effective practice in a rapidly changing field.