Whose Values? Navigating Ethical Dilemmas in AI-Supported Decisions
David J. Cox PhD MSB BCBA-D, Ryan L. O'Donnell MS BCBA

If you’ve worked in applied behavior analysis (ABA) long enough, you’ve seen this movie before. A tool is introduced to “increase efficiency.” A metric is selected to prove it works. The metric improves. Something else quietly degrades.
In the past, that “something else” was often clinical judgment, supervision quality, or individualized programming. Today, it is increasingly values.
AI systems do not weigh dignity, autonomy, social validity, or contextual nuance unless humans deliberately translate those priorities into constraints, review rules, or override authority. They optimize what they are asked to optimize. The problem is not malicious intent. The problem is unexamined or hyper-focused objectives.
When an organization deploys an AI tool to reduce documentation time, improve utilization, or standardize decision-making, the model will push relentlessly toward that target. Unless explicitly designed to do so, AI systems won’t automatically preserve dignity, autonomy, nuance, consent, or contextual factors that matter deeply in behavior analytic practice—the kinds of things that are hard to quantify but costly to ignore (especially once a prediction starts getting treated like a conclusion).
This issue sits squarely in Ethical Awareness, not because AI is uniquely dangerous, but because it accelerates a dynamic that has always existed in systems of care. Models make trade-offs. If those trade-offs are invisible, they are also indefensible.
This newsletter lays out a practical framework for making those trade-offs explicit using four tools that fit naturally within behavior analysis: value mapping, informed consent that names AI use, clear human-in-the-loop rules, and an escalation log. None require advanced technical training. All require discipline.
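To preview the last of these tools: an escalation log can be nothing more than a consistent, reviewable record of when a human accepted or overrode an AI recommendation and why. The sketch below is purely illustrative; the field names are hypothetical, not a prescribed schema or any particular vendor's format.

```python
# A minimal, hypothetical sketch of an escalation log entry.
# Field names are illustrative, not a required schema.

from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class EscalationEntry:
    case_id: str            # de-identified client or case reference
    ai_recommendation: str  # what the tool suggested
    human_decision: str     # what the clinician actually did
    overridden: bool        # did the human depart from the AI output?
    rationale: str          # the values or context driving the decision
    logged_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

# Example: documenting an override and the value it protected
entry = EscalationEntry(
    case_id="case-017",
    ai_recommendation="Increase weekly session hours to maximize utilization",
    human_decision="Kept current hours at caregiver's request",
    overridden=True,
    rationale="Caregiver reported fatigue; assent and family preference prioritized",
)
```

Even a record this simple makes the pattern of overrides, and the values behind them, visible and auditable.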
Models optimize objectives, not intentions
Every AI system is built around an objective function, even if it is never labeled as such. In plain language, the system learns to produce outputs that move a number in the desired direction.
- For a scheduling model, that might be utilization rate.
- For a documentation tool, it might be speed or completeness.
- For a risk-flagging system, it might be sensitivity, avoiding false negatives even at the cost of more false alarms.
Behavior analysts immediately recognize the problem. Reinforce only rate, and quality drifts. Reinforce only compliance, and assent erodes. Reinforce only accuracy, and generality suffers.
AI simply executes this dynamic at scale and speed.
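To make the point concrete, here is a deliberately toy sketch in Python. The scheduling scenario, class, and field names are invented for illustration and are not drawn from any real product. The objective function rewards only billable hours, so the "best" option is the one the family explicitly did not want.

```python
# A hypothetical, minimal sketch: a greedy scheduler whose only
# objective is utilization. Anything not in the score -- caregiver
# preference, clinician continuity, client fatigue -- is invisible
# to it by design.

from dataclasses import dataclass

@dataclass
class SlotOption:
    slot_id: str
    billable_hours: float            # what the objective "sees"
    matches_family_preference: bool  # what it does not

def utilization_score(option: SlotOption) -> float:
    """Objective function: more billable hours is better. Nothing else counts."""
    return option.billable_hours

options = [
    SlotOption("A", billable_hours=3.0, matches_family_preference=True),
    SlotOption("B", billable_hours=3.5, matches_family_preference=False),
]

# The model "works": it reliably picks the higher-utilization slot,
# even though the family's preference was never consulted.
best = max(options, key=utilization_score)
print(best.slot_id)  # prints: B
```

The point is not the code. It is that anything left out of the score, like the family's preference here, cannot influence the outcome unless a human rule puts it back in.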
The ethical risk is not just that a model might be “wrong”; it is that the model can be right according to a narrow definition of success that no one paused to interrogate. Once deployed, a model’s outputs begin to feel authoritative. Dashboards glow reassuringly. Graphs trend upward. What was once a recommendation starts behaving like a verdict. The absence of dissent is mistaken for alignment.
At that point, values have not disappeared. They have been overridden.
Chiron: The AI Literacy Series for ABA Professionals
A weekly newsletter exploring how ABA professionals can develop essential AI literacy skills to ensure ethical and effective practice in a rapidly changing field.