A Tale of Two Systems: Comparing Natural and Artificial Intelligence
David J. Cox, PhD, MSB, BCBA-D, and Ryan L. O'Donnell, MS, BCBA

In Minority Report, the precogs don’t solve crimes. They don’t investigate context. They don’t weigh motives. They don’t consider what changed between the vision and the moment. They generate a signal. The system decides what to do with it.
That’s the right way to think about AI in ABA right now—not as a replacement for clinical reasoning, but as a new kind of signal generator that can be genuinely useful when it’s placed correctly inside a workflow.
Because the truth we need to get comfortable with as this space matures is that AI can help AND AI can mislead. Not because it is evil or autonomous, but because AI runs on a fundamentally different learning engine than the one that shapes biological organisms.
This issue is about learning to recognize the difference. Not in a philosophical way. In a technical, practical, clinic-ready way.
Because if you confuse how these two systems learn, you will misplace trust, misdesign oversight, and build workflows that feel efficient while drifting away from what behavior analysis is supposed to protect: individualized judgment, accountability, and socially valid outcomes.
Two Learning Engines in the Same Room
Behavior analysts work with the most sophisticated learning system on Earth: biological organisms.
Humans, children, caregivers, staff. All living systems shaped by reinforcement histories, motivating operations, context, language, and time.
Modern AI systems—especially large language models (LLMs)—are impressive in a different way. They can summarize, draft, classify, and generate fluent text at scale. These systems can detect patterns across large datasets and create structure where humans feel overwhelmed by noise.
But these two systems do not learn the same way.
They can both change with experience, but they change for different reasons, under different constraints, with different failure modes.
Succinctly: Organisms are selected by consequences. Models optimize loss.
That distinction is not semantics. It is the root of everything that follows.
System 1: Natural Intelligence (Organisms)
Selected by consequences, shaped by context. Organisms learn because behavior contacts consequences.
Reinforcement doesn’t just change what someone does—it changes the probability of responding under certain conditions, given a history, given competing reinforcers, given motivational states, given stimulus control.
Behavior analysts live in this world. We think in terms of:
- Selection by consequences
- Motivating operations
- Generalization and discrimination
- Contextual control
- Response classes
- Competing contingencies
- Rule-governed behavior
- Shaping and extinction
Biological learning systems have enormous strengths:
- Learn from small data. A child can acquire a new discrimination in a handful of trials. A staff member can adjust their implementation after a single corrective feedback moment. Humans can learn from one salient error in a way no machine can replicate reliably.
- Adapt to changing goals. If the reinforcers change, the behavior changes. If the context shifts, responding shifts. If the contingencies reorganize, humans reorganize.
- Generalize creatively (sometimes too much). Humans can apply learning to new settings, new stimuli, and new problems—even when those contexts are not “in the training set.”
And biological organisms have predictable limitations too:
- We are vulnerable to fluency illusions. Humans mistake confidence for accuracy all the time, especially under stress, workload pressure, or authority cues.
- We drift under reinforcement. If the system reinforces speed, we get fast. If it reinforces volume, we produce volume. If it reinforces “no denials,” we optimize for language that prevents denials even if it thins out clinical nuance.
- Our judgment is shaped by our environment. The biological learning system is powerful, but it is not neutral. It is contingent.
This is why good ABA organizations design contingencies for staff behavior the same way we design contingencies for clients: intentionally.
System 2: Artificial Intelligence (Models)
Optimizing an objective function inside a defined task. AI systems do not learn from consequences in the way organisms do.
They do not “try.”
They do not “want.”
They do not “care.”
They optimize.
A model is trained by adjusting internal parameters to minimize a loss function (i.e., an error measure) over training data. That optimization process can be incredibly effective, but it produces a system that is good at a specific kind of learning.
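To make that concrete for readers who have never watched a model train, here is a minimal, hypothetical sketch of what "adjusting internal parameters to minimize a loss function" means. Everything in it is invented for illustration (four data points, a single parameter, a hand-picked learning rate); real systems have billions of parameters and far more elaborate objectives, but the core loop is the same: compute an error, then nudge the parameters to shrink it.

```python
# A toy, invented example: fit a single parameter w so that w * x approximates y
# by repeatedly nudging w in whatever direction lowers the mean squared error.
# The data, starting value, and learning rate are all made up for illustration.

data_x = [1.0, 2.0, 3.0, 4.0]   # hypothetical inputs
data_y = [2.1, 3.9, 6.2, 8.1]   # hypothetical targets (roughly y = 2x)

w = 0.0              # the model's only "internal parameter," starting arbitrarily
learning_rate = 0.01
loss = None

for step in range(500):
    # Loss: mean squared error between predictions (w * x) and targets (y).
    loss = sum((w * x - y) ** 2 for x, y in zip(data_x, data_y)) / len(data_x)

    # Gradient of that loss with respect to w (worked out by hand for this toy case).
    grad = sum(2 * (w * x - y) * x for x, y in zip(data_x, data_y)) / len(data_x)

    # The "learning" step: move w slightly in the direction that shrinks the error.
    w -= learning_rate * grad

print(f"learned w = {w:.3f}, final loss = {loss:.4f}")
```

Notice what the loop does not contain: no motivating operations, no reinforcement history, no context. Just an error number being pushed downward.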