Neural Networks Aren’t Brains: A Primer for Behavioral Scientists
David J. Cox PhD MSB BCBA-D, Ryan L. O'Donnell MS BCBA

Waking Up in a Different Simulation
The flicker is gone. This time, you don’t respawn in a game world. Instead, code rains down the screen in green glyphs. The hum of machines is everywhere. The portal has shifted—welcome to The Matrix.
If the Ready Player One arc was about learning the rules of a quest, The Matrix is about piercing the illusion. What looks alive…what feels intelligent…may only be an elegant system of math and data shaping.
And nowhere is that illusion stronger than in how people talk about neural networks.
Neural Networks ≠ Brains
By name alone, “neural networks” sound biological. And tech marketing doubles down: diagrams of “nodes” linked like synapses, metaphors of machines “thinking,” headlines about AI “dreaming,” interfaces that display their “reasoning.”
But here’s the reality: neural networks are not brains. They don’t think, feel, or behave. They calculate.
- A human neuron has complex biochemical states, dendritic structures, synaptic histories, and hormonal modulators. We also still lack a full account of how its connections develop, strengthen, or weaken.
- An artificial “neuron” is a line of math: input × weight + bias → squashed through an activation function. And its developers know exactly how each one is structured, connected, and updated over time.
Both may be arranged in layers, but the resemblance stops there. One is an organism, the other an equation.
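To make the point concrete, here is a minimal sketch of a single artificial “neuron” in Python. The function name, inputs, weights, and bias values are illustrative, not drawn from any real system; the logic is just the arithmetic described above: a weighted sum plus a bias, squashed through an activation function (here, the logistic sigmoid).

```python
import math

def artificial_neuron(inputs, weights, bias):
    """One artificial 'neuron': weighted sum plus bias, squashed by a sigmoid."""
    # Weighted sum: each input × its weight, plus the bias term
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    # "Squash" through an activation function (logistic sigmoid)
    return 1 / (1 + math.exp(-z))

# Example with made-up numbers -- pure arithmetic, no biology involved
output = artificial_neuron([0.5, -1.0, 2.0], [0.4, 0.3, 0.1], bias=0.2)
```

Every value in that computation is fully inspectable, which is exactly what a biological neuron is not.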
As behavior analysts, this distinction matters. When people confuse neural networks with “minds,” they risk attributing agency or intention where none exists. That confusion leads to misplaced trust in outputs, sloppy ethical reasoning, and sometimes dangerous adoption.
An analogy might be treating a client’s echolalic speech as meaningful manding or assuming it’s “attention-seeking” without a functional analysis. Mislabeling the behavior distorts our ethical obligations and the interventions we choose.
How Neural Networks Actually Work
Chiron: The AI Literacy Series for ABA Professionals
A weekly newsletter exploring how ABA professionals can develop essential AI literacy skills to ensure ethical and effective practice in a rapidly changing field.