Why Models Matter: Behavior Analysts and the Math Behind AI
David J. Cox PhD MSB BCBA-D, Ryan L. O'Donnell MS BCBA

We’re borrowing the cadence of Morpheus here; it’s not an actual quote, but the kind of line he’d deliver if he were teaching prompt engineering.
The room tilts into wireframe. Equations hover like green glyphs.

A status bar reads TRAINING… then a bell tone: DEPLOYED.
A tidy dashboard claims 94.7% accuracy.
Morpheus leans in: “Numbers in the Matrix are convincing. Your job is to ask what they mean.”
Welcome to the Eighteenth Issue of Chiron
In this arc we’ve treated AI as engineered contingency systems. Today we zoom into the engine itself: models—formal hypotheses written in math. Their assumptions decide where they shine and where they fail.
We’ll demystify four core ideas: loss functions, regularization, overfitting, and calibration. Then we’ll map them to ABA anchors like measurement error, generalization, and treatment integrity so you can interrogate vendor claims with confidence.
This isn’t about turning you into a data scientist; it’s about giving you the right levers and questions so you remain the Operator in the chair.
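One quick taste of that Operator mindset before we open the hood. The sketch below is our own toy example (not anything from a vendor), assuming Python with NumPy: it shows how a dashboard number like that 94.7% can be arithmetically true and still answer the wrong question, because on rare-behavior data a model that never detects the behavior still scores high accuracy.

```python
# Toy illustration (our construction): why a headline accuracy number
# deserves interrogation. If the target behavior occurs in only ~5% of
# trials, a degenerate "model" that always predicts "absent" is ~95%
# accurate while detecting nothing at all.
import numpy as np

rng = np.random.default_rng(42)
y_true = rng.random(1000) < 0.05           # ~5% of trials: behavior present
y_pred = np.zeros(1000, dtype=bool)        # always predicts "absent"

accuracy = (y_true == y_pred).mean()
recall = y_pred[y_true].mean()             # share of real behaviors caught

print(f"accuracy: {accuracy:.1%}")         # ~95% -- dashboard-friendly
print(f"recall:   {recall:.1%}")           # 0% -- clinically useless
```

The number on the dashboard is the consequence the model was graded on; whether it's the consequence you care about is the Operator's question.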
Models = Hypotheses (with knobs)
A model is a structured guess about how inputs relate to outputs. Change its architecture (linear vs. neural network) and you change what patterns it can represent; change its objective (loss function) and you change what it learns to prefer.
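Here's a minimal sketch of those two knobs in Python (our illustration, assuming NumPy and scikit-learn are available). The first half changes the architecture on the same data; the second half changes the loss and watches the "preference" move.

```python
# Two knobs: architecture (what patterns fit) and loss (what gets preferred).
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X).ravel() + rng.normal(0, 0.1, size=200)   # nonlinear pattern + noise

# Knob 1: architecture decides what the model *can* represent.
linear = LinearRegression().fit(X, y)                  # can only draw a line
mlp = MLPRegressor(hidden_layer_sizes=(32,), max_iter=5000,
                   random_state=0).fit(X, y)           # can bend with the data
print("linear R^2:", round(linear.score(X, y), 2))     # low: wrong shape
print("MLP R^2:   ", round(mlp.score(X, y), 2))        # high: shape matches

# Knob 2: the loss decides what the model learns to prefer.
# Given one outlier, squared error favors the mean (dragged toward 10);
# absolute error favors the median (stays near 1).
data = np.array([1.0, 1.1, 0.9, 1.0, 10.0])
print("best constant under squared error:", data.mean())
print("best constant under absolute error:", np.median(data))
```

Same data every time; only the hypothesis and the objective changed, and so did what the model "knows."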
ABA mapping: Think of the model as a highly consistent trainee. Its curriculum (features), reinforcers (loss), and rules (regularization) shape its behavior. You can't expect the trainee to learn anything other than what those specify; if any of them are off, it will master the wrong repertoire.
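The "rules" knob deserves its own demo. Below is a hedged sketch (again ours, assuming scikit-learn) of regularization acting like a standing rule for the trainee: same data, same loss, but adding an L2 penalty ("keep your weights small") changes the repertoire the model acquires.

```python
# Regularization as a "rule" that shapes what gets learned (assumes sklearn).
import numpy as np
from sklearn.linear_model import LinearRegression, Ridge

rng = np.random.default_rng(1)
x = rng.normal(size=(50, 1))
X = np.hstack([x, x + rng.normal(0, 0.01, size=(50, 1))])  # near-duplicate features
y = 2.0 * x.ravel() + rng.normal(0, 0.1, size=50)

plain = LinearRegression().fit(X, y)   # no rule: free to do anything that fits
ruled = Ridge(alpha=1.0).fit(X, y)     # rule: L2 penalty on coefficient size

print("no rule:  ", plain.coef_)       # typically large, offsetting weights
print("with rule:", ruled.coef_)       # small, shared weights near [1, 1]
```

Neither model is "smarter"; they were trained under different rules, and their behavior differs accordingly.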
Chiron: The AI Literacy Series for ABA Professionals
A weekly newsletter exploring how ABA professionals can develop essential AI literacy skills to ensure ethical and effective practice in a rapidly changing field.