Artificial intelligence (AI) has moved from abstract theory to everyday practice with startling speed. From automated documentation in clinics to large language models (LLMs) embedded in electronic health record systems, behavior analysts are already surrounded by AI-driven tools. Whether we recognize it or not, AI has entered our workflows, our decision-making processes, and even the ethical dilemmas we face as practitioners.
For Board Certified Behavior Analysts (BCBAs), Registered Behavior Technicians (RBTs), and behavior-analytic faculty, the question is no longer if AI will affect our work but how well prepared we are to engage with it. And preparation, in this case, means AI literacy.
In his 2025 publication, “Ethical Behavior Analysis in the Age of Artificial Intelligence (AI): The Importance of Understanding Model Building while Formal AI Literacy Curricula are Developed,” Cox (2025) makes the case that AI literacy is not optional; rather, it is a professional necessity for behavior analysts who wish to remain ethical, effective, and future-ready.
AI already influences the infrastructure of Applied Behavior Analysis (ABA).
The attraction is obvious. Behavior analysts, often working under crushing caseloads, want tools that save time. But herein lies the problem: without AI literacy, BCBAs cannot reliably evaluate these systems. As Dr. David J. Cox has argued, ethically effective behavior analysts must be AI literate. Otherwise, they risk adopting tools that compromise accuracy, confidentiality, or even client safety.
To understand why AI literacy is urgent, it helps to recall the philosophical foundation of our field. Behavior analysis rests on contextualism, the worldview that emphasizes the act in context, judged by its “successful working” (Hayes, Hayes, & Reese, 1988). In contextualist terms, tools are neither good nor bad in isolation; they are judged by how well they accomplish the goals we set for them, given the conditions at hand.
AI challenges us precisely here. Without literacy, behavior analysts cannot contextualize AI tools—cannot evaluate whether a system trained on vast but unrelated datasets truly fits the context of individualized ABA practice. Contextualism demands that we move beyond surface-level adoption and instead ask: Does this tool work, for this client, in this setting, given these contingencies?
Maria, a BCBA at a pediatric autism clinic, stares at her growing stack of unfinished session notes. An AI vendor promises relief: “Type a few bullet points, and our system will draft the full note for you.” Tempting—especially after a 10-hour day. She gives it a try. The AI-produced notes look polished, but on closer read, they gloss over critical details like the exact prompting strategy used during a toileting routine. They even describe progress on a goal that wasn’t addressed that day, and another that isn’t even a part of this client’s program. Without the wherewithal to know what kinds of errors to look for (i.e., AI literacy), Maria might have accepted the output and signed her name. But contextualism pulls her back: the note must represent this client, in this clinic, under these contingencies. The shortcut risks misrepresenting the intervention and compromising accountability.
Across town, James, a BCBA in a public elementary school, tests a new AI program that analyzes classroom video and generates behavior descriptors for students. The tool spits out terms like “defiant,” “agitated,” and “peer conflict.” At first glance, the convenience is incredible—no more tallying behaviors by hand. But when James reviews the flagged footage, he notices a problem: the system labeled a student as “noncompliant” for quietly refusing to pick up a pencil after being teased. To the algorithm, this looks like defiance; in context, it’s avoidance in response to peer bullying. Without literacy, James might have accepted the AI’s label, setting in motion unnecessary interventions. Instead, he recognizes that contextualism demands more: the meaning of behavior cannot be lifted wholesale from an algorithm’s training set and dropped into a classroom.
Meanwhile, Tonya, a BCBA supporting adults in a group home, is pitched a new AI tool that drafts individualized behavior intervention plans from intake data. The vendor explains: “Just upload the client’s files, and the system creates a plan aligned to best practices.” Curious, Tonya tests it with a new client—an adult who loves gardening and spends hours outdoors with housemates. The AI’s plan, however, recommends interventions focused on indoor routines and ignores the client’s preference for outdoor engagement entirely. It also recommends strategies that are impossible to implement with the group home’s current staff-to-client ratios. On paper, it looks impressive; in reality, it clashes with the rhythms of the home and the dignity of the individual. Contextualism reminds Tonya that no algorithm knows this household like she does. AI can provide a draft, but the final plan must be grounded in the lived contingencies of the client’s life.
The BACB Ethics Code for Behavior Analysts (2020) is clear: practitioners are responsible for the accuracy of their work, for safeguarding client dignity, and for ensuring that any tools or technologies used align with evidence-based practices.
AI complicates each of these obligations: accuracy is harder to verify when notes are machine-drafted, client dignity and confidentiality are harder to safeguard when data flows through third-party systems, and alignment with evidence-based practice is harder to judge when a tool’s training data and logic are opaque.
In this sense, becoming AI literate is not just best practice—it is an ethical obligation under the BACB Ethics Code for Behavior Analysts. Your clients deserve your best. You deserve to give your best. And, the field of ABA will be better if we all give our best.
This blog is a sample of our weekly newsletter, designed for ABA professionals who are building AI literacy skills. Subscribe here. Every week, we break down the foundational knowledge BCBAs and others in applied behavior analysis need to make informed decisions about the AI tools flooding our field.
In his commentary on ethical AI use, Cox (2025) emphasizes that understanding mathematical model building is the core skill needed for AI literacy. That does not mean every BCBA needs to code neural networks. Instead, it means developing a working understanding of how AI models are trained, what data they rely on, and how to critically interpret their outputs. Cox draws an analogy to automobiles: most of us do not need to rebuild an engine to drive safely, but we must know how to fuel the car, recognize when it malfunctions, and judge whether it is the right vehicle for the task. Similarly, BCBAs must know when and how to use AI tools safely, and—perhaps most importantly—when not to trust them.
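The point about knowing what data a model was trained on can be made concrete with a toy sketch (all numbers are hypothetical, not from the source): a simple least-squares model fit on sessions from one setting looks flawless on its own data, yet fails badly when applied to a setting where the contingencies differ.

```python
# Toy illustration (hypothetical data): a model trained in one context
# can be confidently wrong in another. We fit y = a*x + b by ordinary
# least squares on sessions from Setting 1, then apply it to Setting 2.

def fit_linear(xs, ys):
    """Ordinary least-squares fit of y = a*x + b."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    b = my - a * mx
    return a, b

# Setting 1 (training context): more prompts -> more correct responses.
prompts_a = [1, 2, 3, 4, 5]
correct_a = [2, 4, 6, 8, 10]
a, b = fit_linear(prompts_a, correct_a)

# Inside its training context, the model looks perfect.
in_context_error = abs((a * 3 + b) - 6)        # 0.0

# In Setting 2 the contingencies differ (e.g., prompt dependence has
# developed), so the same input no longer predicts the same outcome.
observed_b = 3                  # what actually happened at 5 prompts
predicted_b = a * 5 + b         # the model still predicts 10
out_of_context_error = abs(predicted_b - observed_b)   # 7.0

print(in_context_error, out_of_context_error)
```

Nothing in the model's own numbers signals the failure; only someone who knows what data it was fit on, and how that context differs from the current one, can catch it. That is the "fuel the car, recognize when it malfunctions" level of literacy Cox describes.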
Beyond these fundamentals, behavior analysts face several novel challenges that make AI literacy indispensable. One is automation bias: humans tend to favor automated outputs over their own judgment, even when those outputs are wrong. Without literacy, a BCBA might defer to an AI-generated recommendation that misrepresents a client’s needs. Another is the shifting nature of professional roles. As AI systems increasingly manage routine documentation and analysis, the BCBA’s role may evolve toward oversight, interpretation, and ethical governance—functions that demand critical fluency with these tools. Finally, there is the issue of protecting the science of ABA itself. If behavior analysts fail to evaluate AI systems, other professions will step in to define standards, often without grounding in contextualism. Developing AI literacy ensures that ABA retains its voice in shaping responsible, ethical applications of these technologies.
Not every role requires the same depth of literacy. Cox (2025) outlines different levels of AI literacy suited to different roles.
For behavior analysts looking to begin their AI literacy journey today, several practical steps stand out: learn how AI models are trained and what data they rely on, critically review every AI-generated output before signing your name to it, and practice judging whether a tool fits the context of your practice, for this client, in this setting.
AI is not going away. If anything, its influence will expand, shaping what it means to be a BCBA, a supervisor, or a faculty member in the coming decade. The choice before us is stark: engage critically with these tools, or let others outside our field define how they are used.
Dr. David J. Cox has warned that being an ethically effective behavior analyst now means being AI literate. The future of ABA depends on whether we rise to that challenge.
Ready to go deeper? Enroll in the Foundations of AI Literacy for Behavior Analysts course and earn 1.5 BACB® Ethics CEUs. Taught by David J. Cox, Ph.D., M.S.B., BCBA-D, an internationally recognized leader in behavioral data science, the course builds the critical thinking skills to assess AI tools confidently and ethically, no computer science background required. Enrollment also unlocks access to our advanced, role-specific tracks built for RBTs, BCBAs, Clinical Directors, Academics, and Organizational Leaders. Learn more or register here.
About the Authors:
David J. Cox, Ph.D., M.S.B., BCBA-D is a full-stack data scientist and behavior analyst with nearly two decades of experience at the intersection of behavior science, ethics, and AI. Dr. Cox is a thought leader on ethical AI integration in ABA and the author of the commentary Ethical Behavior Analysis in the Age of Artificial Intelligence. He has published over 70 articles and books on related topics in academic journals and is, quite possibly, the only person in the field with expertise in behavior analysis, bioethics, and data science.
Ryan O’Donnell, MS, BCBA is a Board Certified Behavior Analyst (BCBA) with over 15 years of experience in the field. He has dedicated his career to helping individuals improve their lives through behavior analysis and is passionate about sharing his knowledge and expertise with others. He oversees The Behavior Academy and helps top ABA professionals create video-based content in the form of films, online courses, and in-person training events. He is committed to providing accurate, up-to-date information about the field of behavior analysis and the various career paths within it.
Behavior Analyst Certification Board. (2020). Ethics code for behavior analysts. https://bacb.com/wp-content/ethics-code-for-behavior-analysts/
Cox, D. J. (2025). Ethical behavior analysis in the age of artificial intelligence (AI): The importance of understanding model building while formal AI literacy curricula are developed. Perspectives on Behavior Science, 48, 621–631. https://doi.org/10.1007/s40614-025-00459-z
Hayes, S. C., Hayes, L. J., & Reese, H. W. (1988). Finding the philosophical core: A review of Stephen C. Pepper’s World hypotheses: A study in evidence. Journal of the Experimental Analysis of Behavior, 50(1), 97–111. https://doi.org/10.1901/jeab.1988.50-97