Using AI in Practice: What Works, What Doesn’t, and Why
David J. Cox PhD MSB BCBA-D, Ryan L. O'Donnell MS BCBA
When the System Fails
January has a particular feel in clinics. The calendar flips, authorization clocks reset, and suddenly the operational friction that was barely tolerable in December becomes impossible to ignore. Notes are stacking up. Schedulers are juggling cancellations like air-traffic controllers. Families are calling about delayed starts. Staff are frustrated, leadership is tired, and everyone is quietly wondering how this year got so complicated so fast.
This is the moment when AI shows up wearing a cape.
For the BCBA in our story, the pitch arrives in familiar forms: an “AI-powered” authorization helper, a smart scheduling optimizer, a note summarizer that promises to cut documentation time in half. None of these tools are obviously unethical. None of them are obviously unsafe. They sit in that gray zone where pressure, optimism, and good intentions overlap.
This newsletter is about what happens next, not in theory, but in practice.
It starts the way these things usually do, with good intentions, mounting pressure, and a system that seems to offer answers faster than humans can generate them. What matters is not that the tools exist, but which ones are trusted, which ones are questioned, and which ones never get the chance to decide anything at all.

A BCBA Under Pressure
Meet Alex, a mid-career BCBA overseeing a growing clinic. Clinical quality is solid. Families are engaged. But operationally, the wheels are wobbling. Prior authorizations are delayed. Intake packets are incomplete. Schedules are behind by weeks. Alex is staying late to clean up documentation and feels the creeping sense that something will slip. Leadership asks a simple question: “Can AI help us get ahead of this?”
Alex knows the answer is “maybe,” which is more dangerous than “no.” The clinic already uses software with AI embedded in it: autocomplete in notes, summary views in dashboards, predictive flags for cancellations. Turning more of that on feels like stepping into a room where the lights are dim and the interfaces float just out of reach: inviting, responsive, opaque.
Alex pilots a few tools.
- A note-drafting assistant.
- A scheduling optimizer.
- A template generator for authorization packets.
At first, it feels like progress. Drafts appear faster. Calendars fill more efficiently. The system hums. And then the first appeal comes back denied.
Sorting the Work: Green, Yellow, and Red AI Patterns
Before we go any further in the story, we need a shared map. Not all AI uses are equal, and pretending they are is how clinics drift into trouble.
Green Patterns (Low Risk, High Return)
These are workflows where AI assists with formatting, drafting, or organizing without making clinical or access decisions.
Examples (assuming redaction, no PHI, or a “walled-garden” system):
- Drafting session note language from clinician-entered bullet points
- Formatting reports to match payer templates
- Summarizing meetings or supervision notes for internal use
- Cleaning up grammar, clarity, or structure in parent communications
Why green works:
- The BCBA remains the decision-maker
- Errors are obvious and easily corrected
- Outputs do not directly change treatment, dosage, or eligibility
- Client data is safe and secure
In Alex’s clinic, green tools reliably cut documentation time by 25–40 percent without increasing error rates. Green AI use is limited to drafting, formatting, summarizing, or organizing human-provided information; it never extends to interpreting, deciding, or making external recommendations. Informed consent from client guardians is obtained, and AI outputs are never final. A human reviews, edits as needed, and signs every document, retaining full responsibility for the outcome. Prompts and raw outputs are logged at a basic level to support quality assurance and later review. Green tools may not expand scope or permissions without explicit re-approval.
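To make the logging expectation concrete, here is a minimal sketch of what “prompts and raw outputs are logged at a basic level” could look like. Everything in it is an illustrative assumption rather than a reference to any particular product: the function name, the scope list, and the simple JSON-lines log format are hypothetical. The point it illustrates is only that scope is checked before the tool is used and that a named human signs off on the result.

```python
# Minimal sketch, assuming a clinic keeps a local QA log of AI-assisted drafting.
# Names and log format are hypothetical, not a specific vendor's API.
import json
from datetime import datetime, timezone

GREEN_ZONE_USES = {"draft", "format", "summarize", "organize"}

def log_green_zone_use(log_path, use_type, prompt, raw_output, reviewer):
    """Record one AI-assisted drafting step for later quality review."""
    # Mirror the rule that green tools may not expand scope without re-approval.
    if use_type not in GREEN_ZONE_USES:
        raise ValueError(f"'{use_type}' is outside the approved green-zone scope")

    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "use_type": use_type,
        "prompt": prompt,                    # should already be redacted / free of PHI
        "raw_output": raw_output,            # the draft only; never the signed document
        "reviewed_and_signed_by": reviewer,  # the human retains full responsibility
    }
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")
```

Even a log this simple gives a supervisor something to spot-check later, which is what turns “we allow green uses” into something reviewable.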
Yellow Patterns (Conditional, Require Guardrails)
Yellow workflows are where most clinics get into trouble, not because they are forbidden, but because they require explicit review structures.
Examples:
- Drafting authorization narratives
- Suggesting schedule optimizations that affect service intensity
- Generating treatment-plan language that references goals or procedures
- Summarizing data trends with recommendations attached
Why yellow is risky:
- Outputs influence access, intensity, or justification
- Errors can be subtle, not obvious
- Drift can accumulate over time rather than appearing immediately
This is where Alex notices something unsettling. Authorization drafts look polished, but the language starts to sound the same across clients. A phrase that worked once is now everywhere, and clinical nuance disappears. It feels efficient, but it also feels brittle.
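One way to catch this kind of drift early is a crude check for phrases that repeat across clients’ drafts. The sketch below is a toy illustration, assuming de-identified draft text; the phrase length and threshold are arbitrary starting points, and a flagged phrase is a cue for human review, not evidence of a problem on its own.

```python
# Toy illustration: surface n-word phrases that recur across many drafts.
# Assumes de-identified text; thresholds are arbitrary starting points.
from collections import Counter
import re

def repeated_phrases(drafts, n=6, min_share=0.5):
    """Return n-word phrases that appear in at least `min_share` of the drafts."""
    phrase_counts = Counter()
    for text in drafts:
        words = re.findall(r"[a-z']+", text.lower())
        # Count each phrase once per draft so one long document can't dominate.
        phrase_counts.update({tuple(words[i:i + n]) for i in range(len(words) - n + 1)})

    cutoff = max(2, round(min_share * len(drafts)))
    return sorted(" ".join(p) for p, count in phrase_counts.items() if count >= cutoff)
```

Running something like this over a month of authorization narratives will not tell Alex what to write, but it can show where templated language is crowding out client-specific detail.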
Red Patterns (Do Not Automate)
Red workflows are those where changing the answer would change treatment, risk, or ethical exposure.
Examples:
Chiron: The AI Literacy Series for ABA Professionals
A weekly newsletter exploring how ABA professionals can develop essential AI literacy skills to ensure ethical and effective practice in a rapidly changing field.