When AI Breaks Trust: Lessons from the LAUSD Hiring Scandal
David J. Cox, PhD, MSB, BCBA-D; Ryan L. O'Donnell, MS, BCBA

“You don’t just build AI. You build trust.”
Meet Ed
In March 2024, the Los Angeles Unified School District (LAUSD) announced the bold, high-profile launch of an AI-powered student assistant called Ed: a cheerful animated chatbot shaped like a sun, built to nudge students, connect them to services, and serve as a central hub for school engagement.
By June, it was offline.
By July, the company behind it had furloughed staff and lost its CEO.
And by August, the public wanted answers.
Today’s issue explores what went wrong with Ed, why it matters beyond education, and what behavioral health organizations should learn before launching (or buying) their next AI-powered tool.
A Shiny Launch, a Sudden Crash
The promise was sweeping: A $6 million AI assistant developed by startup AllHere Education, integrated into LAUSD’s digital ecosystem to personalize engagement and support. Ed would unify the district’s fragmented systems into a single conversational hub, provide personalized academic nudges based on each student’s data, connect families to services in multiple languages, and keep students on track with proactive reminders. District leaders spoke of offering each learner a “school of one,” where academic support, social-emotional resources, and parent communication converged in a single, always-available digital assistant. Ed was marketed not just as a tool, but as a paradigm shift in how the second-largest school district in the nation could deliver individualized learning and engagement at scale.
What unfolded revealed the gap between promise and practice. Early users reported that many outputs were generic rather than personalized, and the breadth of goals (e.g., tutoring, attendance, counseling, logistics) meant Ed struggled to excel in any one domain. Concerns surfaced around student data privacy, while the startup contracted to build Ed faced financial collapse, leading the district to “unplug” the system just months after launch. Analysts have since argued that LAUSD scaled too quickly without clearly defining the initial problem to solve. The episode illustrates a larger pattern: AI is often promoted as transformative, but without stable infrastructure, rigorous safeguards, and careful scope, even well-intentioned projects risk overpromising and underdelivering.
The result:
- Company collapse within 3 months
- Loss of human moderators required to supervise AI output
- Suspension of the chatbot due to supervision and safety risks
- Data privacy concerns, whistleblower complaints, and district investigations
What Went Wrong (and It’s Not Just AI)
Overpromised, Underbuilt: AllHere’s chatbot aimed to stitch together dozens of LAUSD data systems to provide seamless, personalized support. But data integration across large public systems is a specialized, high-stakes domain, and AllHere lacked experience with that level of technical complexity.
- Lesson: Ambition ≠ capacity. Building “AI-powered tools” is not just about plugging in ChatGPT—it’s about managing, protecting, and interpreting data responsibly across systems.
AI Without a Human in the Loop: When AllHere furloughed its team, Ed couldn’t operate safely. Why? Because human moderators were needed to check what the AI said. Without them, LAUSD suspended the chatbot immediately.
- Lesson: AI tools used with clients, families, or the public must have real-time evaluation frameworks with humans in the loop—not just a vague “review system.” If an AI fails without humans, it wasn’t safe to begin with.
Data Risks and Missing Transparency: A whistleblower claimed AllHere mishandled student records. Meanwhile, LAUSD was already facing separate cyberattacks, raising fears that AI initiatives might further jeopardize sensitive data.
- Lesson: Data governance must come before AI deployment. Behavioral health organizations can’t afford to “plug in” AI without knowing where data goes, who sees it, and how it’s stored.
No Stakeholder Buy-In: Parents, teachers, and unions weren’t consulted. Many families didn’t even understand what Ed was. When the system failed, their trust went with it.
- Lesson: If people don’t understand or trust your AI tool, it will fail, even if the tech works. Engagement is not optional. Build trust first.
Misaligned Priorities: Educators and parents pushed back hard: why spend millions on a chatbot when literacy rates were falling, classrooms were overcrowded, and basic needs were unmet?
- Lesson: AI should serve real problems, not optics. If your AI project doesn’t meet a clear, behaviorally meaningful need, it will be perceived (and resented) as a distraction.
Key Takeaways for Behavioral Health Organizations
- Vet Vendors for Data and Domain Expertise: Many AI vendors are “thin wrappers” (see: Behind the Wrapper: What You Need to Know About LLM-Powered Tools) with little experience in your field. Ask about their data architecture, compliance practices, and behavioral relevance. You don’t want a chatbot trained on general internet data managing ABA care plans.
- Start Small, Then Scale: Don’t bet the house on a multimillion-dollar system-wide rollout. Start with small, staff-led pilots. Let BCBAs and frontline staff evaluate tools. Learn before you launch.
- Get Your Data House in Order First: If your organization’s data systems are messy, fragmented, or insecure, AI will amplify those weaknesses. Focus on cleaning, organizing, and protecting data before building intelligent tools on top.
- Ensure Human Oversight at Every Level: Never allow unsupervised automation in clinical, billing, or documentation workflows. Design “human-in-the-loop” protocols for every AI feature, including review, override, and audit trails (a minimal sketch of that pattern follows this list).
- Prioritize Consent and Communication: Clients, caregivers, and staff should understand what an AI tool does, how it works, and how their data is used. No hidden algorithms. No implied consent. Transparency is a minimum requirement, not a bonus feature.
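To make the oversight takeaway concrete, here is a minimal, hypothetical sketch (in Python) of what a human-in-the-loop gate can look like in software. Every name in it (the HumanReviewGate class, the reviewer and action fields, the example session note) is invented for illustration rather than drawn from any specific product. The pattern that matters: AI-generated text sits in a pending queue, a named human approves, edits, or rejects it, and every decision is written to an audit trail you can produce later.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional


@dataclass
class AIDraft:
    """An AI-generated draft (e.g., a session-note summary) awaiting human review."""
    draft_id: str
    generated_text: str
    model_name: str  # which model or vendor produced the output
    created_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))


@dataclass
class ReviewDecision:
    """One human decision about one draft; these records form the audit trail."""
    draft_id: str
    reviewer: str              # a named, credentialed human, never "system"
    action: str                # "approve", "edit", or "reject"
    final_text: Optional[str]  # what is actually released, if anything
    rationale: str
    decided_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))


class HumanReviewGate:
    """Holds AI drafts until a human acts on them and logs every decision."""

    def __init__(self) -> None:
        self._pending: dict[str, AIDraft] = {}
        self.audit_trail: list[ReviewDecision] = []

    def submit_draft(self, draft: AIDraft) -> None:
        # Drafts are parked here; nothing auto-releases them downstream.
        self._pending[draft.draft_id] = draft

    def review(self, draft_id: str, reviewer: str, action: str,
               edited_text: Optional[str] = None, rationale: str = "") -> ReviewDecision:
        draft = self._pending.pop(draft_id)  # raises KeyError if the draft is unknown
        if action == "approve":
            final = draft.generated_text
        elif action == "edit":
            final = edited_text              # the human override replaces the AI text
        else:                                # "reject": nothing is released at all
            final = None
        decision = ReviewDecision(draft_id, reviewer, action, final, rationale)
        self.audit_trail.append(decision)    # every decision lands in the audit trail
        return decision


# Usage: the AI output moves forward only after a BCBA signs off or overrides it.
gate = HumanReviewGate()
gate.submit_draft(AIDraft("note-001", "Client met 4 of 5 targets today.", "vendor-model-x"))
decision = gate.review(
    "note-001",
    reviewer="J. Smith, BCBA",
    action="edit",
    edited_text="Client met 3 of 5 targets; see trial-by-trial data.",
    rationale="AI miscounted mastered targets.",
)
print(decision.action, "->", decision.final_text)
```

The key design choice is that no code path releases a draft without a reviewer’s action. If the humans disappear, the output simply stops, which is effectively what LAUSD had to do by hand once AllHere’s moderators were furloughed.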
Questions Every Behavioral Health Org Should Ask Before Launching AI
- Who built the tool, and do they understand our clinical and operational context?
- What data is this AI trained on? Does it include populations like ours?
- How is the data stored, encrypted, and audited?
- What happens if the vendor collapses or the tool breaks?
- Can we explain every output to a caregiver, clearly and honestly?
- Is this solving a real need, or is it just adding complexity?
- Who can override the AI’s outputs, and how?
- Have our frontline staff and clients been consulted before rollout?
- What is the clear benefit for clients, not just administrators?
- If this tool failed publicly, would we be proud of how we deployed it?
Final Thought: Avoiding the Ed Effect
LAUSD’s Ed chatbot wasn’t just a technical failure. It was a trust failure, a governance failure, and a reminder that AI is only as good as the humans implementing it.
As one LA parent put it: “We don’t want Ed. We want smaller class sizes and happy teachers.”
Your clients, too, may want fewer buzzwords and more impact.
AI can absolutely help. But only if we build it with competence, with care, and with clarity.
Until next time,
—The Project Chiron Team
Next Issue: From IEPs to Intake Notes: AI in the Wild
🧭 Signals in the Noise
Each week, we’re curating a short list of articles, research, and media that sharpen your understanding of AI's impact on behavior analysis, ethics, and model-based decision-making. No hype—just signal.
- ChatGPT versus clinician responses to questions in ABA: Preference, identification, and level of agreement
- Tristan Harris on The Diary of a CEO
- Google's SynthID: A tool to watermark and identify content generated through AI
- SynthID: A Technical Deep Dive into Google’s AI Watermarking Technology
- "Smart Rounding" - Instacart’s AI-Enabled Pricing Experiments May Be Inflating Your Grocery Bill, CR and Groundwork Collaborative Investigation Finds
- OpenAI, Anthropic, and Block join new Linux Foundation effort to standardize the AI agent era
Chiron: The AI Literacy Series for ABA Professionals
A weekly newsletter exploring how ABA professionals can develop essential AI literacy skills to ensure ethical and effective practice in a rapidly changing field.