Talking to Machines — The Art and Science of Prompting GenAI Tools
David J. Cox PhD MSB BCBA-D, Ryan L. O'Donnell MS BCBA
The Portal Opens Again. The world shimmers into focus. A console appears at the center of the map—a glowing interface, waiting for your input. The next level isn’t about fighting enemies or dodging traps. It’s about the words you choose.

Welcome to the ninth issue of Project Chiron.
We’ve explored AI’s foundations, risks, and ethics. Now we arrive at the moment of interaction: how do you actually talk to the machine?
Because whether you’re drafting notes, analyzing data, or scripting training modules, the quality of your interaction with AI depends on how you shape its inputs. And here’s the kicker: prompting isn’t magic. It’s shaping, applied to your verbal behavior by a different kind of (artificial) organism.
Why Prompting Feels Weird (Blame Google)
For 20 years, we’ve been trained by Google search boxes. Minimal words. Sparse commands. “ABA assessment template pdf.”
That economy of input works well for search engines that match keywords. But with generative AI (GenAI) tools like ChatGPT, Claude, or Gemini, it’s the opposite. More context, not less, yields better results.
In other words:
- Google rewards brevity.
- GenAI rewards clarity, detail, and iteration.
This explains why many first-time users complain: “It just gave me fluff.” That’s not the system failing—it’s your learning history showing.
Prompting GenAI as a Behavioral Act
Think of prompting GenAI systems as a shaping procedure:
- Your first prompt = baseline behavior.
- The AI’s response = consequence.
- Your edits, refinements, and clarifications = successive approximations toward YOUR target output.
Prompts aren’t spells spoken to an oracle. They’re operants under stimulus control. Each new iteration narrows the range of likely responses until you get what you want.
Just like with learners, reinforcement history matters. GenAI tools adapt to how you phrase, structure, and correct your inputs.
Best Practices for Effective Prompts
Here are strategies that reliably improve outcomes when talking to GenAI systems. Embed them in every prompt you write:
- Set the Role:
  - Frame the system for the upcoming task: “You are a BCBA supervising early intervention cases” or “You are a BCBA reviewing reinforcement schedules.”
  - Role assignment acts like an establishing operation—it increases the likelihood that specific textual responses occur by narrowing the statistical landscape to the words and language in the training data most associated with the role you provide.
- Define the Task:
  - Say exactly what you want: “Summarize this SOAP note in plain language for parents.” Don’t just say “summarize.”
  - Specificity functions like a clear SD. The more specific the prompt, the more precise the response.
- Provide Context:
  - Add the variables that matter: age, setting, goals, prior history.
  - Vague antecedents lead to vague outputs.
  - If it would matter to you when you’re doing this work, it will matter to the GenAI system.
- Constrain the Output:
  - Specify the exact output format you want: bullet points, tables, reading-level restrictions.
  - This is like programming a data sheet—form dictates function.
- Iterate:
  - Treat each prompt-response cycle as a trial. Modify based on what you get.
  - Rewards (positive feedback, acceptance) versus neutrality (ignoring, restating) shape the machine’s behavior in session.
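The five practices above can be sketched as a small prompt-assembly helper. This is a hypothetical illustration, not any tool’s API: the function name and fields are ours, and the assembled string would simply be pasted into (or sent to) whatever GenAI system you use.

```python
def build_prompt(role, task, context, constraints):
    """Assemble a structured GenAI prompt from the four components:
    role, task, context, and output constraints."""
    sections = [
        f"You are {role}.",  # Set the Role
        f"Task: {task}",     # Define the Task
        # Provide Context: one bullet per relevant variable
        "Context:\n" + "\n".join(f"- {c}" for c in context),
        # Constrain the Output: format and reading-level restrictions
        "Output constraints:\n" + "\n".join(f"- {c}" for c in constraints),
    ]
    return "\n\n".join(sections)

# Example: a parent-facing SOAP-note summary request (illustrative details)
prompt = build_prompt(
    role="a BCBA supervising early intervention cases",
    task="Summarize this SOAP note in plain language for parents.",
    context=["Client age: 4", "Setting: home-based services",
             "Goal: functional communication training"],
    constraints=["Bullet points only", "6th-grade reading level"],
)
print(prompt)
```

Notice that iteration is not in the function. That is deliberate: the first assembled prompt is your baseline, and each revision of the role, context, or constraints is a successive approximation toward the output you want.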
Caution at the Console: Adding rich context makes a GenAI system’s responses sharper—but in healthcare and education, the very details that improve outputs (client history, diagnoses, school data) can also trigger HIPAA or FERPA violations if the platform isn’t secured for protected information. In other words: context is power, but only if the arena you’re playing in is safe. Be aware.
Beyond Tricks: A Systematic Approach to Prompt Engineering
Too often, “prompting” is portrayed as a clever hack: type the magic words, get the magic result. But real literacy means treating prompting as a systematic practice, not a one-off parlor trick.
In practice, that means treating the moves above (role, task, context, constraints, iteration) as a repeatable protocol you refine over time, not a trick you deploy once.
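One way to make iteration systematic rather than ad hoc is to record each prompt-response cycle the way you would record trial data. The sketch below is a hypothetical illustration (the class names are ours, not a feature of any GenAI platform): each trial logs the prompt, a note on what the response did or didn’t do, and whether you accepted it.

```python
from dataclasses import dataclass, field

@dataclass
class PromptTrial:
    prompt: str          # the exact prompt used this cycle
    response_note: str   # what the output did well or missed
    accepted: bool = False

@dataclass
class PromptLog:
    trials: list = field(default_factory=list)

    def record(self, prompt, response_note, accepted=False):
        """Log one prompt-response cycle as a trial."""
        self.trials.append(PromptTrial(prompt, response_note, accepted))

    def best(self):
        """Return the most recent accepted prompt, if any."""
        accepted = [t for t in self.trials if t.accepted]
        return accepted[-1].prompt if accepted else None

# Two successive approximations toward a usable prompt
log = PromptLog()
log.record("Summarize this note.", "Too vague; generic fluff.")
log.record("Summarize this SOAP note in plain language for parents, "
           "as bullet points.", "Clear and parent-friendly.", accepted=True)
```

A log like this turns prompting into data you can inspect: you can see which approximations moved the output closer to target, and reuse the accepted prompt next time instead of starting from baseline.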
Chiron: The AI Literacy Series for ABA Professionals
A weekly newsletter exploring how ABA professionals can develop essential AI literacy skills to ensure ethical and effective practice in a rapidly changing field.