The hype cycle leaves this part out: getting AI-generated educational content right is a grind.
It doesn’t come out polished. It doesn’t come out clinically accurate on the first pass. It doesn’t come out tailored to your learners without serious iteration. And it absolutely will not tell you when it’s wrong.
That last part is the one that matters most.
What the Process Actually Looks Like
Every piece of educational content I produce with AI goes through multiple review cycles.
Clinical facts get verified against current practice guidelines, meaning the actual source documents, not the AI's training data. This step is not optional.

AI systems have training cutoffs. Guidelines update. Drug indications change. Dosing recommendations get revised. The 2023 AF guideline changed recommendations that had been stable for a decade. An AI trained before those changes doesn't know that. You have to know that. If you're producing content that students or clinicians will rely on, you're responsible for the accuracy regardless of where the first draft came from.
Language gets calibrated for the specific audience. A BSN student learning hemodynamics for the first time needs different framing than a post-master’s ACNP candidate who has clinical experience but is building on an incomplete foundation. The AI doesn’t know which one you have. You do. You have to make that call on every piece of content, and you have to catch when the generated output defaulted to the wrong level.
Images are their own review cycle. They take careful prompting, frequent regeneration, and a willingness to throw out most of what comes back. Clinical accuracy in an image matters as much as clinical accuracy in text.
Customizing a single lesson for different audiences isn’t one prompt. It’s a workflow. It takes domain expertise to know when something is wrong, pedagogical judgment to know when something is right but poorly framed, and time to iterate from the first draft to something you’d actually put in front of learners.
What It Produces
The output is content I could not have produced alone.
Not because the knowledge wasn’t there. I have 30 years in cardiology and have been teaching graduate nursing for most of that time. The clinical content I can produce is reasonably solid. I might have some knowledge gaps. (After 30 years, I’m still not entirely sure what all those numbers on the LFT panel mean. That’s a joke. Mostly.)
The constraint was never knowledge. It was hours.
A branching clinical case study that I would have spent four hours building from scratch can now be scaffolded in an hour and refined from there. A set of spaced retrieval questions covering three weeks of pharmacology content that would have required an afternoon of writing can be generated, reviewed for clinical accuracy, and refined in a fraction of that time.
That’s not a shortcut. It’s a compression of the work that didn’t require my specific expertise. The work that does require it (verifying clinical accuracy, calibrating level, making pedagogical decisions, providing feedback that changes how students think) didn’t go anywhere. I’m still doing it. I have more time to do it because the other work got faster.
The QA Process Is the Work
Here’s the thing about AI-assisted content creation that I think gets missed: the QA process isn’t a step at the end. It’s the work.
Generating a first draft is not creating content. It’s creating a starting point. The first draft is something to react to, not something to use. The value is that reacting to a draft is faster and cognitively different from generating from nothing. It’s easier to catch a framing error in existing text than to notice its absence when you’re building from scratch.
For anyone using AI to produce educational content: what does your QA process look like? How many passes before you trust the output? I’m curious whether others are finding the same thing: the tool is genuinely useful, but it only saves time if you’re willing to do the verification work that makes the output trustworthy.
The answer to “how many passes does it take” is: more than you’d expect, and fewer than building from scratch. That’s the honest answer. It’s also enough to change what’s possible.