Every NP program in the country is currently having a version of the same meeting. Someone from the dean’s office says the curriculum needs to address AI. A faculty committee forms. Six months later, a one-hour module gets added to the informatics course. The box gets checked.
That’s not AI education. That’s AI acknowledgment.
The Actual Problem
Nursing faculty are being asked to teach AI literacy without being taught it themselves. Most of the faculty designing these modules learned informatics in an era when the big question was whether EHR documentation was changing clinical reasoning. They're not equipped to teach students about large language models, clinical decision support systems, or the epistemological problem of AI-generated differential diagnoses, because nobody taught them.
This is not a criticism of nursing faculty. It’s a structural problem. The technology moved faster than the pipeline for training educators.
The result is that students are graduating with a vague awareness that AI exists and a practiced ability to spot AI-generated text. Neither skill is clinically useful.
What Clinical AI Literacy Actually Requires
A nurse practitioner working in 2026 needs to be able to do three things with AI tools, and their program almost certainly taught none of them.
Evaluate a clinical decision support recommendation. When the EHR flags a potential drug interaction or suggests a diagnosis, the NP needs to know what to do with that. Not just click through it, not just accept it, but evaluate whether the recommendation is based on current evidence, whether it applies to this patient, and whether the algorithm's training data is likely to have included patients like this one. That requires an understanding of how clinical AI systems are built, and most graduates don't have it.
Use AI tools without offloading clinical judgment. Large language models are extraordinarily useful for synthesizing evidence, drafting documentation, and generating differential diagnoses as a starting point. They are not a replacement for physical examination findings, clinical intuition built from years of practice, or knowledge of what the patient said when you asked the right question. Students need to learn where the tool ends and the clinician begins. That line is not intuitive.
Recognize AI-generated errors. LLMs hallucinate. Clinical AI systems can be trained on data that doesn't generalize. Decision support tools can be wrong in ways that look right: a fabricated citation is formatted exactly like a real one, and a risk score built on one hospital's population can be confidently miscalibrated at another. A clinician who understands the failure modes of these tools is far safer than one who treats AI output as authoritative.
What Faculty Need First
Before any of this can be taught well, faculty need it themselves. Not a workshop on ChatGPT. Not a webinar on responsible AI use. Real, working familiarity with clinical AI tools — what they can and can’t do, where they fail, how to evaluate their outputs.
The programs that figure this out first will produce graduates who are genuinely prepared for the practice environment they’re entering. The ones that don’t will produce graduates who either over-trust AI tools or reflexively distrust them, both of which lead to worse patient care.
One module in informatics is not enough. Neither is a policy statement about academic integrity.
The faculty need education first. The curriculum follows from that.