Recently, I bought a children’s book for my 4-year-old granddaughter. Tucked beneath the copyright notice was a sentence I’d never seen before: “No part of this book may be used or reproduced in any manner for the purpose of training artificial intelligence technologies or systems.”
It wasn’t a medical text. It wasn’t a journal article or a clinical guideline. It was a Dr. Seuss book. That single sentence popped more than the story itself. And not in a good way.
The warning clearly reflects growing anxiety about large-scale AI scraping and intellectual property. Publishers have reason to worry about how creative work is absorbed, reused, and monetized by machines that don’t ask permission. But encountering that language in a children’s book made me realize something deeper was at stake — something less legal and more educational.
Because this isn’t just about protecting content. It’s about how learning happens.
Medicine has always been a cumulative discipline. We learn by reading what came before us, by hearing stories retold on rounds, by watching mentors reason out loud, and by borrowing language until it becomes our own. Knowledge moves forward because it’s shared, reshaped, and reused. That has always been true for humans. Now it’s becoming true for machines.
Artificial intelligence is no longer a distant abstraction in medicine. Clinicians already use AI to summarize charts, generate differential diagnoses, interpret images, draft patient instructions, and support trainees who are still finding their clinical footing. Whether we like it or not, machines are becoming part of the learning ecosystem.
That’s what makes these new prohibitory statements different from traditional copyright notices. They aren’t just legal guardrails. They shape what machines are allowed to read, and in doing so, they shape what clinicians will one day learn with those machines.
From an educational standpoint, this matters more than we may realize. AI systems trained on incomplete or selectively restricted material risk developing blind spots. If landmark reviews, reflective narrative discussions, or ethically complex case analyses are excluded, what remains may be thinner, flatter, and more procedural. The result isn’t just a less informed machine — it’s a shallower learning partner.
And that has downstream consequences. An AI that has absorbed only checklists and billing-friendly language may be adequate for rote tasks, but it won’t model uncertainty well. It won’t reflect moral tension. It won’t recognize the gray zones where medicine actually lives. Those are precisely the places where trainees and patients need the most help.
There’s also an equity issue hiding in plain sight.
Elite institutions and proprietary vendors may negotiate access to restricted materials, while freely available AI tools and the underresourced learners who rely on them are left with less complete knowledge bases. We risk recreating an old hierarchy in a new form: not who has the best teachers, but whose AI has access to the best books.
For many trainees, especially in overstretched health systems, AI is becoming a learning support. When access to high-quality training data is uneven, differences in confidence, depth of reasoning, and clinical judgment are likely to widen.
To be clear, the impulse behind these warnings isn’t malicious. Many authors worry, rightly, that their work will be flattened, stripped of attribution, or repurposed without consent. AI does not read with empathy. It does not understand intention. It patterns language without experiencing its weight.
But protecting medicine’s human core by walling off its stories may backfire.
Medicine has never learned well from summaries alone. We learn through repetition, metaphor, rhythm, and surprise. We learn from stories we don’t fully understand the first time we hear them. We learn by sitting with ambiguity before we know what to do with it.
As a medical student, I was told something simple and enduring: if something is worth learning, you’ll hear it more than once. Medicine repeats what matters — on rounds, in lectures, in stories retold — until meaning finally sinks in.
Which brings me back to Dr. Seuss.
Dr. Seuss was never training a machine. He was training minds: through rhyme, absurdity, misdirection, and joy. He was teaching pattern recognition before children knew what patterns were. He was preparing readers to tolerate nonsense long enough for meaning to emerge.
Those are the same cognitive muscles physicians rely on every day: trained through repetition, tested by uncertainty, and strengthened only by use. They are at work when a medical student hears a strange constellation of symptoms and doesn’t yet know what to make of them, when a resident senses something is wrong but can’t articulate why, and when an attending pauses, revises a story, and says, “Let’s think about this another way.”

When we begin treating even children’s books as intellectual territory that must be defended against thinking systems, we risk confusing protection with paralysis. We may succeed in limiting what machines can ingest, but at the cost of narrowing the very learning traditions we’re trying to preserve.
Medicine has always depended on open circulation: of ideas, of failures, of stories told before they’re fully understood. If AI is going to participate meaningfully in medical education, and it already is, it needs access to that full tradition, not just the safest or most monetizable fragments.
Medicine cannot afford to muzzle its own learning process in the name of control. If we do, we won’t just be teaching machines less medicine. We’ll be teaching ourselves less, too.
Arthur Lazarus, MD, MBA, is a former Doximity Fellow, a member of the editorial board of the American Association for Physician Leadership, and an adjunct professor of psychiatry at the Lewis Katz School of Medicine at Temple University in Philadelphia. He is the author of numerous books on narrative medicine and the fictional series Real Medicine, Unreal Stories. His latest book, a novel, is Against the Tide: A Doctor’s Battle for an Undocumented Patient.