
If You Had AI in Medical School, Would You Trust It?

Op-Med is a collection of original essays contributed by Doximity members.

I’m often surprised that ChatGPT is only two and a half years old. In the span of my time in medical school, it has changed the world (and will continue to do so), but we still have to remember that it’s a child. While I do consider myself an early adopter of technology, I’m also quite skeptical at heart. However, even in the last year, I’ve seen how artificial intelligence is quietly reshaping how medical students learn. For me, it’s become something like a study partner — one that I feel unafraid to ask a “dumb question” of. It fills a valuable niche: not a source of truth, but a place to start when my understanding of a topic still feels loose, wobbly, and unanchored.

One of the oddities of medical education is how quickly you are expected to run before you’ve learned to walk. Professors often speak in dense, high-level language, layering one complex mechanism atop another, trusting that we’ve internalized the fundamentals. Sometimes we have. But more often than not, I find myself understanding just enough to follow along — without the concepts really settling into place.

That’s when I find myself typing a question into ChatGPT. It’s the kind of question I’d be too embarrassed to ask out loud: “Why is vitamin D important for calcium again?” Not because I’ve never learned it, but because I’ve forgotten the details, or never quite saw how the puzzle pieces fit together. The answer I get is usually pitched at just the right level — neither rudimentary nor so complex that it’s useless. This is how I use AI to study. It helps me connect the dots.

But I don’t stop there. I’ll cross-check it with UpToDate, a simple Google search, or bring it to small group, where one of our professors might say something like, “While that’s true on paper, what I actually do in clinic is ...” That human layer — the stories, the clinical shortcuts, the nuance — is what gives the topic weight. AI helps me create a sketch; my professors paint in the shadows and texture.

There is something oddly comforting about studying with AI. It’s always there. It doesn’t judge. It doesn’t assume I already know what I probably should. On long afternoons, when I’m toggling between Anki, lecture slides, and a cup of coffee, it’s often the nudge I need to get unstuck. It’s great for a quick refresher on a topic I haven’t thought about in a while, a way to recall the basics rather than to learn something new, where I wouldn’t have the knowledge to question its responses.

Even though I use AI, I still don’t trust it in the way I trust the people I learn from. It’s not a mentor. It just responds literally and (as of now) only in block text. It doesn’t pause to figure out what I already know and where my gaps in knowledge are. Professors do that. Classmates do that. There’s a kind of learning that happens only when someone challenges you, interrupts your shortcuts, forces you to justify your answer, and asks the right questions.

That’s why I love small group. Someone brings up a patient they saw last week. Someone else asks how to pick the right medication within a drug class, mentions an indication they’ve seen other physicians use, or raises a nonclinical reality, such as an exorbitant price, that changes how we treat someone in the real world. It’s messy and human and often frustrating, but in that mess, real learning happens. The “cleanliness” of ChatGPT’s responses and their apparent confidence make it easy to feel that an answer is complete when it isn’t.

I’ve also noticed how easily people — myself included — can slip into trusting AI by default. I’ve seen friends ask GPT questions not just about medicine, but about anything that pops into their head. Once, we were having a conversation about birds and where they tend to live, and before anyone reached for Google, someone just typed it into ChatGPT. The answer came back quickly — and confidently. I was skeptical, so I searched for it myself. To my surprise, it was accurate. But that small moment stuck with me: how natural it felt to trust AI first and verify later, if at all.

That moment haunts me a little when I think about how I use AI to study. I haven’t yet caught it making that kind of mistake when answering medical questions, but the possibility nags at me. The danger isn’t necessarily in getting a wildly wrong answer — it’s in getting one that’s almost right. Believable. Plausible. But just inaccurate enough to steer you off course. And I’m scared that I won’t catch it. AI is great for helping me break down the basics, but when it comes to nuance — to the little details that make up clinical decision-making — it starts to fray at the edges.

I suspect AI will always be there, evolving in the background of my career. I can imagine calling on it as a resident to quickly scan a new paper or generate a differential for a condition I haven’t thought about since studying for Step 1. But medicine is about caring for other people. AI can’t replace that.

How do you use AI? Share in the comments.

Sahil Nawab is a medical student at UMass Chan Medical School in Worcester, MA. He is an avid aviation enthusiast and private pilot who enjoys exploring the world from many unique perspectives. Sahil is a 2024-2025 Doximity Op-Med Fellow. 

Image by rob dobi / Getty

All opinions published on Op-Med are the author’s and do not reflect the official position of Doximity or its editors. Op-Med is a safe space for free expression and diverse perspectives. For more information, or to submit your own opinion, please see our submission guidelines or email opmed@doximity.com.
