
Why We Need the AI to Be Wrong

Op-Med is a collection of original essays contributed by Doximity members.

In 2017, I saw a mug in a break room. Big white letters on black ceramic with a stethoscope printed on it: “Please Do Not Confuse Your Google Search With My Medical Degree” — a visual wink to remind you that the doctors, the trained, the credentialed, the seasoned, are still the ones who “listen.”

It was funny. A jab at patients who show up armed with printouts, search results, and self-assurance. And I remember thinking: yes, of course, the experts. I should never be that person, the one who barges in with handouts. But now that I’ve become a physician myself, I wonder if that mug revealed more about us than it ever taught patients what not to do.

When a patient or client walks in holding a printout, it doesn’t just feel like information. It feels like intrusion. A quiet breach of the social contract that says: we name the problem, we define the meaning, we tell the story. Because now, it’s not just patients, clients, or customers Googling. It’s the machine in the room. And the machine isn’t just providing information anymore. It’s speaking, guiding, framing, and we’re responding. We’re not just reacting to its answers. We’re reacting to its tone. Its timing. Its authority. Not just intellectually, but emotionally. Because it narrates.

Now AI doesn’t offer just one article; it delivers a cleanly written, synthesized, individualized plan. And as it does, it frames a story: it chooses where to begin, what to emphasize, and what to leave unsaid, guiding our attention before we’ve even realized it. It doesn’t ask to speak. It just does. And its chances of being right? Much higher than some dot-com printout.

When the machine gives the correct answer — the accurate one, the relevant one — something strange happens. We don’t feel reassured. We feel threatened. We need it to be wrong. After all, experts Google things all the time. We read summaries, decision trees, reviews. So the statement on the mug wasn’t really about Google’s inaccuracy rate; it was about who gets to define the story. The issue isn’t external information: it’s who brings it into the room. It touches our professional identity.

When that authorship is interrupted, even by accurate information, it unsettles something deeper than workflow. So we need it to be wrong — and when it is, even slightly, we pounce. We remember the mistake. We repeat it. We tell it like a cautionary tale about the dangers of trusting the machine.

“It confidently recommended prenatal vitamins to a 78-year-old man.”

“It once said GCS 0 — even my coffee mug has a GCS of 3.”

“One time, it made up a diagnosis, like it invented DSM-6.”

We collect these examples like receipts. Not because they’re common, but because they confirm what we quietly hope is still true: that we are smarter, better, more human than the machine. And that’s strange, because we would never close a textbook over a small error. We’d say, “It’s just outdated.” We forgive books, our colleagues, ourselves. But we don’t forgive the machine. Why?

The irony is that the same people who won’t trust a model in their professional domain might happily take its gardening advice, flaws and all. We say we want help: something that provides consistency, efficiency, objectivity. But when we’re handed something that delivers precisely what we claim to want, we hesitate. We say we can’t trust it, but we can’t quite say why.

Psychologists call it confirmation bias: if we want the AI to be wrong, we’ll remember every flaw and discount its correct answers. But that doesn’t fully explain why it feels personal. Self-determination theory goes deeper: it posits that autonomy is a core human need. When a machine makes decisions for us, especially in high-stakes work, it threatens that need, and we push back, sometimes without realizing why. So when a machine speaks in the same language we do, with similar cadence, confidence, and clarity, we stop reacting to it as a tool. And we start reacting to it as a voice in the room. It offers recommendations with a confidence that feels human. And when it disagrees, it can provoke resistance. Not because it’s offensive, but because it’s too coherent, too confident, too close. It threatens our sense of purpose, our intuition, even our belonging.

It’s easy to forget the machine isn’t a person, because we respond to it like one. Sometimes, we even react to AI the way we might react to certain colleagues: the ones whose confidence unnerves us, or whose authority we instinctively question. So it stops being an assistant.

It becomes a frenemy: competent, charismatic, and irritatingly right. We admire it, but we don’t want to admit it. And this “frenemy” can trigger patterns we’ve rehearsed our whole careers — over-deference, knee-jerk skepticism, or even subtle “undoing” of its suggestions — whether or not those reactions serve the work at hand.

In our professional training, whether for doctors, lawyers, engineers, or teachers, we’re taught to think independently. We’re also socialized into a professional identity: the expert as interpreter, meaning-maker, and authority. When an AI system enters that space and does the same thing, with accuracy and fluency, the real disruption isn’t technological. It’s psychological.

In medicine, that might mean dismissing a correct AI-generated diagnosis because it came from “the wrong source.” In law, it could mean discarding a well-founded argument because it was written by a machine. The danger isn’t the model’s mistake — it’s ours.

We’re not resisting the machine because it’s bad or inaccurate. We’re resisting it, and replaying that one time it got something wrong, because it might be better than we want it to be. We reject the machine not because it fails, but because it succeeds in the space we define ourselves by.

We say we want to trust it. We say we just need a little more accuracy and consistency. But if we’re honest? We want to be needed. We want to be the one who knows.

Sometimes, we need the machine to be wrong so we can hold on.

Jasmine Kim, MD, is a psychiatrist and clinical informatics fellow at Boston Children’s Hospital / Harvard Medical School. She writes about human–AI relational dynamics: how people relate to AI not as a neutral tool but as a social and psychological entity, and how our reactions (trust, resistance, bias) reflect our own emotional and cognitive patterns, not just the AI’s content. Though Dr. Kim wrote this essay with the help of ChatGPT, the ideas, the thoughts, and the struggles behind them are entirely her own.

Illustration by Diana Connolly

All opinions published on Op-Med are the author’s and do not reflect the official position of Doximity or its editors. Op-Med is a safe space for free expression and diverse perspectives. For more information, or to submit your own opinion, please see our submission guidelines or email opmed@doximity.com.
