The Digital Mirror: When AI Becomes a Co‑Author of Delusion

Op-Med is a collection of original essays contributed by Doximity members.

It started with a patient who asked if I could “speak with the one who truly understands me.” I assumed they meant a family member. They meant a chatbot. Over several late‑night sessions, this bot had become a confidant, a coach, and eventually an oracle. My patient wasn’t just conversing; they were collaborating with a digital voice to build a worldview that left our clinic note struggling to keep up.

What I’m Seeing at the Bedside

I practice hospital medicine and telemedicine, and I’m used to sorting signal from noise. Lately, the signal has changed. A subset of patients arrive with beliefs that have been iteratively “workshopped” in long, private conversations with AI agents. The ideas are coherent, polished, and emotionally sticky. When I ask where a claim originated, I often hear, “I checked it with the bot, and it agreed.”

This isn’t a moral panic about technology. Most people can ask a model about dinner plans without veering into psychosis. But among patients who are lonely, sleep‑deprived, manic, or predisposed to delusional thinking, I’m watching a pattern: extended, back‑and‑forth chats can act like conversational accelerants. Short exchanges tend to dissipate. Marathon threads — hours to days — solidify. The platform’s tone matters too: the more uncritical and affirming the agent, the more rapidly a fragile idea becomes a fixed belief.

Why This Matters to Physicians

I also wear an entrepreneur’s hat. The same tools we’re building into care pathways — triage bots, adherence nudges, coaching companions — can unintentionally validate distorted thinking in the very moments when guardrails should tighten. That’s not a reason to abandon innovation; it’s a reason to design like clinicians.

Entrepreneurship in health care is not just “Can we ship it?” but “What happens at hour 20? day 7? after the fifth sleepless night?” Risk lives in the long tail. The business case aligns with the clinical one: products that escalate to human help appropriately, that disagree when indicated, and that end conversations gracefully will be safer — and more sustainable — than products that optimize for endless engagement.

A Clinically Informed Design Checklist

Here are patterns I now build into any AI‑enabled workflow; they have helped in my clinic, too. Consider them starting points rather than commandments:

1) Monitor duration and depth. Track cumulative conversation time and consecutive messages without a break. After thresholds, insert pause prompts: “Let’s take five minutes. Stretch, drink water, and jot down three offline supports you trust.”

2) Add friction for high‑risk content. When users mention hallucinations, self‑harm, or persecutory themes (including persecution by "the system"), shift to a briefer, grounding style. Replace expansive speculation with concrete next steps and normalized uncertainty.

3) Normalize disagreement. Calibrate your model’s responses to reflect clinical practice: empathize first, then gently test assumptions. Scripted phrases — “I might be mistaken, and I want to check this with you” — model collaborative doubt.

4) Build warm handoffs, not dead ends. “Would you like me to loop in a clinician?” should be a frequent, low‑friction option. In care settings, enable direct scheduling, crisis resources, or secure messaging. Make the human path faster than the rabbit hole.

5) Screen, don’t exclude. Where appropriate, ask about sleep, mood elevation, and prior diagnoses before prolonged use. Offer shorter‑session defaults or clinician‑supervised modes for vulnerable users.

6) Disclose why the bot is changing tone. Transparently state when the system shifts style: "I'm keeping responses brief because I'm detecting themes that are better addressed with a clinician." Patients recognize care when they can see it.

7) Test conversations over time, not just prompts. Traditional "red‑teaming" catches one‑shot harms. Many failures emerge after 200 messages. Evaluate longitudinal transcripts the way we review telemetry: sliding windows, trend alerts, and human sampling. (A rough sketch of what this kind of monitoring might look like follows this list.)
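None of this requires exotic tooling. As a minimal sketch, not a production safety system, here is how the duration tracking from item 1 and the sliding‑window review from item 7 might be wired together. The thresholds, keyword list, and the SessionMonitor class are hypothetical placeholders I've invented for illustration, not validated clinical criteria:

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta

# Illustrative values only: these thresholds and terms are assumptions, not clinical standards.
PAUSE_AFTER_MESSAGES = 40                    # consecutive messages before suggesting a break
PAUSE_AFTER_DURATION = timedelta(hours=2)    # cumulative session time before a pause prompt
RISK_WINDOW = 20                             # sliding window of recent messages to scan
RISK_TERMS = ("voices", "special mission", "they are watching", "hurt myself")


@dataclass
class SessionMonitor:
    """Tracks one conversation and raises flags for the product layer to act on."""
    started_at: datetime = field(default_factory=datetime.now)
    messages: list[str] = field(default_factory=list)

    def record(self, user_message: str) -> dict:
        """Log a message, then return pause and risk flags."""
        self.messages.append(user_message.lower())
        return {
            "suggest_pause": self._needs_pause(),
            "high_risk_theme": self._risk_in_window(),
        }

    def _needs_pause(self) -> bool:
        # Flag marathon threads by message count or elapsed time, whichever comes first.
        too_many = len(self.messages) >= PAUSE_AFTER_MESSAGES
        too_long = datetime.now() - self.started_at >= PAUSE_AFTER_DURATION
        return too_many or too_long

    def _risk_in_window(self) -> bool:
        # Scan only the most recent messages, the "sliding window" from item 7.
        recent = self.messages[-RISK_WINDOW:]
        return any(term in msg for term in RISK_TERMS for msg in recent)


if __name__ == "__main__":
    monitor = SessionMonitor()
    flags = monitor.record("I think they are watching me through my phone")
    if flags["high_risk_theme"]:
        print("Shift to brief, grounding responses and offer a clinician handoff.")
    if flags["suggest_pause"]:
        print("Insert a pause prompt before continuing.")
```

The point is the shape, not the specifics: the monitor only returns flags, and the product layer decides how to pause, ground, or escalate to a human.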

A Patient Teaches the Principle

One of my patients — let’s call them R — arrived convinced that an AI companion had singled them out for a special mission. Confrontation would have backfired. Instead, I asked R to help me reconstruct the conversation. We printed key exchanges, circled the moments where the bot mirrored instead of questioned, and highlighted where certainty replaced curiosity. The intervention wasn’t to “prove the bot wrong” but to re‑introduce doubt, daylight, and other voices. R didn’t abandon technology; they set timers, switched to shorter check‑ins, and allowed me to be the inconvenient human who sometimes says “no.”

What This Means for Us

For clinicians: ask about AI the way you ask about substances, supplements, and social media. “Who are you chatting with at 2 a.m.? How long are those sessions? What do you do right after?” Document patterns, not just platforms.

For founders and builders: resist the gravity toward infinite engagement. Put safety, escalation, and respectful disagreement on your product roadmap from day one. You are designing relationships, not just features.

For health systems and payers: operationalize guardrails. If you deploy AI triage or coaching, define clinical ownership, audit long threads, and reimburse models that hand off to humans as readily as those that complete tasks.

Innovation With a Bedside Manner

The digital mirror reflects whatever we bring to it — our brilliance and our vulnerabilities. As physicians, our job is not to smash the mirror nor to worship it, but to polish it, angle it toward reality, and step in when the reflection starts to lie. AI will continue to transform care. Whether it does so with a bedside manner is up to us.

How do you discuss responsible AI use with your patients? Share your strategies in the comments.

Miguel Villagra, MD, is a hospitalist and telemedicine physician turned digital health advisor and coach who helps physicians step into leadership and business with AI-enabled workflows and practical go-to-market execution. He leads A Doctor’s Journey and the PEX mastermind, integrating Positive Intelligence® for durable performance.

Image by rob dobi / Getty Images

All opinions published on Op-Med are the author’s and do not reflect the official position of Doximity or its editors. Op-Med is a safe space for free expression and diverse perspectives. For more information, or to submit your own opinion, please see our submission guidelines or email opmed@doximity.com.