
What AI-induced Psychosis is Teaching Us About the Future of Mental Health

Op-Med is a collection of original essays contributed by Doximity members.

As an inpatient psychiatrist, I often work with patients who have various psychotic symptoms. It’s not uncommon for patients to report concerns about being tracked, manipulated, and monitored through their phones and home devices. These fears aren’t new to psychiatry — delusions involving technology are common among people with psychotic disorders. However, with technological advancement, particularly the rapid development of artificial intelligence (AI), the content of paranoid delusions increasingly mirrors technologies that are actually available to the public.

We’re entering a new era of psychiatric dilemma — one where the line between technological possibility and clinical delusion is becoming increasingly difficult to discern. As AI tools continue to proliferate, we’re seeing a rise in what some are calling “AI-induced psychosis” — a phenomenon where a person’s delusions are shaped or amplified by interactions with unregulated chatbot systems. Though not an official diagnosis, the term “AI-induced psychosis” is gaining traction and reflects the growing overlap between digital realities and delusional thinking.

I am increasingly seeing the circumstances in which this condition takes shape: vulnerable individuals turning to artificial intelligence for comfort, meaning, or guidance in moments of distress. Often, this happens without clinical oversight, ethical guardrails, or crisis protocols in place.

The consequences of AI in mental health are no longer theoretical; in extreme circumstances, they have been fatal. In 2023, a man in Belgium died by suicide after weeks of exchanges with an AI chatbot that encouraged him to sacrifice himself for the climate. In 2024, a U.S. teenager’s suicide was linked to a chatbot’s unmonitored responses, according to a lawsuit. In other instances, AI companions have been shown to reinforce delusional thinking, validate paranoia, or fail to escalate signs of severe mental illness. These tools are not inherently malicious. However, without regulation, they can be dangerously misused.

As a psychiatrist working in a public hospital, I see firsthand how fragmented our mental health care system already is. Patients often wait weeks for appointments, face cultural, language, and financial barriers, or struggle with deep mistrust of traditional institutions. In theory, AI can fill those gaps — offering 24/7 support, accessibility, and even emotional connection. But in its current state, that promise is empty, if not harmful. I have personally heard people say they would rather turn to AI than to actual therapists. While this perspective is understandable given the limitations of our current mental health system, the consequences can be serious when these tools are used by vulnerable individuals with more severe psychiatric symptoms, particularly psychosis. The reality of the technology can validate the unreality of the symptom.

Several AI applications marketed as mental health tools are not subject to the same level of scrutiny as medical devices or licensed clinicians. They can mimic empathy and therapeutic conversation, but without the nuance, boundaries, or training that real clinicians bring. When these tools are used by people experiencing psychosis, depression, or trauma, they may not just be ineffective — they can be destabilizing. These technologies also lack the safeguards clinicians rely on, such as mental status exams and, in severe cases, the ability to gather collateral information from outside sources.

One of the most alarming aspects of this trend is how quickly it’s outpacing public awareness. Patients — and even clinicians — often don’t know what these tools are capable of, or where their limitations lie. Courses on AI are emerging to inform clinicians, but they are not uniformly built into formal education. Furthermore, in many clinical settings, there is no structured guidance on how to navigate this new terrain. Most psychological and psychiatric organizations have yet to weigh in. Most hospitals have likely not yet developed protocols for integrating or warning against AI mental health tools.

That needs to change.

First, we need clear, enforceable state-level or federal oversight that regulates the production and marketing of these applications. If an app claims to support mental health, it should meet minimum standards for safety, transparency, and clinical escalation.

Second, professional boards and psychiatric associations must issue guidance for clinicians. Clinicians need tools to screen for AI use, guidance on how to talk to patients about its risks, and training to recognize symptoms shaped by digital environments.

Third, mental health professionals must be at the table when these tools are being developed. AI companies cannot continue designing emotional support systems in isolation from the people who understand mental illness most intimately. Just as we wouldn’t launch a new medication without input from physicians, we cannot release emotionally responsive AI into the world without mental health leadership.

AI is not inherently bad for mental health. In fact, it has the potential to enhance health care as a whole: it could eventually help us improve triage, increase access, and personalize care. Yet these applications should serve only as an accessory, not as a replacement for human connection and clinical judgment. The rise in reports of AI-induced psychosis should serve as a wake-up call that more action is required before more people are harmed. Mental health professionals must lead the charge in shaping how AI is used in our field — because if we don’t, others will do it for us, and the consequences may be irreversible.

How are your patients using AI? Share in the comments.

Gabriel Felix, MD, is an adult psychiatrist at Cambridge Health Alliance and an instructor in psychiatry at Harvard Medical School. He focuses on medical education, health equity, and systems-based approaches to care.

Image by Moor Studio / Getty Images

All opinions published on Op-Med are the author’s and do not reflect the official position of Doximity or its editors. Op-Med is a safe space for free expression and diverse perspectives. For more information, or to submit your own opinion, please see our submission guidelines or email opmed@doximity.com.
