
PGY-AI and the Future of Psychiatric Education: Learning to Think in the Age of Machines

Op-Med is a collection of original articles contributed by Doximity members.

The session, titled "PGY-AI: The Impact of Artificial Intelligence on Psychiatry Education," was held at the Los Angeles Convention Center and led by Dr. Adrian Jacques Ambrose, with speakers Drs. Eric Kramer, Brendan Ross, Vlad Velicu, and Nhi-Ha Trinh. It explored the promises and pitfalls of artificial intelligence (AI) in training future psychiatrists.

As a PGY1 psychiatry resident at Nassau University Medical Center, I found the session both timely and deeply personal. Like many residents, I have started to explore the use of AI tools — whether it's asking ChatGPT to help me compare my differential diagnoses or reading about GPT-powered scribes that generate clinical notes in real time. This session was the first time I saw the discussion elevated to the level of ethics, equity, and educational philosophy.

The session began with a reflection on how AI is already embedded in the way trainees learn. Dr. Brendan Ross, a PGY2 at Mount Sinai, described how medical students are using generative AI like ChatGPT to draft patient assessments, simulate clinical interviews, and analyze cases. Tools like Gemini AI can transform dense scientific texts into conversational podcasts. Others like OpenEvidence and AI-driven case simulators are offering new ways to practice, review, and study.

But is delegating our learning to AI tools a shortcut or a detour? As Dr. Ross provocatively posed, quoting The New Yorker: "Why even try, if you have AI?"

Dr. Vlad Velicu, a geriatric psychiatry fellow at Mount Sinai, took a sobering turn, discussing the legal, historical, and ethical implications of AI in medicine. He described the evolution of AI from science fiction to clinical reality — and warned that we may have reached a point where even physicians can be fooled by machines that pass the Turing Test.

He raised the question: If a machine makes a clinical recommendation that harms a patient, who is responsible? The current legal doctrine — the "learned intermediary" principle — places responsibility on the physician to understand the tool. But, with "black box" algorithms, even developers can't always explain how conclusions are generated. This becomes particularly problematic when AI hallucinations generate fabricated citations or inaccurate clinical logic.

This theme of outsourcing not just documentation but cognitive processes ran throughout the panel.

Dr. Eric Kramer, a PGY4 from UC Irvine, discussed the growing use of AI scribes to reduce documentation burden and improve physician wellness. As someone who has used AI scribes in outpatient settings, he shared that note-writing time had decreased by nearly 50%. More importantly, the shift allowed him to be fully present with patients, reducing "pajama time" and improving rapport.

Yet he cautioned that documentation isn't just clerical — it's pedagogical. For junior learners, composing a note is a way to synthesize information, clarify reasoning, and generate hypotheses. If AI fills in these blanks, we risk diminishing educational depth.

He posed questions we must all confront: Should AI be used only for progress notes and not for intakes? Should it generate only the subjective portion, not the assessment? At what level of training should AI support be introduced — PGY3? PGY4? Certainly not in the earliest stages, where foundational thinking is still forming.

The final speaker, Dr. Nhi-Ha Trinh, associate professor of psychiatry at Harvard Medical School, brought the conversation full circle by grounding it in ethics and equity. She reminded us that AI isn’t neutral — it inherits the biases of its creators and training data. Without careful oversight, AI could worsen existing disparities in psychiatric care. She also emphasized the AAMC’s principles for AI in medical education: maintain a human-centered focus, ensure ethical use, provide equitable access, and protect data privacy.

A sobering moment came when she referenced a recent lawsuit in which a chatbot’s advice may have contributed to a teenager’s suicide. As AI grows more powerful, so do its risks, especially when mistaken for empathy or therapeutic judgment.

Yet this wasn't a dystopian session. The overall tone was one of cautious optimism. The speakers didn’t call for banning AI from psychiatric education — they called for intentional integration. Dr. Ambrose, who chaired the session, emphasized the need for organizations like the ACGME and the APA to issue guidelines. We must ensure AI enhances, rather than replaces, clinical reasoning.

I left the session with a paradox in mind. AI is designed to make us faster and more efficient. But in psychiatry, the most important work often requires slowing down — sitting with ambiguity, listening beyond words, and wrestling with the "why" behind symptoms. Those aren't tasks we should automate.

As a first-year resident, I still stumble through progress notes, rewrite my formulations, and second-guess my diagnoses. That's where growth lives. AI may someday write a better SOAP note, but it won't become a better psychiatrist.

The session challenged me to ask not how we should use AI in training, but why. What kind of psychiatrists are we trying to create? If we want clinicians who can think deeply, connect authentically, and reason ethically, AI must be our assistant, not our author.

The Fourth Industrial Revolution is here. We need to teach — and learn — with our eyes wide open.

Dr. Jaka has no conflicts of interest to report.

Image by Moor Studio / Getty

All opinions published on Op-Med are the author’s and do not reflect the official position of Doximity or its editors. Op-Med is a safe space for free expression and diverse perspectives. For more information, or to submit your own opinion, please see our submission guidelines or email opmed@doximity.com.
