You may think no one reads your progress notes – but the words we leave behind in patients’ charts don’t just sit there. Stigmatizing language in the EMR can shape patient care, for better or worse. Could artificial intelligence (AI) be a solution, or will it amplify the harm?
While AI bias in clinical decision-making is widely recognized, clinicians’ own stigmatizing language in patients’ charts is an overlooked threat. As a palliative care physician, I’ll never forget supporting a caregiver whose spouse’s hospitalization – marred by stereotyping and stigma-laden documentation – ended in substandard care. Despite their advocacy, they went home as a widowed single parent – reviewing funeral plans instead of a discharge summary. Perhaps their story could have ended differently if bias and stigma hadn’t influenced multiple aspects of the care plan.
Unfortunately, this isn’t an isolated incident. Studies show that physicians often use stigmatizing language to label patients as “difficult,” question their credibility, and reinforce stereotypes. These terms remain relatively common in medical notes — and appear disproportionately in Black patients’ records. More than one in 10 patients report feeling disrespected or mislabeled after reading their clinic notes, particularly those already facing poor health or unemployment. But this goes beyond hurt feelings — stigmatizing language is linked to higher rates of diagnostic errors.
Bias is also embedded in EMRs, where hundreds of stigmatizing terms appear in billing codes and dropdown menus. For example, the ICD-10 code for “noncompliance” is disproportionately assigned to Black patients with controlled diabetes who have public insurance and live in lower-income areas. Consequently, our digital infrastructure normalizes bias without clinicians typing a single word.
As more health systems integrate generative AI into EMRs, we risk multiplying this problem. AI’s known biases, combined with the harm caused by loaded language, could create a perfect storm for medical errors. In a study of 60,000 ICU admissions, an AI model trained on notes with stigmatizing language was less accurate at predicting patient mortality. When I meet with families in the ICU, helping them understand their loved one’s illness and prognosis is crucial for guiding treatment recommendations – and our own words must not stand in the way.
It’s no secret that time spent in the EMR contributes to clinician burnout. Some clinicians may worry that reviewing their notes for biased language will eat into already limited time. Here, though, AI can be part of the solution: natural language processing (NLP) tools can flag harmful terms in clinical notes and prompt clinicians to recognize and revise their documentation. Early research suggests this approach can screen thousands of notes at a time, offering a path to safer, more equitable documentation.
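For readers curious what that kind of screening might look like at its simplest, here is a minimal sketch – not any published tool’s actual method. The term list, function name, and example note are illustrative placeholders; real systems rely on validated lexicons and context-aware models rather than simple keyword matching.

```python
import re

# Illustrative only: a tiny sample of terms that stigmatizing-language research
# has flagged. Real tools use validated, context-aware lexicons, not this list.
FLAGGED_TERMS = ["noncompliant", "poor historian", "drug-seeking", "difficult patient"]

def flag_stigmatizing_language(note_text: str) -> list:
    """Return each flagged term with its position so a clinician can review it in context."""
    findings = []
    for term in FLAGGED_TERMS:
        for match in re.finditer(re.escape(term), note_text, flags=re.IGNORECASE):
            findings.append({"term": term, "start": match.start(), "end": match.end()})
    return findings

# Hypothetical note text for demonstration.
note = "Pt is a poor historian and has been noncompliant with insulin."
for finding in flag_stigmatizing_language(note):
    print(f"Review wording: '{finding['term']}' at characters {finding['start']}-{finding['end']}")
```

The point of a tool like this is not to auto-edit notes, but to surface phrases for the clinician to reconsider before signing.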
The risks of biased medical documentation and AI aren’t limited to visit notes. Some health systems now use AI to respond to patient messages, as messaging volumes have skyrocketed since the pandemic. While accuracy, empathy, and time saved have been studied, less is known about the language in AI-generated responses. In one study, 35% of responses from 14 large language models (LLMs) to 60 clinical questions contained stigmatizing language. However, prompting the models with a list of biased terms to avoid cut the share of stigmatizing responses to just 6%. With careful design, AI can help us unlearn harmful habits instead of replicating them, while easing some of our documentation burden in a patient-centered way.
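As a rough illustration of that prompting strategy – not the prompt used in the study – the idea is simply to prepend a do-not-use list and neutral-language guidance to the instructions the model receives before drafting a reply. The term list, wording, and helper function below are hypothetical.

```python
# Illustrative sketch of the mitigation described above: telling the model,
# up front, which stigmatizing terms to avoid and how to describe behavior neutrally.
TERMS_TO_AVOID = ["noncompliant", "poor historian", "drug-seeking", "frequent flyer"]

def build_reply_instructions(patient_message: str) -> str:
    avoid_list = ", ".join(f'"{t}"' for t in TERMS_TO_AVOID)
    return (
        "Draft a respectful, patient-centered reply to the message below.\n"
        f"Do not use stigmatizing language, including: {avoid_list}.\n"
        "Describe behaviors neutrally (e.g., 'has not been taking insulin') "
        "instead of labeling the patient.\n\n"
        f"Patient message: {patient_message}"
    )

# Hypothetical patient message for demonstration.
print(build_reply_instructions("I ran out of my insulin last week and my sugars are high."))
```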
Our words are harming our patients. The promise of AI to humanize health care will remain out of reach unless we take responsibility for our stigmatizing documentation practices. At the end of the day, a patient’s narrative isn’t our story to judge — it’s theirs to tell. Let’s recognize whose story we’re centering and leave our biases behind at the login screen. We don’t need to wait for better generative AI to make the “poor historian” problem a thing of the past. Start today: document with patient-centered language, quote patients thoughtfully, and delete stigmatizing phrases from copy-forward and templated notes. Garbage in, garbage out is an old adage, but in medical documentation and AI-assisted care, it carries dangerous new weight.
Lindsey Ulin is a palliative care physician in Dallas, TX. She enjoys writing in indie coffee shops and bookstores and spoiling her dog Winston. She tweets at @LindseyUlin. Dr. Ulin is a 2025-2026 Doximity Op-Med Fellow.




