
Why This Doctor Thinks We Need More Transparency For Algorithms in Health Care

Op-Med is a collection of original articles contributed by Doximity members.

A decade ago, we physicians were compelled to participate in digitization and the implementation of EHR systems. Just as we grew confident with EHR use, AI came knocking on our doors. AI is the science and engineering of making intelligent machines, especially intelligent computer programs. We embraced health care AI, including the use of algorithms to detect, identify, and treat disease patterns.

In health care AI, an algorithm is a tool only as smart as the person who designs it. It is designed by non-doctors who have no exposure to the evidentiary standards of care in medicine. These algorithms carry shortcomings and bias, much as flawed clinical trial designs skew outcomes.

I still remember the jaw-dropping moment in my intern year when my attending echoed “clinical trial” at morning report. I have no memories, good or bad, of the ensuing discussions about bias in trial design, so there is little reason to believe that AI tool designers are any more aware of that bias. The one fact I have committed to memory is that bias in trials influences the significance of the data, can alter treatment plans, and can render trial data useless.

We physicians are the end-users of these AI tools, yet we are kept in the dark about the rationale behind their algorithms. We practice evidence-based medicine, and AI algorithms need a similar evidence-based benchmark if they are to withstand scrutiny over time. I think we need an Algorithm Transparency Initiative as the first step toward engaging in the evidence-based use of health care AI tools.

Almost every reader has interacted with an algorithm. Algorithms are entrenched in our daily workflows through social media, news feeds, political lobbying, and smart money managers, to name a few. The perception that an algorithm is always right and unbiased is a fallacy.

I had an interesting experience with Spotify recently. Music streaming services use a feedback-based algorithm, computed from likes and listening preferences, to optimize an individual’s playlist. I had enjoyed my Spotify playlist for years; then my teen introduced me to a song about Captain Shazam called “Shout My Name.” Voila! In less than 24 hours, I had a revamped playlist full of new genres. My old playlist had been limited by my own listening patterns, but my teen’s Gen Z tastes broadened the algorithm’s view of my preferences, resulting in a new, blended playlist, a win-win for the perfect family drive. Introducing a planned “disruptor” like this, which forces an algorithm to weigh new information, can help overcome its bias.

Unlike my accidental playlist reinvention, the work of Joy Buolamwini, a digital activist, is deliberate: her groundbreaking research examines gender and racial bias in facial analysis software. Her work on inclusive design modeling in AI has helped many international travelers use facial recognition during airport security.

The music streaming algorithm and the limitations of facial analysis software both establish that algorithms carry bias. Without information on an algorithm’s rationale and its application, can a physician obtain informed consent for its use? No.

Patients make informed decisions about their treatment options. To obtain informed consent for the use of AI tools, we must be able to understand and explain those tools. In current health care AI, the rationale behind the algorithms lacks that clarity.

For example, during EHR implementations, automated alerts in the physician workflow were expected to improve outcomes. However, the alarms had a low signal-to-noise ratio, making them burdensome. Health care systems responded by devising transparent scoring systems to prioritize alerts according to best practices. The clarity offered by these scoring systems has enhanced their utility and improved reimbursements and quality of care.

Similarly, AI-driven insights might be predictive, diagnostic, or prescriptive, depending on the health care decision; however, do they meet the standard of care? The answer is unknown.

We practice evidence-based medicine to drive clinical management, diagnosis, and treatment. The hierarchy of evidence defines the “medical standard of care.” Medical professional associations issue position statements. Expert panelists weigh the available clinical studies to develop consensus statements. Systematic reviews of the available evidence by third-party organizations lead to clinical practice guideline statements. The contributions of the panelists on these review boards constitute a “peer-reviewed external validation process.” That process is the “disruptor” needed to challenge health care algorithms, just as my teen’s song challenged my music feed.

Algorithm Transparency Initiative

We are ready to fly into the turbulence of artificial intelligence. As practicing physicians, let us take off with the following AI insights.

1. Bias exists in AI algorithms. It's not IF, but WHEN.

2. Bias is inevitable. Look hard, look deep.

3. A “peer-reviewed external validation process” is the “disruptor.” Trust medicine before tools.

4. Adopting regulatory oversight is cost-effective. The burden of proof is on the AI tool.

We are the pillars of the evidence-based practice of medicine, and we cannot treat our patients effectively using algorithms we know nothing about. We must advocate for establishing a hierarchy of evidence for health care AI algorithms.

Nita K. Thingalaya, MD, BCMAS, Dipl ABOM, is a board-certified internist who specializes in medical affairs and obesity medicine. She practices telehealth and hospital medicine and is currently a Medical Director in Healthcare Utilization. Her diverse experience in clinical research, utilization, and informatics makes her a leader in medical affairs. This article is independent of her affiliations, past or present. Dr. Thingalaya is a 2019-2020 Doximity Fellow.

All opinions published on Op-Med are the author’s and do not reflect the official position of Doximity or its editors. Op-Med is a safe space for free expression and diverse perspectives. For more information, or to submit your own opinion, please see our submission guidelines or email opmed@doximity.com.
