
Can AI Help Create Better EHR Problem Lists?

Op-Med is a collection of original articles contributed by Doximity members.
Image: Artistdesign29/shutterstock.com

Most physicians, myself included, are ambivalent about their EHR system.

We love the ability to access and compare a prior EKG with a new EKG when we see a patient with atypical chest pain. We are a click away from looking at serial hematocrits. And how nice is it to see that the subtle lesion on a chest X-ray today was there five years ago and is absolutely unchanged?

But along with these great comparative tools come some real frustrations. The perceived problems with the EHR, as eloquently summarized in the paper Electronic Health Record Use a Bitter Pill for Many Physicians, include:

1. frustration with lack of interoperability

2. increased administrative workload

3. lack of belief in improved quality of care

4. possible decreased quality of care

5. decreased efficiency

So are things getting better?

I am not sure.

Consider this. In 2012, one study suggested that 30% of physicians felt that EHRs may increase the chance for errors. A 2013 RAND paper suggested that EHRs contribute to professional dissatisfaction. And in 2014, in a survey of 600 physicians, 67% reported dissatisfaction with their EHR!

(One qualification regarding EHR/physician satisfaction: the only thing more ambivalent than individual physicians' opinions on EHRs is the recent crop of surveys about EHR use. It's very easy to find conflicting opinions. Consider two of the more recent EHR discussions: After years of frustrations, user wish-list turns positive versus a recent Medium article, Physician Satisfaction with EHRs: It's Even Worse Than You Think.)

But is there hope for EHRs on the horizon?

Thanks to Artificial Intelligence (AI), I think so, at least in one mundane but very important area: the EHR Problem List.

I would like to introduce you to a pilot study, “Automated problem list generation and physician perspective from a pilot study,” published in late 2017.

In this study, IBM’s Watson, using both Natural Language Processing (NLP) and Machine Learning (ML), was able to create a superior EHR Problem List by reviewing a complete set of patient chart notes from one of the most commonly used EHR systems in the country (one said to have 50% of the EHR market — hmm, I wonder who that could be).

Roughly speaking, the Watson model, trained by medical experts, extracted information from both structured data (things like diagnosis codes) and unstructured data (things like chart notes) from a gold standard set of 399 charts. Effectively, these experts taught Watson what a relevant problem list should look like.

So, in this small Watson pilot study (27 encounters), when a candidate problem was identified (for example, asthma), the Watson algorithm would go through the entire chart and weigh multiple factors to determine whether this was a real problem: how frequently the candidate term appeared, whether medications associated with the term were present, the frequency of the problem in the larger data set, any history of a diagnosis of the candidate problem, and the frequency of the problem across all institutions contributing to the SNOMED CT CORE subset. The result was an updated problem list.
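To make that weighing of factors concrete, here is a minimal sketch in Python of what candidate-problem scoring could look like. Everything in it (the feature names, the weights, the cutoff) is hypothetical; the study does not publish Watson's actual feature set, and its model was trained on expert-labeled charts rather than hand-weighted like this.

```python
from dataclasses import dataclass


@dataclass
class CandidateProblem:
    """Evidence gathered from one patient's chart for a candidate problem.

    All field names here are illustrative, not the study's actual features.
    """
    term: str                   # e.g., "asthma"
    mention_count: int          # times the term appears across all chart notes
    note_count: int             # total notes reviewed for this patient
    has_supporting_med: bool    # e.g., an inhaled corticosteroid on the med list
    has_prior_diagnosis: bool   # a matching diagnosis code in structured data
    core_subset_freq: float     # 0..1: how common the problem is across
                                # institutions in the SNOMED CT CORE subset


def score_candidate(c: CandidateProblem) -> float:
    """Blend the chart evidence into a single relevance score.

    The weights below are invented for illustration; the pilot's model
    learned its weighting from the 399 gold-standard charts.
    """
    term_freq = min(c.mention_count / max(c.note_count, 1), 1.0)
    return (
        0.35 * term_freq
        + 0.25 * float(c.has_supporting_med)
        + 0.25 * float(c.has_prior_diagnosis)
        + 0.15 * c.core_subset_freq
    )


def propose_problem_list(candidates: list[CandidateProblem],
                         threshold: float = 0.5) -> list[str]:
    """Keep candidates whose evidence clears a (hypothetical) cutoff."""
    return [c.term for c in candidates if score_candidate(c) >= threshold]


if __name__ == "__main__":
    asthma = CandidateProblem(
        term="asthma", mention_count=12, note_count=40,
        has_supporting_med=True, has_prior_diagnosis=True,
        core_subset_freq=0.9,
    )
    print(propose_problem_list([asthma]))  # ['asthma']
```

The point of the sketch is simply that each candidate problem is judged against the whole chart, not a single note; in the real pilot, the weighting came from training against expert-built problem lists rather than from anyone's intuition.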

In a sense, this is what we physicians already do to create a good EHR Problem List, with one key difference: Watson will use ALL of the prior patient encounters. Whether it is 5 or 500 encounters — it doesn’t matter. In fact, the more, the merrier.

So did this algorithm work?

To some extent, in this study, it did.

For example, across the 27 encounters, it found on average 4.33 important or very important missed problems per assessment! (An example of an important missed problem would be a distant history of DVT.)

Just think what that means clinically if your AI-enabled EHR can automatically find and highlight 3–4 potentially relevant medical problems about your patient that were unknown to you, before your patient's disposition.

Wow.

Now, this algorithm isn’t quite ready for prime time. This was a very small study, and there were some downsides.

Retrospectively, this algorithm tended to list some redundant and non-active problems, and it would occasionally list a problem that was not well supported.

Also, I am not sure of the business model here. Who is going to get paid for this increased functionality? The EHR vendor? IBM?

There is still work to do.

However — for me at least — if an AI-enabled EHR can automatically review the patient’s ENTIRE chart and then propose an update to the problem list, it will be a game changer.

Why? Because every physician I know with any clinical experience has learned the hard way what the risks to our patients are when we aren't aware of a prior significant problem deeply embedded in the chart.

If an AI-enabled EHR can reduce this risk for us, it will make up for a lot of the frustrations we have with EHRs, because, as mundane as it seems, there isn't a single physician who doesn't want the most accurate and current EHR Problem List possible.

Dr. Matthew Rehrl is a physician who has served in a C-Suite advisory role on social media within healthcare for over a decade. His current focus is the ethics of AI in healthcare. He reports no conflict of interest. He is a 2018–19 Doximity Author. He can be found on matthewrehrl.com and @matthewrehrl.

All opinions published on Op-Med are the author’s and do not reflect the official position of Doximity or its editors. Op-Med is a safe space for free expression and diverse perspectives. For more information, or to submit your own opinion, please see our submission guidelines or email opmed@doximity.com.
