Like Fire, AI Is an Excellent Servant but a Bad Master

Op-Med is a collection of original articles contributed by Doximity members.

I recently gave a CME talk on obesity as a “Wicked Problem” and in preparation for this talk, I needed to dive deeply into the idea of causality. Specifically, does obesity “cause” type 2 diabetes or is it simply “associated” with it?

Developing this line of discussion for the talk, I went back to the foundational 1965 lecture on causality in medicine, with which many of you may be familiar: "The Environment and Disease: Association or Causation?" by the British statistician Sir Austin Bradford Hill.

It's this talk where he presented what are now known as the Hill Criteria for Causality:

  • Strength
  • Consistency
  • Specificity
  • Temporality
  • Biological gradient
  • Plausibility
  • Coherence
  • Experiment
  • Analogy

I am not here to discuss the strengths and weaknesses of these criteria for determining causality. Both the form and the existence of causality have been hot topics of philosophical discussion, with intellectual giants such as Aristotle and David Hume weighing in, and controversies surrounding causality, including the Hill criteria, continue to this day.

Rather, I would like to discuss how Sir Bradford Hill approached the relationship of statistics with causality, and how this relates to our current-day approach to artificial intelligence (AI) in the context of health care.

Now, Sir Bradford Hill, in his own right, was a heavy hitter. He conducted the first randomized controlled trial in health care (the 1940s trial of streptomycin for the treatment of TB).

And he, along with Richard Doll, did the key work in the 1950s demonstrating that cigarette smoking wasn't just associated with lung cancer, but caused it.

Just a single one of these achievements would make him one of the top epidemiological statisticians of the 20th century; both might put him at the very top of the list.

Of note, however, the Hill criteria aren't a checklist for determining causality. Rather, they are a practical list of elements to consider when deciding whether or not one should take action based on probable causality. He felt that no single criterion could establish causality, and he specifically argued that there was no "test of significance" which would prove it.

In fact, his entire paper can be read as a cautionary note on placing too much dependence on statistics to make decisions:

“Yet too often I suspect we waste a great deal of time, we grasp the shadow and lose the substance, we weaken our capacity to interpret data and to take reasonable decisions whatever the value of P. And far too often, we deduce ‘no difference’ from ‘no significant difference’. Like fire, the χ2 test is an excellent servant and a bad master.”

That last sentence is significant today because it comes from one of the greatest epidemiological statisticians of the 20th century.  

Sir Bradford Hill was deeply aware of the power of statistics to move from servant to master, and his entire talk can be read as a cautionary tale against overvaluing statistics.

This is directly relevant to every physician today working with electronic health records (EHRs).

Like it or not, AI is here to stay, and natural language processing tools, machine learning, and deep learning will be used in conjunction with data-mining EHRs to manage patients.

It is essential that physicians familiarize themselves with the strengths and weaknesses of these AI tools and understand the how and the why of their implementation within their organizations.  

Ask simple questions: How will ignoring an AI-enabled recommendation affect my liability? Is the principal purpose of AI-enabled data mining to improve the health of the organization's patient population, or the health of the specific patient being data-mined? Or is it designed to make the physician's job easier, or the administrator's?

Ask these types of questions now, before it is too late. Why? Because decisions made within health care organizations' IT departments and administrations will determine whether AI becomes a tool for physicians or a taskmaster of physicians.

If you are not part of this discussion now, the future answer will be obvious.

Or, to paraphrase and retool Sir Bradford Hill's remarks for the 21st century: Like fire, AI is an excellent servant but a bad master.

Do you want to be AI’s servant or do you want AI to serve you and your patients? Get involved now, or the choice won’t even be yours.

Dr. Matthew Rehrl is a physician who has served in a C-Suite advisory role on social media within health care for over a decade. His current focus is the ethics of AI in health care. He reports no conflict of interest.

Dr. Rehrl is a 2018–19 Doximity Author. He can be found on matthewrehrl.com and @matthewrehrl.



All opinions published on Op-Med are the author’s and do not reflect the official position of Doximity or its editors. Op-Med is a safe space for free expression and diverse perspectives. For more information, or to submit your own opinion, please see our submission guidelines or email opmed@doximity.com.