
The Pitfalls of Statistical Illiteracy

Op-Med is a collection of original articles contributed by Doximity members.

"There are three kinds of lies: lies, damned lies, and statistics.”

— Anonymous

I’d dare to say that most clinicians cringe when they think about statistics. This is not unique to clinicians; behavioral economists suggest that humans suffer from what has been dubbed “collective statistical illiteracy,” an inability to critically assess the probabilities and statistical information we encounter every day. Clinicians, like other professionals, are not immune to this form of cognitive bias.

To illustrate our statistical illiteracy, consider the 1995 contraceptive pill scare. The U.K. Committee on Safety of Medicines published a warning against third-generation oral contraceptive pills because they increased the risk of fatal blood clots “by 100%.” The warning created a wave of panic among the public and confusion among clinicians, and many women abruptly stopped taking the pill altogether. The result of this “100%” increased-risk announcement was an estimated 13,000 additional abortions in 1996 and an additional cost to the U.K. NHS of $70 million.

What went wrong? The committee presented the statistics to clinicians and the public as relative risk rather than absolute risk. The comparison of side effects between second- and third-generation contraceptive pills showed that 2 in 7,000 women on a third-generation pill suffered a major clot, compared with 1 in 7,000 women on a second-generation pill. The “100%” relative increase was not a lie, but the absolute increase, one extra case for every 7,000 women, is extremely small at the population level. What would have happened if the warning had been framed as an absolute risk? Could that have led to fewer abortions and unplanned pregnancies?
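For readers who want to see the arithmetic behind the two framings, here is a minimal sketch using the 1-in-7,000 and 2-in-7,000 figures quoted above (Python, purely illustrative):

```python
# Clot rates quoted above for second- vs. third-generation pills
baseline_risk = 1 / 7000   # second generation: 1 clot per 7,000 women
new_risk = 2 / 7000        # third generation: 2 clots per 7,000 women

# Relative increase: (2/7000 - 1/7000) / (1/7000) = 1.0, i.e. the reported "100%"
relative_increase = (new_risk - baseline_risk) / baseline_risk

# Absolute increase: one extra clot per 7,000 women
absolute_increase = new_risk - baseline_risk

print(f"Relative increase: {relative_increase:.0%}")                         # 100%
print(f"Extra clots per 7,000 women: {absolute_increase * 7000:.0f}")        # 1
print(f"Women switching pills per extra clot: {1 / absolute_increase:.0f}")  # 7000
```

The alarming figure and the reassuring figure describe the same data; the absolute framing simply makes the denominator visible.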

Another example is screening tests. CDC and NIH statements about the benefits of regular screening are commonly interpreted as “undergoing this screening will decrease the risk of getting a certain disease.” Clinicians know that screening detects existing disease at an early stage, which can lead to better therapeutic outcomes, but we overlook the fact that early detection does not reduce the chance of getting the disease in the first place. A regular mammogram every 1–2 years, for example, does not affect the risk of developing breast cancer, nor will it prevent it. Yet, in four European countries, almost three-quarters of a random sample of women who underwent mammogram screening believed that screening reduced their risk of developing breast cancer.

A third example is the survival rate. Statements such as “the five-year survival rate among patients in this facility continued to increase” need to be interpreted with caution. It is not uncommon to come across printed and digital ads that compare mortality rates to five-year survival rates in order to confuse the audience. To make choices about treatment options, we are better off evaluating the annual mortality rate of a disease rather than its five-year survival rate. In the hypothetical scenario below, the annual mortality rate remains the same in both groups, and the improved five-year survival rate does not mean more lives were saved.

Suppose we have a group of patients diagnosed with disease X at a median age of 55. On average, they survive two years after diagnosis; in other words, almost all of them are expected to die by age 57 due to disease complications and limited treatment options. What is the five-year survival for this group? It is 0%: none live to age 60. Now suppose the same group had been diagnosed five years earlier, at age 50, by a sensitive screening test. They would still receive the same treatment, and they would still live, on average, to age 57. What is the five-year survival rate of this group now? 100%. This perfect survival rate appears even though nothing changed; no one lived any longer because of early detection of the disease.
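A minimal sketch of this hypothetical cohort, using only the ages given above, makes the arithmetic explicit; the only thing that differs between the two groups is the age at diagnosis, a shift usually called lead-time bias:

```python
AGE_AT_DEATH = 57  # in this hypothetical, everyone dies at 57, screened or not

def five_year_survival(age_at_diagnosis: int) -> float:
    """Fraction of the cohort alive five years after diagnosis."""
    return 1.0 if AGE_AT_DEATH - age_at_diagnosis >= 5 else 0.0

print(five_year_survival(55))  # 0.0 -> clinically diagnosed at 55, dead 2 years later
print(five_year_survival(50))  # 1.0 -> screen-detected at 50, still dead at 57

# Annual mortality is identical in both groups, so the jump from 0% to 100%
# reflects earlier diagnosis (lead time), not a single life saved.
```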

The brain is wired to compare: “50% reduction” is easier to grasp than “two lives will be saved instead of one for every 1,000 people in the population.” The obvious problem with relative-risk thinking is that it overestimates benefit relative to harm, or vice versa. When we think “the risk of this operation is…,” “the risk of taking this medication is…,” or “the risk of dying from this disorder is…,” there is a natural tendency to think in relative terms, which tells us little about whether an intervention will actually produce a benefit if the base rate is unknown. According to a study published in the medical journal JAMA, only 25 of 360 original studies published in top-tier medical journals such as JAMA, NEJM, The BMJ, and The Lancet in the 1990s reported absolute risk reduction. By 2007, the proportion of articles reporting absolute risk reduction had increased to about half in The BMJ, JAMA, and The Lancet.
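As a rough illustration of why the relative figure is of limited value without the base rate, the same hypothetical “50% reduction” translates into very different absolute benefits depending on how common the outcome is:

```python
def deaths_prevented_per_1000(base_rate_per_1000: float, relative_reduction: float) -> float:
    """Absolute benefit implied by a relative risk reduction at a given base rate."""
    return base_rate_per_1000 * relative_reduction

# The same "50% reduction" at two different (hypothetical) base rates
for base in (2, 100):
    print(f"Base rate {base}/1,000 -> "
          f"{deaths_prevented_per_1000(base, 0.5):.0f} fewer deaths per 1,000")
# Base rate 2/1,000   -> 1 fewer death per 1,000
# Base rate 100/1,000 -> 50 fewer deaths per 1,000
```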

Although we encounter statistics every day in our health care practice, our collective statistical illiteracy can distort our clinical judgment, especially when data are packaged in ways that play to our cognitive biases. Training our minds to look for the absolute risk and the mortality rate before the relative risk and the survival rate, respectively, will reduce our mistakes in clinical judgment.

Dr. Soliman is an MS IV at Weill Cornell Medicine. He holds a PhD in genetics from the University of Toronto and an MBA from Cornell University.

Image by Mopic / Shutterstock

All opinions published on Op-Med are the author’s and do not reflect the official position of Doximity or its editors. Op-Med is a safe space for free expression and diverse perspectives. For more information, or to submit your own opinion, please see our submission guidelines or email opmed@doximity.com.
