As physicians and researchers, we are trained to practice evidence-based medicine. Unlike in the basic sciences, however, proving cause and effect in clinical studies is often not straightforward. And although a good data story can be hard to ignore, accepting it without appropriate skepticism can lead us dangerously astray.
Consider three high-profile studies, all published in top-tier journals last fall: a study on cardiovascular benefits of maintaining a normal blood pressure throughout the day; a study on management of severe hyponatremia in hospitalized patients; and one on when to perform an emergent diagnostic paracentesis in patients admitted with ascites.
The first paper, an observational analysis from a cohort of patients in China, showed that more time spent in the target blood pressure range each day was associated with reduced risk of cardiovascular disease and stroke. Many people would take such a finding at face value, but these patients took different and unreported combinations of drugs, some of which are known to have cardiovascular benefits independent of blood pressure. In other words, not every drug that lowers blood pressure may confer this protective effect, and benefiting from the drugs that do may or may not require such tight blood pressure control.
The logical conclusion that we should counsel our patients to repeatedly check their blood pressures throughout the day may seem harmless and even prudent, but besides the fact that stressing about one’s blood pressure all day would probably increase it, imagine stopping every 10 minutes to take your blood pressure at the gym, or even during sex.
The second study reflects a decades-long debate over how quickly to correct severe hyponatremia. While this condition is dangerous, rapid normalization has been thought to cause a rare but devastating and permanent form of brain damage called osmotic demyelination syndrome. In this meta-analysis, we are presented with an association between this heretofore shunned faster sodium correction and both lower mortality and shorter hospital length of stay. However, the study does not sufficiently consider that the same degree of hyponatremia can occur for many reasons, from severe liver failure, which carries a 30% to 50% in-hospital mortality, to simply drinking too much water, a problem that is inherently much easier to recover from.
The likely reason for the reported finding is that by nature, patients with more serious illness are often predisposed to hyponatremia that is harder to reverse, hence slow correction being associated with poor outcomes. While trying to increase sodium rapidly in these patients will not cure the disease, it will increase the risk of a serious neurological event.
The last study is a meta-analysis that reports a mortality benefit of performing a diagnostic paracentesis within 12–24 hours of arrival in the hospital to test for spontaneous bacterial peritonitis. This was interpreted as showing that we should "establish paracentesis within 24 hours of admission as a quality metric for hospitals," but it raises a problem similar to the prior studies: patients who were already likely to do poorly may not have gotten the procedure because of legitimate safety concerns common in this disease state, such as being in shock or at high risk of bleeding. Even if they had undergone the procedure early on, the prognosis would still almost certainly have been grim. Given that this association may not indicate causation, should hospitals be penalized when their doctors decide that it might not be the best time to stick a large needle in the abdomen?
In all these cases, the large amounts of data lend credence to what seem like, and may even be, obvious conclusions — normal blood pressure is better, normal sodium is better, and detecting deadly infection earlier is better. However, every action has a cost, whether it is blood pressure checks at the gym or in bed, an increased risk of catastrophic brain damage or abdominal hemorrhage, or financial penalties for using common sense — and the real benefit may be uncertain.
Our ability to navigate this minefield is part of what sets us apart from our non-physician colleagues. When presented with a new finding, even when published in a high-profile journal, our first instinct should be to consider the possibility of confounding. Guided by our clinical experience and knowledge, we should actively look for alternative non-causative explanations like those suggested for the studies above, and the more plausible they are, the more skeptical of the conclusion we should be. (That said, we also need to know when to set aside our personal experience to avoid confirmation bias.)
Ultimately, our desire to practice at the cutting edge should be tempered by our sacred responsibility to act in the best interests of our patients and avoid harm. When to incorporate new findings is a matter of judgment that we make individually and with our peers in the medical community. But given how frequently key confounders are overlooked, a higher level of baseline scrutiny is clearly needed, starting with ourselves.
What's a recent study result you have looked at with skepticism? Share in the comments.
Eric Gottlieb is a hospitalist physician in the Boston area and an instructor in medicine at Harvard Medical School. The views expressed here are his own and do not necessarily reflect those of the organizations he is affiliated with.
Collage by Joe Lee