COMMENTARY

Estimating Risk/Benefit: Facts Are a Basic Requirement

John Mandrola, MD


January 23, 2017

There are important studies, and then there are damn important studies.

Centuries ago, before there were medical universities, the word doctor denoted a learned person or teacher. Nothing has changed. The chief duty of all modern-day clinicians remains that of an expert teacher. Yet if we are to teach, we must know the facts.

This means knowing more than anatomy and physiology and what the guidelines say; it means knowing the actual benefits and harms of our interventions.

Do we?

The Study

A recently published systematic review suggests that clinicians' knowledge of the benefits and harms of medical interventions is dubious.

Two researchers from Bond University in Queensland, Australia, asked a simple but provocative question: Do clinicians have accurate expectations of the benefits and harms of medical treatments, screening, and diagnostic tests?[1]

A total of 48 eligible studies that surveyed expectations of more than 13,000 clinicians were identified. Included studies covered a range of clinical topics such as cancer screening, fetal and maternal medicine, cardiovascular disease, surgery, and medications. They focused on an array of medical interventions that included treatments (n = 20), medical imaging (n = 20), and screening (n = 8).

Estimating Benefit. First the authors reported on expectations of benefits. Clinicians did poorly. In only three of the 28 outcomes assessed did more than 50% of clinicians correctly estimate the benefit. Most often, clinicians overestimated benefits, but underestimation also occurred in 9% of outcomes.

Estimating Harm. The researchers then reported expectations of harms, which were compared against the correct estimates in 26 studies for 69 outcomes. Again, clinicians performed poorly: in only nine of the 69 (13%) outcomes did more than 50% of clinicians correctly estimate harms. In this case, clinicians mostly underestimated harm, with overestimation occurring in only three outcomes.

Study Limitations

We should start with the limitations. First, many of the reviewed studies were small and used unvalidated survey questions, which is important because risk-prediction accuracy can vary according to how it is assessed. Second, most of the measured harms from medical imaging were cancers caused by radiation received during the imaging procedure, which is a hypothetical harm. Third, for each study included in the systematic review, the researchers accepted the study authors' estimates of benefit and harm without verifying them against the best evidence at the time.


Some critics might also argue that it's hard to define ignorance—in a broad sense—in a rapidly changing healthcare environment. I don't buy that argument. It is not our job to be all-knowing; it's our job to know the published benefits and harms of the interventions we recommend.

Even with its limitations, this systematic review is shocking. And it's not just clinicians who have a knowledge deficit. These two authors have previously shown that patients, too, overestimate benefits and underestimate harms from medical actions.[2] That both parties participating in the medical decision are deluded in the same direction does not bode well for decision quality.

Possible Explanations

It Just Makes Sense. The authors offer several possible reasons for their findings. One is a preoccupation with empiric pathophysiologic mechanisms rather than actual trial data. A recent example of this kind of "it-makes-sense" thinking comes with the flop of bioresorbable coronary stents. These devices should have worked better than conventional metal stents because dissolving struts meant better artery mechanics and less nidus for future clot. The actual evidence, however, showed that the bioresorbable device was no better at delivering those mechanics,[3] and it led to higher rates of late stent thrombosis.[4]

A study published in 2013 by Prasad and colleagues reported on more than a hundred (146, to be exact) such medical reversals.[5]

Bias. Bias is another reason clinicians don't make accurate predictions of benefit and harm. Clinicians are human, and humans seek evidence that supports action they believe is beneficial. In prostate cancer care, for instance, radiation oncologists favor radiation while surgeons favor surgery.[6] In cardiology, little inference is needed to see bias in the discussion between interventionalists and surgeons on the merits of two recent trials of stents vs bypass surgery in patients with left main coronary artery disease.[7,8]

Therapeutic Illusion. The authors also suggest optimism and the therapeutic illusion as a possible source of overestimation of treatment benefits. They cited a beautiful paper: In 1978, British surgeon KB Thomas compared two strategies in patients with undiagnosed complaints. He either told these patients they had no disease and gave no treatment, or he diagnosed them with a condition and treated them.[9] When he found equal outcomes with the two approaches—and that more than half the patients got better regardless of treatment—he concluded that "the results of this study support the belief that the patient who is made better with no treatment will also be made better with treatment. The danger is that the doctor may ascribe recovery to his treatment."

Inherent Problems in Publishing. The authors point to pitfalls in the medical literature, including "the misleading portrayal of intervention benefits and absence of harms in journal articles and information from commercial sources." I could support this statement of the obvious with many words and citations. Or we could just accept that slanted portrayals of evidence are not nefarious but normal operating procedure. Scientists aren't in the business of underselling their results.

Decision Support as a Remedy?

In medicine, it's okay to ask for help. Decision aids, which are easily derived from absolute event rates in clinical trials, can be used in real time in the exam room. There's ample evidence that decision aids improve decision quality from the patient perspective.[10] Decision aids help you see what a 1% absolute risk reduction looks like. Namely, your eyes are drawn to the 99 of 100 people who have the same outcome with or without treatment—like Dr Thomas's revelation.
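To make that arithmetic concrete, here is a minimal sketch in Python of how a decision aid's headline numbers fall out of absolute event rates. The event rates are illustrative placeholders, not data from any particular trial.

# Minimal sketch: deriving decision-aid numbers from absolute event rates.
# The event rates below are illustrative placeholders, not trial data.

def decision_aid_numbers(control_rate, treatment_rate):
    """Absolute risk reduction (ARR), number needed to treat (NNT),
    and the per-100 icon-array framing used in many decision aids."""
    arr = control_rate - treatment_rate         # absolute risk reduction
    nnt = 1 / arr if arr > 0 else float("inf")  # number needed to treat
    helped_per_100 = round(arr * 100)           # patients helped per 100 treated
    unchanged_per_100 = 100 - helped_per_100    # same outcome with or without treatment
    return arr, nnt, helped_per_100, unchanged_per_100

# Example: a 1% absolute risk reduction (5% vs 4% event rates)
arr, nnt, helped, unchanged = decision_aid_numbers(0.05, 0.04)
print(f"ARR {arr:.1%}, NNT {nnt:.0f}: {helped} of 100 helped, {unchanged} of 100 unchanged")

The last two numbers are the icon-array view: 1 of 100 helped, 99 of 100 with the same outcome either way.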

Decision support also helps give accurate accounts of harm. For instance, if you add up infection, bleeding, pneumothorax, and inappropriate shocks for the ICD group in the recently published DANISH trial, you find a 13% complication rate.[11] Although this estimate may be slightly high because some patients may have had two complications, double-digit ICD complication rates are not out of line with two other published studies.[12,13] How many patients offered ICDs are given this information?
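A quick sketch shows why the simple sum runs slightly high: adding the individual complication rates counts a patient once per complication, so it bounds from above the share of patients with at least one complication. The rates below are made up for illustration; they are not the DANISH data.

# Illustrative only: made-up complication rates, not the DANISH data.
rates = {"infection": 0.05, "bleeding": 0.04,
         "pneumothorax": 0.01, "inappropriate shock": 0.03}

summed = sum(rates.values())  # counts a patient once per complication

# If complications occurred independently, the share of patients with
# at least one complication would be slightly lower than the sum:
p_none = 1.0
for p in rates.values():
    p_none *= 1 - p
at_least_one = 1 - p_none

print(f"sum of rates: {summed:.1%}")                     # 13.0% (upper bound)
print(f"at least one complication: {at_least_one:.1%}")  # about 12.4%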

One vital warning about decision aids: these tools can be used to scare people into making the "right decision." An industry-sponsored abstract presented at the 2016 American Heart Association meeting found that perceptions of risk can be manipulated by how data are presented.[14] In this case, showing people their lifetime cardiovascular risk rather than their 10-year risk made them more likely to engage in "prevention strategies." Guess what type of company sponsored this study?
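The framing lever here is simple arithmetic: the same annual risk compounds to a much larger number over a lifetime horizon than over 10 years. A sketch with a hypothetical 0.5% annual event risk (not a figure from the abstract):

# Hypothetical annual risk, chosen only to illustrate the framing effect;
# this is not a number from the AHA abstract.
annual_risk = 0.005

def cumulative_risk(p_annual, years):
    # Chance of at least one event over the horizon, assuming constant annual risk
    return 1 - (1 - p_annual) ** years

print(f"10-year risk: {cumulative_risk(annual_risk, 10):.1%}")              # about 4.9%
print(f"40-year ('lifetime') risk: {cumulative_risk(annual_risk, 40):.1%}") # about 18.2%

Same underlying risk, nearly a fourfold difference in the number the patient sees.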

Conclusion

If doctors want to continue in their roles as learned people and trusted teachers, our knowledge of the benefits and harms of medical action needs improvement. We should hurry, because the digital era and its democracy of information are decreasing our asymmetry of knowledge.


It's time for a shift in culture. I think everyone in healthcare relies too much on guidelines. These documents paternalistically say "treatment X is recommended." The writers review the literature, but they don't, in easily accessible ways, tell us the absolute benefits and harms of treatment X. Clinicians, therefore, get a sense that there are right and wrong therapies. Statins, ICDs, mammograms, annual physical exams, etc—all are "right." Thus, we needn't bother with their actual benefits and harms. This culture begins in training.

All medical action is a gamble. It's time that both patients and clinicians had the right odds. Shared decision making is a fantasy if neither participant in the decision has accurate expectations.
