Eric J. Topol, MD: Hello. I am Eric Topol, editor-in-chief of Medscape. With me today is Professor John Ioannidis from Stanford, who I have been dying to have a chat with for a long time. I'm so glad we could get together. John, welcome.
John P. A. Ioannidis, MD, DSc: Thank you, Eric. It's a great pleasure to chat with you.
Becoming the 'Conscience of Biomedicine'
Topol: I have been following your work and career for a number of years. You are the "contrarian of medicine." I say that in a positive way.
Until I finally had the chance to do this interview with you, I did not know some of your background. You were a math prodigy in high school, you received the National Award in Greece, and you are the son of two physician researchers. You seem like you were made for this role you have, in terms of the conscience of biomedicine. How did you get your roots in this model that you really espouse?
Ioannidis: I was exposed to a lot of science early on. I loved lots of different aspects of the scientific method and scientific discipline I found in mathematics, biology, bench research, clinical research, and clinical epidemiology. I was always very unfocused and I wanted to try my hand at different types of research.
I realized that I was making errors again and again in almost everything that I was trying. I started realizing that other people were also making errors—in the lab, the clinic, and in published literature. Errors are common. They are human. Some of them are probably more common than they should be.
Topol: You got to the point where you estimated that 90% of medical research is flawed.[1,2] That gets depressing, right?
Ioannidis: One can see it as the glass half empty or half full, or 10% full or maybe a little bit more. Medicine has made tremendous progress and is still making progress. One can focus on that.
The question is, how can we improve the efficiency of what we are doing? And how can we decrease the error rate? How can we less frequently be misled and send our best people down blind alleys?
If we see the positive message that we can identify problems and get rid of them, that is very optimistic.
It's Not Just Biomedical Research
Topol: You have been on a crusade and have hit on almost every discipline: genetics, psychology, neuroscience, clinical trials, drug companies, the whole lot. Most recently, I noticed you even went after economics.[3] Is there anything that you have not worked over?
Ioannidis: The great fun and opportunity when working on meta-research—or research on research—is that one very quickly realizes that research methods and research practices, and the way they are applied or transformed, are pretty similar across very different disciplines.
The scientific method is pretty unique. There is heterogeneity in the way that different disciplines have preference for some aspects of it or how exactly to operationalize it, but we can learn a lot by comparing notes. If you look at different fields, you realize that some of the big problems we face in biomedicine may have been solved in other fields pretty easily and may be a done deal.
Vice versa, one could probably transplant some good ideas from biomedical disciplines to other fields. The concepts are similar and the manifestations are different. Obviously the consequences are different, because in medicine it is about lives and people dying because of suboptimal information.
Evidence-ish-Based Medicine
Topol: The wide-angle lens that you have applied is important. It is much more than medicine, and I give you a lot of credit for identifying these common threads.
The problem we have in medicine, though, is this evidence basis, which as you have really proven over the years is so shaky and tenuous. We are trying to make decisions for patients and select treatments and tests and whatnot. What are we going to do since most of the evidence is baseless?
Ioannidis: Some evidence is reliable. There is a gradient. We have very strong evidence for some treatments, interventions, and policies and we need to do something because of it. If we don't, it would be really stupid.
This is not just for interventions but for risk factors. Even in observational epidemiology, no one would deny that smoking is horrible and is going to kill 1 billion people unless we get rid of it. We don't need randomized trials to prove that.
But, of course, there is the other end of the gradient where there is a lot of unreliable evidence. A lot of evidence is very tenuous. We need to train people to understand what the limitations are, what the caveats are, how much they can trust or distrust what they read or what they see, and what they are being called to do. Then make them ask for better evidence.
There is no reason why we should continue to live with suboptimal evidence. Clinicians and clinical researchers should be at the forefront because they realize on a daily basis that they don't have evidence they can trust. They can create questions to try to get the type of evidence they need.
Thoughts on PREDIMED
Topol: This brings up something that just happened. One area that you have tackled is nutritional science. The Mediterranean diet was studied in PREDIMED, the largest randomized diet trial with hard outcomes. It was published in 2013 in the New England Journal of Medicine, and now NEJM has retracted it and republished it[4] on the same day. It had all sorts of irregularities. What is your take on this? It is right up your alley in terms of flawed science.
Ioannidis: Nutrition is clearly a mess, and I have long advocated that we can fix some of that mess by running large-scale, long-term, randomized trials with clinical endpoints. PREDIMED was a trial that tried to do that. It was pretty much the exception compared with all of that irreproducible mess of nutritional epidemiology. I was very happy to see it published. I was very excited that at last we are making some progress.
But unfortunately, PREDIMED seemed to take the path of observational epidemiology in publishing zillions of papers with results that were far more tenuous, and I think what we saw in the retraction was a signal that the data had major flaws. Clearly, the retraction was the right thing to do. However, even after the retraction, I don't feel that we have seen the whole story.
I think the problem detected by the statistical analysis was that the baseline characteristics were improbably similar across groups. The correction that led to the re-publication does not explain how that could have happened by chance; that is, there is no reason why randomizing a whole village as a single entity instead of randomizing individuals, or randomizing some couples together rather than separately, should have produced the pattern that was detected by testing the baseline characteristics.
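As a rough illustration of the statistical intuition at issue here (using hypothetical parameters, not the PREDIMED data or the published re-analysis), the sketch below simulates two-arm trials under individual randomization and under cluster randomization (e.g., a village or couple allocated as one unit), then runs a naive baseline t-test that ignores the clustering; ignored clustering tends to make the arms look more different, not more similar.

```python
# Rough illustration with hypothetical parameters (not the PREDIMED data or
# the published re-analysis): compare baseline-characteristic p-values under
# individual randomization vs. cluster randomization when the baseline test
# ignores the clustering.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

N_TRIALS = 2000      # simulated two-arm trials per scenario
N_PER_ARM = 500      # participants per arm
CLUSTER_SIZE = 10    # hypothetical: participants allocated together as one unit
CLUSTER_SD = 0.5     # between-cluster variability (source of intra-cluster correlation)
WITHIN_SD = 1.0      # within-cluster (individual) variability

def baseline_pvalues(cluster_randomized: bool) -> np.ndarray:
    """One baseline-variable p-value per simulated trial, from a naive t-test."""
    pvals = np.empty(N_TRIALS)
    for i in range(N_TRIALS):
        arms = []
        for _ in range(2):
            if cluster_randomized:
                n_clusters = N_PER_ARM // CLUSTER_SIZE
                cluster_effects = rng.normal(0.0, CLUSTER_SD, n_clusters)
                x = (np.repeat(cluster_effects, CLUSTER_SIZE)
                     + rng.normal(0.0, WITHIN_SD, n_clusters * CLUSTER_SIZE))
            else:
                # same total variance, but every participant is independent
                total_sd = np.hypot(CLUSTER_SD, WITHIN_SD)
                x = rng.normal(0.0, total_sd, N_PER_ARM)
            arms.append(x)
        # naive two-sample t-test that ignores any clustering
        pvals[i] = stats.ttest_ind(arms[0], arms[1]).pvalue
    return pvals

for label, clustered in [("individual randomization", False),
                         ("cluster randomization   ", True)]:
    p = baseline_pvalues(clustered)
    # Individual randomization: p-values roughly uniform on [0, 1].
    # Ignored clustering: arms look MORE different (p-values pulled toward 0),
    # so clustering alone would not yield baselines that are "too similar".
    print(f"{label}: median p = {np.median(p):.2f}, "
          f"fraction p > 0.8 = {np.mean(p > 0.8):.2f}")
```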
My strong belief is that PREDIMED is a seriously flawed trial. I cannot trust it any longer. I love olive oil. But I'm sorry—I cannot trust it. I think there are major problems beyond the retraction. We are looking at some of that and hopefully we will publish some evidence showing that there are deeper problems than that.
Topol: That is really important because I have been influenced by prior studies, like the Lyon Diet Heart Study,[5] which was a smaller but fairly well-done trial, albeit in secondary prevention. But this is why it is such an opportune time to talk to you. A very high-profile journal, NEJM, retracts and republishes an article on the same day. Something is wrong with our system of evidence, right?
Ioannidis: Clearly, and I think that just republishing a trial with seemingly the same results is not going to fix it. In the case of PREDIMED, I would argue that one would have to obtain all of their old data—not the cleaned data but the raw data, before any arbitration—for an independent committee to analyze.
If this were to happen, my bet would be that the effect sizes would shrink or even go away. I would hate to see that. I would like to bet against my own prediction. But there are some very serious problems when we trust trials that have no transparency. They have no openness. They are not willing to share. They are not willing to go through re-analysis. They are not willing to have some independent scrutiny on what is going on. This is still [true for] the majority of randomized trials being published—in NEJM and in other journals as well.
Topol: Wouldn't you have thought that the editors of NEJM, particularly given this unprecedented situation, would have raked over these data and pressed the investigators to get to transparency and truth?
Ioannidis: I would have hoped so, and I still hope that they will allow some further probing into this trial. It would be a lost opportunity if we don't learn more because I think it is just the tip of the iceberg. Far more is going on and, in a way, PREDIMED may be the most honest compared with other trials that may be less honest.