Don’t always trust research: most scientific studies may be wrong
Is everything we eat linked to cancer? Some researchers argue that most research published in journals is sloppy and its results cannot be replicated. Here’s what you should keep in mind. Updated: Jul 06, 2018 10:54 IST
A few years ago, two researchers took the 50 most-used ingredients in a cookbook and studied how many had been linked with a cancer risk or benefit, based on a variety of studies published in scientific journals. The result? Forty out of 50, including salt, sugar, flour and parsley. “Is everything we eat associated with cancer?” the researchers wondered in a 2013 article based on their findings.
Their investigation touched on a known but persistent problem in the research world: too few studies have large enough samples to support generalised conclusions. But pressure on researchers, competition between journals and the media’s insatiable appetite for new studies announcing revolutionary breakthroughs have meant such articles continue to be published.
“The majority of papers that get published, even in serious journals, are pretty sloppy,” said John Ioannidis, professor of medicine at Stanford University, who specialises in the study of scientific studies.
This sworn enemy of bad research published a widely cited article in 2005 entitled “Why Most Published Research Findings Are False”. Since then, he says, only limited progress has been made. Some journals now insist that authors pre-register their research protocol and supply their raw data, which makes it harder for researchers to manipulate findings in order to reach a certain conclusion. It also allows others to verify or replicate their studies.
When studies are replicated, they rarely produce the same results. Only a third of the 100 studies published in three top psychology journals could be successfully replicated in a large 2015 test. Medicine, epidemiology, population science and nutritional studies fare no better, Ioannidis said, when attempts are made to replicate them.
“Across biomedical science and beyond, scientists do not get trained sufficiently on statistics and on methodology,” Ioannidis said. Too many studies are based solely on a few individuals, making it difficult to draw wider conclusions because the samplings have so little hope of being representative.
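The small-sample problem Ioannidis describes can be illustrated with a quick simulation (a minimal sketch in Python; the sample sizes, threshold and function names are illustrative assumptions, not drawn from any study cited in this article): two completely unrelated quantities will regularly show a “strong” correlation when only a handful of individuals are measured, and almost never when the sample is large.

```python
import random
import statistics


def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)


def spurious_rate(n, trials=2000, threshold=0.6):
    """Fraction of trials in which two INDEPENDENT random variables,
    sampled n times each, show an apparently strong correlation
    (|r| above the threshold) purely by chance."""
    random.seed(42)  # fixed seed so the simulation is reproducible
    hits = 0
    for _ in range(trials):
        xs = [random.gauss(0, 1) for _ in range(n)]
        ys = [random.gauss(0, 1) for _ in range(n)]
        if abs(pearson_r(xs, ys)) > threshold:
            hits += 1
    return hits / trials


# Small samples: chance alone yields "strong" correlations fairly often.
print("n=10:  ", spurious_rate(10))
# Large samples: spurious strong correlations essentially disappear.
print("n=1000:", spurious_rate(1000))
```

With only 10 participants per trial, chance alone produces an apparently strong correlation in a noticeable fraction of runs; with 1,000 participants it practically never does, which is one reason small studies so often fail to replicate.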
Coffee and red wine
“Diet is one of the most horrible areas of biomedical investigation,” Professor Ioannidis added, and not just due to conflicts of interest with various food industries. “Measuring diet is extremely difficult,” he stressed. How can we precisely quantify what people eat? In this field, researchers often go on a wild search for correlations within huge databases, without so much as a starting hypothesis. Even when the methodology is good, with the gold standard being a study where participants are chosen at random, the execution can fall short.
A famous 2013 study on the benefits of the Mediterranean diet against heart disease had to be retracted in June by the most prestigious of medical journals, the New England Journal of Medicine, because not all participants were randomly recruited; the results have been revised downwards. So what should we take away from the flood of studies published every day?
Ioannidis recommends asking the following questions: is this something that has been seen just once, or in multiple studies? Is it a small or a large study? Is this a randomised experiment? Who funded it? Are the researchers transparent? These precautions are fundamental in medicine, where bad studies have contributed to the adoption of treatments that are at best ineffective, and at worst harmful.
In their book Ending Medical Reversal, Vinayak Prasad and Adam Cifu offer terrifying examples of practices adopted on the basis of studies that went on to be invalidated, such as opening a brain artery with stents to reduce the risk of a new stroke. It was only after 10 years that a robust, randomised study showed that the practice actually increased the risk of stroke.
The solution lies in the collective tightening of standards by all players in the research world, not just journals but also universities and public funding agencies. But these institutions all operate in competitive environments. “The incentives for everyone in the system are pointed in the wrong direction,” says Ivan Oransky, co-founder of Retraction Watch, which covers the withdrawal of scientific articles. “We try to encourage a culture, an atmosphere where you are rewarded for being transparent.”
The problem also comes from the media, which according to Oransky needs to better explain the uncertainties inherent in scientific research, and resist sensationalism. “We’re talking mostly about the endless terrible studies on coffee, chocolate and red wine,” he said. “Why are we still writing about those? We have to stop with that.”
First Published: Jul 06, 2018 10:53 IST