In 2005, PLoS Medicine published an essay by John Ioannidis titled "Why Most Published Research Findings Are False," which has been downloaded over 100,000 times and was called "an instant cult classic" in a Boston Globe op-ed of July 27, 2006. This week, PLoS Medicine revisits the essay, publishing two articles by researchers who move the debate in two new directions.
In his 2005 essay, Dr Ioannidis wrote: "Published research findings are sometimes refuted by subsequent evidence, with ensuing confusion and disappointment." He argued that there is increasing concern that in modern research, false findings may be the majority or even the vast majority of published research claims, and went on to argue that most claimed research findings are false.
However, in this week's PLoS Medicine, Ramal Moonesinghe (US Centers for Disease Control and Prevention) and colleagues demonstrate that the likelihood of a published research result being true increases when that finding has been repeatedly replicated in multiple studies.
"As part of the scientific enterprise," say the authors, "we know that replication--the performance of another study statistically confirming the same hypothesis--is the cornerstone of science and replication of findings is very important before any causal inference can be drawn." While the importance of replication was acknowledged by Dr Ioannidis, say Dr Moonesinghe and colleagues, he did not show that the likelihood of a statistically significant research finding being true increases when that finding has been replicated in many studies.
The authors say that their new demonstration "should be encouraging news to researchers in their never-ending pursuit of scientific hypothesis generation and testing." Nevertheless, they acknowledge that "more methodologic work is needed to assess and interpret cumulative evidence of research findings and their biological plausibility," particularly in the exploding field of genetic associations.
In the second article, Benjamin Djulbegovic (University of South Florida, USA) and Iztok Hozo (Indiana University Northwest, USA) say that Dr Ioannidis "did not indicate when, if at all, potentially false research results may be considered as acceptable to society." In their article, they calculate the probability above which research findings may become acceptable.
Djulbegovic and Hozo's new model indicates that the probability above which research results should be accepted depends on the expected payback from the research (the benefits) and the inadvertent consequences (the harms). This probability may change dramatically depending on what the authors call "acceptable regret," i.e., our tolerance of making a wrong decision in accepting the research hypothesis. They illustrate their findings by providing a new framework for early stopping rules in clinical research (i.e., when should we accept early findings from a clinical trial indicating the benefits as true?).
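The benefit-harm trade-off described above can be sketched as a simple expected-value threshold. This is a hedged illustration, not Djulbegovic and Hozo's actual model (which additionally formalizes acceptable regret); the function name and the numbers below are assumptions:

```python
def acceptance_threshold(benefit, harm):
    """Smallest probability of truth at which acting on a research
    finding has non-negative expected value.

    Accept when p * benefit - (1 - p) * harm >= 0, which rearranges
    to p >= harm / (benefit + harm).
    """
    return harm / (benefit + harm)

# A finding whose benefit, if real, is four times its potential harm
# can be accepted at a much lower probability of being true than one
# where the harms dominate.
print(acceptance_threshold(benefit=4, harm=1))  # 0.2
print(acceptance_threshold(benefit=1, harm=4))  # 0.8
```

The sketch captures the article's core claim: the bar for accepting a potentially false finding is not fixed but moves with the stakes, falling when benefits dwarf harms and rising when the reverse holds.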
"Obtaining absolute 'truth' in research, say Djulbegovic and Hozo, "is impossible, and so society has to decide when less-than-perfect results may become acceptable."
Moonesinghe R, Khoury MJ, Janssens ACJW (2007) Most published research findings are false--But a little replication goes a long way. PLoS Med 4(2): e28. (http://dx.doi.org/10.1371/journal.pmed.0040028)
Djulbegovic B, Hozo I (2007) When should potentially false research findings be considered acceptable? PLoS Med 4(2): e26. (http://dx.doi.org/10.1371/journal.pmed.0040026)
Materials provided by Public Library of Science.