A new study published in the open-access journal PLOS Biology suggests that substantial bias in the reporting of animal studies could be compromising the scientific literature, giving a misleading picture of the chances that potential treatments will work in humans.
Testing a new therapeutic intervention (such as a drug or surgical procedure) on human subjects is expensive, risky and ethically complex, so the vast majority of interventions are first tested on animals. Unfortunately, cost and ethical issues constrain the size of animal studies, giving them limited statistical power; as a result, the scientific literature contains many studies whose outcomes are uncertain or contradictory. A way around this limitation is to conduct a "meta-analysis": scientists collect data from a large number of published studies of the same intervention, combine them using sophisticated statistical methods, and thereby obtain a much more solid basis on which to decide whether to proceed to human clinical trials.
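The idea of pooling studies can be sketched with a minimal fixed-effect, inverse-variance meta-analysis. This is an illustration only; the effect sizes and standard errors below are made up, and the published meta-analyses the paper draws on use more sophisticated (e.g. random-effects) models.

```python
import math

def fixed_effect_meta(effects, std_errors):
    """Combine per-study effect sizes with inverse-variance weights
    (a minimal fixed-effect meta-analysis; illustrative data only)."""
    weights = [1.0 / se**2 for se in std_errors]          # precise studies weigh more
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    pooled_se = math.sqrt(1.0 / sum(weights))             # pooled estimate is more precise
    return pooled, pooled_se

# Three small, individually inconclusive hypothetical animal studies
effects = [0.40, 0.15, 0.55]   # e.g. standardized mean differences
ses = [0.30, 0.25, 0.35]       # large standard errors: low statistical power
pooled, pooled_se = fixed_effect_meta(effects, ses)
print(f"pooled effect = {pooled:.3f} +/- {pooled_se:.3f}")
```

Because the weights add up, the pooled standard error is smaller than that of any single study, which is exactly why a meta-analysis can settle questions that individual underpowered studies cannot.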
In the new study, Konstantinos Tsilidis, John Ioannidis and colleagues at Stanford University examined 160 previously published meta-analyses of animal studies looking at potential treatments for a range of serious human neurological disorders (multiple sclerosis, stroke, Parkinson's disease, Alzheimer's disease and spinal cord injury). These meta-analyses covered 1000 original published animal studies comparing more than 4000 sets of animals. The authors' "meta-analysis of meta-analyses" used the most precise study in each meta-analysis as an estimate of the true effect size of a particular treatment, and then compared the number of studies reporting statistically significant results with the number expected given that effect size. Alarmingly, more than twice as many studies as expected reached statistical significance.
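The logic of this excess-significance test can be sketched as follows: given a plausible "true" effect (here taken from the most precise study), each study's statistical power gives its probability of reaching significance, and summing those powers gives the expected count of significant studies. The numbers below are invented for illustration and are not from the paper.

```python
import math

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

Z_CRIT = 1.959964  # two-sided 5% critical value

def power(true_effect, se):
    """Power of a two-sided z-test to detect `true_effect`
    for a study with standard error `se` (normal approximation)."""
    mu = true_effect / se
    return (1.0 - norm_cdf(Z_CRIT - mu)) + norm_cdf(-Z_CRIT - mu)

# Hypothetical data: (observed effect, standard error) per animal study;
# the most precise study suggests a true effect of 0.25.
studies = [(0.60, 0.25), (0.50, 0.22), (0.70, 0.30), (0.45, 0.20), (0.55, 0.28)]
true_effect = 0.25

expected = sum(power(true_effect, se) for _, se in studies)
observed = sum(1 for eff, se in studies if abs(eff / se) > Z_CRIT)
print(f"expected significant: {expected:.2f}, observed: {observed}")
```

In this toy example fewer than one study should reach significance by power alone, yet all five do, which is the kind of mismatch the authors flag as "excess significance bias."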
The authors suggest that rather than reflecting wilful fraud on the part of the scientists who conduct the original studies, this "excess significance bias" comes from two main sources. One is that scientists conducting an animal study tend to choose the method of data analysis that appears to give them the "better" result. The second arises because scientists usually want to publish in higher profile journals; such journals tend to strongly prefer studies with positive, rather than negative, results. Many studies with negative results are not even submitted for publication or, if submitted, either cannot get published or are published belatedly in low-visibility journals, reducing their chances of inclusion in a meta-analysis.
It is likely that the types of bias reported in the new PLOS Biology paper have been responsible for the inappropriate promotion of treatments from animal studies into human clinical trials. It also seems unlikely that this phenomenon is confined to studies of neurological disorders; rather, it is probably a general feature of the reporting of animal studies.
The authors suggest several remedies for the bias that they have observed. First, animal studies should adhere to strict guidelines (such as the ARRIVE guidelines) for study design and analysis. Second, animal studies (like human clinical trials) should be pre-registered so that publication of the outcome, however negative, is ensured. Third, methodological details and raw data should be made available so that other scientists can more easily verify published studies.
Materials provided by Public Library of Science. Note: Content may be edited for style and length.