Oct. 10, 2012 A workshop sponsored by NIH's National Institute of Neurological Disorders and Stroke (NINDS) has produced a set of consensus recommendations to improve the design and reporting of animal studies. By making animal studies easier to replicate and interpret, the workshop recommendations are expected to help funnel promising therapies to patients.
Biomedical research involving animals has led to life-saving drugs for heart disease, cancer, stroke, diabetes, HIV/AIDS, and many other conditions, but positive results from animal studies are sometimes difficult to translate into successful clinical trials.
"Our goal is to ensure that preclinical animal studies are reported in sufficient detail so that funding agencies, scientific journals and the broader scientific community can adequately review the research and decide how to move forward," said NINDS Director Story C. Landis, Ph.D.
The workshop recommendations, published in the Oct. 11, 2012 issue of Nature, apply to scientific papers as well as grant applications that describe preclinical animal studies -- those intended to develop and test potential therapies. About 95 percent of the animals used in research are mice and rats.
The recommendations say that all preclinical animal studies should include details about four key aspects of research methodology: randomization, blinding, sample size estimation, and data handling.
Randomization means randomly assigning animals to treatment and control groups. In a blinded study, the researchers who analyze the results are unaware of (blinded to) which animals are in the treatment and control groups until the analysis is complete.
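The two procedures above can be sketched in a few lines of code. This is a minimal illustration, not part of the workshop recommendations; the group names, animal IDs, and neutral codes are hypothetical. The idea is simply to shuffle animals into groups at random, then hand the analyst only coded labels so the group identities stay hidden until the analysis is done.

```python
import random

def randomize(animal_ids, groups=("treatment", "control"), seed=None):
    """Randomly assign animals to two groups of (near-)equal size."""
    rng = random.Random(seed)
    ids = list(animal_ids)
    rng.shuffle(ids)                       # random order decides group membership
    half = len(ids) // 2
    return {animal: (groups[0] if i < half else groups[1])
            for i, animal in enumerate(ids)}

def blind(assignment, seed=None):
    """Replace group names with neutral codes ('A'/'B') so the analyst
    cannot tell which group is which; return the coded assignment plus
    the key needed to unblind once the analysis is complete."""
    rng = random.Random(seed)
    groups = sorted(set(assignment.values()))
    codes = ["A", "B"]
    rng.shuffle(codes)                     # which group gets which code is itself random
    key = dict(zip(groups, codes))
    coded = {animal: key[g] for animal, g in assignment.items()}
    return coded, key
```

In this sketch, the analyst receives only the coded assignment; the key is kept by a third party and revealed after the results are in.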
Sample size estimation refers to calculating, before an experiment begins, the smallest number of animals per group needed to detect meaningful differences between groups. Common data handling issues include how to conduct an interim data analysis without biasing the study toward a hoped-for result, and how to handle outlier data. (For example, if 24 of 25 animals improve on an experimental drug and one gets worse, what happened? If that animal developed an illness unrelated to the drug, should it be included in an analysis of the drug's efficacy?) The recommendations say that these kinds of decisions need to be made during the design of the study, not after it is under way.
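The sample size calculation described above can be sketched with the standard normal approximation for a two-group comparison. This is a hedged example using only Python's standard library; the effect size (d = 0.8), significance level, and power are illustrative choices, not values from the article.

```python
import math
from statistics import NormalDist

def n_per_group(effect_size, alpha=0.05, power=0.80):
    """Smallest number of animals per group needed to detect a
    standardized effect of `effect_size` in a two-sided, two-sample
    comparison (normal approximation to the two-sample t-test)."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # critical value for the test
    z_beta = NormalDist().inv_cdf(power)           # quantile for the desired power
    n = 2 * ((z_alpha + z_beta) / effect_size) ** 2
    return math.ceil(n)

# A 'large' standardized effect (d = 0.8) at 5% significance and 80% power:
print(n_per_group(0.8))  # → 25 animals per group
```

Note how quickly the required group size grows as the expected effect shrinks: halving the effect size to d = 0.4 roughly quadruples the number of animals needed, which is why the estimate must be made before the experiment starts.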
The workshop, held June 20-21, 2012 in Washington, D.C., brought together NINDS representatives, patient advocates, and scientists from academia and industry. Editors from Cell, the Journal of the American Medical Association, Nature, Nature Neuroscience, Neurology, Neuron, and Science Translational Medicine also participated.
The recommendations highlight several disease areas where inadequate reporting has hindered translation from animal studies to human trials. Animal studies of stroke have helped bring about the use of medications to control risk factors such as high blood pressure, and have led to drugs for dissolving the blood clots that can cause stroke. Unfortunately, efforts to develop neuroprotectants -- drugs that would shield vulnerable brain cells from a stroke -- have met with repeated failure, despite promise in animal studies.
"For more than decade, the stroke community has been very proactive in working to identify the reasons for failures in clinical trials, and their analyses have led to a number of consensus criteria to optimize the value of animal stroke studies," said Walter Koroshetz, M.D., deputy director of NINDS.
In 2007 an international consortium of researchers found that of 166 animal studies on the physiology of stroke, only 18 reported on blinding, five reported on randomization and none reported on sample size estimation. The group's effort, called the Collaborative Approach to Meta-Analysis and Review of Animal Data from Experimental Stroke, also has shown that preclinical studies reporting the least methodological detail also tend to report the largest effects from experimental treatments.
These issues are not unique to the stroke field. A review of 100 animal studies on cancer found that 21 studies reported on randomization, two on blinding, and none on sample size estimation. In early 2012, an NINDS-funded program called Facilities of Research Excellence -- Spinal Cord Injury (FORE-SCI) found that many spinal cord injury studies were difficult to replicate because of incomplete or inaccurate methodological details. In some cases, the FORE-SCI group was able to learn these details and successfully replicate studies by working closely with the original investigators.
"The goal of the workshop recommendations is to improve the quality of scientific reporting through a shared effort," said Shai Silberberg, Ph.D., a program director at NINDS who helped organize the workshop. "Achieving a meaningful change will require the cooperation of funding agencies, journal editors and investigators, including those who volunteer their time to review scientific manuscripts and grant applications."
Dr. Silberberg emphasized that NINDS recognizes a distinction between animal studies that test hypotheses about potential treatments and observational or exploratory studies meant to generate new hypotheses. The recommendations focus on hypothesis-testing studies rather than hypothesis-generating studies, he said.
The workshop recommendations also note that hypothesis-testing preclinical studies should be designed with the same rigor as clinical studies. In the 1990s, concerns about under-reporting and bias in clinical studies led British and Canadian researchers to develop the Consolidated Standards of Reporting Trials (CONSORT) statement. It includes a 25-item checklist of vital information that researchers should provide and readers should look for in write-ups of clinical studies. The CONSORT statement has been adopted by more than half of the core biomedical journals searchable through NIH's index of scientific publications, PubMed.
NINDS has already taken steps to implement the recommendations. In August 2011, the Institute published a notice in the NIH Guide for Grants and Contracts emphasizing the importance of good study design in grant applications. Consistent with the recommendations, NINDS has posted a list of points for grant applicants to consider when designing and reporting experiments, and for reviewers to consider when reading grant applications. NINDS is evaluating ways to encourage broad adoption of the recommendations. Possibilities include providing additional training to investigators, and creating a checklist similar to the CONSORT list.
The above story is based on materials provided by NIH/National Institute of Neurological Disorders and Stroke.
- Story C. Landis, Susan G. Amara, Khusru Asadullah, Chris P. Austin, Robi Blumenstein, Eileen W. Bradley, Ronald G. Crystal, Robert B. Darnell, Robert J. Ferrante, Howard Fillit, Robert Finkelstein, Marc Fisher, Howard E. Gendelman, Robert M. Golub, John L. Goudreau, Robert A. Gross, Amelie K. Gubitz, Sharon E. Hesterlee, David W. Howells, John Huguenard, Katrina Kelner, Walter Koroshetz, Dimitri Krainc, Stanley E. Lazic, Michael S. Levine, Malcolm R. Macleod, John M. McCall, Richard T. Moxley III, Kalyani Narasimhan, Linda J. Noble, Steve Perrin, John D. Porter, Oswald Steward, Ellis Unger, Ursula Utz, Shai D. Silberberg. A call for transparent reporting to optimize the predictive value of preclinical research. Nature, 2012; 490 (7419): 187 DOI: 10.1038/nature11556