Our Reproducibility Problem
The first wave of results is in from a project attempting to replicate findings from high-impact cancer studies, and the findings are a mixed bag.
The development of a drug candidate typically begins with academic researchers discovering a biological pathway or molecule that might be a good disease target, then publishing their work in scientific journals. Replicating those results can be challenging. In fact, findings reported in 2015 by Boston University suggest that as much as US$28 billion is spent yearly on preclinical research that can’t be reproduced.
A high-profile effort known as the Reproducibility Project, funded by the US National Institutes of Health to try to replicate influential papers in cancer biology, bears this out. The first round of results, published today in eLife, is discouraging: only two of the five high-impact papers included in the review could be replicated. Two others had inconclusive results, and another failed the reproducibility bar entirely. The findings, reported yesterday by Science staff writer Jocelyn Kaiser, drew mixed reactions from the cancer research community. Some said the results are clear-cut evidence of a problem in how research is conducted. Others argued that they simply show good studies can be difficult to reproduce precisely, and criticized the Reproducibility Project’s decision to adhere to the strict protocols established with eLife, which left too little room for troubleshooting.
In a Eureka post last year, Charles River scientists Julie Frearson and Robert Hodgson also discussed solutions to the reproducibility problem, in their case for Alzheimer’s disease. The Alzheimer’s disease literature is littered with hundreds of potential novel treatments based on findings from a single lab. “Translating those findings into treatments that have clinical utility has been unsuccessful, and even translating those findings into something that works across labs is unsuccessful more often than not,” they noted.
Frearson and Hodgson suggest that a reproducibility network, spearheaded by the US government, for pressure-testing key discoveries would be one way to focus industry efforts on validated phenomena and thereby reduce attrition in the earliest phase of drug discovery: target validation. Independent laboratories, such as CROs, could offer “industry-standard quality management infrastructure and methods, non-biased approaches to studies and the staff to conduct high-quality projects efficiently.”