Making Efficacy Models Count
Without a doubt, animal models of disease have contributed broadly to the development of new drugs for the treatment of many human conditions, including cancer, diabetes and infectious diseases.
Efficacy and safety data obtained from preclinical studies help scientists decide whether to move a compound into clinical trials or whether to shelve it. But preclinical animal models of disease don’t always predict how a potential drug or vaccine candidate will perform in humans. This is particularly true in the fields of stroke and traumatic brain injury, where novel drugs may look promising in preclinical testing but end up demonstrating much weaker or no efficacy in clinical trials. One recent review in PLoS Medicine, for instance, noted that only 11% of investigational drugs that enter clinical testing are ultimately licensed.
These uneven results have, inevitably, raised questions about the value of animal models of disease and the design of in vivo preclinical efficacy studies. With the steep cost of developing new therapies and vaccines growing by the day, researchers and pharmaceutical manufacturers are working out ways to improve the odds of an experimental drug or vaccine making it through the clinical trials process.
Sizing Up the Model
The reasons for the lack of correlation between preclinical and clinical results are complicated and varied. Sometimes the conduct and reporting of preclinical efficacy studies are not robust enough (e.g. lack of study detail, no randomization, bias in endpoint assessment, inadequate reporting). There is also the well-known phenomenon of publication bias, in which researchers, drug developers and even journal editors selectively publish studies that demonstrated significant, positive results over studies that turned up inconclusive or negative findings. This imbalance in the literature can lead to a skewed view of how well a particular compound actually performed in preclinical testing.
There are also limits to how well animal models actually reflect human disease, particularly when the pathogenesis of the human disease is not well understood or when the disease being studied in animals is unique to humans. Researchers in the areas of stroke, traumatic brain injury and spinal cord injury have, for some years, been trying to determine which data from animal efficacy studies justify moving a new drug into clinical trials.
Nor are the rules governing preclinical efficacy studies consistent throughout the field. Unlike preclinical safety studies, preclinical efficacy studies are not held to any specific regulatory guidelines, such as the Good Laboratory Practice (GLP) regulations that drug regulatory authorities require to help establish the safety of drugs prior to clinical trials. In the absence of equivalent guidelines, several groups have stepped in and issued their own lists of guidelines and recommendations on the design and execution of in vivo efficacy studies.
A systematic review of some 26 different “quality checklists” by McGill University scientist Valerie Henderson and colleagues in the PLoS Medicine review identified 55 recommendations, including the establishment of preclinical data repositories and instituting research-wide reporting standards for animal studies. The researchers combed five different databases, such as Medline and Google Scholar, to conduct the meta-analysis.
Refining the Process
So how might these guidelines impact a Contract Research Organization (CRO) such as Charles River, which provides preclinical animal model discovery services to clients around the globe? In some cases, the recommendations are already common to GLP practices, so the impact will not be that significant. But others will require consideration and development.
Some recommendations suggest that researchers reproduce their treatment effects in more than one animal model type, and then have independent research groups validate the results. While the idea has merit, many novel therapies are not available for testing by independent groups. In addition, what considerations will be given to assessing and controlling the variability of disease severity in a given animal model?
As researchers grapple with the various challenges of translating animal science into clinical efficacy, it’s also important to note that the relevance of the animal model, the timing of drug administration and the choice of clinical endpoints are all likely to change as disease diagnosis and management are refined. It is therefore important for labs to continue to engage with the wider scientific and medical community to ensure that preclinical studies are designed on the basis of what is known and performed clinically.
Validity checklists such as those suggested by Henderson’s team are an important step toward determining whether preclinical efficacy findings are reliable and whether they can be generalized to a clinical population. In addition, as these checklists are refined and adopted, the value of animal efficacy models in the drug development process can be evaluated.
In other words, let’s promote standards that help ensure reproducibility and translatability of animal efficacy studies.
Henderson et al., Threats to Validity in the Design and Conduct of Preclinical Efficacy Studies: A Systematic Review of Guidelines for In Vivo Animal Experiments, PLoS Medicine. http://www.plosmedicine.org/article/info%3Adoi%2F10.1371%2Fjournal.pmed.1001489