
4

Reproducibility and Predictivity

TRANSLATIONAL STROKE RESEARCH: THE “WORST PRACTICES” OF ANIMAL RESEARCH

Ulrich Dirnagl, professor and director of the Department of Experimental Neurology, Charité Universitätsmedizin Berlin, Germany, took a different tack than some of the earlier speakers in the workshop. Rather than talking about the successes or opportunities of animal research, he discussed his work in translational stroke research as a cautionary tale. Dirnagl said that learning from the “worst practices” of his field could help other researchers do better in the future. Over recent decades, researchers studying stroke in animal models have published “thousands of articles in top journals,” using “millions of animals” and consuming “a lot of money.” However, treatments found to be highly effective in animal models had no effect—or even a negative effect—once they were translated into clinical trials in humans. For example, a meta-analysis of animal studies on neuroprotectants found a mean treatment effect of roughly 30%. When tested in humans, these neuroprotectants had no effect or may even have harmed patients. There remains only one clinically proven pharmacological therapy for acute ischemic stroke, and no proven way to protect or regenerate the brain, said Dirnagl.

“If translation is possible—at least in principle—then why have we not succeeded in neuroprotection?” posited Dirnagl. While the easiest answer is that animals are simply different from humans and cannot be used to predict human response, Dirnagl said that he feels this is highly unlikely. Citing examples in which animal models have been useful for stroke research, Dirnagl outlined several reasons why stroke drug research in animals has nonetheless been unsuccessful.

First, said Dirnagl, many of the animal studies published in the stroke field suffer from a lack of internal validity. There is selection bias: randomization is reported in only about 30% of studies. Performance and detection bias are also high, with blinding likewise reported about 30% of the time. Attrition bias is widespread: treatment and control groups often differ in size, frequently because of losses of animals per group that are not adequately explained.

Second, many of these studies also suffer from a lack of external and construct validity. Dirnagl compared a group of mouse subjects—all of whom are young inbred twins who have been raised in isolation and fed a strict diet of granola—to the diversity of the human population, who vary widely by age, sex, comorbidities, medications, and exposure to pathogens and antigens throughout life. These confounders in humans are not replicated in the animal model and make it difficult for such research to predict the response of human patients. As examples, Dirnagl presented one study in which a drug had an effect in males but not females (Harhausen et al., 2010), and another in which the drug worked only in younger subjects but not older ones (Thériault et al., 2016). Another study found that an intervention worked only in subjects that were not hypotensive (De Geyter et al., 2013). Factors such as gender, age, and comorbidities are highly relevant for human patient populations, but animal research often does not account for this variability, said Dirnagl.

A third issue is the high degree of standardization in animal models; subjects are usually kept in identical environments and treated identically, to the extent possible. While this is often seen as a benefit of animal studies because it removes the influence of known variables, it can lower external validity by producing “truths” that are valid only in the controlled environment of the study (Richter et al., 2009). This effect makes the research less reproducible and less predictive of the response seen in human patients, said Dirnagl.

A fourth issue is the use of specific-pathogen-free housing facilities, which are kept free of certain infectious organisms to prevent those organisms from interfering with the experiment. These conditions result in mice with the immune status of a newborn, which may have tremendous consequences for the ability to translate findings from these mice to humans.

Finally, experiments suffer from what Dirnagl called a complete power failure—that is, studies are chronically underpowered. A meta-analysis found that the mean group size in mouse stroke studies was eight subjects (Holman et al., 2016), giving these studies a mean statistical power of around 45%. This means, said Dirnagl, that the false-positive rate and the overestimation of true effects are both around 50% (Dirnagl, 2016).
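
Dirnagl’s figure of roughly 45% mean power can be illustrated with a quick Monte Carlo sketch. The choice of test (two-sample t-test) and the assumed true effect size (d = 1.0 standard deviations) below are illustrative assumptions, not parameters reported in the talk or in Holman et al. (2016):

```python
# Monte Carlo sketch of the power problem Dirnagl described. The test
# (two-sample t-test) and the assumed true effect size (d = 1.0 SDs) are
# illustrative assumptions, not parameters from the talk or Holman et al.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_per_group, d, alpha, n_sim = 8, 1.0, 0.05, 20_000

significant = 0
for _ in range(n_sim):
    control = rng.normal(0.0, 1.0, n_per_group)
    treated = rng.normal(d, 1.0, n_per_group)      # true effect of d SDs
    _, p = stats.ttest_ind(treated, control)
    significant += p < alpha

print(f"Estimated power with n={n_per_group} per group: {significant / n_sim:.2f}")
# With these assumptions, power comes out near 0.45—fewer than half of such
# experiments would detect a real, sizable effect.
```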

Dirnagl then addressed the issue of avatar mouse models (i.e., co-clinical trials), in which mice are transplanted with patient-derived xenografts in order to screen anti-cancer drugs for an individual patient. While this approach has promise, Dirnagl cautioned that such personalized animal research may suffer from the same issues as traditional animal model research, such as lack of statistical power, lack of internal and external validity, and lack of attention to the effects of comorbidities and other confounders. The lack of a mature immune system in mouse models may be particularly problematic for xenotransplant research. Dirnagl concluded that most of the issues that have confounded population-based models would remain problematic in personalized models, so attention must be paid to them.

SYSTEMATIC REVIEWS

Merel Ritskes Hoitinga, professor in evidence-based laboratory animal science at Radboud University Medical Center in the Netherlands, said that in order to move forward with animal-based research for precision medicine, it is important to first reflect on the current situation, which, unfortunately, is a “reproducibility crisis.” A 2015 analysis estimated that between 51% and 89% of preclinical studies are not reproducible (Freedman et al., 2015). One consequence of this reproducibility crisis, said Hoitinga, is a translation crisis. As Ulrich Dirnagl discussed, the translation success rate in stroke research is “abysmally low,” and the decline in recent years in the number of drugs approved annually by the U.S. Food and Drug Administration (FDA) suggests that there is a problem. The reproducibility crisis has a number of possible causes, said Hoitinga, including the following (Prinz et al., 2011):

  • Lack of reporting of methodological details
  • Lack of standardization
  • Incorrect statistical analysis, insufficient sample sizes
  • Interaction with different environmental influences
  • Selective reporting of results
  • Bias toward publishing positive results
  • Significance chasing
  • Errors undetected in the current peer-review system

Fortunately, said Hoitinga, there is a solution to this crisis: systematic reviews. Systematic reviews are an appropriate avenue for providing transparency in support of the quality of animal studies and will hopefully trigger improvement in the field. Hoitinga and her group at the Systematic Review Centre for Laboratory Animal Experimentation (SYRCLE) teach researchers how to perform systematic reviews. SYRCLE holds workshops, provides coaching and supervision, develops guidelines and tools, and has started an international network of ambassadors who promote systematic reviews of animal studies in their local areas. In addition, SYRCLE executes systematic reviews and attempts to identify success factors for translation of animal studies. The six steps of a systematic review are fairly simple, but it is important to perform them carefully and objectively in order to reach evidence-based conclusions, said Hoitinga.

  1. Phrase the research question
  2. Search for all evidence
  3. Select relevant studies
  4. Extract characteristics (e.g., species, sex, dose)
  5. Assess study quality
  6. Perform a meta-analysis of the data (a minimal pooling sketch follows this list)
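
As a minimal illustration of step 6, the sketch below pools hypothetical per-study effect estimates with fixed-effect, inverse-variance weighting. The numbers are invented for illustration; SYRCLE’s actual workflow and meta-analytic choices (e.g., random-effects models) may differ.

```python
# Minimal sketch of step 6: fixed-effect, inverse-variance pooling of
# per-study effect estimates. All numbers are invented for illustration.
import numpy as np

# Hypothetical per-study effect sizes (e.g., standardized mean differences)
# and their standard errors, extracted during step 4.
effects = np.array([0.45, 0.80, 0.30, 0.62])
ses = np.array([0.20, 0.35, 0.15, 0.25])

weights = 1.0 / ses**2                             # inverse-variance weights
pooled = np.sum(weights * effects) / np.sum(weights)
pooled_se = np.sqrt(1.0 / np.sum(weights))
ci_low, ci_high = pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se

print(f"Pooled effect: {pooled:.2f} (95% CI {ci_low:.2f} to {ci_high:.2f})")
```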

The benefits of systematic reviews are myriad, according to Hoitinga. Systematic reviews can help researchers make an evidence-based choice of animal models. For example, a series of systematic reviews revealed that rodents were a poor choice for modeling cartilage defects and that large-animal models would be more appropriate (de Vries et al., 2012). Systematic reviews also increase translational transparency and can help with implementing the three Rs (replace, reduce, refine) across animal-based research. For example, a systematic review found that many animal studies had already shown that chemotherapy impairs wound healing (Jacobson et al., 2017); now that this is a known effect, the experiments do not need to be repeated, said Hoitinga. Finally, systematic reviews make reporting quality transparent. By gathering and analyzing all relevant studies, a systematic review can reveal where there are issues with the quality of the research—for example, whether the studies reported the use of randomization and blinding. This last benefit is extremely important for translation. Between 50% and 80% of studies do not report whether they used randomization or blinding; when a study does not report on these factors, it is unclear whether the results are reliable or reproducible, as the results could overestimate the effect of an intervention and—upon translation—unnecessarily expose patients to potential harm (Horn et al., 2001; Sena et al., 2010).

Improving the reporting quality of animal studies is the goal of the Animal Research: Reporting of In Vivo Experiments (ARRIVE) guidelines, which were first published in 2010 and have since been adopted by hundreds of journals. The ARRIVE guidelines provide a checklist of details to include in submitted papers, including the species and strain of animals used, randomization and blinding procedures, and drug formulation and dose (Kilkenny et al., 2010). The guidelines are included in the instructions to authors of journals that have adopted them. Unfortunately, there has been hardly any improvement in the publication quality of animal studies, said Hoitinga. A report in Science indicated that researchers are either unaware of the guidelines or are simply ignoring them (Enserink, 2017). There are efforts afoot to improve the uptake of ARRIVE, but it will take commitment and culture change among those involved, including funders, editors, academics, and laboratory animal veterinarians.

Hoitinga concluded with a simple statement: “Transparency provided by systematic reviews leads to subsequent improvements in quality of reporting,” and high-quality reporting leads to a more reliable assessment of the translation potential of data. Moving forward in precision medicine will require the use of animal models that are chosen based on high-quality evidence from well-reported and replicable studies. Performing systematic reviews of studies will help ensure transparency and high quality.

REPRODUCIBILITY IN LARGE SHARED DATASETS

Arjun Manrai, instructor in the Department of Biomedical Informatics at Harvard Medical School, began with a quote from Karl Popper: “Non-reproducible single occurrences are of no significance to science” (Popper, 1959, p. 86). Reproducibility is a very old concept, he said, that has gained a great deal of attention lately. The crisis of reproducibility in science has been particularly prominent in drug development, and he noted several efforts to monitor it, including Retraction Watch, a blog that tracks retractions as a window into the scientific process (retractionwatch.com), and multiple studies that have identified numerous instances of irreproducible data (e.g., Begley and Ellis, 2012).

Attempts to improve the reproducibility of biomedicine and animal science, said Manrai, have largely focused on the responsibilities and obligations of the individual researchers. A 2016 survey by Nature asked 1,500 scientists what could improve reproducibility: the top two answers were “better understanding of statistics” and “better mentoring and supervision” (Baker, 2016). However, said Manrai, it is not just an issue of individual knowledge and responsibility. There are critical structural factors that influence the reproducibility of study findings that need to be understood and addressed. One of these structural factors—hidden multiplicity—is particularly relevant for precision medicine research.

Manrai discussed two technologies common in precision medicine—the single nucleotide polymorphism (SNP) chip and the large shared database. He said that multiplicity is explicitly and rigorously addressed in the context of the SNP chip, but almost never addressed in the context of a large shared database. The database resembles high-throughput technologies in scale but not in synchrony: it is a tool used by many investigators for multiple studies, but the studies are conducted over many years. Because of this lack of synchrony, there is little attempt to address the issue of multiplicity in database studies. Manrai said that a 2005 paper titled “Why Most Published Research Findings Are False” by John Ioannidis provides a framework for thinking about the reproducibility of an entire field of science (Ioannidis, 2005). Ioannidis applied the concept of PPV—usually used to describe the positive predictive value of a diagnostic test—to a given scientific field. Calculating PPV involves assessing several parameters, key ones being “R” (the pre-study odds, i.e., the ratio of non-null to null relationships being studied in the field) and “u” (a bias term describing the proportion of findings that are published but that would not have been had the studies been conducted perfectly).
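
For reference, the underlying relationship from Ioannidis (2005)—written here without the bias term u—expresses PPV in terms of the pre-study odds R, the type I error rate α, and the type II error rate β:

```latex
\mathrm{PPV} \;=\; \frac{(1-\beta)\,R}{R - \beta R + \alpha}
```

Incorporating the bias term u lowers this value further.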

Manrai has worked to extend the Ioannidis model to large datasets and to develop a calculation for a new metric called the “dataset positive predictive value” (dPPV; Manrai et al., 2018). The math involved is complex, but the general idea is to look at two key factors: how many researchers are investigating different relationships, and how well these different studies are powered. Manrai noted that dPPV is a stochastic process that can be written as a ratio of Poisson binomial distributions; that is, dPPV is a time-varying random variable. (A Poisson binomial distribution is a generalization of the binomial distribution with non-equal probabilities of success on each trial.) Communal science settings that use large databases are ubiquitous, said Manrai—for example, large federal datasets, such as the National Health and Nutrition Examination Survey, as well as large-scale genomics datasets, such as the Genome Aggregation Database and the Mouse Phenome Database. In a recent publication documenting the importance of reproducible precision medicine, Manrai examined classifications of patient records at a leading testing laboratory and found that genetic variants were misclassified and that these misclassifications disproportionately affected African Americans compared with other populations (Manrai et al., 2016). Understanding communal scientific inquiry through frameworks like dPPV may help avoid such irreproducible findings.

Using a simple scenario, Manrai showed workshop participants how to calculate the dPPV. In this scenario, 1,000 relationships are being studied using a shared dataset. One study is performed per relationship, with a significance threshold of p < 0.05 for each study. The pre-study odds—the ratio of non-null to null relationships—are 0.001, and the studies are well powered at 0.8. It might seem, said Manrai, that this is a good setup. However, if only the studies that survive the 0.05 threshold are reported, the expected reproducibility of the findings from this dataset is less than 2%. Manrai explained that, although the statistical power exceeds the false-positive rate by a factor of 16, the null relationships dwarf the non-null relationships by a factor of a thousand. For real-life examples of the many unaccounted-for paths researchers can take during data analysis, Manrai referred workshop participants to an article by Gelman and Loken (2014).
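
A back-of-the-envelope version of this scenario can be worked through in a few lines. The sketch below is a simplified expected-value calculation, not Manrai et al.’s (2018) full Poisson-binomial model of dPPV:

```python
# Scenario from the talk: 1,000 relationships probed once each against a
# shared dataset; pre-study odds R = 0.001, power = 0.8, alpha = 0.05; only
# "significant" results are reported.
alpha, power, R = 0.05, 0.8, 0.001
n_relationships = 1_000

n_true = n_relationships * R / (1 + R)      # ~1 genuinely non-null relationship
n_null = n_relationships - n_true           # ~999 null relationships

expected_true_pos = n_true * power          # ~0.8 reported true findings
expected_false_pos = n_null * alpha         # ~50 reported false findings

dppv = expected_true_pos / (expected_true_pos + expected_false_pos)
print(f"Expected share of reported findings that are true: {dppv:.1%}")  # ~1.6%
```

Despite 80% power, nearly all reported “hits” are expected to be false positives, matching the under-2% figure above.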

The reproducibility of findings from a large shared dataset is influenced by several factors, said Manrai. First, reproducibility is lowered when researchers who arrive late to the dataset pursue hypotheses that are a priori less likely. In this scenario, the key parameter is the ratio of non-null to null hypotheses, represented by “R.” Researchers who had earlier access to the dataset may have investigated relationships that were a priori more likely. However, Manrai said that the reproducibility of research on less likely hypotheses could be improved by performing studies with greater power. When the first genome-wide association studies were conducted, said Manrai, researchers learned about the effect size spectrum and compensated by increasing sample sizes to tens or hundreds of thousands of individuals in order to find genome-wide significant associations. Effect sizes may have been lower than initially hoped for, but the results were much more reliable than those of earlier candidate-gene studies in human genetics.

Manrai noted that some of the findings of his work are counterintuitive. For example, he found that more studies per relationship tend to lower the mean and increase the variance of dPPV. Although one might assume that more studies would help to corroborate relationships, he explained that when many studies are conducted on a given relationship and the findings are selectively reported, the ones that are reported tend to be less reproducible over time. Another finding, said Manrai, is that the variance of dPPV decreases with more liberal data governance. That is, when data are open and any researcher can conduct a study, the dPPV tends to be less variable than when only a small number of studies are conducted on a dataset.
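
The first of these counterintuitive results can be made concrete with a simplified calculation (a construction for illustration only, not Manrai’s model): with k independent studies per relationship and only significant results reported, null relationships get more chances to produce at least one reportable false positive, so the expected share of reported hits that are real falls as k grows.

```python
# Hedged illustration of why selective reporting can make more studies per
# relationship *worse*. Parameters reuse the earlier scenario; independence
# of studies is an assumption.
alpha, power, R = 0.05, 0.8, 0.001

def expected_ppv_of_reported(k: int) -> float:
    """Expected share of relationships with >=1 reported (significant) study
    that are genuinely non-null, assuming independent studies."""
    p_true_reported = 1 - (1 - power) ** k     # >=1 significant study, non-null
    p_null_reported = 1 - (1 - alpha) ** k     # >=1 false positive, null
    frac_true = R / (1 + R)
    return (frac_true * p_true_reported) / (
        frac_true * p_true_reported + (1 - frac_true) * p_null_reported
    )

for k in (1, 5, 20):
    print(f"k={k:>2} studies/relationship -> PPV of reported hits: "
          f"{expected_ppv_of_reported(k):.1%}")
```

Under these assumptions, the printed values are roughly 1.6%, 0.4%, and 0.2% for k = 1, 5, and 20, respectively.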

In concluding, Manrai stressed that multiplicity in large shared datasets must be addressed. He also emphasized that, while individual responsibility for reproducible data is very important, structural factors are just as influential and critical.

THE PARADOXES OF PRECISION MEDICINE

Jonathan Kimmelman, associate professor in the Biomedical Studies Unit/Social Studies of Medicine at McGill University, Canada, introduced workshop participants to five paradoxes of precision medicine. These paradoxes, he said, are creating very heavy demands on, and expectations about, the quality of preclinical evidence while concomitantly reducing the ability to produce high-quality, reproducible evidence.

The first paradox, said Kimmelman, involves the issue of smaller samples. Precision medicine aims to take heterogeneous populations and divide them into smaller groups that are more homogeneous (e.g., in terms of response to medication). However, by breaking large groups into smaller ones, research on the resulting groups has lower statistical power and a higher degree of variance. Relying on traditional randomized controlled trials may not be possible in these situations, said Kimmelman, so researchers may need to combine different forms of evidence.
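
The arithmetic behind this paradox is simple: with a fixed cohort, every additional stratum shrinks the per-stratum sample size, and the standard error of each stratum-specific effect estimate grows roughly as 1/√n. The cohort size and outcome variability below are invented purely for illustration:

```python
# Illustration of the "smaller samples" paradox: splitting a fixed cohort into
# more homogeneous strata shrinks per-stratum sample size, inflating the
# standard error (SE) of each stratum-specific treatment effect.
import math

total_n = 2_000   # hypothetical trial population
sigma = 1.0       # assumed outcome standard deviation

for n_strata in (1, 4, 16, 64):
    arm_n = (total_n // n_strata) // 2      # per-arm size within one stratum
    se = sigma * math.sqrt(2 / arm_n)       # SE of a difference in arm means
    print(f"{n_strata:>2} strata -> {arm_n:>4} per arm, SE ≈ {se:.3f}")
```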

The second paradox is the issue of boundaries. Kimmelman explained that one goal of precision medicine is to develop diagnostic techniques that allow the stratification of patients. Diagnostic techniques typically rely on establishing a boundary—a cut-off point that assigns individuals to different categories. However, there may be variant alleles whose position relative to these boundaries is unclear. While it may be possible to determine with a high degree of certainty how a specific individual with a specific allele reacts to a drug, there may be uncertainty about how other alleles will affect this response. Using multiple diagnostic techniques further muddies the waters, said Kimmelman, because each technique has its own boundaries and commensurate uncertainty about whether those boundaries are drawn correctly. This creates a “proliferation of imprecision.”

The third paradox is the issue of algorithms and interpretation of data. The data and algorithms used to determine how to match patients to treatments are critical to accurate matching. The use of preclinical data in these determinations can be controversial, said Kimmelman. Some clinicians have chosen not to include any preclinical evidence because of concerns about reproducibility, while others have deemed preclinical evidence clinically actionable in the absence of other, higher forms of evidence. The need for high-quality evidence to inform patient classification algorithms compounds the demands on the quality of preclinical evidence.

The fourth paradox concerns the rapid evolution of knowledge and diagnostic techniques in precision medicine. As precision medicine advances, information about treatment approaches and patient populations is constantly accruing, and techniques evolve based on these new data. By the time a study is published, said Kimmelman, the techniques or algorithms used may already be outdated.

Finally, the fifth paradox is about integrating diverse datasets. Kimmelman said that in non-precision medicine, large clinical trials with a low degree of variance can be synthesized into a meta-analysis regarding the clinical utility of a treatment. However, in precision medicine, the trials are much smaller and may be testing different drugs or different diagnostic techniques to classify patients. Aggregating such disparate information—together with preclinical research—is a considerable challenge.

One other way that precision medicine is creating pressures on preclinical research is the occasional reliance on preclinical research to make clinical decisions. Kimmelman gave an example of a study on pediatric solid tumors, in which the majority of findings on clinically actionable mutations were based on preclinical research or even just “expert opinion” (Harris et al., 2016). Kimmelman also reported a case in which a patient was offered personalized off-label therapy, based on preclinical data that was published in Nature (Al-Marrawi et al., 2013). In precision medicine, he said, a treatment does not “necessarily have to go through clinical trials to get to clinical practice, [which] creates some real pressures on getting the preclinical evidence exactly right.”

In addition to these five paradoxes, Kimmelman discussed factors that challenge the ability to make clinically generalizable inferences from preclinical evidence. Kimmelman said that there are three steps in the cycle of translation, with challenges or threats at each one. The first step is research design. Threats at this step include design choices that lead to poor internal and external validity—for example, lack of blinding or randomization, or reliance on a study population that does not simulate the patient population well. The next step, said Kimmelman, is reporting. As other speakers discussed, there is a tendency to publish only positive studies, which Kimmelman said is a real problem in the context of precision medicine. As an example, he said that a meta-analysis of sorafenib research found that the mean effect size dropped by about 37% when statistically corrected for unpublished preclinical studies (Mattina et al., 2016). The third and final step is uptake, which depends in great part on the inferences made about the preclinical research. Kimmelman noted that an experiment is valid only insofar as the interpretation or clinical inferences made from the study are accurate. Kimmelman’s team has conducted a systematic study of the quality of inferences from preclinical research, which revealed that experts were remarkably poor at predicting whether studies would be reproducible (Benjamin et al., 2017).

Kimmelman made several recommendations for moving forward given the paradoxes and challenges he identified. The first recommendation was to produce “clinical-grade preclinical evidence.” Because preclinical evidence may be used to inform clinical decision making, it is critical that preclinical research be of high quality. Such research should be highly powered, should use randomization and blinding, should have a prespecified hypothesis, and should include a mechanism (e.g., a prospective registry) to protect against publication bias. Kimmelman’s second recommendation was to improve the quality of the reporting of studies. He noted that only 37% of trials of drugs that do not receive FDA approval are published, in contrast with 75% of trials of drugs that do receive FDA approval. There is a great deal of information loss from these unsuccessful drug development programs, he said, and it is “unconscionable that we tolerate this degree of non-publication.” He added that non-publication also leads to highly biased datasets.

The third recommendation concerns the issue of uptake. Noting that physicians are the primary decision makers, Kimmelman said that their ability to make inferences and decisions can be improved through training on the content of the research, training to avoid—or to better use—biases and heuristics, and feedback on their decisions. The published literature on decision making suggests that the best decisions and predictions are made when humans work in concert with machines. In the current healthcare context, said Kimmelman, there is no existing system or technology to help healthcare providers make better inferences from research and better clinical decisions.
