3 Cutting-Edge Efforts to Advance MCM Regulatory Science

NONCLINICAL APPROACHES TO ASSESSING EFFICACY

A challenge facing developers of MCMs is how to increase the predictive value of nonclinical data, said panel moderator Lauren Black, senior scientific advisor at Charles River Laboratories. In the absence of clinical trials, nonclinical data can, for example, help define a human dose regimen and predict a reasonable likelihood of human efficacy. In addition to animal models, other nonclinical tools such as in silico biology and biomarkers can be employed to inform and advance MCM development.

In Silico Approaches to Efficacy Assessment of MCMs

A systems biology approach to health and disease acknowledges that there are likely complex molecular mechanisms, with groups of molecules, genes, proteins, and metabolites working in a coordinated fashion, that differ in healthy versus diseased states, explained Ramon Felciano, founder of Ingenuity Systems. These molecular mechanisms trigger higher-order cellular mechanisms and disease mechanisms that drive overall physiology (Figure 3-1). Technologies that have emerged over the past decade or so (e.g., genomics, proteomics, metabolomics) have generated a flood of new data, driving the need for new types of analytics such as in silico or computer modeling of biology. These new data enhance and align with existing knowledge of disease pathways and mechanisms from the literature.

[Figure 3-1 shows a layered model (patient; disease mechanisms; cellular mechanisms; molecular mechanisms) with application areas Discovery, Toxicology, Biomarkers, and Pharmacogenomics, fed by experimental data, literature and prior knowledge, and computational modeling.]
FIGURE 3-1 In silico modeling of disease mechanisms for drug development.
SOURCE: Ramon Felciano. 2011. Presentation at IOM workshop; Advancing Regulatory Science for Medical Countermeasure Development.

A typical systems biology approach is philosophically data driven and holistic, Felciano said, using computer-based tools and techniques to model and understand complex biological function. Experimental designs are typically comparative in nature (e.g., healthy versus disease, disease versus treatment, dose response). The complexity and volume of the data that are generated by these approaches typically require fairly sophisticated computational and statistical modeling for analysis and prediction. Research teams are often interdisciplinary by necessity, with therapeutic area researchers, computer scientists, statisticians, and others working together.

Primary benefits of this approach, Felciano said, include better understanding of disease progression, generation of novel hypotheses for therapeutic or diagnostic targets (i.e., biomarkers), and characterization of plausible mechanisms that correlate with these diagnostic and prognostic markers.

Compared with other therapeutic areas, there has been relatively little research done in the area of MCMs using a systems biology approach, Felciano noted. He cited one retrospective study of yellow fever vaccine that
demonstrates the potential of in silico approaches to modeling. Querec and colleagues (2009) used a systems biology approach to identify early gene signatures that predicted immune response in humans to the yellow fever vaccine.

There are several challenges to using in silico techniques in MCM efficacy studies, Felciano said. As this is a new field, no dominant modeling formalisms have yet emerged, and there is a lot of new math being generated alongside the new data. Some of the “-omics” technologies are still relatively new, and there are issues to be addressed, for example: measurement accuracy and reproducibility, false positive results, and cost effectiveness. Felciano noted that FDA has a Voluntary Exploratory Data Submission (VXDS) program in which industry submits candidate datasets that FDA can use to evaluate the regulatory applicability of these new approaches. Other challenges are that systems biology experiments are complex and interdisciplinary, requiring substantial time, interdisciplinary expertise, and resources for analysis. Thus far, there are few successful applications of in silico techniques to infectious disease. In addition, there are few good predictive models to bridge animal data to humans.

To leverage in silico modeling for MCM development, Felciano said there is a need for more VXDS submissions for clinical infectious disease and MCM studies, with an emphasis on proposals including genomic markers of efficacy. Secondly, Felciano recommended a public “genome-to-phenome” database that characterizes, at a systems biology level, how existing animal models are representative of given target endpoints and underlying mechanisms. This would allow for assessment of concordance between existing “well-characterized animal models” and in silico links between molecular systems and animal study endpoints.
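A crude form of the concordance assessment Felciano described could compare the set of genes perturbed in an animal model with a human disease signature. The sketch below is only illustrative: the gene sets and the upper-casing "ortholog mapping" are invented placeholders, not data from the workshop.

```python
# Hypothetical sketch: scoring how well an animal model's molecular
# response overlaps a human disease signature, as one crude measure
# of genome-to-phenome concordance. Gene sets are invented.

def jaccard(a: set, b: set) -> float:
    """Jaccard index: |intersection| / |union| (0.0 for two empty sets)."""
    return len(a & b) / len(a | b) if (a or b) else 0.0

human_signature = {"IFIT1", "MX1", "OAS1", "STAT1", "IRF7", "TNF"}
mouse_model     = {"Ifit1", "Mx1", "Stat1", "Il6", "Tnf"}

# Map animal gene symbols to human orthologs before comparing
# (naive upper-casing stands in for a real ortholog lookup).
mouse_as_human = {g.upper() for g in mouse_model}

score = jaccard(human_signature, mouse_as_human)
print(f"signature overlap (Jaccard): {score:.2f}")
```

In a real genome-to-phenome database, the signatures and ortholog mappings would come from curated resources, and set overlap would be only one of many concordance measures.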
Finally, Felciano recommended the collection and integration of quantitative data on human and animal model immunity in normal, vaccinated, and infected individuals. This would allow for analysis of efficacy and common markers of response for existing treatments between humans and animals, as well as between animal models.

In summary discussion, participants discussed clinical trial simulations, in which virtual patients are put through in silico trials, a process that allows a company to model a wide range of trial designs and analysis methods, with the goal of reaching programmatic decisions more quickly, more cheaply, and with greater certainty. To leverage in silico modeling, there is a need for more complete, shared databases of human and animal study data (e.g., genome-to-phenome information; data on human and animal model immunity in normal, vaccinated, and infected individuals). It was also suggested that a Bayesian, model-based, predictive framework should be developed that would essentially create in silico animals and a virtual human. It was noted that such a project would require permissions, funding, and collaborations on the scale of IBM’s Watson project or the Manhattan Project.

Using Biomarkers to Connect Animal Systems with Clinical Efficacy

Measurements of biomarker molecules are intended to allow connection of physiological changes with changes in outcomes or risks, explained Leigh Anderson, founder and CEO of the Plasma Proteome Institute. Biomarkers measured in blood and tissue are generally proteins, measured by immunoassay, or mRNAs (messenger ribonucleic acids), measured using microarrays. Anderson offered cardiac troponin as an example of a successful biomarker; an increase in this protein is indicative of a recent heart attack.

Candidate biomarkers can be identified via in silico modeling studies, experimental studies, and by analogy with other species. However, Anderson noted, establishing biomarker validity requires significant effort, and all methods of hypothesizing biomarkers are extremely failure prone (>99 percent attrition).

There are 109 proteins for which tests have been approved or cleared by FDA, and 96 additional proteins that can be tested for using laboratory-developed tests (that have not been reviewed by FDA). Approval of new protein biomarkers occurs at a very slow rate, Anderson noted: about 1.5 new protein biomarkers per year over the last 15 years. This rate, he said, is insufficient to meet broad clinical needs, without even considering MCM development.

Part of this dearth of new biomarkers is caused by the lack of a real pipeline for systematic discovery, development, and marketing of biomarkers. There are technological issues, including the lack of reliable, high-throughput assays for most biomarker candidates and the slow pace of development of new protein assays. There are also challenges in accessing large, existing sample sets in which to test the clinical relevance of a biomarker.
The basic understanding of the mechanism for cross-species extrapolation is also very poor.

An ideal biomarker measurement method would provide certainty as to analyte structure, Anderson said. It would include internal standards and would have a method of eliminating interferences. He noted that mass spectrometry allows for very high-specificity measurements of proteins, with quantitative accuracy and internal standards, and it is inherently multiplexable. These assays can be developed very quickly, and there is the potential that the improved science content could allow more rapid approval by FDA.

In conclusion, Anderson said the challenges of identifying biomarkers for development of MCMs are similar to those for biomarkers for general clinical use. He emphasized the fundamental need to develop a biomarker pipeline capable of systematically addressing complex biology. Biomarkers of efficacy for MCMs must be established in advance for the species involved in MCM testing. This requires a systematic evaluation of candidate biomarker homologs across a range of species, something that has not been done thus far. Success of an MCM biomarker also relies heavily on parallel mechanisms of disease, treatment efficacy, and recovery across species. Anderson also recommended that biomarker measurement technology be based on a high-confidence, rapidly approvable analytical method.

It is now feasible, Anderson said, to make specific, FDA-approvable assays for all 20,000 human proteins. Despite statistical design challenges, it is near feasible, he suggested, to test all possible proteins as candidate biomarkers against a broad range of diseases. If this were done, it could establish broad parallelism between human and animal systems.

Marietta Anthony of the Critical Path Institute (C-Path) presented information about efforts of the Predictive Safety Testing Consortium (PSTC) to achieve FDA qualification of seven renal toxicity biomarkers. The specific context of use that FDA allowed was for drug-induced kidney injury in GLP rat studies and to support clinical trials. She remarked that the next step for C-Path is to conduct human clinical studies to assess their seven renal biomarkers. If the data are found to be important, they will be submitted to FDA for qualification. Donna Mendrick of the National Center for Toxicological Research (NCTR) at FDA commented that translating biomarkers can be extremely challenging. Mendrick noted that for kidney biomarkers, the gold standard in animal studies is histopathology, while in the clinic, the gold standard is measurement of serum blood urea nitrogen (BUN) and creatinine, which become abnormal at a later stage in disease.
Anthony noted that the seven renal biomarkers that were qualified reflect histopathology far more effectively than BUN.

Vikram Patel of the Office of Testing and Research at FDA’s CDER reminded participants that the ultimate proof of efficacy of an MCM only comes when it is used in humans. In that regard, having a biomarker is very important to help assess whether the MCM, or which of several MCMs, is effective in an emergency situation. He expressed concern that very little attention is being paid to development of biomarkers.

Animal Models of MCM Efficacy

Throughout the workshop a number of participants discussed limitations of animal models.

Michael Kurilla of NIAID set the stage by noting in his remarks in the session on enterprise stakeholder perspectives (Chapter 1) that animal models are critical to MCM development; however, most animal models are not suitable for a number of potential reasons. Animal models are infection models, Kurilla reminded participants, not disease models, and some infectious diseases are uniquely human diseases (i.e., there may not be any appropriate animal model). In addition, pathogenesis differs among various species, animals may not fully model host defense responses, and the limited availability of species-specific reagents may preclude the ability to define correlates. Extensive pathogenesis and natural history studies are necessary to demonstrate the validity of a particular species to replicate a human disease. There are also feasibility issues with conducting pivotal efficacy studies in animal models, including the development of GLP animal models to support licensure.

Elizabeth Leffel of PharmAthene provided formal remarks about animal models, and a panel discussion ensued. In developing MCMs under the Animal Rule, stakeholders need to think of animal studies as the equivalent of traditional phase I to II clinical trials, said Leffel. Leffel emphasized that while aspects of animal models can be standardized, animals cannot be “validated,” just as we cannot validate humans in clinical trials. She also noted that both humans and animals are heterogeneous populations, and no model can be 100 percent predictive of what will happen in humans.

The primary regulatory science tool for animal models is, of course, the FDA Animal Rule. There is a relatively new draft guidance published to support the Animal Rule, entitled “Qualification Process for Drug Development Tools.” This guidance, Leffel clarified, is not a mechanism to discuss product-specific tools or assays; rather, it is meant to address how animal models can be applied broadly to more than one drug.

Leffel identified four key regulatory science needs relative to animal models.
First, she said, the essence of the Animal Rule needs to be consistently defined to product sponsors. There are different interpretations across FDA divisions, she noted, and sometimes between reviewers within the same division, of how to apply the Animal Rule. Second, appropriate review of MCMs based on risk and benefit is needed. These are high-risk, life-threatening diseases, about which clinical knowledge is often limited. Third, Leffel noted, there is a need for precompetitive mechanisms to share basic animal model information quickly. This includes shared proof-of-concept studies to avoid duplication (e.g., for NIAID-sponsored studies, information on basic models for vaccine studies is available in cross-referenced master files for sponsors). Fourth, as noted by others, there must be ways to bridge nonclinical models to expected human outcomes, such as surrogate markers, correlates of protection, clinical observation in animals, and pathology.

Moving forward, the first priority, Leffel said, is to develop a strategic plan for applying the Animal Rule. She suggested:

• This includes finalizing the draft guidances to reflect current FDA thinking1 and then applying these standards consistently within and across divisions at FDA and across sponsors. Areas that could be standardized by disease should be identified, and those areas that cannot be should be recognized. The strategy should also include preparing the MCM enterprise to accept more risk, as well as adopting provisions to mitigate risk (by, for example, special licensing conditions such as restricted or conditional licenses).
• A second priority, Leffel said, should be to leverage existing initiatives or form new partnerships to enhance data sharing. There are a lot of partnerships already in existence, she noted, and we need to start using them more effectively. She cited the FDA-NIH regulatory science initiative as a potential opportunity to allow FDA to leverage scientific resources from NIH and further engage FDA scientists in professional development.
• Third, she suggested, licensure review could be expedited by engaging cross-functional expert teams early on. Specifically, Leffel noted, in addition to meetings between product sponsors and FDA, it might improve communication further if an FDA scientist could also be present at the regular meetings between product sponsors that have U.S. government contracts and the relevant funding agency, at least at significant time points or milestones.
• Public-private partnerships, such as early development partnerships between industry and DoD and NIH labs, could be effective, and cross-industry precompetitive collaboration models should be pursued.

Leffel also suggested that the agency should initiate a risk communication strategy to the public and establish dedicated cross-divisional review teams to evaluate MCMs under the Animal Rule.
Animal Model Case Study and Discussion

Drusilla Burns from the Office of Vaccines Research and Review in FDA’s CBER offered as a case study the pathway to licensure for anthrax vaccines. Animal models were developed, she said, that were thought to be appropriately reflective of human disease. It was demonstrated that an immune marker, anthrax toxin neutralization antibodies, correlated with protection in the animals, and the protective level of antibody was identified. Further studies demonstrated that the assay that is used to measure these antibodies was species independent, allowing for bridging to humans (i.e., in a clinical trial, measuring the antibody levels in humans could be used to predict the potential efficacy in humans). While this may sound simple, Burns said, it was very resource intensive, involving convening a workshop, conducting a literature review and interviews with experts, and forming an interagency animal studies working group that called upon vaccine manufacturers, academicians, and government contractors as needed. She emphasized the importance of the scientific partnerships between FDA scientists and other government scientists or outside scientists, and the involvement of diverse disciplines.

1 A participant from FDA clarified the status of the draft Animal Rule guidance. Following the public comment meeting in November 2010, the draft guidance is undergoing major revisions and, as such, will not be finalized but will be republished as a draft to allow for another comment period on the revised guidance.

Judy Hewitt, chief of the Biodefense Research Resources Section at NIAID, emphasized the importance of qualification of animal models in a product-neutral manner. Patel of FDA suggested having a control animal dataset in a national database to which sponsors could compare their animal test data. Leffel commented that organizations such as the Alliance for Biosecurity, a public-private partnership, have taken steps to pursue development of a shared database of anthrax animal model data; unfortunately, that effort was underfunded. She noted that BARDA has picked up some of this work in anthrax and is in the early stages of working with industry partners to conduct meta-analyses on contributed data. She emphasized that adequate funding is critical to the success of these types of initiatives.

In summary discussion, it was noted that there is a clear need for a better understanding of animal models and how to apply them in a variety of settings.
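The bridging logic Burns outlined, in which a protective antibody level is identified in animal challenge studies and human trial titers are then read against it, can be sketched as follows. The threshold and titer values are invented for illustration; an actual immunobridging analysis would model the full titer-protection relationship rather than a single cutoff.

```python
# Illustrative sketch (invented numbers) of immunobridging: a protective
# antibody level derived from animal challenge studies is used to infer
# human efficacy from the titers observed in a clinical trial.

PROTECTIVE_TITER = 150.0  # hypothetical toxin-neutralization titer from animal studies

def predicted_protection(human_titers):
    """Fraction of human subjects whose titer reaches the animal-derived threshold."""
    at_or_above = sum(1 for t in human_titers if t >= PROTECTIVE_TITER)
    return at_or_above / len(human_titers)

# Hypothetical titers measured in a human immunogenicity trial.
trial_titers = [90, 160, 210, 145, 300, 175, 155, 80, 220, 190]
print(f"predicted protected fraction: {predicted_protection(trial_titers):.0%}")
```

The key prerequisite, as Burns noted, is that the assay be species independent, so that the same titer scale is meaningful in both the animal studies and the human trial.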
One of the most significant challenges is the extrapolation of animal immunological and pathophysiological data to the human setting, and participants discussed the need for new approaches to bridge nonclinical models to expected human outcomes (e.g., surrogate markers, correlates of protection). A number of workshop participants noted that it is unlikely one species model will reflect human disease adequately, and a compartmentalization strategy, pooling data from several species models, was proposed. Workshop co-chair Les Benet, professor in the Department of Biopharmaceutical Sciences of the University of California, San Francisco, called attention to a series of five forthcoming papers, part of the PhRMA initiative on predicting models of efficacy, safety, and compound properties, which found that, for 108 new molecular entities where both human PK and animal data were available, the animal models were poor in predicting human pharmacokinetics (Poulin et al., 2011a,b; Ring et al., 2011). There was also interest in setting up precompetitive mechanisms to share basic animal model information quickly (including proof-of-concept studies to avoid duplication).

Picking up on earlier discussions, Benet suggested that a retrospective look at historical animal data from approved vaccines, anti-infectives, and other products could help inform discussions about the Animal Rule. He proposed looking at the data from animal studies as if that were all that was available, making a hypothetical approval decision under the Animal Rule criteria, and then comparing how well that decision correlates with what is known from the human clinical trials on which the actual product approval was based. In other words, asking, “Using all of the predictive methodologies that we have available today, if we approved this product under the Animal Rule, would we have made the ‘right’ decision?”

In discussion about this proposal, Robert M. Nelson, senior pediatric ethicist at FDA, noted a concern that most animal work is done for preclinical toxicology purposes, and there may not be a robust enough dataset around the appropriate animal model for this type of exercise. A participant from industry countered that companies often conduct proof-of-concept efficacy studies in mice and rats prior to initiating phase II trials in humans. Ed Cox, director of the Office of Antimicrobial Products within the Office of New Drugs of CDER, said to keep in mind that there are different types of animal models: those intended to look at an activity (e.g., pharmacokinetic/pharmacodynamic [PK/PD]) and those that are intended to mirror the human condition (involving an actual tissue site where infection would occur and some of the local factors at that site). In addition, there are models of infection and models of disease. Participants also noted the challenge and the importance of comparing “apples to apples” when looking at historical data. Adding to the complexity is the fact that tests are done by different laboratories with different standards.
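Benet's proposed exercise amounts to a concordance analysis: tabulate, product by product, the hypothetical Animal-Rule-based decision against the outcome established in human trials. A minimal sketch, with entirely invented records, might look like this:

```python
# Sketch of the retrospective exercise Benet proposed, using invented
# data: pair each product's hypothetical animal-data-only approval
# decision with its known human trial outcome, then tabulate agreement.

from collections import Counter

# (animal_rule_decision, human_trial_outcome) -- hypothetical records
records = [
    ("approve", "effective"), ("approve", "effective"),
    ("approve", "not_effective"), ("reject", "effective"),
    ("reject", "not_effective"), ("approve", "effective"),
]

counts = Counter(records)
tp = counts[("approve", "effective")]      # animal data got it right
fp = counts[("approve", "not_effective")]  # would have approved a failure
fn = counts[("reject", "effective")]       # would have rejected a success
tn = counts[("reject", "not_effective")]

concordance = (tp + tn) / len(records)
print(f"TP={tp} FP={fp} FN={fn} TN={tn}, concordance={concordance:.2f}")
```

As the discussion noted, the hard part in practice is not this tabulation but assembling comparable historical datasets across laboratories with different standards.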
Another participant suggested that an alternative approach could be to conduct a new animal study with a current, approved drug or vaccine, in an appropriate model, and base the predictions on those data.

Key Messages: Nonclinical Approaches to Assessing Efficacy

In Silico Approaches and Biomarkers
• Clinical trial simulations hold promise for modeling a wide range of trial designs and analysis methods and could facilitate reaching programmatic decisions more quickly, more cheaply, and with greater certainty.
• There is a need for a biomarker pipeline capable of systematically addressing complex biology. Efforts should include systematic evaluation of candidate biomarker homologs across a range of species.
• “Big science” could be envisioned for new projects, such as:
  ■ A Bayesian, model-based, predictive framework could be applied to create a “virtual human”; such a project would require momentum and collaboration on a large scale.
  ■ Make specific assays for all 20,000 human proteins; statistical design challenges would need to be overcome.

Animal Models
• Building databases of existing animal models (genome to phenome) could allow for assessment of concordance between existing “well-characterized animal models” and in silico links between molecular systems and animal study endpoints.
• A control animal dataset in a national database would permit comparisons by sponsors of their animal test data.
• Scientific partnerships, including creation of an “ecosystem” of collaboration and a multidisciplinary approach, are important for addressing difficult regulatory science problems in assessing efficacy.
• Funding and substantial resources are essential to sustain interagency, public-private, and other enterprise partnerships and collaborations.

SAFETY AND REAL-TIME MONITORING

In a public health emergency, some of the MCMs used may be new molecular entities for which efficacy studies in humans were not done, and predeployment safety information is limited, said panel moderator Carl Peck of the University of California, San Francisco. He noted that once a new MCM is deployed, it will be especially important to monitor for side effects and to confirm effectiveness (so that use of an MCM that is not effective can be discontinued and further risk of adverse events reduced).

Toxicology Markers

Robert House, president of DynPort Vaccine Company, presented about toxicology markers from a vaccine development standpoint, noting that there are a variety of primary toxicological concerns. Local toxicity or “reactogenicity,” while not a main concern for small molecules, is a primary concern in developing vaccines. As with any drug, one must also be concerned with systemic toxicity.
Toxicity testing is performed under GLP conditions to ensure the cleanest results, using GMP (or GMP-like) material, in a relevant animal model, House said. For vaccines, a standard toxicology profile must also include assessment of immunogenicity. Developmental toxicity and immunotoxicity are also assessed. Vaccine adjuvants must be tested as if they were a new chemical entity (and as such are tested twice, alone and as part of the vaccine). Other additives that go into vaccines, such as excipients or preservatives, must also be individually assessed for toxicity. Depending on how a vaccine is administered, it may also be necessary to assess the toxicology of the administration device.

Standard preclinical toxicological endpoints include body weights (as a measure of robust health); clinical observations (are the animals behaving normally); clinical pathology (including hematology, clinical chemistry, and other immunogenicity studies); anatomic pathology (including organ weights and histopathology to assess intended effect at the immune system target, as well as any effects at other points in the immune system); and local tolerance.

House compared preclinical toxicology studies to clinical studies in their ability to predict clinical outcomes (Table 3-1). He noted that several parameters—pain upon injection, fever, headache and malaise, and injection site reactions (marked * in the table)—are often considered to be rather subjective and can be difficult to assess in animal models.

TABLE 3-1 Prediction of Clinical Outcomes: Preclinical Toxicology Studies vs. Clinical Studies

Parameter                  Toxicology Studies           Clinical Studies
Survival                   Yes                          Yes
Pain upon injection*       Difficult to assess          Yes
Fever*                     Dependent on animal model    Yes
Headache/malaise*          No good animal models exist  Yes
Injection site reactions*  Yes                          Yes
Clinical signs             Yes                          Yes
Body weights               Yes                          Useful?
Clinical pathology         Yes                          Yes/not usually
Necropsy, histopathology   Yes                          Generally not
Antibodies                 Yes                          Yes
Immunotoxicity             Dependent on animal model    Yes/not usually done

SOURCE: Robert House. 2011. Presentation at IOM workshop; Advancing Regulatory Science for Medical Countermeasure Development.

Electronic Monitoring of Adverse Events

Kenneth Mandl of the Harvard Medical School Center for Biomedical Informatics characterized four main sources of clinical electronic health data:
[. . .]
• Early integration of high-throughput data collection in drug and vaccine development as a mechanism for understanding global impact, off-target effects, and biomarkers for efficacy.
• In silico screening for drug-drug interactions and as a tool for novel drug discovery.
• Increased interdisciplinary crosstalk between computational scientists and bench scientists to define standards for study designs.

In discussion, a participant raised the issue of training the next generation of the workforce to advance regulatory science outside the context of a particular product. Katze noted that universities have started offering interdisciplinary programs in computational biology where previously there was very little interaction between computer science and biology. A participant from FDA added that the agency has been putting resources into science computing capacity and is training agency reviewers and researchers to be able to use them.

Platform Technology

As an example of the use of platform technology to advance MCM development, Patrick Iversen, senior vice president of research and innovation at AVI BioPharma, described his company’s approach for the rapid development of RNA-based therapeutics. AVI’s platform is based on the development of translation-suppressing oligomers that target single-stranded RNA (which could be from a host cell or from the pathogen), preventing the assembly of the ribosomal complex on the mRNA transcript, thereby preventing the production of a specific protein. AVI has developed a predictable way of designing the oligomers, which makes the platform very flexible and allows for very rapid response. They have defined both the optimal position in the transcript and the optimal length of the oligomer, and are developing a database of oligomers for a growing list of viral and bacterial targets and host genes.
This knowledge base, Iversen predicted, would allow AVI to develop a putative solution to a new threat in a matter of hours. Iversen noted that AVI currently has open INDs for oligomers for Ebola and Marburg viruses. Studies in mice, guinea pigs, and nonhuman primates have shown significant protection (i.e., survival). Crossover studies confirmed the specificity of the oligomers for the intended target (i.e., the Ebola virus oligomer was not effective against Marburg virus, and vice versa). Other endpoints investigated included dose-dependent survival increases, reduction in clinical signs, reduction in viremia, increase in platelet count, and improvement in both hepatic and renal markers of toxicity.
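The design rule behind translation-suppressing oligomers, complementarity to a defined window of the target transcript, reduces at its core to taking a reverse complement. The sketch below is a toy illustration with an invented sequence and window; real oligomer design (chemistry, position and length optimization, off-target screening) goes far beyond this.

```python
# Toy illustration: derive an antisense oligomer sequence as the reverse
# complement of a window of a sense-strand RNA target. The mRNA fragment,
# window position, and length below are invented for illustration.

COMPLEMENT = {"A": "U", "U": "A", "G": "C", "C": "G"}

def antisense(target_rna: str, start: int, length: int) -> str:
    """Reverse complement of target_rna[start:start+length]."""
    window = target_rna[start:start + length]
    return "".join(COMPLEMENT[base] for base in reversed(window))

# Hypothetical viral mRNA fragment containing an AUG start codon.
mrna = "GGACACAUGGCUUCAGAACGAUCC"
oligo = antisense(mrna, start=4, length=12)  # window spans the AUG
print(oligo)
```

Because the oligomer is complementary to a window spanning the start codon, it can hybridize there; in a platform like AVI's, that hybridization is what blocks assembly of the ribosomal complex on the transcript.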
In closing, Iversen raised several questions regarding animal studies and human safety testing. For animal models, he asked, how should a viral challenge strain be chosen? For example, would it be better to use Marburg Angola or Marburg Musoke? Quasi-species characterization could reveal that there are elements or portions of both viruses in every outbreak. And the next outbreak will be a new quasispecies. “Deep sequencing” technology, he suggested, could provide insight into how to choose challenge strains.

Iversen also questioned whether the use of healthy volunteers for safety assessment is necessary for MCM development. He noted that in normal healthy volunteers, the dose-limiting toxicity may fall below the anticipated therapeutic dose. How should that limitation be interpreted; what distance between anticipated therapeutic benefit and dose-limiting toxicity will be tolerable? Also, how should the size of the required human safety database be calculated? He asked, if these MCMs will never be used unless there is an outbreak, and will be used only under an EUA, is a human safety database needed?

Discussion

William Fogler, senior director of portfolio planning and analysis at Intrexon Corporation, pointed out that the need for rapid response generally occurs under worst-case scenarios, often in association with compromised infrastructure. While these synthetic, computational, and platform technologies offer tremendous promise to respond rapidly to a pathogenic threat, they must be scalable and deliverable under such a scenario. He suggested that there are additional technologies that exist in terms of generating DNA vaccines, in which modular components can be predesigned, stored, and ready to assemble on short notice. Other modules could be devised in which immune-enhancing agents could be quickly assembled.
These modules, in the structure of a DNA vaccine, can be placed under the control of inducible promoters, so that following injection of the vaccine, an activating ligand (e.g., a small molecule) would be taken orally to "turn the vaccine on," and upon removal of the ligand, it would be "turned off." This also offers the possibility of a needle-free vaccine-boosting mechanism, Fogler said.

Mendrick said that researchers at FDA are looking at these emerging technologies and are trying to anticipate and solve some of the questions that may arise. For example, NCTR has a nanotoxicology core facility that is looking at genetic toxicity assays to evaluate the carcinogenicity of nanoparticles.

Harvey Rubin, executive director of the Institute for Strategic Threat Analysis and Response at the University of Pennsylvania, emphasized
that computational biology is not simple mathematics. The scale of computational biology spans angstroms to kilometers, and nanoseconds to millennia, he said. The processes are very complicated and include, for example, deterministic, stochastic, continuous, discrete, or hybrid processes. With regard to organization, the system could be structured, unstructured, or homogeneous. There are complexities and interdependencies that make modeling biological systems especially difficult, Rubin said. Motivations to do complicated mathematical modeling include the need to predict something (e.g., protein structure, epidemiologic patterns), to design something (e.g., new molecular structures, new controllers and regulators, new phenotypes), or to interpret something (e.g., data, patterns). Rubin highlighted several research priorities that can help populate some of these mathematical models:

• There are many model-specific questions that need to be answered, such as what are the effects of interventions on infectivity, and what are the effects of disease and interventions on immunocompromised hosts?
• There is also general research needed on organizational structures, risk communication strategies, interdependencies (e.g., how the environment, economics, or politics impact the model), and health impact information.
• Also to be resolved is who should fund this work: NIH, the National Science Foundation, DARPA, FDA, or industry.

DIAGNOSTICS

Significant resources are dedicated to identifying and characterizing an emerging biological threat, said Daniel Wattendorf, program manager in the Defense Sciences Office of DARPA, but rarely is there subsequent broad distribution of new diagnostic assays for the identified threat to point-of-care settings. In cases where the decision to quarantine or treat is time sensitive, the turnaround time to ship samples to a reference laboratory is prohibitive.
Wattendorf cited several barriers to more rapidly fielding diagnostics for emerging threats. In some cases, the diagnostic platforms have not been made suitable for use in distributed settings. As an example, Wattendorf pointed out that PCR has been in use since 1983, yet no PCR-based diagnostic test is approved for a physician office setting. Additionally, if diagnostic tests are not already in place before an emergency, it is very difficult to get physicians to employ them in a crisis if they do not have prior experience with the test or have not been shown evidence of
utility. In the absence of specific diagnostic tests for emerging threats, there is interest in developing panels of early detection biomarkers that could detect a host immune response before an individual begins to exhibit symptoms of a disease.

Sample collection is another challenge for diagnostic testing. Current biospecimen collection generally involves wet samples, such as blood drawn into test tubes, which requires that the patient have access to medical personnel (e.g., a phlebotomist) who can collect the sample, and which also may require cold storage. There is also the option of taking dried blood spots on filter paper, but according to Wattendorf such samples have limited use. In this regard, Wattendorf suggested that a role for regulatory science would be the development of new formats for simple, self-collected biospecimens, formats that would be optimized for specimen source (blood, urine, etc.) and analyte class (specific proteins, types of RNA, etc.), and would be stable during storage to facilitate functional assays.

Wattendorf also noted that currently, teams of experts travel to a site, collect samples, and return to CDC or DoD to run tests and identify the new threat. He suggested that, instead of moving the sample, it could be possible to move the data electronically. The use of highly multiplexed platforms could facilitate local testing, and the data could then be sent to a central facility for analysis. This would be faster and would provide distributed diagnostic capability where there is unmet need. In summary, Wattendorf listed several questions for discussion:

• Can universal sample storage formats be developed for dried or near-dried self-collected biospecimens that show equivalence to fresh samples?
• Can highly multiplexed protein or molecular diagnostic platforms be developed that are suitable for use in a physician office setting, from which data could then be sent for interpretation by highly trained laboratorians at a remote site?
• Are measurements of immune or metabolic status useful in the absence of a diagnostic test for a specific pathogen? If so, what should be measured? Could it be measured at the point of care? And, as it is not specific to a given disease, what would be the regulatory pathway?

In the panel discussion, Charles Daitch, CEO of Akonni Biosystems, said that from a technical perspective, the capability to communicate from remote sites to a central facility already exists, and it would be straightforward to develop and implement ways to communicate using either raw or processed data. Sally Hojvat of CDRH concurred and suggested that
this would be covered under existing regulations that address electronic records and the transfer of data from an instrument at a clinical site to a central facility for analysis (21 CFR 11). She cautioned that it would be necessary to demonstrate the accuracy and traceability of the results of a test performed remotely by an unqualified individual.

Panel moderator Bruce Burlington, an independent consultant, questioned how it could be determined that an immune status test was relevant for many different illnesses. Would test developers need to undertake a variety of disease challenges? Hojvat responded that it could be considered more of a prognostic type of marker, and such data would be one way FDA could begin to assess the test. With regard to its commercial value, Daitch said that the market for such a test is not obvious. A test that predicts, based on immune status, that someone is in the early stages of an infectious disease might be useful, for example, for astronauts about to go on the space shuttle or for troops about to be deployed, he said. Burlington added that it could also be used in an epidemic for health care workers or other first responders.

Participants discussed the potential for commercial assays on multiplex platforms to be used as epidemiological surveillance tools (as opposed to diagnostic tests where results are reported back to the patient). Hojvat suggested that companies could aid the surveillance effort by developing cassettes for biothreats for their multiplex systems. Daitch and David Ecker, founder of Ibis Biosciences, agreed it would be possible, but noted that key challenges would be validation of the test for broad groups of organisms and ensuring that data could be transferred over a secure network to somebody who has the capability to interpret the data correctly.

Participants also discussed the concept of an evolving label.
Performance characteristics of a diagnostic test need to be defined in terms of sensitivity and specificity, but a challenge is how to present that information in the label when the background prevalence of what is being tested for is almost zero. It would be helpful if, as the threat emerges, new information and data based on use could be made available rapidly. Hojvat responded that FDA has the technology to do that, and there is an ongoing electronic labeling project.

In summary discussion, participants observed that it is important to remember that diagnostics are also MCMs. Several options for more efficient use of diagnostics were suggested, including the development of new formats for collection, transport, and stable storage of biospecimens, and the development of highly multiplexed testing platforms for local site use, with data then sent electronically to experts at a central facility for analysis. It was also noted that rapid diagnostics could improve the
efficiency of antimicrobial trials, allowing for enrichment of the population with patients infected with resistant organisms.

STATISTICAL TECHNIQUES

The goal of clinical trial simulation in drug development programs is to reach a decision faster, cheaper, and with greater certainty, explained Stephen Ruberg, distinguished research fellow and scientific leader in advanced analytics at Eli Lilly and Company. Companies seek to "kill" ineffective or unsafe investigational drugs sooner and advance potentially effective drugs as quickly as possible at the lowest cost. Clinical trial simulation allows for examination of a broad range of clinical trial designs, decision rules, and analysis methods. In simulations, models can be used to create virtual patients that are then randomly selected for inclusion in in silico clinical trials using sophisticated software tools. These models for virtual patients can be PK/PD models, empirical statistical models of response over doses and time, or mechanistic disease models. Known design and analysis parameters can be controlled (e.g., sample size, number of doses or visits, analysis strategies for testing hypotheses or estimating key drug effect parameters), and a range of possibilities for unknown parameters and factors that cannot be controlled can be assessed (e.g., drug effect, true dose-response curve, adverse event rate, placebo response, dropout rate). Dozens of combinations of factors are typically evaluated with the goal of selecting the design and analysis parameters that will minimize false positive and false negative findings in the drug development program.

From a regulatory science perspective, Ruberg said, this will require training of FDA staff on the use of simulation tools, some of which are becoming commercially available. FDA statistical and medical reviewers will have to understand and accept modeling and simulation as tools for study design.
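As a toy illustration of the approach described above, the sketch below simulates many virtual two-arm trials to estimate a design's type I error and power empirically. The "virtual patients" here are simple Gaussian responders rather than the PK/PD or mechanistic disease models used in practice, and all sample sizes, effect sizes, and thresholds are assumed for illustration only.

```python
import random
import statistics

def simulate_trial(n_per_arm, true_effect, sd, rng, threshold=1.96):
    """Simulate one two-arm trial with Gaussian 'virtual patients';
    return True if the z-statistic for the treatment-control
    difference exceeds the one-sided decision threshold."""
    control = [rng.gauss(0.0, sd) for _ in range(n_per_arm)]
    treated = [rng.gauss(true_effect, sd) for _ in range(n_per_arm)]
    diff = statistics.mean(treated) - statistics.mean(control)
    se = (2 * sd**2 / n_per_arm) ** 0.5
    return diff / se > threshold

def estimate_success_rate(n_trials, n_per_arm, true_effect, sd, seed=1):
    """Fraction of simulated trials declared 'successful'."""
    rng = random.Random(seed)
    hits = sum(simulate_trial(n_per_arm, true_effect, sd, rng)
               for _ in range(n_trials))
    return hits / n_trials

# Type I error: simulate under "no drug effect" (true_effect = 0).
alpha = estimate_success_rate(5000, n_per_arm=100, true_effect=0.0, sd=1.0)
# Power: simulate under an assumed true effect size.
power = estimate_success_rate(5000, n_per_arm=100, true_effect=0.4, sd=1.0)
print(f"estimated one-sided type I error ~ {alpha:.3f}, power ~ {power:.3f}")
```

Rerunning the same loop over a grid of sample sizes, effect sizes, dropout rates, and decision rules is what lets a sponsor compare dozens of candidate designs before committing to one.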
Simulated trial designs may not look like classic trial designs or may not have theoretically or mathematically described properties, he said. This is of particular concern when designing phase III trials because of the need to control the type I error (false positive) rate at 0.05. As this cannot always be done analytically, Ruberg asked whether FDA will accept simulated results in lieu of analytical proof. He noted that the FDA draft guidance on adaptive designs3 is a substantial step forward in helping the industry understand how best to move forward with innovative trial designs. Another topic for consideration is the simulation of the

3 See Guidance for Industry: Adaptive Design Clinical Trials for Drugs and Biologics (Draft Guidance), http://www.fda.gov/downloads/Drugs/GuidanceComplianceRegulatoryInformation/Guidances/ucm201790.pdf (accessed June 9, 2011).
sequence of clinical trials spanning an entire clinical drug development program, which, Ruberg said, companies could realistically be doing in the next couple of years.

A goal in drug development is to use as much data as possible, current or historical, to make decisions on drug safety and efficacy. Current practice in the vast majority of phase III clinical programs is for each clinical trial to stand on its own as an independent piece of evidence in the evaluation of a drug's effect. This is a frequentist statistical approach. Eli Lilly, Ruberg said, is currently implementing Bayesian methods for phase I and phase II trial design and analysis. There are many ways in which Bayesian methods can be used in clinical drug development. One example presented by Ruberg is a Bayesian augmented control design, in which control group data from the current prospective study are supplemented with historical control data. This allows for smaller trials (saving both time and resources) and for more enrolled patients to be allocated to treatment groups.

While the use of Bayesian statistical methods is a technical topic, Ruberg opined that the largest barriers to implementation are social. There will need to be changes in philosophy and mind-set within some FDA centers and other regulatory agencies around the world. There are also legitimate scientific debates about the choice of historical data to include in analyses and how to weigh those data relative to data generated from a new study, he added. From a regulatory science perspective, Ruberg said that the use of Bayesian approaches for phase III confirmatory trials would require in-depth sponsor-agency discussions at the end-of-phase-II meeting or sooner.

Important to the use of Bayesian approaches is the development of a comprehensive data element dictionary.
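The augmented-control idea can be illustrated with a minimal conjugate beta-binomial sketch, in which historical control data are down-weighted by a power-prior factor before being combined with the concurrent control arm. The weight, event counts, and flat prior below are assumed purely for illustration and are not from any trial discussed at the workshop; a real analysis would also have to justify the weighting, as the debate noted above makes clear.

```python
# Hypothetical Bayesian augmented-control sketch for a binary endpoint.
# Historical controls contribute only a fraction w of their information
# (a simple "power prior"); w, counts, and the Beta(1,1) prior are assumed.

def augmented_control_posterior(cur_events, cur_n, hist_events, hist_n,
                                w=0.5, a0=1.0, b0=1.0):
    """Posterior Beta(a, b) for the control response rate, with the
    historical data down-weighted by w in [0, 1]."""
    a = a0 + cur_events + w * hist_events
    b = b0 + (cur_n - cur_events) + w * (hist_n - hist_events)
    return a, b

cur_events, cur_n = 6, 30        # hypothetical small concurrent control arm
hist_events, hist_n = 40, 200    # hypothetical historical control data

a, b = augmented_control_posterior(cur_events, cur_n, hist_events, hist_n)
post_mean = a / (a + b)
# Effective sample size behind the control estimate, net of the flat prior:
ess = a + b - 2.0
print(f"posterior mean control rate ~ {post_mean:.3f}, effective n ~ {ess:.0f}")
```

With these assumed numbers, 30 concurrent controls borrow half the information from 200 historical controls, behaving like roughly 130 controls; this is the mechanism by which the design permits smaller trials and more patients on active treatment.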
Data element standards allow for more efficient collection of data and routine use of standardized software. More importantly, common data element standards allow for the simple, rapid integration of data from multiple sources, facilitating more comprehensive statistical analysis in order to draw the best scientific conclusions possible. Such a dictionary should be maintained by a central authoritative group, Ruberg said, and must be free, broadly accessible in electronic form, and downloadable for use within IT systems. He acknowledged the various ongoing standardization efforts (e.g., CDISC, HL7),4 but said that

4 The mission of the global, nonprofit, multidisciplinary Clinical Data Interchange Standards Consortium (CDISC) is to "develop and support global, platform-independent data standards that enable information system interoperability to improve medical research and related areas of health care." See http://www.cdisc.org/. Health Level Seven International (HL7) is a nonprofit "standards-developing organization dedicated to providing a comprehensive framework and related standards for the exchange, integration, sharing, and retrieval of electronic health information." See http://www.hl7.org/ (accessed June 9, 2011).
data element standards need to go deeper in terms of specificity, broader in terms of accommodation of all therapeutic areas and measurements, and faster in terms of development and deployment.

In closing, Ruberg offered several ideas to advance the use of trial simulation and Bayesian statistics, and the standardization of data elements:

• For study design, Ruberg suggested adaptive/pooled studies as a way to more rapidly and uniformly test compounds. Such studies use a single trial design with a common control group that allows companies to insert their drug into a perpetually ongoing trial (such as the I-SPY 2 breast cancer clinical trial).
• Ruberg also directed participants to the Drug Information Association (DIA) Working Group on Bayesian Methods. Ruberg noted that Bayesian approaches were discussed in a recent National Research Council (NRC) report on how to handle missing data in clinical trials (NRC, 2010), and he suggested that the National Academies conduct a study to evaluate the use of Bayesian methods in clinical trials, with particular emphasis on phase III confirmatory trials.

In the panel session, much of the discussion concerned the use of Bayesian statistical methods for analysis of clinical trials. Jeffrey Wetherington of GSK said that his company has made significant use of Bayesian methods for phase II proof-of-concept studies and dose-ranging studies, and he estimated that use of these methods has saved the company nearly $15 million in study costs over the past year. Similar to Eli Lilly, Wetherington said, GSK uses augmented control groups, decreasing study sample sizes by several hundred people. Bayesian methods provide very interpretable results, he said.

Estelle Russek-Cohen, acting director of the Division of Biostatistics at CBER, noted that CDRH frequently uses Bayesian analysis in the context of device modification submissions.
She added that CBER has seen submissions with Bayesian and adaptive designs, primarily in phase I and II studies, many of them oncology studies. For phase II studies, a variety of skill sets are needed when considering the benefits and risks of the analysis approaches (e.g., medical officers, statisticians). Russek-Cohen said a concern with historical controls is how far back to go if the standard of care is changing. A question for consideration is whether, in the context of MCMs, there is a real and compelling need for these approaches.

Panel moderator Burlington noted that the toxicology community routinely uses historical controls, pooling data from control animals from many experiments. Russek-Cohen responded that pooling of historical control data is used for safety assessment, as there is often not enough
power in individual studies, but it has not yet been done for efficacy studies, in part because FDA statutes call for adequate and well-controlled trials. In a phase II environment, it makes sense for industry to find ways to pool control information across companies pursuing similar projects. Wetherington said that drugs such as anti-infectives can have small niche markets and limited profit margins. Using Bayesian-type designs for phase III studies, especially when comparing a novel agent to a well-characterized standard of care, could save time and money and get products to patients more quickly.

Participants discussed the potential for use of simulation and Bayesian approaches as the basis for approval of an MCM in anticipation of future use. Goodman responded that part of the FDA MCM initiative is to consider novel approaches, and the agency is open to these possibilities. He encouraged developers of MCMs to discuss this with their FDA review team as part of their product development planning.

Burlington questioned whether FDA could mandate or incentivize companies to submit their data in conformance with data element standards. Russek-Cohen responded that implementing standards is part of the broader FDA initiative. Ruberg suggested that the National Library of Medicine or FDA could take the lead on pushing forward with data element standards. A participant noted that CDISC is approaching standards development disease by disease. In response, Ruberg suggested that there could be a working group of experts in MCMs to define what generally needs to be measured and start discussing standard data elements.

In summary discussion, the statistics of diagnostics, including how to deal with false positives, were also considered.
A participant said that CDRH has asked developers of new multiplex diagnostic assays to offer ideas about how to handle false positives, for example, if three or four positives were found where one was expected. Another participant said to keep in mind the primary question the assay is answering: Is it diagnosing an individual or determining whether there is an outbreak? It was noted that an interagency meeting is being planned on this issue.

Several workshop presenters and discussants noted that Bayesian statistical methodology can be used for both study design (e.g., supplementing the control group data with historical control data) and analysis (of both actual and simulated trials). Workshop participants offered suggestions for themes and future directions with respect to statistical methodologies and data analysis. The following individual suggestions were made:

• Training in Bayesian approaches and causal mechanisms of action will be needed for both scientists and the public.
• The use of Bayesian approaches would be enhanced by the development of common data element standards (e.g., to facilitate pooling of data across studies).
• Christian Macedonia, medical sciences advisor to Admiral Mullen, the chair of the Joint Chiefs of Staff, raised the idea of electronically tagging every piece of information obtained in biomedical research (e.g., date, time, group, unique animal identification, institution) so data from large multicenter trials could be traced back, even years later, for further analysis. He likened this to the way electronic data are broken into packets and tagged for transfer across computer networks, to be reassembled at the other end.
• There was also interest in platform approaches to health data software design, for which many applications or "apps" could be developed. These could be used for collection, management, and analysis of electronic health data, specifically for monitoring of adverse events.
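The multiplex false-positive question raised in the summary discussion can be put in rough quantitative terms: if each target on a panel fires falsely at some small rate, and errors across targets are independent, the probability of seeing several simultaneous false positives is directly calculable. The panel size and per-target false-positive rate below are assumed purely for illustration.

```python
# Back-of-the-envelope binomial model for false positives on a multiplex
# panel. A 20-target panel and a 0.5% per-target false-positive rate are
# assumed; real assays may also have correlated errors (cross-reactivity),
# which this independence model ignores.
from math import comb

def prob_k_or_more_positives(k, n_targets, fp_rate):
    """P(at least k false positives) among n independent targets."""
    return sum(comb(n_targets, j) * fp_rate**j * (1 - fp_rate)**(n_targets - j)
               for j in range(k, n_targets + 1))

n, fp = 20, 0.005
p_any = prob_k_or_more_positives(1, n, fp)    # any false positive at all
p_three = prob_k_or_more_positives(3, n, fp)  # three or more at once
print(f"P(>=1 positive) = {p_any:.3f}, P(>=3 positives) = {p_three:.6f}")
```

Under these assumptions, a truly negative sample shows at least one positive almost 10 percent of the time, but three or more simultaneous positives are very unlikely, so a multi-positive result points toward true infection, an outbreak signal, or correlated assay error rather than independent noise.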