
3 Regulatory Science Applications: Using Case Studies to Focus on Approaches to Advance the Discipline
Pages 11-30



From page 11...
... (Amur, Lavezzari, Philbert, Sauer, Wagner)
• Postmarket safety evaluations offer many opportunities for innovative uses of big data, especially when clinical trial cohorts are not representative of the general population.
From page 12...
... If the intent is to use the biomarker only for a single drug development application, the sponsor would combine biomarker qualification with a regulatory submission, such as an Investigational New Drug (IND)
From page 13...
... However, Amur said, if the biomarker is intended to be used for development of multiple drugs, it goes through a separate biomarker qualification process. Often, this approach is used when consortia identify a biomarker that each member later intends to use in a separate drug development application.
From page 14...
... CPIM was developed by CDER to address issues in drug development identified in the 2004 FDA publication Innovation or Stagnation: Challenge and Opportunity on the Critical Path to New Medical Products. CPIMs provide a means for CDER and investigators across industry, academia, patient advocacy groups, and government to communicate in order to improve efficiency and success in drug development.1 FDA's CDER provides an avenue to qualify a biomarker for a "limited" context of use (COU) in order to expedite the integration of the biomarker into drug development and possibly to generate additional data that can help qualify the biomarker for an "expanded" context of use.
From page 15...
... When applying big data to the design of clinical trials, he noted, it is critical to focus on three areas:
From page 16...
... , and assessments of safety and efficacy
• Length -- the frequency and duration of the clinical trial or assessment
• Depth -- the careful and detailed characterization of trial participants' outcomes

In light of these principles, Landray also cautioned that accurate data do not necessarily imply that results are reliable; they must be analyzed for errors. Results generated from large enough datasets are remarkably resilient to changes in outcome due to random errors, which do not add bias and can be overcome by adhering to the principles described above, he noted.
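Landray's distinction between random and systematic error can be made concrete with a short simulation. The sketch below is a minimal illustration, assuming hypothetical effect sizes, noise levels, and sample sizes rather than anything presented at the workshop: purely random measurement error widens the spread of a two-arm trial's effect estimates but leaves their expected value unchanged, so scale compensates for it.

    # A quick simulation (all numbers are illustrative assumptions, not
    # workshop data): random measurement error inflates the variance of
    # the estimated treatment effect but does not bias it, so larger
    # samples recover the true effect.
    import numpy as np

    rng = np.random.default_rng(0)

    def estimate_effect(n, true_effect=0.5, noise_sd=2.0):
        """Simulate one two-arm trial whose outcome is measured with random error."""
        control = rng.normal(0.0, 1.0, n)
        treated = rng.normal(true_effect, 1.0, n)
        # Non-systematic measurement error, applied to both arms alike.
        control += rng.normal(0.0, noise_sd, n)
        treated += rng.normal(0.0, noise_sd, n)
        return treated.mean() - control.mean()

    for n in (100, 10_000, 250_000):
        estimates = [estimate_effect(n) for _ in range(20)]
        print(f"n={n:>7}: mean estimate={np.mean(estimates):.3f}, "
              f"spread={np.std(estimates):.3f}")

At every sample size the mean estimate stays near the true effect of 0.5, while the run-to-run spread shrinks as n grows; a systematic error, by contrast, would shift the mean itself, and no amount of additional data would correct it.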
From page 17...
... This modification could encompass, for example, moving patients to the most effective treatment arm or dropping less effective arms of the study as data accumulate. Brian Alexander, assistant professor of radiation oncology, Harvard Medical School, highlighted the use of Bayesian trial design to conduct randomized adaptive clinical trials.
From page 18...
... Although developing software and conducting simulations for models such as this is time-intensive, the investment may be worthwhile, enabling better preparation of an overall plan for evidentiary development. More flexible designs, such as adaptive trial designs, could provide efficiencies by capturing data that would otherwise be lost during the extended course of a trial and by allowing researchers to enroll new patients without the time constraints of predetermined clinical trial phases.
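As a rough illustration of what such a simulation involves, the sketch below implements one common form of Bayesian adaptive randomization, Thompson sampling over Beta-Bernoulli posteriors, in which accumulating outcomes steer new patients toward the better-performing arm, much as the adaptive designs described above favor or drop arms as data accumulate. The response rates, priors, and trial size are assumptions for illustration, not the design Alexander described.

    # A minimal sketch of Bayesian adaptive randomization via Thompson
    # sampling with Beta-Bernoulli posteriors. All values are illustrative.
    import numpy as np

    rng = np.random.default_rng(42)
    true_response = [0.25, 0.40, 0.55]   # response rates unknown to the trial
    successes = np.ones(3)               # Beta(1, 1) prior for each arm
    failures = np.ones(3)

    for patient in range(300):
        # Sample each arm's posterior; assign the patient to the arm with
        # the highest sampled response probability.
        sampled = rng.beta(successes, failures)
        arm = int(np.argmax(sampled))
        outcome = rng.random() < true_response[arm]
        successes[arm] += outcome
        failures[arm] += 1 - outcome

    patients_per_arm = (successes + failures - 2).astype(int)
    posterior_mean = successes / (successes + failures)
    print("patients per arm:", patients_per_arm)
    print("posterior mean response:", posterior_mean.round(2))

As the posteriors sharpen, allocation concentrates on the arm with the highest response rate, which is the sense in which such a design moves patients toward the most effective treatment as data accumulate.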
From page 19...
... 3  With respect to method validation, Salit noted that analytic validation -- the accuracy, precision, and reproducibility of a test -- is distinct from clinical validation -- the relevance of the test in an actual clinical condition -- and is a key factor in moving forward with biomarker qualification.
From page 20...
... Data Collection, Curation, and Harmonization

Throughout the workshop, many participants emphasized the importance of key principles in data collection, curation, and harmonization. Data are typically usable only for the purpose for which they were originally collected; repurposing them for other analyses, observed Richard Platt, professor and chair, Harvard Medical School Department of Population Medicine, Harvard Pilgrim Health Care Institute, usually necessitates a great deal of curation.
From page 21...
... Sharing Clinical Trial Data

Aggregating data from clinical trials to create bigger datasets is of increasing interest. Kyle Myers, Center for Devices and Radiological Health (CDRH)
From page 22...
... Participating in an organized consortium may be the most successful way of accessing data from multiple sources, Sauer said. Successful data sharing in the future will depend on common privacy standards, common data standards, and incentives to share data.
From page 23...
... Brownstein highlighted the power of these reporting platforms not only to detect adverse drug events, but also to illuminate the illegal use of drugs on the black market and to track the street value of medical products. These black market data, he said, could help inform understanding of the public health impact of certain medical products.
From page 24...
... Because some medical devices may not undergo classical clinical trials, they are typically assessed continuously after they reach the market, said Danica Marinac-Dabic, director of epidemiology, CDRH, FDA. Registries linked to electronic health records (EHRs) and unique device identifiers will be valuable for continuous surveillance of medical devices.
From page 25...
... This and similar Web search–based tools provide an opportunity to complement traditional sources of adverse drug event reporting.

7  BioPortal is a repository of biomedical ontologies developed by the National Center for Biomedical Ontology; see http://bioportal.bioontology.org (accessed December 23, 2015)
From page 26...
... Holmes cautioned that these data are valuable only for hypothesis generation, however, because they lack a denominator and thus cannot be interpreted as rates or proportions.

Challenges and Limitations to Web-Based Surveillance

Several participants discussed the challenges and limitations inherent in analyzing discussion forum chats, social media, and Web search (see Box 3-4)
From page 27...
... To accurately predict and build models of treatment response for clinical trials, both uncertainties and underlying assumptions should be taken into account, said Sandy Allerheiligen, vice president, Modeling and Simulation, Merck. Uncertainties in data can result from limited dataset size, bias in the sample population, heterogeneous response to treatment, or any number of effects that cannot be predicted or measured.
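One way to take such uncertainty into account is to quantify it directly. The sketch below is a generic bootstrap on hypothetical response data, offered only to make the dataset-size point concrete; it is not a description of Merck's modeling practice, and every number in it is an assumption.

    # A generic sketch of quantifying the uncertainty that limited dataset
    # size introduces into a treatment-response estimate, via bootstrap.
    import numpy as np

    rng = np.random.default_rng(1)
    # Hypothetical responses from a small, heterogeneous cohort.
    responses = rng.normal(loc=0.4, scale=1.2, size=30)

    # Resample the cohort with replacement to see how much the mean
    # response estimate moves from sample to sample.
    boot_means = [
        rng.choice(responses, size=responses.size, replace=True).mean()
        for _ in range(5_000)
    ]
    low, high = np.percentile(boot_means, [2.5, 97.5])
    print(f"mean response: {responses.mean():.2f}")
    print(f"95% bootstrap interval: [{low:.2f}, {high:.2f}]")

With only 30 patients, the interval around the mean response is wide; rerunning with a larger cohort shows it narrowing, which is the sense in which limited dataset size is itself a quantifiable source of uncertainty.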
From page 28...
... Modeling the Placebo Effect

The placebo effect, or the measurable change in a patient's health status that cannot be attributed to the treatment being tested, is another source of variability.8 Ariana Anderson, assistant research statistician, University of California, Los Angeles, noted the importance of modeling the placebo response in clinical trials. Modeling holds particular value in the case of rare diseases, where patient numbers are small, statistical power is low, and it may be unethical to assign patients to a placebo group, she said.
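As a rough sketch of what modeling a placebo response can look like, the example below fits an assumed exponential build-up trajectory to a simulated placebo arm and subtracts the fitted curve from a treated arm to isolate the drug-attributable change. The model form, parameter values, and data are all illustrative assumptions, not Anderson's method.

    # A minimal sketch of separating a placebo response from a drug effect
    # in longitudinal trial data. Everything here is illustrative.
    import numpy as np
    from scipy.optimize import curve_fit

    rng = np.random.default_rng(7)
    weeks = np.arange(0, 13, dtype=float)

    def placebo_model(t, p_max, rate):
        """Placebo response that builds up and plateaus over time."""
        return p_max * (1.0 - np.exp(-rate * t))

    # Simulated symptom improvement: the placebo arm shows only the
    # placebo response; the treated arm adds a drug effect on top of it.
    placebo_arm = placebo_model(weeks, 3.0, 0.4) + rng.normal(0, 0.3, weeks.size)
    treated_arm = (placebo_model(weeks, 3.0, 0.4) + 0.15 * weeks
                   + rng.normal(0, 0.3, weeks.size))

    # Fit the placebo trajectory on the placebo arm, then subtract it from
    # the treated arm to isolate the drug-attributable change.
    (p_max_fit, rate_fit), _ = curve_fit(placebo_model, weeks, placebo_arm,
                                         p0=(1.0, 0.1))
    drug_effect = treated_arm - placebo_model(weeks, p_max_fit, rate_fit)
    print(f"fitted placebo: p_max={p_max_fit:.2f}, rate={rate_fit:.2f}")
    print("estimated weekly drug effect:", drug_effect.round(2))

In a rare-disease setting, a placebo model fitted on historical or external placebo data could play the role the simulated placebo arm plays here, reducing the number of patients who must be assigned to placebo.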
From page 29...
... that addresses critical concerns surrounding AD and the performance of treatments in clinical trials for AD, including dropout rates, the placebo effect, covariates with the disease, and variability both in patient response and in the methodology used by different data collectors within the consortium. The model allows users to design clinical trials quantitatively before they begin.

