Emerging Safety Science: Workshop Summary

10 The Future of Safety Science1

During the general discussion sessions at the end of each day of the workshop, participants summarized and synthesized the presentations, discussed gaps and needs for the future, and suggested next steps. Much of the session on the first day was devoted to issues surrounding prediction, while the session on the second day focused mainly on issues concerning surveillance. There was significant overlap, however, as well as a good deal of discussion of how best to integrate the two areas. In the workshop's final session, Dr. Krall considered the presentations and discussions that had taken place during the workshop and offered some general observations about the field and the future of safety science in these areas.

PREDICTION

Prediction of the safety and efficacy of drugs is paramount to revitalizing the present drug development paradigm. Prediction that can detect potential problems in advance of clinical testing or market approval will allow for safer delivery of medicines, vaccines, and medical devices, and even the performance of safer surgeries. Traditional drug safety detection methods have generally depended on animal testing, with the assumption that the results of these tests are indicative of what will happen in humans. Emerging safety science holds promise for enriching this traditional approach.

Much of emerging safety science is predicated on going beyond observations in whole animals or organ systems to look at what is happening within a cell—to understand which pathways are perturbed, for example. Adding this information to the traditional approach could even allow researchers in the future to sidestep traditional animal experiments altogether because of the ability to predict directly what will happen in humans. One advantage of studying actual human cells and human pathways is that it obviates the need to extrapolate from other species. Furthermore, as various presentations at the workshop demonstrated, this approach offers an explanatory power that is lacking in traditional animal experimentation. By providing information such as gene transcription data, emerging safety science techniques can offer insight into what pathways, targets, or receptors have been perturbed. This information can then be applied to understand and predict the kinds of events that can be expected in humans who take a drug.

Thomas Caskey, of the University of Texas Health Science Center at Houston, identified two areas he believes will be important for prediction in the future but were not discussed as thoroughly at the workshop as some others. The first is the use of protein assays. He argued that although proteomics technology is more challenging and less well developed than transcriptomics or metabolomics, proteomics assays can be easy to conduct when one knows what to measure, and will eventually prove to be important. The second area is imaging.

1 This chapter is based on the presentation of Ronald Krall, Senior Vice President and Chief Medical Officer, GlaxoSmithKline, and the contributions of several other workshop participants.
Alluding to Westwick's description of an imaging technology used in a human cell–based screening technique (see Chapter 3), he suggested that such imaging technologies will likely prove to be very powerful because they can be extended from cells to whole animals, and thus be used to determine whether what is seen in individual cells is also seen in more complex systems.

The techniques described during the workshop, such as gene transcription and metabolomics, are already being used to discriminate among drug candidates. Throughout development, they are being used to help select targets, classes, or doses. There are also examples of their being used to prevent adverse events in humans and of markers being identified to help monitor for effects in humans, thus minimizing the chances of drug-induced injury. Finally, these techniques are being used to gain additional information about cellular pathways and signaling, thereby increasing understanding of why certain events occur and offering insights into potential new targets.
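One common way to turn gene transcription data into the pathway-level insight described above is over-representation analysis: testing whether the differentially expressed genes fall in a known pathway more often than chance would allow. The sketch below is illustrative only, with hypothetical gene names and toy counts, not material from the workshop:

```python
from math import comb

def pathway_enrichment_p(de_genes: set[str], pathway: set[str], universe: int) -> float:
    """One-sided hypergeometric p-value: probability of at least this much
    overlap between differentially expressed (DE) genes and the pathway
    if the DE genes had been drawn at random from all measured genes."""
    k = len(de_genes & pathway)   # observed overlap
    K = len(pathway)              # pathway size
    n = len(de_genes)             # number of DE genes
    # P(X >= k) for X ~ Hypergeometric(universe, K, n)
    tail = sum(comb(K, i) * comb(universe - K, n - i)
               for i in range(k, min(K, n) + 1))
    return tail / comb(universe, n)

# Toy example: 3 DE genes out of 10 measured, 2 of them in a 4-gene pathway.
p = pathway_enrichment_p({"g1", "g2", "g5"}, {"g1", "g2", "g3", "g4"}, universe=10)
```

A small p-value flags the pathway as perturbed; a real analysis would use thousands of genes and correct for testing many pathways at once.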
SURVEILLANCE

Traditional surveillance approaches that rely primarily on data collected from spontaneously reported adverse events are a valuable source of information, and with recent advances in information technology, these approaches hold additional potential. Nonetheless, much of the discussion at the workshop focused on active surveillance and the identification of better ways to track clinical experience. Although advances are being made that may help in predicting responses prior to use, active surveillance will always be necessary, since it will never be possible to predict with certainty what will happen when a new drug is introduced in the market and large numbers of people begin taking it.

Utility of Active Surveillance Systems

A system capable of detecting increases in classic events that could lead to the withdrawal of drugs from the market would be tremendously valuable. Krall referred to programs that GlaxoSmithKline (GSK) has implemented to actively monitor large health care system databases and detect patterns of adverse events once a drug is on the market. To illustrate the utility of such a system, Krall used the example of a drug that was ultimately withdrawn from the market. Without identifying the drug or the adverse event it caused, he explained that a disproportionality analysis of data from the Adverse Event Reporting System (AERS) database indicated that the event was occurring with this particular drug much more often than with other drugs in the database. The excess was apparent from the first year of marketing and continued for as long as the drug was on the market. To see whether they could detect the same event using large health care system databases, GSK researchers chose two databases—the Integrated Health Care Information Services claims database and an electronic medical records database from General Electric.
The databases were large, one covering 40 million people and the other 5 million. The researchers found they were able to detect the event in question, and Krall exhibited a graph showing the rate in terms of events per 10,000 patients (see Figure 10-1). The confidence intervals were relatively narrow, and the graph showed that the event was occurring at a rate at least double that for other drugs in the same class. This example showed that it is possible to search for and identify events of interest in large health care databases using a prescribed set of tools and methodologies, just as events are found in data collected from spontaneously reported adverse events. Furthermore, such an analysis can expand on the findings from a spontaneous adverse event reporting system.
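The rate-per-10,000 comparison just described can be sketched as follows. The counts are illustrative stand-ins, not the actual data behind Figure 10-1, and the confidence interval uses a simple normal approximation to the Poisson event count:

```python
import math

def rate_per_10k(events: int, patients: int) -> tuple[float, float, float]:
    """Event rate per 10,000 patients with an approximate 95% CI
    (normal approximation to the Poisson count; reasonable when the
    number of events is not very small)."""
    rate = events / patients * 10_000
    half_width = 1.96 * math.sqrt(events) / patients * 10_000
    return rate, rate - half_width, rate + half_width

# Illustrative counts only.
drug_rate, lo, hi = rate_per_10k(events=150, patients=200_000)   # drug of interest
class_rate, _, _ = rate_per_10k(events=250, patients=1_000_000)  # rest of the class

# A narrow interval whose lower bound exceeds double the class rate is
# the kind of pattern described in the text.
signal = lo > 2 * class_rate
```

In practice such comparisons would also adjust for differences between the populations taking each drug, a point raised later in the discussion of data-source issues.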
FIGURE 10-1 Use of large health care databases to identify events of interest. The graph describes the rates of occurrence of event X using observational data. The results showed that the event was taking place at a rate that was at least double that for other drugs in the same class. The same results were achieved by performing disproportionality analysis of data from the Adverse Event Reporting System. SOURCE: Krall, 2007.

Achieving an Active Surveillance System

Because the largest amount of safety information is produced when a drug is on the market, it is important to capture those data. Yet current tools do not make it possible to capitalize adequately on this information. Krall said that stakeholders have an obligation to share their knowledge with the larger society. Looking to the future, Krall and several other workshop participants reviewed a number of ways to meet the challenges involved in realizing the goal of an active surveillance system.

Enhancing Data Sharing

Peter Corr, retired from Pfizer, echoed the importance of sharing data and technology. He suggested that the best way to move forward quickly would be for companies to combine their efforts. This is already
happening, for example, in toxicology with the Predictive Safety Testing Consortium. Corr argued that this sort of openness should be expanded to include other areas. Further, while a great deal of meaningful work on individual compounds was described during the workshop, a comprehensive understanding of the relationship between molecular structures and toxicity will demand the study of many diverse compounds. A large amount of data already exists in various pharmaceutical companies—far more than is ever submitted to the U.S. Food and Drug Administration (FDA) with drug applications—but the data are not shared. Acknowledging arguments for keeping the data proprietary, Corr suggested that there are even better arguments for sharing the data. Combining forces would have a huge effect on the diversity of available data and thus on the ability to understand the relationships of interest.

Paul Seligman, of the FDA's Center for Drug Evaluation and Research, echoed the need to share information. One of the greatest challenges facing the field is achieving access to all of the information accumulated, particularly that on products that fail during their development. Dissemination of negative data could enable more efficient drug development paradigms. Almenoff added that obtaining data on failed compounds is crucial. To date, most data mining has been done on "honor roll molecules"—those that made it through the various testing phases and have some promising attributes. Data mining with failed compounds would be useful because it could help improve prediction.

Standardizing Nomenclature

As discussed earlier, different databases, and even different records within the same database, use varying names for the same drug. They also use varying names or descriptions for the same medical condition. Several participants emphasized the importance of creating standard formats for information about drugs and their biological properties and actions.
For example, Ana Szarfman, of the FDA, called for unique names for drug products. Giving drugs unique names and using those names consistently would make it much easier to link information from different databases. Another workshop participant from GSK expanded on this point. Having been involved over the past 2 years in a GSK initiative aimed at linking quantitative clinical data with basic science information, he has found the biggest challenges to be the lack of data standards within basic science databases and the lack of consistent names for the drugs being studied. He noted that even working with data from GSK's own databases has been difficult, in part because the various companies that merged to form GSK each had their own formats, and standardizing the data has been
tedious. He suggested that an industrywide, FDA-recommended standard for collecting and describing data would help prevent these problems in the future. He emphasized that the field of toxicology in particular could benefit from such standards because currently, some toxicology data are provided in qualitative text strings, which are very difficult to compile.

Mary Prince Panaccio, of Merck, spoke to the lack of standardization in the postmarket stage. Researchers must work with spontaneous reports that have no common structure or set of details. Standardizing the terminology used to describe postmarket events would make it easier to feed that information back into the basic science work being done on prediction.

Improving the Comprehensiveness and Linkage of Data Sources

Data sources seldom provide all the data needed. While there may be individual records of data collected from a hospital stay or an outpatient visit, rarely are the data sets linked. Furthermore, the medical record data may not be linked to X-ray data, laboratory data, or pharmacy data that would indicate whether prescriptions were actually filled. In addition, most data sources are missing medical content—content that is often available only in doctors' charts or notes and in forms not easy to access. As a result, there is no continuous record of data on an individual in any of these systems, so it is impossible to capture a person's life experience. Because each of these systems captures only a slice of that experience, it is difficult to create longitudinal records.

The multiplicity of data sources also hinders the development of an active surveillance system. Claims data are very different from health record data, and health record sources differ from one organization to another.
The way an electronic health record is implemented in a health care system has a great influence on which data actually exist and on how easy it is to find associations in those data.

Instituting Electronic Record Keeping

Krall suggested that to capture medically important information that is not being captured in current health care record systems, the best approach would be to institute electronic health records. With such records, it would be possible not only to get more from the data that exist, but also to obtain more data. The benefits of having electronic health records were indeed amply demonstrated during the workshop. With such records, it would become possible, for example, to link what is being learned about cellular pathways and cellular signaling with clinical information about a disease and various interventions used to treat
the disease. Instituting electronic health records will be an enormous challenge but will ultimately pay off in many ways, including ones that cannot even be imagined today. One barrier to be overcome, however, is that the education of health care practitioners generally does not cover how to keep such records.

Conducting Research on Analytical Methodology

Presentations made throughout the workshop demonstrated the power of the analytical methodologies developed to date, but more work in this area is needed. It is important to continue learning how to detect and evaluate the signals that appear in health care system databases. Research on analytical methodologies should be built into any approach to active drug surveillance.

Addressing Issues Inherent in Data Sources

Panaccio stressed the importance of understanding how to interpret information collected from a health care plan database. Because the population of that database will not be the same as the general population, it is useful to examine cohorts within the health care plan in an effort to understand what the patients in these cohorts looked like before the drug of interest was marketed—for example, what sorts of adverse events were reported. John Jenkins, of the FDA, reinforced Krall's comment about using biological understanding to hypothesize the types of events that should be monitored once a drug has been marketed. Having a better idea of what events to monitor for in the postmarket setting can help identify those questions that should be answered before approval and those that can be answered after approval. The focus is generally on serious adverse events, which fall into two categories.
The first is rare serious adverse reactions, such as hepatotoxicity, that in general will not be detected in clinical trials; the AERS does a fairly good job of picking these up, although it could probably be improved with a more active monitoring program. The second is drug-induced increases in the rate of common adverse events, such as heart attacks or strokes. Determining the best way to detect these types of events will require serious thought and discussion, weighing an active postmarket surveillance system against very large controlled clinical trials.
Incorporating Phenotyping into Routine Clinical Trials

Krall suggested that one improvement to the current surveillance system would be to gather more information about patients. In general, research on the effects of drugs has been approached from the point of view of the clinical trial, where researchers compare the results for a test group with those for a control group and look for differences between the two. It would be very valuable, however, to phenotype all of the subjects in those trials in such a way that it would be possible to discriminate among groups and identify biomarkers that could be used to predict how different people will respond to a drug.

On a similar note, Caskey said it will be important to use "scanning markers" in postmarket surveillance programs as a way of picking up signals of impending toxicity. These will be different from the biomarkers used in the premarket phase, when researchers are studying a particular target and looking for a response. In the postmarket surveillance phase, it may not be clear which targets may be involved, so it will be necessary to have some general scanning markers that measure various aspects of metabolism. Over time, as data are accumulated, it should become possible to zero in on markers that are associated with—and preferably predictive of—the eventual appearance of an adverse event.

Integrating Basic and Clinical Science

Referring to the presentation by Almenoff, who described her company's Molecular Clinical Safety Program (see Chapter 9), Edward Holmes, of the A*Star Biomedical Research Council, asked how many other examples exist of attempts to link basic science data with clinical data. Integration of these two areas remains at this point more hope than reality, but there was some discussion of what might be needed to achieve such integration in the future.
Mikhail Gishizky, of Entelos, said that one of the major challenges will be dealing with the overwhelming amount of data. This sort of data challenge has been met in other industries, he said, and it will be important to look at these other industries and learn how they have been successful through the use of computer modeling and other technologies. Gishizky also suggested that an appropriate metaphor for what is needed to link the basic sciences to the clinical setting is the Rosetta stone: researchers must find some way to translate information from the basic sciences into the clinical setting and vice versa. Once again, a number of researchers commented that a vital first step in this process of translation will be to develop standardized terminology. If information is to be shared across the life cycle of a drug, basic researchers and clinicians must at the very least be speaking the same language.
Robert Califf questioned whether the integration of the basic sciences and the clinical setting could ever be realized. The current system identifies many preventable adverse drug events, caused by situations that are very well described, yet adequate clinical systems to deal with them are not in place. In prescribing antithrombotic drugs, for example, the wrong dose is given about a third of the time; one must then administer a test whose results are not available for 2 days and figure out how to deal with the problem. Woodcock disagreed, suggesting that if a new technology is introduced with explicit instructions for its use, the health care system will apply it. As an example, she pointed to the experience with abacavir (see Chapter 6). Along the same lines, Frazier commented that health care providers will not actively follow the search for biomarkers and adopt each as it is discovered. Instead, when a biomarker is validated as being clinically useful, doctors will adopt it. If doctors are provided with a useful bottom line, they will apply it.

Caskey noted that when analyzing their compounds, many of the large pharmaceutical companies use different tests to measure the same outcome. Thus the decision made at Pfizer will not be the same as that made at Merck or at Abbott. Caskey suggested that the FDA undertake a research initiative to determine which of these tests are most effective in predicting clinical safety. When a drug was approved and launched, it could be subjected to the testing systems proposed by each of the companies, and the actual clinical results could be compared with those of the various testing systems. Caskey suggested that partial funding for these efforts could come from the National Institutes of Health.

SUMMARY

In her concluding comments, Woodcock said it will be important to keep an eye on the long-term goal.
That goal is not just to fix problems that occur when a drug enters the market. Rather, it is to move medicine to a more scientific basis, something for which the necessary tools exist. What is lacking is the system to make it happen. Summarizing the workshop’s take-away messages, Woodcock said that efforts to create standards should be greatly intensified, especially in areas in which data from different sources will be linked. She emphasized that the science is emerging, and the community needs to ensure that it is put to the best use as quickly as possible; the next steps need to be considered and discussed now.