6
Transforming the Speed and Reliability of New Evidence

INTRODUCTION

The medical profession has long viewed randomized controlled trials (RCTs) as the best available evidence for determining whether specific medical interventions work. However, as previous chapters suggest, the speed and complexity with which new medical interventions and scientific knowledge are being developed often make RCTs difficult or even impossible to conduct. The capacity of healthcare informatics to collect, analyze, and compare data across the healthcare system is promising. Many healthcare practitioners are looking toward electronic medical records (EMRs) and clinical data registries as new sources of evidence because the information would be instantly accessible, include a broad cross section of the general population, and offer important longitudinal data often lacking in RCTs. In the area of drug development, a combination of regulatory and market pressures is making new sources of information even more critical. This chapter examines how electronic medical records and clinical data registries could be used to expand the evidence base in many areas, as well as the unique problems facing pharmaceutical companies as they begin to develop individually tailored medicines.

In his presentation, George C. Halvorson identifies many areas in which EMRs could greatly enrich research. For instance, massive data sets could be built that could be used to support structured clinical trials and track the longitudinal consequences of medical interventions. The data could also be used in new ways, finding unforeseen correlations. Health information technology can provide the large data sets, longitudinal data, and instant data that will allow researchers to make the kinds of breakthroughs needed in the coming decades.

Eric D. Peterson notes that provider-led efforts to develop data registries could capture clinical information at major points and allow patients to be tracked over the long term. The data also could be used to generate new evidence and drive it into clinical practice more quickly. Professional society registries such as those of the Society of Thoracic Surgeons, the American Heart Association, and the American College of Cardiology National Cardiovascular Data Registry hold rich data sets on patients with coronary disease, heart failure, and stroke. These data registries could capture standard data elements that could be linked, allowing cross-sectional and longitudinal information to be gathered from insurance claims or laboratory and pharmacy databases. The information could be used to track diseases, treatments, and outcomes.

In his paper, Steven M. Paul identifies the challenges that pharmaceutical companies face in using evidence to develop drugs that are tailored to individuals. Most pharmaceutical products today are developed as one-size-fits-all therapies, yet only about 50 percent of patients respond to any given drug. A few drugs have been developed that are literally targeted to the molecular underpinnings of diseases. However, it will be difficult to develop such drugs for more complex and common diseases such as diabetes. In addition to the technical and scientific challenges that drug companies face, issues such as shorter patent lives of drugs, the slow Food and Drug Administration (FDA) approval process, and a lack of new molecules entering the pipeline are making the development of tailored drugs more appealing.
The ability to use biomarkers in developing drugs has been helpful in reducing drug approval costs and shortening the process, but sustaining profitability becomes challenging when only a small subset of patients benefits from a drug.

ELECTRONIC MEDICAL RECORDS AND THE PROSPECT OF REAL-TIME EVIDENCE DEVELOPMENT

George C. Halvorson, Kaiser Permanente

The importance of the development and adoption of EMRs to functional improvements in the healthcare system and in patient health has been widely supported. This discussion focuses on the use of EMRs in medical research. Hopefully, there will be something in these comments that will be new or at least useful to some readers.

Kaiser Permanente (Kaiser) is currently spending about $4 billion putting its own EMRs and physician support tools in place. One of the major reasons we are doing this entire EMR project is to facilitate medical research. We are doing it both to deliver better patient care and to do some serious medical research. We are committed to that agenda. However, I am not speaking just from Kaiser's perspective or our version of EMRs. Overall, as all caregivers manage to get data transferred from paper files into electronic records, I strongly believe that EMRs should and will revolutionize medical research. Done well, done adequately, compiled appropriately, and supported appropriately, EMRs should open up a Golden Age of healthcare research.

Think about the key advantage of the EMR for medical research: instant, comprehensive data. Instead of researchers' spending weeks, months, and years gathering pieces of data and pulling together sets of data, EMRs provide instant access to comprehensive data in real time. All patient medical information will be available electronically, and true longitudinal data will be possible. Instead of data that are limited to the very narrow time frame of each study, if the database is constructed appropriately, data will go back years into history and extend indefinitely into the future. Current medical research is built around very small numbers of patients (a couple of thousand patients here, a couple of thousand there), each in a very finite study. Using EMRs, the opportunity exists to have instant access to massive data sets comprising millions and millions of patients' data. There is also great flexibility in data utilization with electronic data, and there will be a growing ability to use the data in various ways. With electronic data, studies can be reconfigured in ways that cannot even be dreamt of when using a paper-based research system.

So how can this resource be used? In many ways. It will be ideal for highly structured clinical trials.
In particular, classic clinical trials can be far better supported if the data are electronic. Also, electronic data could help with extended follow-up work for issues such as post-market tracking, and EMRs could be used to track progress and care results into the future. For example, if a patient has a stent put in, EMRs can help determine the consequences of that action 3 years, 5 years, or 15 years out, an impossible task using the time-limited, population-limited, classic paper-based research approach.

Population health analyses can be carried out in whole new ways, with the prospect of identifying the impact of various kinds of care approaches on broad populations. Unforeseen correlations will increasingly be detected, as it becomes possible to sort through electronic data sets and troll for correlations of age, ethnicity, or diabetes, for example, with other conditions. That type of statistical correlation searching and research cannot be done in any meaningful way with paper, but it can be done relatively easily if you put together the right electronic database. Just-in-time learning and treatment searches also become possible with an EMR. A caregiver can identify what works for a given condition and what the most current patterns of treatment happen to be. There are all kinds of levels of electronic research that can be done in the context of current science.

In the next wave of exciting research, DNA correlations will be commonplace, and it will be the norm to check a patient's genetics and reach some conclusions about patient care. Genomic and genetics research is developing in some exciting ways, as highlighted in several papers in this publication, and with electronic data, it will be possible to carry out this research much more broadly and much more effectively. Currently, such a project is under way at Kaiser, and a DNA database is being developed to support our research efforts.

Kaiser has conducted data file research using our electronic databases that illustrates the potential of EMR data. One analysis, conducted by sorting through our database, revealed that Vioxx was causing problems for a number of Kaiser patients. This original identification, made using a level 1 electronic database, was enough to trigger an alarm bell and lead to the initiation of an assessment process. However, a level 1 database can only indicate that a percentage of patients are being harmed; the specifics of gender, age, ethnicity, and other conditions remained a mystery. Our new full EMR level 2 database, which is going into place now, will enable the additional step of identifying exactly which patients are harmed and which are benefited by a drug. Kaiser has also initiated similar data work relative to both hormone replacement therapy and the follow-up care of patients who had heart stents. We identified problems with particular stents. Again, this is the kind of results-based longitudinal data that can come from an EMR quickly and easily and be used to reach conclusions about approaches to care.
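The kind of database trolling described above can be sketched in a few lines. This is a minimal illustration, not Kaiser's actual tooling: the field names and the synthetic cohort are invented, and a real pharmacovigilance screen would add confounder adjustment and formal signal-detection statistics on top of a simple cross-tabulation like this one.

```python
import random

def odds_ratio(records, exposure, outcome):
    """Cross-tabulate an exposure (e.g., a drug) against an outcome
    (e.g., an adverse event) and return the odds ratio."""
    a = b = c = d = 0
    for r in records:
        if r[exposure] and r[outcome]:
            a += 1  # exposed, event
        elif r[exposure]:
            b += 1  # exposed, no event
        elif r[outcome]:
            c += 1  # unexposed, event
        else:
            d += 1  # unexposed, no event
    if min(b, c) == 0 or d == 0:
        return float("inf")  # guard against division by zero
    return (a * d) / (b * c)

# Synthetic level 1 database: the exposed group is simulated with double
# the adverse event rate, so the screen should flag the drug.
random.seed(0)
cohort = []
for _ in range(100_000):
    exposed = random.random() < 0.10
    event = random.random() < (0.04 if exposed else 0.02)
    cohort.append({"on_drug_x": exposed, "cardiac_event": event})

print(round(odds_ratio(cohort, "on_drug_x", "cardiac_event"), 2))
```

An odds ratio well above 1 flags a potential safety signal worth formal assessment, mirroring the alarm-bell role described here for a level 1 database.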
The basic, rudimentary level 1 database provides one set of conclusions, but level 2 will allow researchers to drill down through the various layers of data and determine some additional findings and conclusions.

What does this mean for electronic data and EMRs in the future? Anyone who is going down the EMR pathway should begin with the end in mind and design data sets to support clinical trials. As EMRs are designed, medical research must be identified as one of the outcomes of the process so that the data fields and data sets necessary are included for that purpose. Likewise, data sets need to be designed to facilitate analysis of outcomes and care patterns. For example, relevant demographics should be built into the data set to enable evaluation of race, ethnicity, gender, economic status, and geography. From the outset, these types of capabilities must be built into the data set to allow that level of research over time.

Kaiser has spent significant time on this particular issue. We started with a dozen different ethnicities and then expanded to a couple of hundred. We are now working backward to try to figure out what a workable number is; 200 is too many. A broad category such as "Asian American" obscures the obvious differences among Korean Americans, Japanese Americans, and Chinese Americans. One category is not sufficient, yet a dozen is unmanageable. That is still a work in progress. The goal is to sort through all the data sets so you can say that these are relevant differences relative to ethnicity, behavior, and culture. Knowing that, we need to decide where we draw the definitional ethnic and racial line.

Some of these issues are going to be on a learning curve for a while, and they must be addressed as we move forward. Issues such as economic status, geography, and gender will all have to be part of our electronic research data set. Then, as a major next step, the data strategy should incorporate genetic components appropriately into the research agenda. Obviously, only a computer can do some of this work. It cannot be done effectively with paper files or stand-alone data sets. The computer is needed to create large data sets, longitudinal data, and instant data. If this work is done well, it could usher in the Golden Age of medical research.

Having said that, data must be widely available in order to truly reform health care in America. The key to real reform will be to focus the attention of the country on major and very specific healthcare opportunities. The standard model of reform right now, from a care delivery perspective, is highly disorganized. Our current approach is to do many separate and isolated projects all over the country and then hope that the cumulative impact of those local projects somehow magically adds up to better care. That model is not likely to work.
A second model proposed by quite a few people is to simply jump to conclusions about what might work and then micromanage bits and pieces of the care delivery process from the inside: to recruit more primary care doctors into local practice, for example, hoping that more primary care doctors will somehow result at some later time in a better set of healthcare outcomes for patients. That kind of reform model also depends on some categories of magical thinking and is unlikely to achieve real systematic reform.

Others think that financial approaches are needed and believe that micromanaging bits and pieces of caregivers' incentives will somehow result in improved health care. That model is also currently not well organized or focused enough to work.

What is likely to actually achieve real reform would be for the nation to take a hard look at the fact that five medical conditions drive more than half of our healthcare costs. Americans could greatly improve the care infrastructure for patients with those five conditions, which should be viewed as a huge opportunity. If we focus on patients with those conditions and then work backward to align benefit sets, payment models, structure, focus, attention, tools, data reporting, community priorities, and health education on those five conditions, the cost trajectory of American health care could be dramatically changed. Care could improve, and the real and logistical pieces could be put in place that are directly aligned with the right outcome: real care reform and better health.

Healthcare reform in America has been approached backward: from the bottom up, starting with local bits and pieces. That whole agenda needs to be turned around. It is necessary to set a common goal, a practical and reasonable goal, and then to work backward, changing the total infrastructure as needed to align the functional system of care with that goal. Building the right electronic data sets and making medical research a direct tool of medical reform could result in massive improvements in healthcare delivery. What is most acutely needed is focus, followed by the development of these tools.

I will end by saying, "Be well, and if you are not well, be careful."

RESEARCH METHODS TO SPEED THE DEVELOPMENT OF BETTER EVIDENCE: THE REGISTRIES EXAMPLE

Eric D. Peterson, Duke University

The cycle of evidence development and adoption in medicine is far from ideal. Many current-day care decisions must be made in the absence of empirical evidence, and where evidence exists, it is often incomplete. While RCTs have become the gold standard for therapeutic evaluation, such studies often measure treatment efficacy only by short-term surrogate markers rather than by more meaningful long-term clinical events. Randomized trials tend to be carried out predominantly with younger, healthy patients who are treated under protocol conditions by highly trained specialists at leading medical centers. Thus, a full measure of a therapy's safety and effectiveness is realized only after it reaches the market and is used in real-world patients by real-world caregivers (Califf and DeMets, 2002).
Even when good evidence is available, the speed and completeness of uptake of this information by clinicians is delayed and flawed by frequent errors of omission and commission (Balas, 2001).

Large-scale, provider-led clinical registries offer the potential both to augment medical evidence development and to speed evidence adoption into practice. A provider-led clinical registry can be defined as a clinician-organized network for collecting detailed patient information in a uniform fashion for a given population, often defined by a particular disease or medical treatment, and used for addressing research, quality assessment, and/or policy purposes. The concept for these registries can be traced back to Eugene Stead, the first chairman of the Department of Medicine at Duke University. Forty years ago this year, he outlined the idea of a "living textbook of medicine," exhorting physicians to routinely collect and record data on the treatment and outcomes of their patients in order to better care for those in the future (Pryor et al., 1985). The Duke Database for Cardiovascular Disease, the world's first longitudinal cardiovascular data registry, was spawned by these ideas and lives on in a number of national, collaborative, provider-led clinical data registries.

This paper outlines the desired operating and functional characteristics of an ideal clinical registry. We then take a more in-depth look at the leading edge of clinical registries as exemplified by those in cardiovascular disease. Through these registries we explore their current and planned capacities, as well as their many applications for evidence development and dissemination. We end by discussing the challenges and opportunities faced by such registry efforts moving forward.

Characteristics of Ideal Clinical Data Registries

In a perfect world, data registries would accurately capture detailed clinical information at "key points and events" in a patient's life. Such data should be linkable within and among data sources, such that one could construct a longitudinal record of a patient's care and outcomes. For research purposes, these clinical data registries could also be supplemented when needed with other specialized information such as genomic, biomarker, and/or imaging data. This ideal registry should be readily accessible to researchers for scientific discovery; to outcomes researchers for studying healthcare delivery; and to frontline clinicians, giving them timely feedback on their care processes and outcomes to stimulate quality improvement.
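As a concrete, purely illustrative sketch of these characteristics, the record below carries standardized elements plus a patient key that makes encounters linkable into a longitudinal history. The field names and codes are placeholders, not the actual data dictionaries of any real registry.

```python
from dataclasses import dataclass, field
from datetime import date
from typing import Optional

@dataclass
class RegistryEncounter:
    """One standardized encounter record (illustrative fields only)."""
    patient_key: str              # hashed identifier enabling linkage across sources
    encounter_date: date
    diagnosis_code: str           # standardized code drawn from a shared dictionary
    procedure_code: Optional[str] = None
    outcome_flags: dict = field(default_factory=dict)

def longitudinal_record(encounters, patient_key):
    """Link every encounter for one patient, ordered by date."""
    return sorted(
        (e for e in encounters if e.patient_key == patient_key),
        key=lambda e: e.encounter_date,
    )

registry = [
    RegistryEncounter("p01", date(2006, 3, 1), "I21"),          # index admission
    RegistryEncounter("p02", date(2006, 5, 9), "I50"),
    RegistryEncounter("p01", date(2007, 1, 15), "I25", "PCI"),  # later revascularization
]
history = longitudinal_record(registry, "p01")
print([e.encounter_date.isoformat() for e in history])  # → ['2006-03-01', '2007-01-15']
```

Because every source keys encounters the same way, the same function could assemble a record spanning registries, claims, or laboratory databases.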
Clinical registries should also have several important functionalities, which have recently been summarized in an Agency for Healthcare Research and Quality-supported Users Guide to Registries Evaluating Patient Outcomes. This document outlines good practice policies for establishing a new registry or evaluating an existing one, including the design and purpose, data sources, data elements, ownership and privacy issues, patient and provider recruitment, data collection and quality processes, and analysis and interpretation (AHRQ, 2007). Briefly, an ideal clinical registry should enroll representative patients, providers, and settings; collect information using standardized data elements and definitions; contain patient identifiers that allow linking of encounter records within and among data registries; have data quality and auditing systems in place to promote the accuracy and completeness of data entered; be flexible enough to allow rapid addition or deletion of variables to meet ever-changing clinical and research needs;
be analyzed using state-of-the-art methodologies (Vandenbroucke et al., 2007); and be actionable, integrated with quality assessment and improvement efforts.

Size and Scope of Existing Cardiovascular Provider-Led Registries

While the characteristics and features of an ideal registry may seem futuristic, the majority of these features are now present in, or planned for, the major cardiovascular provider-led registries. Table 6-1 provides a brief description of the Society of Thoracic Surgeons (STS) National Cardiac Database, the American College of Cardiology (ACC) National Cardiovascular Data Registry (NCDR), and the American Heart Association (AHA) Get with the Guidelines programs (American College of Cardiology, 2007; American Heart Association, 2007; Society of Thoracic Surgeons, 2007). As demonstrated, the size and scope of these programs are quite substantial.

TABLE 6-1  Selected Provider-Led Cardiovascular Clinical Data Registries

Registry              Years of Data   No. of Sites   No. of Patients or Procedures
STS
  CABG                1990-2007       1,000          2,768,688
  Valve               1990-2007       1,000          709,088
  Thoracic            1999-2006       59             49,496
  Congenital heart    1998-2006       59             84,072
AHA
  CAD                 2000-2007       594            426,414
  Stroke              2001-2007       1,040          494,815
  Heart failure       2005-2007       397            130,489
ACC-NCDR
  Cath/PCI            1997-2007       971            Cath: 4,113,911; PCI: 2,003,719
  ACS                 2007            295            37,632
  ICD                 2005-2007       1,490          179,572

NOTE: ACS = acute coronary syndrome; CABG = isolated coronary artery bypass graft surgery; CAD = admissions for coronary artery disease; Cath = diagnostic coronary angiography; PCI = percutaneous coronary intervention; Valve = any valve procedure.

Current participation in these cardiovascular registries is voluntary, yet a growing number of external forces are beginning to provide strong incentives for clinician engagement. For example, one large healthcare insurer encourages registry participation by making involvement a condition for "premium provider status" (United Healthcare, 2007). Certain states have begun requiring registry participation as part of state-based certificate-of-need and quality assurance programs (Massachusetts Data Analysis Center, 2007). Most recently, the Centers for Medicare and Medicaid Services (CMS) facilitated complete "voluntary" participation in an Implantable Cardioverter Defibrillator (ICD) Registry by requiring it as a condition for payment (CMS, 2007).

The scope of conditions and procedures covered by such registries is also rapidly expanding. For instance, within the past year the ACC NCDR has launched three new registry efforts in ICDs, carotid stenting, and acute coronary syndromes, and it is planning several more within the next few years, including congenital heart disease, cardiovascular imaging, and ambulatory cardiac care. The latter exemplifies the trend for many provider-led registries to expand beyond in-hospital settings and follow cardiac patients across the care continuum.

Modernization of Cardiovascular Provider-Led Registry Operations

Provider-led registries are also changing as we enter the electronic age of medical care. In particular, progress in five key areas is promoting the potential for more integrated and cross-purpose clinical registries. These areas include the standardization of data elements and definitions; the clarification of patient privacy rules; the development of new data harvesting technologies; the creation of longitudinally linked hybrid databases; and the growing collaboration among professional societies, insurers, and government regulators.

Data Standards Efforts

While the development of standards for medical terminology has traditionally been elusive, cardiovascular clinical registries are now making great progress toward this goal. The AHA and ACC created a Data Standards Committee to develop cardiovascular (CV) data elements and definitions that are used in all their society-based guidelines and registries. Similarly, the STS and ACC have worked to harmonize the nomenclature for their respective cardiac revascularization registries.
Most recently, the National Heart, Lung, and Blood Institute sponsored a 2-day retreat to further institutionalize these standards across clinical trials and registries (U.S. Department of Health and Human Services, 2007).

Clarification of Patient Privacy Rules

In 1996, the U.S. Congress enacted the Health Insurance Portability and Accountability Act (HIPAA). While HIPAA was designed to protect against misuse of patients' health information, (mis)interpretation of this complex ruling has created significant challenges for registries and clinical research in general (Ness, 2007). More recently, the pendulum of HIPAA concerns appears to be swinging toward a more neutral position. Briefly, provider-led registries are now seen as compliant with HIPAA when using a business associate agreement with registry participants that permits data gathering and sharing for the purposes of quality assurance (Society of Thoracic Surgeons, 2007). Aggregated data within the warehouse can then be "de-identified" and used for research. In this manner, the burden and bias resulting from trying to gain informed consent from all patients in a registry can often be avoided (Alexander et al., 1998).

Data Harvesting Advances

Once data are more uniformly collected, it becomes possible to exchange them among various electronic databases. Participants in clinical registries have traditionally entered clinical data using registry-specific software or, more recently, Web-based data capture systems. However, more and more hospitals already capture certain clinical data in the EMR. To capitalize on this, novel data harvesting and warehouse systems are now being developed that will permit providers to seamlessly map existing stored patient information into a given clinical registry, thereby "pre-populating" the registry case report form and limiting redundant data entry. Additionally, data warehouses are moving toward the development of Web-based modular augmentation tools that will allow registries to rapidly collect new clinical information when needed. As such, registries are no longer locked into the usual 3-year or longer delay required for registry database upgrades. Rather, they can now respond nearly instantaneously to a new research, patient safety, or policy issue.

Longitudinal Linked Databases

Registries have traditionally collected cross-sectional information (e.g., in-hospital events) and have had limited functionality to study longitudinal patient outcomes.
Yet longitudinal patient events (including hospitalizations, outpatient visits, and death) are routinely captured and stored in administrative claims databases such as those of Medicare or private insurers. To access this valuable resource, the major CV provider-led registries are all currently working to link their clinical databases with claims sources. In a similar manner, the provider-led registries are also working together to develop a common standard for patient identifiers so as to facilitate cross-registry matching and analysis. These clinical-claims and cross-registry hybrid analytic databases create unique research and quality improvement tools for future generations.
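A deterministic version of such clinical-claims linkage can be sketched as follows. The matching keys and record layouts here are hypothetical; production linkages, which usually lack a shared direct identifier, often rely on combinations of indirect identifiers or probabilistic matching.

```python
def link_registry_to_claims(registry_rows, claims_rows, keys):
    """Join registry and claims records that agree on every field in `keys`,
    yielding hybrid records with longitudinal claims events attached."""
    index = {}
    for c in claims_rows:
        index.setdefault(tuple(c[k] for k in keys), []).append(c)
    linked = []
    for r in registry_rows:
        for c in index.get(tuple(r[k] for k in keys), []):
            linked.append({**r, "claims_events": c["events"]})
    return linked

# Illustrative rows: a registry CABG admission and two claims histories.
registry_rows = [
    {"dob": "1941-07-02", "sex": "F", "admit": "2006-03-01", "site": "H12",
     "procedure": "CABG"},
]
claims_rows = [
    {"dob": "1941-07-02", "sex": "F", "admit": "2006-03-01", "site": "H12",
     "events": ["readmission 2006-09", "office visit 2007-01"]},
    {"dob": "1958-11-20", "sex": "M", "admit": "2006-03-04", "site": "H12",
     "events": []},
]
hybrid = link_registry_to_claims(registry_rows, claims_rows,
                                 keys=("dob", "sex", "admit", "site"))
print(len(hybrid))  # → 1: only the record agreeing on all four keys links
```

The linked record now pairs the registry's rich clinical detail with the claims database's long-term follow-up events, the hybrid structure described above.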
Collaborative Leadership

The progress noted above is greatly facilitated when the major parties all work together. Whereas in the past multiple registries competed to enroll similar patients, the field has recently consolidated, with the goal of creating one national, representative registry for each domain. Additionally, in 2006, the major cardiovascular provider organizations held a series of meetings with healthcare insurers and government agencies that resulted in a commitment by all parties to create the National Consortium for Clinical Databases to promote interregistry cooperation and collaboration.

Applications of Clinical Registries for Evidence Development

There are several means whereby clinical registries can augment evidence development (Box 6-1). These can be grouped into epidemiological investigations and studies that specifically evaluate the effectiveness of medical therapeutics.

BOX 6-1
Means for Clinical Registries to Support Evidence Development

Epidemiological and Surveillance Studies
•  Track disease conditions and medical treatments in community-based, "real-world" settings.
•  Conduct large longitudinal genomic studies.
•  Conduct post-market evaluation of drugs and devices.
   -  Study rare events, late outcomes, and "off-label" indications.

Comparative Effectiveness Studies
•  Support more efficient randomized clinical trials.
   -  Identify patients and investigators; streamline data collection.
•  Conduct observational treatment comparisons.
   -  Evaluate the generalizability of trial findings in the real world.
   -  Examine clinical issues where an RCT is either not possible or not feasible.

Epidemiological and Surveillance Studies

Clinical registries, if large, detailed, and representative, can be unique resources for national epidemiological and health services research. For example, the Surveillance, Epidemiology, and End Results Program of the National Cancer Institute provides an excellent source of information on cancer incidence and survival in the United States (National Cancer Institute, 2007). In a similar manner, cardiovascular clinical registries have been used to summarize national variability in disease treatment among providers (Peterson et al., 2006), disparities in care among specific patient subgroups (Blomkalns et al., 2005; Sonel et al., 2005), and temporal trends in treatment (Rogers et al., 2000).

Genomics Studies

Genomic association studies represent a cutting-edge potential use of clinical registries. Studies that attempt to link a given allelic variation, such as a single nucleotide polymorphism (SNP), to a disease state offer incredible potential to better predict patients' risk for disease, as well as their response to therapies (Damani and Topol, 2007). A major challenge with these studies is that a large number of statistical tests are often carried out on a relatively small patient sample. As a result, researchers run a high risk that any correlation between a given SNP and a phenotype may be spurious. Clinical registries, however, offer the opportunity to have detailed phenotypic and longitudinal outcomes information on a very large cohort of patients. If blood samples are routinely obtained, the potential to carry out more reliable genome-wide association studies, as well as to replicate promising SNP associations, is enormous.

Post-Market Surveillance Studies

As noted earlier, the pre-market evaluation of drugs and devices is imperfect, limited in the total number and types of patients studied, the end points evaluated, and the duration of the evaluation. As a result, there are several recent examples where marketed therapies are subsequently found to be ineffective or unsafe.
Clinical registries can be used to track the acute and long-term outcomes of therapies used in diverse patient populations, in on- and off-label indications, and under routine community clinical conditions and settings (O'Shea et al., 2004; Peterson et al., 2004). Such rich data sources can therefore uncover heretofore undiscovered rare side effects (Vioxx, Avandia), as well as drug-device (Eisenstein et al., 2007) and device-operator interactions (Al-Khatib et al., 2005).

Supporting Clinical Randomized Controlled Trials

Clinical registries also offer the ability to support the conduct of RCTs. During the study design phase, data from clinical registries can provide important information on the size of potential populations (informing inclusion-exclusion selection decisions), as well as expected clinical event rates in the study population (thereby facilitating sample size calculations to adequately power the RCT). During the enrollment period, comparison of trial versus registry populations can give a clue as to any patient selection bias that could limit the generalizability of trial findings. Registries also offer the potential to augment assessments of the long-term costs and cost effectiveness of a given therapy when used in routine clinical practice (Mark et al., 1995).

Beyond augmenting the design, conduct, and interpretation of RCTs, clinical registries could improve the actual efficiency of RCTs. Specifically, registries could be used to rapidly identify care providers who may be interested in serving as site investigators, as well as patients who are eligible for trial enrollment. In theory, the data collected for a registry could have a dual use, reducing the burden of data collection for a given trial. In the future, huge "practical clinical trials" may themselves be embedded within clinical registries (Tunis et al., 2003). In the extreme, qualified patients in a registry could be offered the option of trial participation. If interested, the patient would simply be randomized to one therapy or another, with all data collection and outcome assessment needed for the trial being conducted as part of routine registry operations.

Comparative Effectiveness

In situations where randomized treatment comparisons are not ethical or practical, or simply have not been conducted, observational comparative effectiveness studies of registry data provide a secondary source for evidence development.
The potential need for evidence augmentation using existing databases is so great that some have called for the formation of an entirely new government agency to oversee comparative effectiveness studies (Wilensky, 2006), and Congress recently introduced a bill that would have provided up to $3 billion to fund this new agency over the next five years. While the idea that comparative effectiveness studies may facilitate wiser and more efficient use of medical resources has generated much excitement, the selection of one treatment versus another is almost never a "random event" in real life. Thus, observational comparison studies have selection bias as a major potential limitation. Fortunately, several statistical methodologies have been developed to adjust nonrandomized treatment comparisons for selection bias, including multivariable regression modeling, propensity analyses, and instrumental variable analysis. Unfortunately, several studies have demonstrated that the analytic technique used for adjustment can impact study conclusions (Kurth et al., 2006; Stukel
et al., 2007), and there is no strong consensus about which technique is superior (Cepeda et al., 2003).

When the results of an observational treatment comparison confirm those available from a trial, one gains general assurance that the treatment is effective and safe even when used in broader patient groups (Peterson et al., 2003). However, discord between observational treatment comparison studies and RCTs can also prompt new insights. For example, one study of anticoagulants used in the care of patients with myocardial infarction in a large registry population demonstrated higher bleeding risks than those seen in the trial. Further exploration revealed that clinicians often gave their patients excessive doses of the medication in routine community care, which in part led to the unexpectedly high bleeding risks (Alexander et al., 2005). Thus, although both the trial and the registry were technically "right," they addressed separate questions. Within the controlled trial environment, with its protocol-driven care, one effect was seen comparing these drugs. In community care, however, the comparative risk-benefit ratio of these two therapies was altered by dosing errors.

Quality Assessment and Quality Improvement

This latter point is indicative of a final important role that clinical registries can play, namely to ensure that evidence is fully and appropriately translated into clinical practice. Provider-led clinical registries were developed first and foremost as tools to support quality assessment and improvement. In this capacity, clinical registries have consistently uncovered issues of underuse, overuse, or misuse of proven therapies in routine clinical practice. Yet, beyond being solely a means to document provider performance problems, the registries themselves can be part of the solution.
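The selection-bias adjustments discussed above are easiest to appreciate with a toy example. Below is a minimal, hypothetical sketch (standard-library Python, fully synthetic data) of one of the named techniques, propensity-score stratification: a logistic model estimates each patient's probability of receiving treatment, and outcomes are then compared within propensity quintiles. All variable names, effect sizes, and the simple gradient-descent fit are invented for illustration and are not drawn from any of the studies cited here.

```python
import math
import random
from statistics import fmean

random.seed(42)

# Synthetic nonrandomized comparison: sicker patients (higher x) are both
# more likely to receive the new therapy and have higher outcome scores,
# confounding the naive treated-vs-untreated comparison.
n = 4000
TRUE_EFFECT = 1.0  # true additive benefit of treatment (invented)
x = [random.uniform(0, 1) for _ in range(n)]
t = [1 if random.random() < 1 / (1 + math.exp(-(2 * xi - 1))) else 0
     for xi in x]
y = [TRUE_EFFECT * ti + 2 * xi + random.gauss(0, 0.5)
     for ti, xi in zip(t, x)]

# Naive comparison ignores the selection bias and overstates the benefit.
naive = fmean(yi for yi, ti in zip(y, t) if ti) - \
        fmean(yi for yi, ti in zip(y, t) if not ti)

# Fit a logistic propensity model P(treatment | x) by gradient descent.
b0 = b1 = 0.0
for _ in range(300):
    g0 = g1 = 0.0
    for xi, ti in zip(x, t):
        resid = ti - 1 / (1 + math.exp(-(b0 + b1 * xi)))
        g0 += resid
        g1 += resid * xi
    b0 += 0.5 * g0 / n
    b1 += 0.5 * g1 / n
ps = [1 / (1 + math.exp(-(b0 + b1 * xi))) for xi in x]

# Compare outcomes within propensity quintiles, then pool the
# within-stratum differences weighted by stratum size.
order = sorted(range(n), key=lambda i: ps[i])
diffs = []
for q in range(5):
    stratum = order[q * n // 5:(q + 1) * n // 5]
    treated = [y[i] for i in stratum if t[i]]
    control = [y[i] for i in stratum if not t[i]]
    diffs.append((fmean(treated) - fmean(control), len(stratum)))
adjusted = sum(d * w for d, w in diffs) / sum(w for _, w in diffs)

print(f"naive: {naive:.2f}  adjusted: {adjusted:.2f}  true: {TRUE_EFFECT:.2f}")
```

In this simulation the naive difference is inflated well above the true effect, while the stratified estimate lands much closer to it; as the chapter notes, though, no such adjustment can remove bias from confounders that were never measured.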
Specifically, coupled with timely feedback, clinical registries can supply providers with important information on areas where practice improvement is needed, as well as on how their care compares with that of their peers.

The impact of such feedback on subsequent care and patient outcomes has been consistently demonstrated (Ferguson et al., 2003). Yet, the majority of these quality improvement (QI) studies employed time-series study designs and thus provide only indirect support that registry participation itself led to changes in care. More recent QI studies, however, actually employ cluster randomization at the participant level or other more rigorous designs to evaluate the impact of registry-based QI. In one study, surgeons participating in the STS national database who were randomized to receive a simple "call to action" and ongoing feedback adopted a guideline-based care process significantly faster than those not receiving this feedback (Ferguson et al., 2003).
Given this evidence of effectiveness, the tools used by registries to stimulate practice change also need to be refined. Many of the provider-led registries are now working on means of improving the QI process itself. The time delay between care delivery and provider feedback has been progressively shortened, with online as well as hard-copy reports. Feedback reports are becoming more streamlined, customized, and individualized to the needs of the caregiver. Many such reports provide clinicians with multiple comparative benchmarks, as well as highlighting for the clinician specific care processes that need attention within his or her practice. Finally, provider-led registries are now supplying clinicians with specific tools that show them not only "what" they are doing wrong, but "how" to practice better care.

The Future of Clinical Registries

Based on the promise and uses briefly described in this chapter, one might imagine that the future of provider-led clinical registries is extremely bright. Yet, this future is not without potential peril. In particular, participation in these registries remains largely voluntary, and hospitals must prioritize resources for registries in the face of shrinking clinical margins. Growing demands from government and insurers for the collection of alternative performance assessment data threaten to further limit the resources available for "optional" clinical registries. Additionally, an ever-litigious climate in medicine makes clinicians worry whether such clinical information may someday be "discoverable" and used against them.

The answers to these threats are not simple and will require a multilevel and persistent response. Clinicians need to make a strong case that clinical registries are best run and most valuable when they remain in the hands of clinicians. Such registries were first developed by clinicians to support discovery and to ensure the quality of care.
Practitioners who remain in the group most clearly understand what the most relevant research and practice issues are; they are directly responsible for the data collection and thus should be in control of the quantity and quality of the data collected; and they are the agents of change when new findings require it. While governments and insurers are charged with ensuring the quality of care, clinicians are charged with delivering it. This logic has led external agencies to consider forgoing their external measurement systems and, instead, using provider-led clinical registries as their assessment tools. Such a development could help ensure clinician involvement in provider-led registries, as well as the resources needed to run them. If so, the remaining challenges for provider-led registries will be to remain true to their research and QI roots, as well as to live up to the promise outlined above.
PRODUCT INNOVATION: THE TAILORED THERAPIES EXAMPLE

Steven M. Paul, Eiry W. Roberts, and Christine Gathers, Eli Lilly and Company

Introduction

Healthcare systems in the developed world currently face increasing challenges, coupled with unprecedented opportunities. In recent years, investments in biomedical research have resulted in a broad spectrum of advances in the areas of disease diagnosis and therapeutic intervention. As a result, clinicians and patients can now choose among an expanded (and ever-increasing) array of options for the treatment of many common and chronic diseases, including mental illness, cardiovascular disease, and even some cancers. In addition to these therapeutic advances, evolving health information technology promises to deliver much improved access to information for clinicians, patients, and other stakeholders (including payers and governments) in the very near future. Such "real-time" access to information creates opportunities to facilitate more informed therapeutic decisions and to enable more rapid integration of complex information in a way that ensures improved efficiency and effectiveness in the delivery of health care to the population at large.

Despite these advances, however, the way in which health care is currently delivered by a large proportion of healthcare providers, and experienced by most patients, remains largely empirical. Therapeutic interventions are frequently applied in a "one-size-fits-all" approach, and the matching of individual patients to therapeutic interventions often occurs by "trial and error." While it is important not to underestimate the impact of this historical approach to treating and managing many diseases, it is also clear that this rather empirical approach must evolve to embrace the principles of comparative effectiveness and evidence-based medicine.
The ultimate goal, of course, is to provide healthcare decision makers (e.g., patients, clinicians, payers, and policy makers) with up-to-date, evidence-based information about treatment options so that they can make informed healthcare decisions. Virtually all stakeholders in the healthcare system today are demanding improved, more predictable, meaningful, and objectively demonstrable "health outcomes" from all types of medical interventions, including the use of biopharmaceuticals. Moreover, patients and clinicians are demanding better information about where and when to use, and when not to use, a given biopharmaceutical, including complete transparency in terms of its safety and efficacy. These heightened expectations, coupled with the explosion of technological advances in recent years, create a unique set of challenges and opportunities for the biopharmaceutical industry, particularly with respect to the discovery and development of new medicines.
The focus of biopharmaceutical research and development is currently shifting away from a sole preoccupation with traditional measures of "safety and efficacy" toward more clinically relevant measures of "effectiveness" and toward a better, more integrated understanding of benefit-risk for patients (i.e., providing "meaningful, improved patient health outcomes"). This more recent focus on health outcomes puts additional burden and complexity on biopharmaceutical research and development (R&D) when applied within the current R&D model, with a resultant increase in drug development time lines and overall costs. At the same time, the biopharmaceutical industry is experiencing unprecedented challenges to its fundamental business model, and some have even predicted the demise of the industry in its current form, given (1) decreasing R&D productivity overall, (2) the reliance that individual companies place on deriving profit from a small number of one-size-fits-all medicines, and (3) the large number of patent expirations looming for these medicines over the coming decade.

How might this tension between heightened expectations and demands on the part of consumers (patients, providers, and payers) and the enormous costs and risks inherent in biopharmaceutical R&D be reconciled? How does the well-recognized interindividual variability in drug response (for both efficacy and safety) among the general population complicate studies of the comparative effectiveness of biopharmaceuticals, and how will it impact biopharmaceutical R&D in the future? How might the treatment and management of disease by biopharmaceuticals best be approached in an era of comparative effectiveness and EBM? Such challenges are multidimensional in nature and will require multiple interventions by all of the various stakeholders.
We believe that to move forward successfully, the biopharmaceutical industry, with appropriate collaboration from all relevant stakeholders, must redouble its focus and reinforce its efforts to truly understand the needs of patients and to deliver new medicines that offer not only improved, but also meaningful, patient outcomes. As part of our commitment to this goal, we have recently developed and implemented a business model (for both R&D and commercialization) that we refer to as "tailored therapeutics." In short, tailored therapies give greater assurance that the "right drug" will be prescribed for the "right patient at the right dose and at the right time and with the right information and supporting tools." A critical success factor for delivering tailored therapeutics is our evolving and much greater understanding of the considerable heterogeneity that exists in the etiology and pathophysiology of disease and in the pharmacological response (both beneficial and adverse) to biopharmaceuticals. While many new tools (e.g., genomics, proteomics, computational approaches to disease state modeling) currently exist to explore the biological substrates of disease heterogeneity and the interindividual variability in the clinical response to biopharmaceuticals, the application of this technology
to predict the relevant health outcomes afforded by drugs remains a daunting challenge. The rationale for how tailored therapies can potentially impact the discovery and development of biopharmaceuticals, as well as help to define and establish their comparative effectiveness in the marketplace, is outlined briefly below.

Challenges to the Current Drug Development Paradigm

Over the past 50 years, a large number of effective (and safe) medicines have been introduced to treat and manage many acute (e.g., infectious diseases, myocardial infarction) and chronic (e.g., hypertension, diabetes) diseases. These drugs have beneficially impacted longevity, contributing to an ever-increasing life span, as well as to the quality of life in both developed and developing countries. Despite these successes, however, and the very significant, virtually unprecedented, advances in biomedical research made over the past two to three decades, the number of new drugs approved by the FDA over the past 5 years has decreased dramatically (50 percent fewer drugs than in the previous 5 years). In 2007, for example, only 19 new molecular entities (NMEs) (including biologics) were approved by the FDA, the fewest new drugs approved since 1983 (www.fda.gov). This reduction in the introduction of new medicines is all the more troubling when one considers the enormous R&D investments currently made by the biopharmaceutical industry, now estimated to be in excess of $50 billion annually (Mathieu, 2007). In fact, it has been conservatively estimated that each new NME costs in excess of $1.5 billion to develop and introduce (Tufts Center for the Study of Drug Development, 2006). Diminished patent life, tougher regulatory requirements, and enormous global pricing pressures have all contributed to concerns about the viability of the current biopharmaceutical business model.
Finally, it is widely expected that the use of generic drugs will dramatically increase over the next decade, given the many scheduled near-term patent expirations. Demonstrating "comparative effectiveness" for a patent-protected drug versus a generic, in addition to monitoring a branded drug's safety profile in the post-marketing (generic) environment, will require considerably more resources and attention from the healthcare system.

Improving R&D productivity remains arguably the most important challenge facing the biopharmaceutical industry. This can be achieved by improving three of the most challenging elements of drug discovery and development: unit costs, cycle time, and, most importantly, attrition. (Medicines are broadly defined here to include traditional small-molecule drugs, bioproducts [proteins and peptides], and vaccines.) These
three dimensions of R&D "productivity" are intimately related to one another, and if each could be improved even modestly, R&D productivity would increase substantially, thus reducing the overall cost of developing a new medicine. A full discussion of R&D productivity is well beyond the scope of this paper. However, the challenges posed by the enormous attrition rates for drug candidates as they move through development must be underscored. Currently, about 50 percent of drugs in phase III (the final and most expensive phase of drug development) fail to make it to market, primarily because of unacceptable benefit-risk profiles. Phase II attrition (a phase in which safety is confirmed and efficacy is first established) is even more daunting: currently, 70 percent of potential new drugs entering phase II do not make it to phase III. Reducing the attrition of drugs in the late phases of development will be essential to improving R&D productivity. The use of biomarkers focused on the early identification of efficacy and/or safety signals, together with the use of markers focused on patient stratification strategies via a tailored therapy approach during late-stage clinical development, has already proven useful in this regard (and is discussed below). Importantly, the substantial late-stage attrition that characterizes drug development at present also complicates and confounds the timing and initiation of health outcome and comparative effectiveness studies, an essential component of future drug development and evidence-based medicine.

Tailored Therapies Enable a Paradigm Shift for Drug Development

For a variety of common diseases, only about 50 percent of patients will respond favorably to a given biopharmaceutical agent (Spear et al., 2001). Moreover, such response rates in individual patients are often highly variable in both their magnitude and their duration.
In one sense, when it comes to "customer" expectations, there appears to be an "efficacy gap" for many marketed one-size-fits-all biopharmaceuticals. It is also important to emphasize that even if a patient experiences no (or little) therapeutic benefit from a given drug, he or she is still at risk for potential side effects and/or serious adverse events. Furthermore, several studies have shown that the burden of adverse drug reactions on the healthcare system is high, accounting for considerable mortality, morbidity, and extra cost (Lazarou et al., 1998). Side effects and/or serious adverse events in this context often stem from a therapy's being inadvertently prescribed for the wrong patient or at the wrong dose for that patient. In many circumstances, interactions between concomitantly prescribed medicines also contribute heavily to the occurrence or severity of such events (often due to issues of competing or impaired drug metabolism). Most of these drug-drug interactions can be minimized or potentially avoided altogether.
Thus, individual differences in drug response (both good and bad) within the population of patients treated pose obvious challenges to drug development, as well as to the way medicines are used clinically and marketed by manufacturers. Such individual differences in treatment response also make it considerably more challenging to compare the effectiveness of one drug with another in a given class, since the benefit-risk ratio may differ dramatically for each agent (i.e., among subgroups of patients with the same disease). Thus, it is possible, perhaps even likely, that comparative effectiveness studies of drugs, if carried out in large heterogeneous patient populations, may miss subgroups of patients in whom a given drug may actually prove to be superior with respect to either its efficacy or its safety, or both. Identifying such subgroups, especially in real-world situations, will be essential for optimal utilization of any such drug and for establishing meaningful (evidence-based) comparisons between drugs (as well as with nonpharmaceutical interventions, for that matter).

Tailored therapy is an approach to optimizing the benefits and risks of a given drug for individual groups of patients. Tailored therapies exist across a continuum, from the least tailored one-size-fits-all biopharmaceutical to the truly targeted therapy. The degree of tailoring possible will depend on a number of factors, such as drug characteristics, underlying disease biology (e.g., genetics), available monitoring tools (e.g., diagnostic or imaging technologies), and a number of environmental variables (e.g., diet, culture). Currently, the most extreme examples of tailoring include a number of highly targeted cancer drugs (e.g., Gleevec, Herceptin) that work directly on the underlying biology or genetic etiology of the cancer itself.
The predictability of a beneficial treatment response with such targeted agents, given that they work on the molecular underpinnings of the disease, is very high. Nonetheless, targeted drugs such as these are still fairly rare and exist at the extreme of the tailoring continuum. The term "personalized medicine" also broadly implies the ability to match a particular therapy a priori to an individual patient, often through pharmacogenomic approaches, which are used either to understand exposure at the individual patient level or to predict and/or measure efficacy or safety. As such, personalized medicines also represent a subset of the range of opportunities within the continuum of tailored therapies. In clinical practice, this type of personalized, pharmacogenomic approach has so far been applied very rarely (Lazarou et al., 1998), despite well-established genetic polymorphisms (e.g., SNPs) and available genotyping methods (Figure 6-1). The reasons for this are manifold, but they include the lack of large prospective studies to evaluate the impact of genetic variation on drug therapy. Most importantly, the vast majority of the more common diseases are undoubtedly genetically complex and polygenic in nature (e.g., diabetes, obesity, hypertension, coronary heart
disease), so whether targeted or more personalized drugs can routinely be developed for these disorders is far from certain (Need et al., 2005).

[FIGURE 6-1 What are we doing differently? The figure maps tools across the development phases (preclinical through phase IV comparative studies): adaptive trial designs in animals, patient enrichment approaches, rolling dose studies, imaging (PET/MRI/CT), target gene SNP profiling, phenotypic or genetic marker efficacy/responder profiling, metabolic profiling (CYP450), DNA and serum banking, bioinformatics (systems/pathway mapping), and disease-based modeling. NOTE: CT = computed tomography; MRI = magnetic resonance imaging; PET = positron emission tomography.]

The concept of tailored therapies is certainly not new. For years, physicians have used biomarkers such as blood pressure or hemoglobin A1c (HbA1c) to monitor the effectiveness of antihypertensive and diabetes drugs, respectively. Compelling health outcome data exist for only a handful of biomarkers that allow physicians (and patients) to know the likely and predictable benefits of a given drug for a given patient. Two notable examples are the reduction in low-density lipoprotein cholesterol and HbA1c resulting from treatment with hydroxymethylglutaryl-coenzyme A reductase inhibitors (statins) and certain antidiabetes medications (e.g., insulin), respectively. Both biomarkers are reliable predictors of beneficial health outcomes (reduced morbidity and mortality) following treatment with these drugs. However, whether these biomarkers will afford the same degree of predictability for other cholesterol-lowering or diabetes medications is far from certain. This rather sobering possibility has recently been emphasized with the use of oral antidiabetic thiazolidinediones and other (non-statin) cholesterol-lowering agents.
The movement toward tailored (or "personalized") medicine has undoubtedly been accelerated by a whole range of new tools (see Figure 6-1). Some of these tools aid the discovery and development of drug candidates, yet other emerging diagnostic and prognostic tools (e.g., genomics, imaging) will also ultimately benefit healthcare delivery. For example, in discovery, disease state modeling is utilized as a tool to compare new drug candidates with existing medicines in the marketplace. In essence, these models enable the selection of drug candidates that are most likely to demonstrate improved health outcomes. Other tools have impact that spans all phases of development. For example, pharmacogenomics, or the ability to define genes or alleles that determine the response to drugs, is an exciting prospect for improving the predictability of tailored therapies. To date, there have been a few notable pharmacogenomic studies, particularly with respect to mutations or polymorphisms in drug-metabolizing enzymes (Evans and McLeod, 2003). These studies have proven highly informative in predicting the benefit, as well as the adverse event profile or liability, of a number of important drugs. One of the best known of these examples relates to the study of cytochrome P450 (CYP) 2C9 polymorphisms and their relationship to bleeding risk in patients treated with warfarin (Higashi et al., 2002). Research has led to the identification of two common polymorphisms of the CYP2C9 gene that appear to be associated with an increased risk of over-anticoagulation and bleeding events among patients treated with warfarin. Discussions are currently under way at the FDA to consider the inclusion of these pharmacogenomic data in the prescribing information for warfarin, but even in this relatively well established case, there is much debate about the "clinical validity" and utility of the diagnostic test and the applicability of the data for dosing recommendations for warfarin therapy.

The focus of tailored therapies is on the predictability of the health outcome afforded by a given drug in an individual patient.
In many circumstances, this may also involve an ability to determine whether there is sufficient exposure to the drug in any given patient to even create the opportunity for a favorable clinical response. One such example is the evaluation of the CYP2D6 genotype in psychiatric patients treated with antidepressants that are substrates of CYP2D6 (Meyer, 2004). Clearly, in this population, genotyping can improve efficacy, prevent adverse drug reactions, and lower the overall cost of therapy. This knowledge has led to the recent, relatively broad adoption of this approach in academic psychiatry units across the United States.

Beyond these relatively straightforward examples related to drug metabolism, however, the clinical response for the vast majority of drugs is, as stated earlier, likely to be polygenic in nature, with multiple genes or alleles each contributing a small or very modest effect. The utility of knowing these genes (i.e., to categorically predict the response to a given drug in an individual patient) is therefore far from certain (Meyer, 2004). Moreover, for many drugs, nonbiological factors, including environmental factors (e.g., diet, exercise) that vary over time, may contribute as much or more
than genes to the ultimate effect of a drug. These caveats notwithstanding, it is highly likely that a range of predictive tools will prove invaluable in tailoring therapies to individual patients or subpopulations of patients in the future (The Royal Society, 2005). While the choice of the drug itself is essential, the dose, timing, and especially the duration of treatment are often critical in determining the ultimate health benefit for the patient. Thus, the broad concept of tailoring also includes various approaches to ensuring adequate compliance or adherence, including the use of biomarkers to assess the degree of drug efficacy (or lack thereof) and/or whether the patient is actually compliant with his or her treatment regimen so as to achieve optimal health outcomes. Again, in the real world, often in sharp contrast to the clinical trials required to establish safety and efficacy in the first place, such factors will in good measure determine the effectiveness and ultimate health outcome for any biopharmaceutical.

Impact of Tailored Therapies on Drug Development and Comparative Effectiveness

Tailoring therapies to the patients who will most benefit from them could improve R&D productivity by impacting the three important productivity levers (i.e., cost, time lines, and attrition). For example, if one can identify a priori that the target or pathway under study is directly related to an important clinical outcome for at least a subgroup of patients with a given disease, then the "drug" can be tailored to impact that pathway, and the attrition associated with drug candidates operating through that pathway should be reduced substantially.
Moreover, if a subgroup of patients with any given disease or syndrome who are most likely to respond to a given drug can be identified using a biomarker, then theoretically the number of patients (and thus the expense and cycle time) needed to demonstrate a clinically meaningful impact on efficacy and/or safety in late-stage clinical trials can also be reduced. We have used modeling to understand the relationship between response rate (relative to a placebo or a comparator) and sample size for clinical trials and have found that the use of a biomarker that increases drug response rates only modestly (20-30 percent) could dramatically reduce the number of patients required for late-stage clinical trials. This will not only reduce the costs of expensive late-stage clinical trials, but also decrease the number of patients exposed to a drug that is unlikely to bring them benefit. Biomarkers can also be used to avoid exposing patients who are most likely to have a serious adverse event or side effect (e.g., immunogenicity biomarkers for bioproducts). Moreover, attrition rates resulting from type II errors (false-negative studies of active drug versus placebo or active comparators) will
be reduced by eliminating those patients who are unlikely to respond to a given drug and who thus reduce the statistical power of (add "noise" to) any clinical study. Ideally, such biomarkers could also be used to stratify patients once the drug is approved and marketed. This, of course, is already the case with the targeted cancer agents cited above and, in our view, will eventually be the "rule not the exception" for the majority of drugs across the continuum of tailored therapies. Consequently, Lilly and other biopharmaceutical companies are employing biomarker strategies for virtually all drug candidates early in their development, first to help determine whether these drugs prove safe and efficacious, preferably in phase I or II (i.e., to reduce late-stage phase III attrition), and then eventually to potentially stratify patient populations once the drug reaches the market. Lilly anticipates that some of these biomarkers will also eventually be validated and used as companion diagnostic or prognostic tests to increase the predictability of a beneficial response and to ensure the effectiveness of a given drug in real-world clinical settings. If successful, such an approach will dramatically increase the therapeutic benefit, and thus the value proposition, afforded by biopharmaceuticals in the treatment and management of disease.

In parallel with efforts focused on identifying the "right patient, right dose, and right time" for therapeutic intervention, it is imperative to utilize the principles of tailored therapeutics to improve relevant patient outcomes and to establish comparative effectiveness among all treatment options. An equal effort must be focused on understanding which outcomes are relevant and value-added for patients, either at an individual or at a population level.
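The sample-size leverage from biomarker enrichment described above can be made concrete with a standard two-proportion power calculation. The sketch below is illustrative only: the response rates are invented for the example and are not drawn from the authors' actual modeling. It uses the common normal-approximation formula for comparing two independent proportions at alpha = 0.05 (two-sided) and 80 percent power.

```python
import math

def per_arm_n(p1, p2, z_alpha=1.96, z_beta=0.8416):
    """Approximate per-arm sample size for a two-sided comparison of two
    independent response proportions (defaults: alpha = 0.05, power = 0.80)."""
    pbar = (p1 + p2) / 2
    num = (z_alpha * math.sqrt(2 * pbar * (1 - pbar))
           + z_beta * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return math.ceil(num / (p1 - p2) ** 2)

# Unselected population: 50% responders on the drug vs. 40% on comparator.
n_all = per_arm_n(0.50, 0.40)

# Biomarker-enriched subgroup: the drug's responder rate rises to 65%
# (a 30% relative increase), while the comparator stays at 40%.
n_enriched = per_arm_n(0.65, 0.40)

print(f"per-arm n, unselected: {n_all}; enriched: {n_enriched}")
# Enrichment shrinks the required per-arm enrollment severalfold,
# because required n scales roughly with 1 / (p1 - p2)^2.
```

Here a 30 percent relative increase in responder rate cuts the required per-arm enrollment by more than a factor of five, which is the sense in which a "modest" biomarker effect can dramatically reduce late-stage trial size and cost.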
Historically, much of the biopharmaceutical industry's focus in this regard has been on the evaluation of clinical trial end points defined predominantly by the regulatory requirements to gain marketing approval. Although important, these end points are often far removed from the outcome measures that are meaningful to patients, providers, and payers. Such examples might include the distinction between the improvement in positive and negative symptoms observed in schizophrenic patients treated with antipsychotic drugs in pivotal clinical trials and the measurement of more valuable "functional-based" outcomes, such as whether the patient can maintain an independent living arrangement or maintain employment. If the biopharmaceutical industry is to deliver valuable medicines in the future, there needs to be increased collaboration across healthcare stakeholders to evaluate and "clinically validate" some of these important functional outcome measures so that they can be effectively incorporated into the development of new therapeutics, preferably even before approval and launch. Comparative effectiveness studies and their eventual adoption by providers and payers will thus need to consider all relevant and meaningful health outcomes. Nonetheless, the tools currently being developed in support of tailored
therapies, if applied appropriately, could allow for the design of comparative effectiveness (Califf, 2004) studies that consider the biological (as well as nonbiological) substrates and heterogeneity of drug response, allowing for meaningful comparisons between drugs, or between drug and non-drug therapies, in subgroups of patients who are more likely to benefit from their use, as well as avoiding treatments (including drugs) of limited effectiveness. Only in such a setting, where true confounders of outcome (such as those we have highlighted above) are recognized, fully understood, and taken into consideration, can comparative effectiveness assessments of biopharmaceuticals be truly informative and meaningful.

REFERENCES

AHRQ (Agency for Healthcare Research and Quality). 2007. Registries for evaluating patient outcomes: A user's guide. http://effectivehealthcare.ahrq.gov/repFiles/PatOutcomes.pdf (accessed October 8, 2007).
Al-Khatib, S. M., K. J. Anstrom, E. L. Eisenstein, E. D. Peterson, J. G. Jollis, D. B. Mark, Y. Li, C. M. O'Connor, L. K. Shaw, and R. M. Califf. 2005. Clinical and economic implications of the multicenter automatic defibrillator implantation trial-II. Annals of Internal Medicine 142(8):593-600.
Alexander, K. P., E. D. Peterson, C. B. Granger, C. Casas, F. Van de Werf, P. W. Armstrong, A. Guerci, E. J. Topol, and R. M. Califf. 1998. Potential impact of evidence-based medicine in acute coronary syndromes: Insights from GUSTO-IIb. Global Use of Strategies to Open Occluded Arteries in Acute Coronary Syndromes Trial. Journal of the American College of Cardiology 32:2023-2030.
Alexander, K. P., A. Y. Chen, M. T. Roe, L. K. Newby, C. M. Gibson, N. M. Allen-LaPointe, C. Pollack, W. B. Gibler, E. M. Ohman, and E. D. Peterson. 2005. Excess dosing of antiplatelet and antithrombin agents in the treatment of non-ST-segment elevation acute coronary syndromes. JAMA 294(24):3108-3116.
American College of Cardiology. 2007. National cardiac data registries (NCDR). http://www.accncdr.com/WebNCDR/Common/ (accessed October 8, 2007).
American Heart Association. 2007. Get with the guidelines (GWTG). http://www.americanheart.org/presenter.jhtml?identifier=1165 (accessed October 8, 2007).
Balas, E. A. 2001. Information systems can prevent errors and improve quality. Journal of the American Medical Informatics Association 8(4):398-399.
Blomkalns, A. L., A. Y. Chen, J. S. Hochman, E. D. Peterson, K. Trynosky, D. B. Diercks, G. X. Brogan, Jr., W. E. Boden, M. T. Roe, E. M. Ohman, W. B. Gibler, and L. K. Newby. 2005. Gender disparities in the diagnosis and treatment of non-ST-segment elevation acute coronary syndromes: Large-scale observations from the CRUSADE (Can rapid risk stratification of unstable angina patients suppress adverse outcomes with early implementation of the American College of Cardiology/American Heart Association guidelines) national quality improvement initiative. Journal of the American College of Cardiology 45(6):832-837.
Califf, R. M. 2004. Defining the balance of risk and benefit in the era of genomics and proteomics. Health Affairs 23(1):77-87.
Califf, R. M., and D. L. DeMets. 2002. Principles from clinical trials relevant to clinical practice: Part I. Circulation 106(8):1015-1021.
Cepeda, M. S., R. Boston, J. T. Farrar, and B. L. Strom. 2003. Comparison of logistic regression versus propensity score when the number of events is low and there are multiple confounders. American Journal of Epidemiology 158(3):280-287.
CMS (Centers for Medicare and Medicaid Services). 2007. Implantable cardioverter device (ICD) registry. http://www.cms.hhs.gov/MedicareApprovedFacilitie/04_ICDregistry.asp (accessed October 8, 2007).
Damani, S. B., and E. J. Topol. 2007. Future use of genomics in coronary artery disease. Journal of the American College of Cardiology 50(20):1933-1940.
Eisenstein, E. L., K. J. Anstrom, D. F. Kong, L. K. Shaw, R. H. Tuttle, D. B. Mark, J. M. Kramer, R. A. Harrington, D. B. Matchar, D. E. Kandzari, E. D. Peterson, K. A. Schulman, and R. M. Califf. 2007. Clopidogrel use and long-term clinical outcomes after drug-eluting stent implantation. JAMA 297:E1-E10.
Evans, W. E., and H. L. McLeod. 2003. Pharmacogenomics: Drug disposition, drug targets, and side effects. New England Journal of Medicine 348(6):538-549.
Ferguson, T. B., Jr., E. D. Peterson, L. P. Coombs, M. Eiken, M. Carey, F. L. Grover, and E. R. DeLong. 2003. Use of continuous quality improvement to increase use of process measures in patients undergoing coronary artery bypass graft surgery: A randomized controlled trial. JAMA 290(1):49-56.
Higashi, M. K., D. L. Veenstra, L. M. Kondo, A. K. Wittkowsky, S. L. Srinouanprachanh, F. M. Farin, and A. E. Rettie. 2002. Association between CYP2C9 genetic variants and anticoagulation-related outcomes during warfarin therapy. JAMA 287(13):1690-1698.
Kurth, T., A. M. Walker, R. J. Glynn, K. A. Chan, J. M. Gaziano, K. Berger, and J. M. Robins. 2006. Results of multivariable logistic regression, propensity matching, propensity adjustment, and propensity-based weighting under conditions of nonuniform effect. American Journal of Epidemiology 163(3):262-270.
Lazarou, J., B. H. Pomeranz, and P. N. Corey. 1998. Incidence of adverse drug reactions in hospitalized patients: A meta-analysis of prospective studies. JAMA 279(15):1200-1205.
Mark, D. B., M. A. Hlatky, R. M. Califf, C. D. Naylor, K. L. Lee, P. W. Armstrong, G. I. Barbash, H. White, M. L. Simoons, C. L. Nelson, N. E. Clapp-Channing, J. D. Knight, F. E. Harrell, Jr., J. Simes, and E. J. Topol. 1995. Cost effectiveness of thrombolytic therapy with tissue plasminogen activator as compared with streptokinase for acute myocardial infarction. New England Journal of Medicine 332(21):1418-1424.
Massachusetts Data Analysis Center. 2007. http://www.massdac.org/ (accessed October 8, 2007).
Mathieu, M. P., ed. 2007. Parexel's pharmaceutical R&D statistical sourcebook. Boston, MA: Barnett International.
Meyer, U. A. 2004. Pharmacogenetics: Five decades of therapeutic lessons from genetic diversity. Nature Reviews Genetics 5(9):669-676.
National Cancer Institute. 2007. Surveillance, epidemiology, and end results (SEER) program. http://seer.cancer.gov (accessed October 8, 2007).
Need, A. C., A. G. Motulsky, and D. B. Goldstein. 2005. Priorities and standards in pharmacogenetic research. Nature Genetics 37(7):671-681.
Ness, R. B. 2007. Influence of the HIPAA privacy rule on health research. JAMA 298(18):2164-2170.
O'Shea, J. C., J. M. Kramer, R. M. Califf, and E. D. Peterson. 2004. Part I: Identifying holes in the safety net. American Heart Journal 147(6):977-984.
Peterson, E. D., C. V. Pollack, Jr., M. T. Roe, L. S. Parsons, K. A. Littrell, J. G. Canto, and H. V. Barron. 2003. Early use of glycoprotein IIb/IIIa inhibitors in non-ST elevation acute myocardial infarction: Observations from the national registry of myocardial infarction 4. Journal of the American College of Cardiology 42(1):45-53.
Peterson, E. D., J. W. Hirshfeld, Jr., T. B. Ferguson, J. M. Kramer, R. M. Califf, and L. G. Kessler. 2004. Part II: Sealing holes in the safety net. American Heart Journal 147(6):985-990.
Peterson, E. D., A. Y. Chen, K. P. Alexander, N. M. Allen LaPointe, E. S. Fraulo, L. K. Newby, M. T. Roe, W. B. Gibler, and E. M. Ohman. 2006. The association between hospital guideline adherence, dosing safety, and patient outcomes: Results from the CRUSADE quality improvement initiative. Journal of the American College of Cardiology 47(4):255A.
Pryor, D. B., R. M. Califf, F. E. Harrell, Jr., M. A. Hlatky, K. L. Lee, D. B. Mark, and R. A. Rosati. 1985. Clinical data bases: Accomplishments and unrealized potential. Medical Care 23(5):623-647.
Rogers, W. J., J. G. Canto, C. T. Lambrew, A. J. Tiefenbrunn, B. Kinkaid, D. A. Shoultz, P. D. Frederick, and N. Every. 2000. Temporal trends in the treatment of over 1.5 million patients with myocardial infarction in the U.S. from 1990 through 1999: The national registry of myocardial infarction 1, 2, and 3. Journal of the American College of Cardiology 36:2056-2063.
The Royal Society. 2005. Personalised medicine: Hopes and realities. London, UK: Publishing Section of the Royal Society.
Society of Thoracic Surgeons. 2007. National Cardiac Database (NCD). http://www.sts.org/sections/stsnationaldatabase/ (accessed October 8, 2007).
Sonel, A. F., C. B. Good, J. Mulgund, M. T. Roe, W. B. Gibler, S. C. Smith, Jr., M. G. Cohen, C. V. Pollack, Jr., E. M. Ohman, and E. D. Peterson. 2005. Racial variations in treatment and outcomes of black and white patients with high-risk non-ST elevation acute coronary syndromes: Insights from CRUSADE (Can rapid risk stratification of unstable angina patients suppress adverse outcomes with early implementation of the ACC/AHA guidelines?). Circulation 111(10):1225-1232.
Spear, B. B., M. Heath-Chiozzi, and J. Huff. 2001. Clinical application of pharmacogenetics. Trends in Molecular Medicine 7(5):201-204.
Stukel, T. A., E. S. Fisher, D. E. Wennberg, D. A. Alter, D. J. Gottlieb, and M. J. Vermeulen. 2007. Analysis of observational studies in the presence of treatment selection bias: Effects of invasive cardiac management on AMI survival using propensity score and instrumental variable methods. JAMA 297(3):278-285.
Tufts Center for the Study of Drug Development. 2006. http://csdd.tufts.edu/ (accessed July 11, 2008).
Tunis, S. R., D. B. Stryer, and C. M. Clancy. 2003. Practical clinical trials: Increasing the value of clinical research for decision making in clinical and health policy. JAMA 290(12):1624-1632.
United Healthcare. 2007. Cardiac programs. https://www.unitedhealthcareonline.com/b2c/CmaAction.do?channelId=d3d03d7872bd4110VgnVCM1000007740dc0a_&searchStr=ACC (accessed October 8, 2007).
U.S. Department of Health and Human Services. 2007. Summary of the HIPAA privacy rule. http://www.hhs.gov/ocr/privacysummary.pdf (accessed October 8, 2007).
Vandenbroucke, J. P., E. von Elm, D. G. Altman, P. C. Gotzsche, C. D. Mulrow, S. J. Pocock, C. Poole, J. J. Schlesselman, and M. Egger. 2007. Strengthening the reporting of observational studies in epidemiology (STROBE): Explanation and elaboration. Annals of Internal Medicine 147(8):W163-W194.
Wilensky, G. R. 2006. Developing a center for comparative effectiveness information. Health Affairs (Millwood) 25(6):w572-w585.