6
Transforming the Speed and Reliability of New Evidence

INTRODUCTION

The medical profession has long viewed randomized controlled trials (RCTs) as the best available evidence for determining whether specific medical interventions work. However, as previous chapters suggest, the speed and complexity with which new medical interventions and scientific knowledge are being developed often make RCTs difficult or even impossible to conduct. The capacity of healthcare informatics to collect, analyze, and compare data across the full variability of health care is promising. Many healthcare practitioners are looking toward electronic medical records (EMRs) and clinical data registries as new sources of evidence because the information would be instantly accessible, include a broad cross section of the general population, and offer important longitudinal data often lacking in RCTs. In the area of drug development, a combination of regulatory and market pressures is making new sources of information even more critical. This chapter examines how electronic medical records and clinical data registries could be used to expand the evidence base in many areas, as well as the unique problems facing pharmaceutical companies as they begin to develop individually tailored medicines.

In his presentation, George C. Halvorson identifies many areas in which EMRs could greatly enrich research. For instance, massive data sets could be built to support structured clinical trials and to track the longitudinal consequences of medical interventions. The data could also be used in new ways, such as finding unforeseen correlations. Health information technology can provide the large data sets, longitudinal data, and instant data that will allow researchers to make the kinds of breakthroughs needed in the coming decades.



The National Academies | 500 Fifth St. N.W. | Washington, D.C. 20001
Copyright © National Academy of Sciences. All rights reserved.





Eric D. Peterson notes that provider-led efforts to develop data registries could capture clinical information at major points and allow patients to be tracked over the long term. The data could also be used to generate new evidence and drive it into clinical practice more quickly. Professional societies such as the Society of Thoracic Surgeons and the American Heart Association, along with the American College of Cardiology National Cardiovascular Data Registry, maintain rich data sets on patients with coronary disease, heart failure, and stroke. These data registries could capture standard data elements that could be linked, allowing cross-sectional and longitudinal information to be gathered from insurance claims or laboratory and pharmacy databases. The information could be used to track diseases, treatments, and outcomes.

In his paper, Steven M. Paul identifies the challenges that pharmaceutical companies face in using evidence to develop drugs that are tailored to individuals. Most pharmaceutical products today are developed as one-size-fits-all therapies, yet only about 50 percent of patients respond to any given drug therapy. A few drugs have been developed that are literally targeted to the molecular underpinnings of disease. However, it will be difficult to develop such drugs for more complex and common diseases such as diabetes. In addition to the technical and scientific challenges that drug companies face, issues such as shorter patent lives of drugs, the slow Food and Drug Administration (FDA) approval process, and a lack of new molecules entering the pipeline are making the development of tailored drugs more appealing.
The ability to use biomarkers in developing drugs has been helpful in reducing drug approval costs and shortening the process, but sustaining profitability becomes challenging when only a small subset of patients benefits from a drug.

ELECTRONIC MEDICAL RECORDS AND THE PROSPECT OF REAL-TIME EVIDENCE DEVELOPMENT

George C. Halvorson, Kaiser Permanente

The importance of the development and adoption of EMRs to functional improvements in the healthcare system and in patient health has been widely supported. This discussion focuses on the use of EMRs in medical research. Hopefully, there will be something in these comments that will be new or at least useful to some readers.

Kaiser Permanente (Kaiser) is currently spending about $4 billion putting its own EMRs and physician support tools in place. One of the major reasons we are doing this entire EMR project is to facilitate medical research. We are doing it both to deliver better patient care and to do some serious medical research. We are committed to that agenda. However, I am not speaking just from Kaiser's perspective or our version of EMRs. Overall, as all caregivers manage to get data transferred from paper files into electronic records, I strongly believe that EMRs should and will revolutionize medical research. Done well, compiled appropriately, and supported appropriately, EMRs should open up a Golden Age of healthcare research.

Think about the key advantages of the EMR for medical research: instant, comprehensive data. Instead of researchers spending weeks, months, or years gathering pieces of data and pulling together data sets, EMRs provide instant access to comprehensive data in real time. All patient medical information will be available electronically, and true longitudinal data will be possible. Instead of data that are limited to the very narrow time frame of each study, if the database is constructed appropriately, data will go back years into history and extend indefinitely into the future.

Current medical research is built around very small numbers of patients (a couple of thousand patients here, a couple of thousand there), each group in a very finite study. Using EMRs, the opportunity exists to have instant access to massive data sets comprising millions and millions of patients' data. There is also great flexibility in data utilization with electronic data, and there will be a growing ability to use the data in various ways. With electronic data, studies can be reconfigured in ways that can't even be dreamt of when using a paper-based research system.

So how can this resource be used? In many ways. It will be ideal for highly structured clinical trials.
In particular, classic clinical trials can be far better supported if the data are electronic. Also, electronic data could help with extended follow-up work for issues such as post-market tracking, and EMRs could be used to track progress and care results into the future. For example, if a patient has a stent put in, EMRs can help determine the consequences of that action 3 years, 5 years, or 15 years out, an impossible task using the time-limited, population-limited, classic paper-based research approach.

Population health analyses can be carried out in whole new ways, with the prospect of identifying the impact of various kinds of care approaches on broad populations. Unforeseen correlations will increasingly be detected, as it becomes possible to sort through electronic data sets and troll for correlations of age, ethnicity, or diabetes, for example, with other conditions. That type of statistical correlation searching and research cannot be done in any meaningful way with paper, but it can be done relatively easily if you put together the right electronic database. Just-in-time learning and treatment searches also become possible with an EMR. A caregiver can identify what works for a given condition and what the most current patterns of treatment happen to be.

There are all kinds of levels of electronic research that can be done in the context of current science. In the next wave of exciting research, DNA correlations will be commonplace, and it will be the norm to check a patient's genetics and reach some conclusions about patient care. Genomic and genetics research is developing in some exciting ways, as highlighted in several papers in this publication, and with electronic data, it will be possible to carry out this research much more broadly and much more effectively. Currently, such a project is under way at Kaiser, and a DNA database is being developed to support our research efforts.

Kaiser has conducted research using its electronic databases that illustrates the potential of EMR data. One analysis, conducted by sorting through our database, revealed that Vioxx was causing problems for a number of Kaiser patients. This original identification, conducted using a level 1 electronic database, was enough to trigger an alarm bell and lead to the initiation of an assessment process. However, a level 1 database can only indicate that a percentage of patients are being harmed; the specifics of gender, age, ethnicity, and other conditions remained a mystery. Our new full EMR level 2 database, which is going into place now, will enable the additional step of identifying exactly which patients are harmed and which are benefited by a drug.

Kaiser has also initiated similar data work relative to both hormone replacement therapy and the follow-up care of patients who had heart stents. We identified some problems with particular stents. Again, this is the kind of results-based longitudinal data that can come from an EMR quickly and easily and be used to reach conclusions about approaches to care.
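The kind of correlation "trolling" described above can be sketched in a few lines. Everything in this example is hypothetical: the record layout and condition names are invented for illustration, and a real analysis over millions of EMRs would require proper statistics (confounder adjustment, multiple-comparison control) rather than raw co-occurrence rates.

```python
from collections import Counter

# Hypothetical, tiny stand-in for an EMR extract: one dict per patient,
# listing coded conditions. A real level 1/level 2 database would hold
# millions of rows with demographics, labs, and medications.
records = [
    {"age": 67, "conditions": {"diabetes", "hypertension"}},
    {"age": 54, "conditions": {"diabetes", "retinopathy"}},
    {"age": 71, "conditions": {"diabetes", "hypertension", "retinopathy"}},
    {"age": 45, "conditions": {"asthma"}},
    {"age": 62, "conditions": {"hypertension"}},
]

def cooccurrence_rate(records, index_condition):
    """Among patients with index_condition, how often does each other
    condition appear? A naive screen, not causal evidence."""
    cohort = [r for r in records if index_condition in r["conditions"]]
    counts = Counter()
    for r in cohort:
        counts.update(r["conditions"] - {index_condition})
    return {cond: n / len(cohort) for cond, n in counts.items()}

rates = cooccurrence_rate(records, "diabetes")
# Among the 3 hypothetical diabetes patients, hypertension and
# retinopathy each appear in 2, so both rates are 2/3.
```

A screen like this only flags candidate associations for follow-up; the chapter's point is that even this first filtering step is impractical with paper records.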
The basic, rudimentary level 1 database provides one set of conclusions, but level 2 will allow researchers to drill down through the various layers of data and reach additional findings and conclusions.

What does this mean for electronic data and EMRs in the future? Anyone who is going down the EMR pathway should begin with the end in mind and design data sets to support clinical trials. As EMRs are designed, medical research must be identified as one of the outcomes of the process so that the necessary data fields and data sets are included for that purpose. Likewise, data sets need to be designed to facilitate analysis of outcomes and care patterns. For example, relevant demographics should be built into the data set to enable evaluation of race, ethnicity, gender, economic status, and geography. From the outset, these types of capabilities must be built into the data set to allow that level of research over time.

Kaiser has spent significant time on this particular issue. We started with a dozen different ethnicities and then expanded to a couple of hundred. We are now working backward to try to figure out what a workable number is; 200 is too many. A broad category such as "Asian American" obscures obvious differences among Korean Americans, Japanese Americans, and Chinese Americans. One category is not sufficient, yet a dozen is unmanageable. That is still a work in progress. The goal is to sort through all the data sets so you can say that these are relevant differences relative to ethnicity, behavior, and culture. Knowing that, we need to decide where to draw the definitional ethnic and racial line.

Some of these issues are going to be on a learning curve for a while, and they must be addressed as we move forward. Issues such as economic status, geography, and gender will all have to be part of our electronic research data set. Then, as a major next step, the data strategy should incorporate genetic components appropriately into the research agenda.

Obviously, only a computer can do some of this work. It cannot be done effectively with paper files or stand-alone data sets. The computer is needed to create large data sets, longitudinal data, and instant data. If this work is done well, it could usher in the Golden Age of medical research.

Having said that, data must be widely available in order to truly reform health care in America. The key to real reform will be to focus the attention of the country on major and very specific healthcare opportunities. The standard model of reform right now, from a care delivery perspective, is highly disorganized. Our current approach is to do many separate and isolated projects all over the country and then hope that the cumulative impact of those local projects somehow magically adds up to better care. That model is not likely to work.
A second model proposed by quite a few people is to simply jump to conclusions about what might work and then micromanage bits and pieces of the care delivery process from the inside: to recruit more primary care doctors into local practice, for example, hoping that more primary care doctors will somehow result, at some later time, in a better set of healthcare outcomes for patients. That kind of reform model also depends on some categories of magical thinking and is unlikely to achieve real systematic reform.

Others think that financial approaches are needed and believe that micromanaging bits and pieces of caregivers' incentives will somehow result in improved health care. That model is also currently not well organized or focused enough to work.

What is likely to actually achieve real reform would be for the nation to take a hard look at the fact that five medical conditions drive more than half of our healthcare costs. Americans could greatly improve the care infrastructure for patients with those five conditions, which should be viewed as a huge opportunity. If we focus on patients with those conditions and then work backward to align benefit sets, payment models, structure, focus, attention, tools, data reporting, community priorities, and health education on those five conditions, the cost trajectory of American health care could be dramatically changed. Care could improve, and real and logistical pieces could be set in place that are directly aligned with the right outcome of real care reform.

Healthcare reform in America has been approached backward: from the bottom up, starting with local bits and pieces. That whole agenda needs to be turned around. It is necessary to set a common goal, a practical and reasonable goal, and then to work backward, changing the total infrastructure as needed to align the functional system of care with that goal. Building the right electronic data sets and making medical research a direct tool of medical reform could result in massive improvements in healthcare delivery. What is most acutely needed is focus, followed by the development of these tools.

I will end by saying, "Be well and if you are not well, be careful."

RESEARCH METHODS TO SPEED THE DEVELOPMENT OF BETTER EVIDENCE—THE REGISTRIES EXAMPLE

Eric D. Peterson, Duke University

The cycle of evidence development and adoption in medicine is far from ideal. Many current-day care decisions must be made in the absence of empirical evidence, and where evidence exists, it is often incomplete. While RCTs have become the gold standard for therapeutic evaluation, such studies often measure treatment efficacy only through short-term surrogate markers rather than more meaningful long-term clinical events. Randomized trials tend to be carried out predominantly with younger, healthy patients who are treated under protocol conditions by highly trained specialists at leading medical centers. Thus, a full measure of a therapy's safety and effectiveness is realized only after it reaches the market and is used in real-world patients by real-world caregivers (Califf and DeMets, 2002).
Even when good evidence is available, its uptake by clinicians is slow and incomplete, marred by frequent errors of omission and commission (Balas, 2001).

Large-scale, provider-led clinical registries offer the potential both to augment medical evidence development and to speed evidence adoption into practice. A provider-led clinical registry can be defined as a clinician-organized network for collecting detailed patient information in a uniform fashion for a given population, often defined by a particular disease or medical treatment, and used for addressing research, quality assessment, and/or policy purposes. The concept for these registries can be traced back to Eugene Stead, the first chairman of the Department of Medicine at Duke University. Forty years ago this year, he outlined the idea of a "living textbook of medicine," exhorting physicians to routinely collect and record data on the treatment and outcomes of their patients in order to better care for those in the future (Pryor et al., 1985). The Duke Database for Cardiovascular Disease, the world's first longitudinal cardiovascular data registry, was spawned by these ideas and lives on in a number of national, collaborative, provider-led clinical data registries.

This paper outlines the desired operating and functional characteristics of an ideal clinical registry. We then take a more in-depth look at the leading edge of clinical registries as exemplified by those in cardiovascular disease. Through these registries we explore their current and planned capacities, as well as their many applications for evidence development and dissemination. We end by discussing the challenges and opportunities such registry efforts face moving forward.

Characteristics of Ideal Clinical Data Registries

In a perfect world, data registries would accurately capture detailed clinical information at "key points and events" in a patient's life. Such data should be linkable within and among data sources, such that one could construct a longitudinal record of a patient's care and outcomes. For research purposes, these clinical data registries could also be supplemented when needed with other specialized information such as genomic, biomarker, and/or imaging data. This ideal registry should be readily accessible to researchers for scientific discovery; to outcomes researchers for studying healthcare delivery; and to frontline clinicians, giving them timely feedback on their care processes and outcomes to stimulate quality improvement.
Clinical registries should also have several important functionalities, which have recently been summarized in an Agency for Healthcare Research and Quality supported Users Guide to Registries Evaluating Patient Outcomes. This document outlines good clinical practice policies for establishing a new registry or evaluating an existing one, including its design and purpose, data sources, data elements, ownership and privacy issues, patient and provider recruitment, data collection and quality processes, and analysis and interpretation (AHRQ, 2007). Briefly, an ideal clinical registry should enroll representative patients, providers, and settings; collect information using standardized data elements and definitions; contain patient identifiers that allow linking of encounter records within and among data registries; have data quality and auditing systems in place to promote the accuracy and completeness of data entered; be flexible enough to allow rapid addition or deletion of variables to meet ever-changing clinical and research needs; be analyzed by using state-of-the-art methodologies (Vandenbroucke et al., 2007); and be actionable, integrated with quality assessment and improvement efforts.

Size and Scope of Existing Cardiovascular Provider-Led Registries

While the characteristics and features of an ideal registry may seem futuristic, the majority of these features are now present or planned for by the major cardiovascular provider-led registries. Table 6-1 provides a brief description of the Society of Thoracic Surgeons (STS) National Cardiac Database, the American College of Cardiology (ACC) National Cardiovascular Data Registry (NCDR), and the American Heart Association (AHA) Get with the Guidelines programs (American College of Cardiology, 2007; American Heart Association, 2007; Society of Thoracic Surgeons, 2007). As demonstrated, the size and scope of these programs are quite substantial.

TABLE 6-1 Selected Provider-Led Cardiovascular Clinical Data Registries

Registry              Years of Data   No. of Sites   No. of Patients or Procedures
STS
  CABG                1990-2007       1,000          2,768,688
  Valve               1990-2007       1,000          709,088
  Thoracic            1999-2006       59             49,496
  Congenital heart    1998-2006       59             84,072
AHA
  CAD                 2000-2007       594            426,414
  Stroke              2001-2007       1,040          494,815
  Heart failure       2005-2007       397            130,489
ACC-NCDR
  Cath/PCI            1997-2007       971            Cath: 4,113,911; PCI: 2,003,719
  ACS                 2007            295            37,632
  ICD                 2005-2007       1,490          179,572

NOTE: ACS = acute coronary syndrome; CABG = isolated coronary artery bypass graft surgery; CAD = admissions for coronary artery disease; Cath = diagnostic coronary angiography; PCI = percutaneous coronary intervention; Valve = any valve procedure.

Current participation in these cardiovascular registries is voluntary, yet a growing number of external forces are beginning to provide strong incentives for clinician engagement. For example, one large healthcare insurer encourages registry participation by making involvement a condition for "premium provider status" (United Healthcare, 2007). Certain states have begun requiring registry participation as part of state-based certificate of need and quality assurance programs (Massachusetts Data Analysis Center, 2007). Most recently, the Centers for Medicare and Medicaid Services (CMS) facilitated complete "voluntary" participation in an Implantable Cardioverter Defibrillator (ICD) Registry by requiring it as a condition for payment (CMS, 2007).

The scope of conditions and procedures covered by such registries is also rapidly expanding. For instance, within the past year, the ACC NCDR has launched three new registry efforts in ICD, carotid stenting, and acute coronary syndromes, and it is planning several more within the next few years, including congenital heart disease, cardiovascular imaging, and ambulatory cardiac care. The latter exemplifies the trend for many provider-led registries to expand beyond in-hospital settings and follow cardiac patients across the care continuum.

Modernization of Cardiovascular Provider-Led Registry Operations

Provider-led registries are also changing as we enter the electronic age of medical care. In particular, progress in five key areas is promoting the potential for more integrated and cross-purpose clinical registries: the standardization of data elements and definitions; the clarification of patient privacy rules; the development of new data harvesting technologies; the creation of longitudinally linked hybrid databases; and the growing collaboration among professional societies, insurers, and government regulators.

Data Standards Efforts

While the development of standards for medical terminology has traditionally been elusive, cardiovascular clinical registries are now making great progress toward this goal. The AHA and ACC created a Data Standards Committee to develop cardiovascular (CV) elements and definitions that are used in all their society-based guidelines and registries. Similarly, the STS and ACC have worked to harmonize the nomenclature for their respective cardiac revascularization registries.
Most recently, the National Heart, Lung, and Blood Institute sponsored a 2-day retreat to further institutionalize these standards across clinical trials and registries (U.S. Department of Health and Human Services, 2007).

Clarification of Patient Privacy Rules

In 1996, Congress enacted the Health Insurance Portability and Accountability Act (HIPAA). While HIPAA was designed to protect against misuse of patients' health information, (mis)interpretation of this complex ruling has created significant challenges for registries and clinical research in general (Ness, 2007). More recently, the pendulum of HIPAA concerns appears to be swinging toward a more neutral position. Briefly, provider-led registries are now seen as compliant with HIPAA when they use a business associate agreement with registry participants that permits data gathering and sharing for quality assurance purposes (Society of Thoracic Surgeons, 2007). Aggregated data within the warehouse can then be "de-identified" and used for research. In this manner, the burden and bias that result from trying to gain informed consent from all patients in a registry can often be avoided (Alexander et al., 1998).

Data Harvesting Advances

Once data are more uniformly collected, it becomes possible to exchange them among various electronic databases. Participants in clinical registries have traditionally entered clinical data using registry-specific software or, more recently, Web-based data capture systems. However, more and more hospitals already capture certain clinical data in the EMR. To capitalize on this, novel data harvesting and warehouse systems are now being developed that will permit providers to seamlessly map existing stored patient information into a given clinical registry, thereby "pre-populating" the registry case report form and limiting redundant data entry. Additionally, data warehouses are moving toward Web-based modular augmentation tools that will allow registries to rapidly collect new clinical information when needed. As such, registries are no longer locked into the usual 3-year or longer delay required for registry database upgrades. Rather, they can now respond nearly instantaneously to a new research, patient safety, or policy issue.

Longitudinal Linked Databases

Registries have traditionally collected cross-sectional information (e.g., in-hospital events) and have had limited functionality to study longitudinal patient outcomes.
Yet longitudinal patient events (including hospitalizations, outpatient visits, and death) are routinely captured and stored in administrative claims databases such as those of Medicare or private insurers. To access this valuable resource, the major CV provider-led registries are all currently working to link their clinical databases with claims sources. In a similar manner, the provider-led registries are also working together to develop a common standard for patient identifiers so as to facilitate cross-registry matching and analysis. These clinical-claims and cross-registry hybrid analytic databases will create unique research and quality improvement tools for future generations.
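The registry-to-claims linkage described above can be illustrated with a minimal sketch. The field names, the salted-hash key scheme, and the shared secret below are assumptions invented for this example, not the registries' actual method; real linkage efforts also involve data-use agreements, probabilistic matching for imperfect identifiers, and formal de-identification review.

```python
import hashlib

# Hypothetical shared secret agreed on by the linking parties so that raw
# identifiers never need to be exchanged (an assumption for this sketch).
SALT = b"consortium-shared-salt"

def link_key(id_fragment: str, dob: str, sex: str) -> str:
    """Derive a one-way linkage key from quasi-identifiers."""
    raw = f"{id_fragment}|{dob}|{sex}".encode()
    return hashlib.sha256(SALT + raw).hexdigest()

# Toy registry and claims extracts with invented fields.
registry = [{"id4": "1234", "dob": "1950-02-01", "sex": "F", "procedure": "PCI"}]
claims   = [{"id4": "1234", "dob": "1950-02-01", "sex": "F", "readmitted": True}]

# Index claims records by linkage key, then join registry rows to them.
claims_index = {link_key(c["id4"], c["dob"], c["sex"]): c for c in claims}

linked = []
for r in registry:
    match = claims_index.get(link_key(r["id4"], r["dob"], r["sex"]))
    if match:
        # A longitudinal record: in-hospital registry data plus
        # downstream events captured in claims.
        linked.append({**r, "readmitted": match["readmitted"]})
```

The result is the kind of hybrid record the text envisions: a procedure captured at the point of care joined to outcomes (here, a readmission flag) observed later in claims.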

Collaborative Leadership

The progress noted above is greatly facilitated when the major parties all work together. Whereas in the past multiple registries competed to enroll similar patients, the field has recently consolidated, with the goal of creating one national, representative registry for each domain. Additionally, in 2006, the major cardiovascular provider organizations held a series of meetings with healthcare insurers and government agencies that resulted in a commitment by all parties to create the National Consortium for Clinical Databases to promote interregistry cooperation and collaboration.

Applications of Clinical Registries for Evidence Development

There are several means whereby clinical registries can augment evidence development (Box 6-1). These can be grouped into epidemiological investigations and those that specifically evaluate the effectiveness of medical therapeutics.

Epidemiological and Surveillance Studies

Clinical registries, if large, detailed, and representative, can be unique resources for national epidemiological and health services research.

BOX 6-1 Means for Clinical Registries to Support Evidence Development

Epidemiological and Surveillance Studies
• Track disease conditions and medical treatments in community-based, "real-world" settings.
• Conduct large longitudinal genomic studies.
• Conduct post-market evaluation of drugs and devices.
  – Study rare events, late outcomes, and "off-label indications."

Comparative Effectiveness Studies
• Support more efficient randomized clinical trials.
  – Identify patients and investigators; streamline data collection.
• Conduct observational treatment comparisons.
  – Evaluate the generalizability of trial findings in the real world.
  – Examine clinical issues where an RCT is either not possible or not feasible.

To predict the relevant health outcomes afforded by drugs remains a daunting challenge. The rationale for how tailored therapies can potentially impact the discovery and development of biopharmaceuticals, as well as help to define and establish their comparative effectiveness in the marketplace, is outlined briefly below.

Challenges to the Current Drug Development Paradigm

Over the past 50 years, a large number of effective (and safe) medicines1 have been introduced to treat and manage many acute (e.g., infectious diseases, myocardial infarction) and chronic (e.g., hypertension, diabetes) diseases. These drugs have beneficially impacted longevity, contributing to an ever-increasing life span, as well as to quality of life, in both developed and developing countries. Despite these successes, however, and the very significant, virtually unprecedented advances in biomedical research made over the past two to three decades, the number of new drugs approved by the FDA over the past 5 years has decreased dramatically (50 percent fewer drugs than in the previous 5 years). In 2007, for example, only 19 new molecular entities (NMEs), including biologics, were approved by the FDA, the fewest new drugs approved since 1983 (www.fda.gov). This reduction in the introduction of new medicines is all the more troubling when one considers the enormous R&D investments currently made by the biopharmaceutical industry, now estimated to be in excess of $50 billion annually (Mathieu, 2007). In fact, it has been estimated conservatively that each NME costs in excess of $1.5 billion to develop and introduce (Tufts Center for the Study of Drug Development, 2006). Diminished patent life, complicated by tougher regulatory requirements and enormous global pricing pressures, has contributed to concerns about the viability of the current biopharmaceutical business model.
Finally, it is widely expected that the use of generic drugs will increase dramatically over the next decade, given the many scheduled near-term patent expirations. Demonstrating “comparative effectiveness” for a patent-protected drug versus a generic, in addition to monitoring a branded drug’s safety profile in the post-marketing (generic) environment, will require considerably more resources and attention from the healthcare system. Improving R&D productivity remains arguably the most important challenge facing the biopharmaceutical industry. This can be achieved by improving three of the most challenging elements of drug discovery and development: unit costs, cycle time, and, most importantly, attrition. These

1 Medicines are broadly defined to include traditional small-molecule drugs, bioproducts (proteins and peptides), and vaccines.

three dimensions of R&D “productivity” are intimately related to one another, and if each could be improved even modestly, R&D productivity would increase substantially, thus reducing the overall cost of developing a new medicine. A full discussion of R&D productivity is well beyond the scope of this paper. However, the challenges posed by the enormous attrition rates for drug candidates as they move through development must be underscored. Currently, about 50 percent of drugs in phase III (the final and most expensive phase of drug development) fail to make it to market, primarily because of unacceptable benefit-risk profiles. Phase II attrition (the phase in which safety is confirmed and efficacy is first established) is even more daunting: currently, 70 percent of potential new drugs entering phase II do not make it to phase III. Reducing the attrition of drugs in the late phase of development will be essential to improving R&D productivity. The use of biomarkers focused on the early identification of efficacy and/or safety signals, together with markers focused on patient stratification strategies via a tailored therapy approach during late-stage clinical development, has already proven useful in this regard (as discussed below). Importantly, the substantial late-stage attrition that characterizes drug development at present also complicates and confounds the timing and initiation of health outcome and comparative effectiveness studies, an essential component of future drug development and evidence-based medicine.

Tailored Therapies Enable a Paradigm Shift for Drug Development

For a variety of common diseases, only about 50 percent of patients will respond favorably to a given biopharmaceutical agent (Spear et al., 2001). Moreover, such response rates in individual patients are often highly variable in both their magnitude and their duration.
In one sense, when it comes to “customer” expectations, there appears to be an “efficacy gap” for many marketed one-size-fits-all biopharmaceuticals. It is also important to emphasize that even if a patient experiences no (or little) therapeutic benefit from a given drug, he or she is still at risk for potential side effects and/or serious adverse events. Furthermore, several studies have shown that the burden of adverse drug reactions on the healthcare system is high, accounting for considerable mortality, morbidity, and extra cost (Lazarou et al., 1998). Side effects and/or serious adverse events in this context can often relate to the therapy’s being inadvertently prescribed for the wrong patient or at the wrong dose for that patient. In many circumstances, interactions between concomitantly prescribed medicines also contribute heavily to the occurrence or severity of such events (often due to issues of competing or impaired drug metabolism). Most of these drug-drug interactions can be minimized or potentially avoided altogether.

Thus, individual differences in drug response (both good and bad) within the population of patients treated pose obvious challenges to drug development, as well as to the way medicines are used clinically and marketed by manufacturers. Such individual differences in treatment response also make it considerably more challenging to compare the effectiveness of one drug with another in a given class, since the benefit-risk ratio may differ dramatically for each agent (i.e., among subgroups of patients with the same disease). Thus, it is possible, perhaps even likely, that comparative effectiveness studies of drugs, if carried out in large heterogeneous patient populations, may miss subgroups of patients in whom a given drug may actually prove superior with respect to its efficacy, its safety, or both. Identifying such subgroups, especially in real-world situations, will be essential for optimal utilization of any such drug and for establishing meaningful (evidence-based) comparisons between drugs (as well as with nonpharmaceutical interventions, for that matter).

Tailored therapy is an approach to optimizing the benefits and risks of a given drug for particular groups of patients. Tailored therapies exist across a continuum from the least tailored one-size-fits-all biopharmaceutical to the truly targeted therapy. The degree of tailoring possible will depend on a number of factors, such as drug characteristics, underlying disease biology (e.g., genetics), available monitoring tools (e.g., diagnostic or imaging technologies), and a number of environmental variables (e.g., diet, culture). Currently, the most extreme examples of tailoring include a number of highly targeted cancer drugs (e.g., Gleevec, Herceptin) that work directly on the underlying biology or genetic etiology of the cancer itself.
The predictability of a beneficial treatment response with such targeted agents, given that they work on the molecular underpinnings of the disease, is very high. Nonetheless, targeted drugs such as these are still fairly rare and exist at the extreme of the tailoring continuum. The term “personalized medicine” also broadly implies the ability a priori to match a particular therapy to an individual patient, often through pharmacogenomic approaches, which are used either to understand exposure at the individual patient level or to predict and/or measure efficacy or safety. As such, personalized medicines also represent a subset of the range of opportunities within the continuum of tailored therapies. In clinical practice, this type of personalized, pharmacogenomic approach has so far been very rarely applied (Lazarou et al., 1998) despite well-established genetic polymorphisms (e.g., SNPs) and available genotyping methods (Figure 6-1). The reasons for this are manifold, but they include the lack of large prospective studies to evaluate the impact of genetic variation on drug therapy. Most importantly, the vast majority of the more common diseases are undoubtedly genetically complex and polygenic in nature (e.g., diabetes, obesity, hypertension, coronary heart

disease), so whether targeted or more personalized drugs can routinely be developed for these disorders is far from certain (Need et al., 2005).

[FIGURE 6-1 What are we doing differently? A development timeline (preclinical through phase IV comparative studies) mapping new tools: adaptive trial designs, patient enrichment approaches, rolling dose studies, imaging (PET/MRI/CT), target gene SNP profiling, phenotypic or genetic marker efficacy/responder profiling, metabolic profiling (CYP450), DNA and serum banking, bioinformatics (systems/pathway mapping), and disease-based modeling. NOTE: CT = computed tomography; MRI = magnetic resonance imaging; PET = positron emission tomography.]

The concept of tailored therapies is certainly not new. For years, physicians have used biomarkers such as blood pressure or hemoglobin A1c (HbA1c) to monitor the effectiveness of antihypertensive and diabetes drugs, respectively. Compelling health outcome data exist for only a handful of biomarkers that allow physicians (and patients) to know the likely and predictable benefits of a given drug for a given patient. Two notable examples are the reduction in low-density lipoprotein cholesterol and HbA1c resulting from treatment with hydroxymethylglutaryl-coenzyme A reductase inhibitors (statins) and certain antidiabetes medications (e.g., insulin), respectively. Both biomarkers are reliable predictors of beneficial health outcomes (reduced morbidity and mortality) following treatment with these drugs. However, whether these biomarkers will afford the same degree of predictability for other cholesterol-lowering or diabetes medications is far from certain. This rather sobering possibility has recently been emphasized with the use of oral antidiabetic thiazolidinediones and other (non-statin) cholesterol-lowering agents.
The movement toward tailored (or “personalized”) medicine has undoubtedly been accelerated by a whole range of new tools (see Figure 6-1). Some of these tools aid the discovery and development of drug candidates, yet other emerging diagnostic and prognostic tools (e.g., genomics, imaging) will also ultimately benefit healthcare delivery. For example, in discovery, disease state modeling is utilized as a tool to compare new drug candidates with existing medicines in the marketplace. In essence, these models enable the selection of drug candidates that will demonstrate improved health outcomes. Other tools have impacts that span all phases of development. For example, pharmacogenomics, or the ability to define genes or alleles that determine the response to drugs, is an exciting prospect for improving the predictability of tailored therapies. To date, there have been a few notable pharmacogenomic studies, particularly with respect to mutations or polymorphisms in drug-metabolizing enzymes (Evans and McLeod, 2003). These studies have proven highly informative in predicting the benefit, as well as the adverse event profile or liability, of a number of important drugs. One of the best known of these examples relates to the study of cytochrome P (CYP) 2C9 polymorphisms and their relationship to bleeding risk in patients treated with warfarin (Higashi et al., 2002). Research has led to the identification of two common polymorphisms of the CYP2C9 gene that appear to be associated with an increased risk of over-anticoagulation and bleeding events among patients treated with warfarin. Discussions are currently under way at the FDA to consider the inclusion of these pharmacogenomic data in the prescribing information for warfarin, but even in this relatively well established case, there is much debate about the “clinical validity” and utility of the diagnostic test and the applicability of the data for dosing recommendations for warfarin therapy. The focus of tailored therapies is on the predictability of the health outcome afforded by a given drug in an individual patient.
In many circumstances, this may also involve an ability to determine whether there is sufficient exposure to the drug in any given patient to even create the opportunity for a favorable clinical response. One such example is the evaluation of the CYP2D6 genotype in psychiatric patients treated with antidepressants that are substrates of CYP2D6 (Meyer, 2004). Clearly, in this population, genotyping can improve efficacy, prevent adverse drug reactions, and lower the overall cost of therapy. This knowledge has led to the recent, relatively broad adoption of this approach in academic psychiatry units across the United States. Beyond these relatively straightforward examples related to drug metabolism, however, the clinical response for the vast majority of drugs is, as stated earlier, likely to be polygenic in nature, with multiple genes or alleles each contributing a small or very modest effect. The utility, therefore, of knowing these genes (i.e., to categorically predict the response to a given drug in an individual patient) is far from certain (Meyer, 2004). Moreover, for many drugs, nonbiological factors, including environmental factors (e.g., diet, exercise) that vary over time, may contribute as much or more

as genes to the ultimate effect of a drug. These caveats notwithstanding, it is highly likely that a range of predictive tools will prove invaluable in tailoring therapies to individual patients or subpopulations of patients in the future (The Royal Society, 2005). While the choice of the drug itself is essential, the dose, timing, and especially the duration of treatment are often critical in determining the ultimate health benefit for the patient. Thus, the broad concept of tailoring also includes various approaches to ensuring adequate compliance or adherence, including the use of biomarkers to assess the degree of drug efficacy (or lack thereof) and/or whether the patient is actually compliant with his or her treatment regimen to achieve optimal health outcomes. Again, in the real world (often in sharp contrast to the clinical trials required to establish safety and efficacy in the first place), such factors will in good measure determine the effectiveness and ultimate health outcome for any biopharmaceutical.

Impact of Tailored Therapies on Drug Development and Comparative Effectiveness

Tailoring therapies to the patients who will most benefit from them could improve R&D productivity by having an impact on the three important productivity levers (i.e., cost, timelines, attrition). For example, if one can identify a priori that the target or pathway under study is directly related to an important clinical outcome for at least a subgroup of patients with a given disease, then the “drug” can be tailored to impact that pathway, and the attrition associated with drug candidates operating through that pathway should be reduced substantially.
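The compounding effect of the attrition rates cited earlier (roughly 70 percent of drugs entering phase II fail to reach phase III, and about 50 percent of phase III drugs fail to reach the market) can be made concrete with a back-of-envelope sketch; the “improved” scenario below is purely hypothetical and is included only to show how the levers interact:

```python
# Back-of-envelope sketch using the attrition rates cited in the text;
# the "improved" success rates are hypothetical illustration only.
def entrants_per_approval(p_phase2_success, p_phase3_success):
    """Average number of phase II entrants needed per approved drug."""
    return 1 / (p_phase2_success * p_phase3_success)

baseline = entrants_per_approval(0.30, 0.50)  # rates cited in the text
improved = entrants_per_approval(0.40, 0.60)  # hypothetical modest gains

print(f"Baseline: {baseline:.1f} phase II entrants per approval")
print(f"Improved: {improved:.1f} phase II entrants per approval")
```

Because the phase probabilities multiply, even modest gains compound: under these assumed numbers, the candidates needed per approval fall from about 6.7 to about 4.2, which is one way to see why the text calls cost, cycle time, and attrition “intimately related.”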
Moreover, if a subgroup of patients with any given disease or syndrome who are most likely to respond to a given drug can be identified using a biomarker, the number of patients (and thus the expense and cycle time) needed to demonstrate a clinically meaningful impact on efficacy and/or safety in late-stage clinical trials can theoretically also be reduced. We have used modeling to understand the relationship between response rate (relative to a placebo or a comparator) and sample size for clinical trials and have found that the use of a biomarker that increases drug response rates only modestly (20-30 percent) could dramatically reduce the number of patients required for late-stage clinical trials. This will therefore not only reduce the costs of expensive late-stage clinical trials, but also decrease the number of patients exposed to a drug that is unlikely to bring them benefit. Biomarkers can therefore also be used to avoid exposing patients who are most likely to have a serious adverse event or side effect (e.g., immunogenicity biomarkers for bioproducts). Moreover, attrition rates resulting from type II errors (false-negative studies of active drug versus placebo or active comparators) will

be reduced by eliminating those patients who are unlikely to respond to a given drug and who thus reduce the statistical power (add to the “noise”) inherent in any clinical study. Ideally, such biomarkers could also be used to stratify patients once the drug is approved and marketed. This, of course, is already the case with the targeted cancer agents cited above and in our view will eventually be the “rule, not the exception” for the majority of drugs across the continuum of tailored therapies. Consequently, Lilly and other biopharmaceutical companies are employing biomarker strategies for virtually all drug candidates early in their development, first to help determine whether these drugs prove safe and efficacious, preferably in phase I or II (i.e., to reduce late-stage phase III attrition), and then eventually to potentially stratify patient populations once the drug reaches the market. Lilly anticipates that some of these biomarkers will also eventually be validated and used as companion diagnostic or prognostic tests to increase the predictability of a beneficial response and to ensure the effectiveness of a given drug in real-world clinical settings. If successful, such an approach will dramatically increase the therapeutic benefit, and thus the value proposition, afforded by biopharmaceuticals in the treatment and management of disease.

In parallel with efforts focused on identifying the “right patient, right dose, and right time” for therapeutic intervention, it is imperative to utilize the principles of tailored therapeutics to improve relevant patient outcomes and to establish comparative effectiveness among all treatment options. An equal effort must be focused on understanding which outcomes are relevant and value-added for patients, either at an individual or at a population level.
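The sample-size leverage described earlier, in which a biomarker that modestly raises response rates shrinks late-stage trials, can be illustrated with a standard two-proportion calculation (normal approximation, two-sided α = 0.05, 80 percent power). The response rates below are hypothetical, chosen only to show the effect of biomarker enrichment:

```python
import math

def n_per_arm(p_control, p_treatment):
    """Approximate patients per arm needed to detect a difference between
    two response rates (normal approximation; two-sided alpha = 0.05,
    80% power)."""
    z_alpha, z_beta = 1.96, 0.84  # standard normal quantiles
    variance = p_control * (1 - p_control) + p_treatment * (1 - p_treatment)
    return math.ceil((z_alpha + z_beta) ** 2 * variance
                     / (p_control - p_treatment) ** 2)

# Hypothetical rates: 30% placebo response; 45% drug response in an
# unselected population vs. 55% in a biomarker-enriched subgroup.
n_unselected = n_per_arm(0.30, 0.45)  # 160 patients per arm
n_enriched = n_per_arm(0.30, 0.55)    #  58 patients per arm
```

Under these assumptions, a roughly 20-25 percent relative increase in response rate (0.45 to 0.55) cuts the requirement from about 160 to about 58 patients per arm, consistent with the claim that modest biomarker-driven gains can dramatically reduce trial size.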
Historically, much of the biopharmaceutical industry’s focus in this regard has been on the evaluation of clinical trial end points defined predominantly by the regulatory requirements to gain marketing approval. Although important, these end points are often far removed from the outcome measures that are meaningful to patients, providers, and payers. Such examples might include the distinction between the improvement in positive and negative symptoms observed in schizophrenic patients treated with antipsychotic drugs in pivotal clinical trials and the measurement of more valuable “functional-based” outcomes, such as whether the patient can maintain an independent living arrangement or maintain employment. If the biopharmaceutical industry is to deliver valuable medicines in the future, there needs to be increased collaboration across healthcare stakeholders to evaluate and “clinically validate” some of these important functional outcome measures so that they can be effectively incorporated into the development of new therapeutics, preferably even before approval and launch. Comparative effectiveness studies and their eventual adoption by providers and payers will thus need to consider all relevant and meaningful health outcomes. Nonetheless, the tools currently being developed in support of tailored

therapies, if applied appropriately, could allow for the design of comparative effectiveness studies (Califf, 2004) that consider the biological (as well as nonbiological) substrates and heterogeneity of drug response, allowing for meaningful comparisons between drugs, or between drug and non-drug therapies, in subgroups of patients who are more likely to benefit from their use, as well as avoiding treatments (including drugs) of limited effectiveness. Only in such a setting, where true confounders of outcome (such as those we have highlighted above) are recognized, fully understood, and taken into consideration, can comparative effectiveness assessments of biopharmaceuticals be truly informative and meaningful.

REFERENCES

AHRQ (Agency for Healthcare Research and Quality). 2007. Registries for evaluating patient outcomes: A user’s guide. http://effectivehealthcare.ahrq.gov/repFiles/PatOutcomes.pdf (accessed October 8, 2007).
Al-Khatib, S. M., K. J. Anstrom, E. L. Eisenstein, E. D. Peterson, J. G. Jollis, D. B. Mark, Y. Li, C. M. O’Connor, L. K. Shaw, and R. M. Califf. 2005. Clinical and economic implications of the Multicenter Automatic Defibrillator Implantation Trial-II. Annals of Internal Medicine 142(8):593-600.
Alexander, K. P., E. D. Peterson, C. B. Granger, C. Casas, F. Van de Werf, P. W. Armstrong, A. Guerci, E. J. Topol, and R. M. Califf. 1998. Potential impact of evidence-based medicine in acute coronary syndromes: Insights from GUSTO-IIb. Global Use of Strategies to Open Occluded Arteries in Acute Coronary Syndromes Trial. Journal of the American College of Cardiology 32:2023-2030.
Alexander, K. P., A. Y. Chen, M. T. Roe, L. K. Newby, C. M. Gibson, N. M. Allen-LaPointe, C. Pollack, W. B. Gibler, E. M. Ohman, and E. D. Peterson. 2005. Excess dosing of antiplatelet and antithrombin agents in the treatment of non-ST-segment elevation acute coronary syndromes. JAMA 294(24):3108-3116.
American College of Cardiology. 2007. National cardiac data registries (NCDR). http://www.accncdr.com/WebNCDR/Common/ (accessed October 8, 2007).
American Heart Association. 2007. Get with the guidelines (GWTG). http://www.americanheart.org/presenter.jhtml?identifier=1165 (accessed October 8, 2007).
Balas, E. A. 2001. Information systems can prevent errors and improve quality. Journal of the American Medical Informatics Association 8(4):398-399.
Blomkalns, A. L., A. Y. Chen, J. S. Hochman, E. D. Peterson, K. Trynosky, D. B. Diercks, G. X. Brogan, Jr., W. E. Boden, M. T. Roe, E. M. Ohman, W. B. Gibler, and L. K. Newby. 2005. Gender disparities in the diagnosis and treatment of non-ST-segment elevation acute coronary syndromes: Large-scale observations from the CRUSADE (Can Rapid Risk Stratification of Unstable Angina Patients Suppress Adverse Outcomes with Early Implementation of the American College of Cardiology/American Heart Association Guidelines) national quality improvement initiative. Journal of the American College of Cardiology 45(6):832-837.
Califf, R. M. 2004. Defining the balance of risk and benefit in the era of genomics and proteomics. Health Affairs 23(1):77-87.
Califf, R. M., and D. L. DeMets. 2002. Principles from clinical trials relevant to clinical practice: Part I. Circulation 106(8):1015-1021.

Cepeda, M. S., R. Boston, J. T. Farrar, and B. L. Strom. 2003. Comparison of logistic regression versus propensity score when the number of events is low and there are multiple confounders. American Journal of Epidemiology 158(3):280-287.
CMS (Centers for Medicare and Medicaid Services). 2007. Implantable cardioverter device (ICD) registry. http://www.cms.hhs.gov/MedicareApprovedFacilitie/04_ICDregistry.asp (accessed October 8, 2007).
Damani, S. B., and E. J. Topol. 2007. Future use of genomics in coronary artery disease. Journal of the American College of Cardiology 50(20):1933-1940.
Eisenstein, E. L., K. J. Anstrom, D. F. Kong, L. K. Shaw, R. H. Tuttle, D. B. Mark, J. M. Kramer, R. A. Harrington, D. B. Matchar, D. E. Kandzari, E. D. Peterson, K. A. Schulman, and R. M. Califf. 2007. Clopidogrel use and long-term clinical outcomes after drug-eluting stent implantation. JAMA 297:E1-E10.
Evans, W. E., and H. L. McLeod. 2003. Pharmacogenomics—drug disposition, drug targets, and side effects. New England Journal of Medicine 348(6):538-549.
Ferguson, T. B., Jr., E. D. Peterson, L. P. Coombs, M. Eiken, M. Carey, F. L. Grover, and E. R. DeLong. 2003. Use of continuous quality improvement to increase use of process measures in patients undergoing coronary artery bypass graft surgery: A randomized controlled trial. JAMA 290(1):49-56.
Higashi, M. K., D. L. Veenstra, L. M. Kondo, A. K. Wittkowsky, S. L. Srinouanprachanh, F. M. Farin, and A. E. Rettie. 2002. Association between CYP2C9 genetic variants and anticoagulation-related outcomes during warfarin therapy. JAMA 287(13):1690-1698.
Kurth, T., A. M. Walker, R. J. Glynn, K. A. Chan, J. M. Gaziano, K. Berger, and J. M. Robins. 2006. Results of multivariable logistic regression, propensity matching, propensity adjustment, and propensity-based weighting under conditions of nonuniform effect. American Journal of Epidemiology 163(3):262-270.
Lazarou, J., B. H. Pomeranz, and P. N. Corey. 1998.
Incidence of adverse drug reactions in hospitalized patients: A meta-analysis of prospective studies. JAMA 279(15):1200-1205.
Mark, D. B., M. A. Hlatky, R. M. Califf, C. D. Naylor, K. L. Lee, P. W. Armstrong, G. I. Barbash, H. White, M. L. Simoons, C. L. Nelson, N. E. Clapp-Channing, J. D. Knight, F. E. Harrell, Jr., J. Simes, and E. J. Topol. 1995. Cost effectiveness of thrombolytic therapy with tissue plasminogen activator as compared with streptokinase for acute myocardial infarction. New England Journal of Medicine 332(21):1418-1424.
Massachusetts Data Analysis Center. 2007. http://www.massdac.org/ (accessed October 8, 2007).
Mathieu, M. P., ed. 2007. Parexel’s pharmaceutical R&D statistical sourcebook. Boston, MA: Barnett International.
Meyer, U. A. 2004. Pharmacogenetics—five decades of therapeutic lessons from genetic diversity. Nature Reviews Genetics 5(9):669-676.
National Cancer Institute. 2007. Surveillance, epidemiology, and end results (SEER) program. http://seer.cancer.gov (accessed October 8, 2007).
Need, A. C., A. G. Motulsky, and D. B. Goldstein. 2005. Priorities and standards in pharmacogenetic research. Nature Genetics 37(7):671-681.
Ness, R. B. 2007. Influence of the HIPAA privacy rule on health research. JAMA 298(18):2164-2170.
O’Shea, J. C., J. M. Kramer, R. M. Califf, and E. D. Peterson. 2004. Part I: Identifying holes in the safety net. American Heart Journal 147(6):977-984.
Peterson, E. D., C. V. Pollack, Jr., M. T. Roe, L. S. Parsons, K. A. Littrell, J. G. Canto, and H. V. Barron. 2003. Early use of glycoprotein IIb/IIIa inhibitors in non-ST elevation acute myocardial infarction: Observations from the National Registry of Myocardial Infarction 4. Journal of the American College of Cardiology 42(1):45-53.

Peterson, E. D., J. W. Hirshfeld, Jr., T. B. Ferguson, J. M. Kramer, R. M. Califf, and L. G. Kessler. 2004. Part II: Sealing holes in the safety net. American Heart Journal 147(6):985-990.
Peterson, E. D., A. Y. Chen, K. P. Alexander, N. M. Allen LaPointe, E. S. Fraulo, L. K. Newby, M. T. Roe, W. B. Gibler, and E. M. Ohman. 2006. The association between hospital guideline adherence, dosing safety, and patient outcomes: Results from the CRUSADE quality improvement initiative. Journal of the American College of Cardiology 47(4):255A.
Pryor, D. B., R. M. Califf, F. E. Harrell, Jr., M. A. Hlatky, K. L. Lee, D. B. Mark, and R. A. Rosati. 1985. Clinical data bases: Accomplishments and unrealized potential. Medical Care 23(5):623-647.
Rogers, W. J., J. G. Canto, C. T. Lambrew, A. J. Tiefenbrunn, B. Kinkaid, D. A. Shoultz, P. D. Frederick, and N. Every. 2000. Temporal trends in the treatment of over 1.5 million patients with myocardial infarction in the U.S. from 1990 through 1999: The National Registry of Myocardial Infarction 1, 2, and 3. Journal of the American College of Cardiology 36:2056-2063.
The Royal Society. 2005. Personalised medicine: Hopes and realities. London, UK: Publishing Section of the Royal Society.
Society of Thoracic Surgeons. 2007. National Cardiac Database (NCD). http://www.sts.org/sections/stsnationaldatabase/ (accessed October 8, 2007).
Sonel, A. F., C. B. Good, J. Mulgund, M. T. Roe, W. B. Gibler, S. C. Smith, Jr., M. G. Cohen, C. V. Pollack, Jr., E. M. Ohman, and E. D. Peterson. 2005. Racial variations in treatment and outcomes of black and white patients with high-risk non-ST elevation acute coronary syndromes: Insights from CRUSADE (Can Rapid Risk Stratification of Unstable Angina Patients Suppress Adverse Outcomes with Early Implementation of the ACC/AHA Guidelines?). Circulation 111(10):1225-1232.
Spear, B. B., M. Heath-Chiozzi, and J. Huff. 2001.
Clinical application of pharmacogenetics. Trends in Molecular Medicine 7(5):201-204.
Stukel, T. A., E. S. Fisher, D. E. Wennberg, D. A. Alter, D. J. Gottlieb, and M. J. Vermeulen. 2007. Analysis of observational studies in the presence of treatment selection bias: Effects of invasive cardiac management on AMI survival using propensity score and instrumental variable methods. JAMA 297(3):278-285.
Tufts Center for the Study of Drug Development. 2006. http://csdd.tufts.edu/ (accessed July 11, 2008).
Tunis, S. R., D. B. Stryer, and C. M. Clancy. 2003. Practical clinical trials: Increasing the value of clinical research for decision making in clinical and health policy. JAMA 290(12):1624-1632.
United Healthcare. 2007. Cardiac programs. https://www.unitedhealthcareonline.com/b2c/CmaAction.do?channelId=d3d03d7872bd4110VgnVCM1000007740dc0a_&searchStr=ACC (accessed October 8, 2007).
U.S. Department of Health and Human Services. 2007. Summary of the HIPAA privacy rule. http://www.hhs.gov/ocr/privacysummary.pdf (accessed October 8, 2007).
Vandenbroucke, J. P., E. von Elm, D. G. Altman, P. C. Gotzsche, C. D. Mulrow, S. J. Pocock, C. Poole, J. J. Schlesselman, and M. Egger. 2007. Strengthening the reporting of observational studies in epidemiology (STROBE): Explanation and elaboration. Annals of Internal Medicine 147(8):W163-W194.
Wilensky, G. R. 2006. Developing a center for comparative effectiveness information. Health Affairs (Millwood) 25(6):w572-w585.
