1
Evidence Development for Healthcare Decisions: Improving Timeliness, Reliability, and Efficiency

INTRODUCTION

The rapid growth of medical research and technology development has vastly improved the health of Americans. Nonetheless, a significant knowledge gap affects their care, and it continues to expand: the gap in knowledge about what approaches work best, under what circumstances, and for whom. The dynamic nature of product innovation and the increased emphasis on treatments tailored to the individual—whether tailored for genetics, circumstances, or patient preferences—present significant challenges to our capability to develop clinical effectiveness information that helps health professionals provide the right care at the right time for each individual patient.

Developments in health information technology, study methods, and statistical analysis, and the development of research infrastructure offer opportunities to meet these challenges. Information systems are capturing much larger quantities of data at the point of care; new techniques are being tested and used to analyze these rich datasets and to develop insights on what works for whom; and research networks are being used to streamline clinical trials and conduct studies previously not feasible. An examination of how these innovations might be used to improve understanding of clinical effectiveness of healthcare interventions is central to the Roundtable on Value & Science-Driven Health Care’s aim to help transform how evidence is developed and used to improve health and health care.



The National Academies | 500 Fifth St. N.W. | Washington, D.C. 20001
Copyright © National Academy of Sciences. All rights reserved.
Terms of Use and Privacy Statement





EBM AND CLINICAL EFFECTIVENESS RESEARCH

The Roundtable has defined evidence-based medicine (EBM) broadly to mean that, "to the greatest extent possible, the decisions that shape the health and health care of Americans—by patients, providers, payers, and policy makers alike—will be grounded on a reliable evidence base, will account appropriately for individual variation in patient needs, and will support the generation of new insights on clinical effectiveness." This definition embraces and emphasizes the dynamic nature of the evidence base and the research process, noting not only the importance of ensuring that clinical decisions are based on the best evidence for a given patient, but also that the care experience be reliably captured to generate new evidence.

The need to find new approaches to accelerate the development of clinical evidence and to improve its applicability drove discussion at the Roundtable's workshop on December 12–13, 2007, Redesigning the Clinical Effectiveness Research Paradigm. The issues motivating the meeting's discussions are noted in Box 1-1, the first of which is the need for a deeper and broader evidence base for improved clinical decision making. Also important are the needs to improve the efficiency and applicability of the research process. Underscoring the timeliness of the discussion is recognition of the challenges presented by the expense, time, and limited generalizability of current approaches, as well as of the opportunities presented by innovative research approaches and broader use of electronic health records that make clinical data more accessible. The overall goal of the meeting was to explore these issues, identify potential approaches, and discuss possible strategies for their engagement.

BOX 1-1
Issues Motivating the Discussion

• Need for substantially improved understanding of the comparative clinical effectiveness of healthcare interventions.
• Strengths of the randomized controlled trial muted by constraints in time, cost, and limited applicability.
• Opportunities presented by the size and expansion of potentially interoperable administrative and clinical datasets.
• Opportunities presented by innovative study designs and statistical tools.
• Need for innovative approaches leading to a more practical and reliable clinical research paradigm.
• Need to build a system in which clinical effectiveness research is a more natural by-product of the care process.

FIGURE 1-1 The classic evidence hierarchy (a pyramid, from strongest to weakest: randomized controlled double blind studies; randomized controlled studies; cohort studies; case control studies; case series; case reports; ideas, opinions).
SOURCE: DeVoto, E., and B. S. Kramer. 2005. Evidence-Based Approach to Oncology. In Oncology: An Evidence-Based Approach. Edited by A. Chang. New York: Springer. Modified and reprinted with permission of Springer SBM.

Key contextual issues covered in the presentations and open workshop discussions are reviewed in this chapter.

Background: Current Research Context

Starting points for the workshop's discussion reside in the presentation of what has come to be viewed as the traditional clinical research model, depicted as a pyramid in Figure 1-1. In this model, the strongest level of evidence is displayed at the peak of the pyramid: the randomized controlled double blind study. This is often referred to as the "gold standard" of clinical research, and is followed, in a descending sequence of strength or quality, by randomized controlled studies, cohort studies, case control studies, case series, and case reports. The base of the pyramid, the weakest evidence, is reserved for undocumented experience, ideas, and opinions. Noted at the workshop was the fact that, as currently practiced, the randomized controlled and blinded trial is not the gold standard for every circumstance.

The development in recent years of a broad range of clinical research approaches, along with the identification of problems in generalizing research results to populations broader than those enrolled in tightly controlled trials, as well as the impressive advances in the potential availability of data through expanded use of electronic health records, have all

prompted reconsideration of research strategies and opportunities (Kravitz, 2004; Schneeweiss, 2004; Liang, 2005; Lohr, 2007; Rush, 2008).

Table 1-1 provides brief descriptions of the many approaches to clinical effectiveness research discussed during the workshop; these methods can be generally characterized as either experimental or non-experimental. Experimental studies are those in which the choice and assignment of the intervention is under the control of the investigator, and the results of a test intervention are compared to the results of an alternative approach by actively monitoring the respective experience of the individuals or groups receiving the intervention or not. Non-experimental studies are those in which manipulation or randomization is absent and the choice of an intervention is made in the course of clinical care. Existing data, collected in the course of the care process, are used to draw conclusions about the relative impact of different circumstances or interventions that vary between and among identified groups, or to construct mathematical models that seek to predict the likelihood of future events based on variables identified in previous studies. The data used to reach study conclusions can be characterized as primary (generated during the conduct of the study) or secondary (originally generated for other purposes, e.g., administrative or claims data).

While not an exhaustive catalog of methods, Table 1-1 provides a sense of the range of clinical research approaches that can be used to improve understanding of clinical effectiveness. Noted at the workshop was the fact that each method has the potential to advance understanding of different aspects of the many questions that emerge throughout a product or intervention's lifecycle in clinical practice.
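The practical consequence of this experimental versus non-experimental distinction can be shown in a small, purely illustrative simulation: when treatment assignment depends on a patient characteristic that also affects the outcome (as often happens in data generated by routine care), a naive comparison of group means is confounded, while randomized assignment recovers the true effect. The outcome model, effect sizes, and assignment rule below are all hypothetical.

```python
import random

def outcome(treated, severity, rng):
    # Assumed true model (for illustration only): treatment improves the
    # outcome score by 2 points; sicker patients do worse either way.
    return 10.0 + (2.0 if treated else 0.0) - 3.0 * severity + rng.gauss(0, 1)

def mean(xs):
    return sum(xs) / len(xs)

def compare(assignment, n=20_000, seed=1):
    """Estimate the treatment effect as a difference in group means under a
    given assignment rule: 'randomized' or 'by_severity'."""
    rng = random.Random(seed)
    treated_out, control_out = [], []
    for _ in range(n):
        severity = rng.random()  # 0 = mild, 1 = severe
        if assignment == "randomized":
            treated = rng.random() < 0.5
        else:
            # Non-experimental: clinicians treat sicker patients more often,
            # so severity confounds the comparison.
            treated = rng.random() < severity
        (treated_out if treated else control_out).append(outcome(treated, severity, rng))
    return mean(treated_out) - mean(control_out)

print(f"randomized estimate:    {compare('randomized'):+.2f}")   # near the true +2
print(f"observational estimate: {compare('by_severity'):+.2f}")  # biased toward zero
```

The observational estimate is biased because the treated group is sicker on average; real non-experimental analyses attempt to correct for such confounding with adjustment, stratification, or propensity methods.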
The issue is therefore not one of whether internal or external validity should be the overarching priority for research, but rather which approach is most appropriate to the particular need. In each case, careful attention to the design and execution of studies is vital.

Bridging the Research–Practice Divide

A key theme of the meeting was the importance of drawing clinical research closer to practice. Without this capacity, the ability to personalize clinical care will be limited. For example, information on possible heterogeneity of treatment effects in patient populations—due to individual genetics, circumstances, or co-morbidities—is rarely available in a form that is timely, readily accessible, and applicable. To address this issue, the assessment of a healthcare intervention must go beyond determinations of efficacy (whether an intervention can work under ideal circumstances) to an understanding of effectiveness (how an intervention works in practice), which compels grounding of the assessment effort in practice records. To understand effectiveness, feedback is crucial on how well new products and interventions work in broad patient populations, including who those populations are and under what circumstances they are treated.

TABLE 1-1 Selected Examples of Clinical Research Study Designs for Clinical Effectiveness Research

Randomized Controlled Trial (RCT)
Description: Experimental design in which patients are randomly allocated to intervention groups (randomized) and analysis estimates the size of difference in predefined outcomes, under ideal treatment conditions, between intervention groups. RCTs are characterized by a focus on efficacy, internal validity, maximal compliance with the assigned regimen, and, typically, complete follow-up. When feasible and appropriate, trials are "double blind"—i.e., patients and trialists are unaware of treatment assignment throughout the study.
Data types: Primary, may include secondary. Randomization: Required.

Pragmatic Clinical Trial (PCT)
Description: Experimental design that is a subset of RCTs because certain criteria are relaxed with the goal of improving the applicability of results for clinical or coverage decision making by accounting for broader patient populations or conditions of real-world clinical practice. For example, PCTs often have fewer patient inclusion/exclusion criteria and longer term, patient-centered outcome measures.
Data types: Primary, may include secondary. Randomization: Required.

Delayed (or Single-Crossover) Design Trial
Description: Experimental design in which a subset of study participants is randomized to receive the intervention at the start of the study and the remaining participants are randomized to receive the intervention after a pre-specified amount of time. By the conclusion of the trial, all participants receive the intervention. This design can be applied to conventional RCTs, cluster randomized, and pragmatic designs.
Data types: Primary, may include secondary. Randomization: Required.

Adaptive Design
Description: Experimental design in which the treatment allocation ratio of an RCT is altered based on collected data. Bayesian or frequentist analyses are based on the accumulated treatment responses of prior participants and used to inform adaptive designs by assessing the probability or frequency, respectively, with which an event of interest occurs (e.g., positive response to a particular treatment).
Data types: Primary, some secondary. Randomization: Required.

Cluster Randomized Controlled Trial
Description: Experimental design in which groups (e.g., individuals or patients from entire clinics, schools, or communities), instead of individuals, are randomized to a particular treatment or study arm. This design is useful for a wide array of effectiveness topics but may be required in situations in which individual randomization is not feasible.
Data types: Often secondary. Randomization: Required.

N of 1 Trial
Description: Experimental design in which an individual is repeatedly switched between two regimens. The sequence of treatment periods is typically determined randomly, and there is formal assessment of treatment response. These trials are often done under double blind conditions and are used to determine whether a particular regimen is superior for that individual. N of 1 trials of different individuals can be combined to estimate the broader effectiveness of the intervention.
Data types: Primary. Randomization: Required.

Interrupted Time Series
Description: Study design used to determine how a specific event affects outcomes of interest in a study population. This design can be experimental or non-experimental depending on whether the event was planned or not. Outcomes occurring during multiple periods before the event are compared to those occurring during multiple periods following the event.
Data types: Primary or secondary. Randomization: Approach dependent.

Cohort Registry Study
Description: Non-experimental approach in which data are prospectively collected on individuals and analyzed to identify trends within a population of interest. This approach is useful when randomization is infeasible—for example, if the disease is rare, or when researchers would like to observe the natural history of a disease or real-world practice patterns.
Data types: Primary. Randomization: No.

Ecological Study
Description: Non-experimental design in which the unit of observation is the population or community and that looks for associations between disease occurrence and exposure to known or suspected causes. Disease rates and exposures are measured in each of a series of populations and their relation is examined.
Data types: Primary or secondary. Randomization: No.

Natural Experiment
Description: Non-experimental design that examines a naturally occurring difference between two or more populations of interest—i.e., instances in which the research design does not affect how patients are treated. Analyses may be retrospective (retrospective data analysis) or conducted on prospectively collected data. This approach is useful when RCTs are infeasible due to ethical concerns or costs, or when the length of a trial would lead to results that are not informative.
Data types: Primary or secondary. Randomization: No.

Simulation and Modeling
Description: Non-experimental approach that uses existing data to predict the likelihood of outcome events in a specific group of individuals or over a longer time horizon than was observed in prior studies.
Data types: Secondary. Randomization: No.

Meta-Analysis
Description: The combination of data collected in multiple, independent research studies (that meet certain criteria) to determine the overall intervention effect. Meta-analyses are useful to provide a quantitative estimate of overall effect size and to assess the consistency of effect across the separate studies. Because this method relies on previous research, it is only useful if a broad set of studies is available.
Data types: Secondary. Randomization: No.

SOURCE: Adapted, with the assistance of Danielle Whicher of the Center for Medical Technology Policy and Richard Platt from Harvard Pilgrim Healthcare, from a white paper developed by Tunis, S. R., Strategies to Improve Comparative Effectiveness Research Methods and Data Infrastructure, for the June 2009 Brookings workshop, Implementing Comparative Effectiveness Research: Priorities, Methods, and Impact.

Redesigning the Clinical Effectiveness Research Paradigm

Growing opportunities for practice-based clinical research are presented by work to develop information systems and data repositories that enable greater learning from practice. Moreover, there is a need to develop a research approach that can address the questions that arise in the course of practice. As noted in Table 1-1, many research methods can be used to improve understanding of clinical effectiveness, but their use must be carefully tailored to the circumstances. For example, despite the increased external validity offered by observational approaches, the uncertainty inherent in such studies due to bias and confounding often undermines confidence in these approaches.
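The pooling arithmetic at the heart of a basic meta-analysis can be sketched as a fixed-effect, inverse-variance weighted average of study estimates; the three study effects below (on a log odds ratio scale) and their standard errors are hypothetical.

```python
import math

def fixed_effect_pool(estimates, std_errors):
    """Inverse-variance (fixed-effect) pooling of independent study estimates.

    Each study is weighted by the reciprocal of its variance, so more
    precise studies contribute more to the pooled effect. Returns the
    pooled effect and its standard error.
    """
    weights = [1.0 / se ** 2 for se in std_errors]
    pooled = sum(w * est for w, est in zip(weights, estimates)) / sum(weights)
    pooled_se = math.sqrt(1.0 / sum(weights))
    return pooled, pooled_se

# Three hypothetical studies estimating the same treatment effect
# (log odds ratios; negative favors the intervention).
effects = [-0.30, -0.10, -0.25]
ses = [0.10, 0.20, 0.15]
pooled, se = fixed_effect_pool(effects, ses)
print(f"pooled effect = {pooled:.3f}, SE = {se:.3f}")
```

A random-effects model extends this sketch by adding a between-study variance term when the separate studies are not assumed to estimate an identical effect.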
Likewise, the limitations of the randomized controlled trial (RCT) often mute its considerable research value. Those limitations may include a sample size that is too small; a drug dose that is too low to fully assess the drug's safety; follow-up that is too short to show long-term benefits; underrepresentation or exclusion of vulnerable patient groups, including elderly patients with multiple co-morbidities, children, and young women; conduct of the trial in a highly controlled environment; and/or high cost and time investments. The issue is not one of RCTs versus non-experimental studies but one of which is most appropriate to the particular need.

Retrospective population-level cohorts using administrative data, clinical registries, and longitudinal prospective cohorts have, for example, been valuable in assessing effectiveness and useful in helping payers to make coverage decisions, assessing quality improvement opportunities, and providing more realistic assessments of interventions. Population-based registries—appropriately funded and constructed with clinician engagement—offer a compromise between the strengths and limitations of, for example, cohort studies, and can assess "real-world" health and economic outcomes to help guide decision making for patient care and policy setting. Furthermore, they are a valuable tool for assessing and driving improvements in the performance of physicians and institutions.

When trials, quasi-experimental studies, and even epidemiologic studies are not possible, researchers may also be able to use simulation methods, if current prototypes prove broadly applicable. Physiology-based models, for example, have the potential to augment knowledge gained from trials and can be used to fill in "gaps" that are difficult or impractical to address using clinical trial methods. In particular, they will be increasingly useful in providing estimates of key biomarkers and clinical findings. When properly constructed, they replicate the results of the studies used to build them, not only at an outcome level but also at the level of change in biomarkers and clinical findings. Physiology-based modeling has been used to enhance and extend existing clinical trials, to validate RCT results, and to conduct virtual comparative effectiveness trials.

In part, this is a taxonomy and classification challenge.
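As a toy illustration of the modeling idea (far simpler than a physiology-based model), a Monte Carlo sketch can extend a short trial's estimated annual event risk to a longer horizon. The risk figure and patient count below are hypothetical, and the constant, independent annual risk is a deliberate simplification.

```python
import random

def cumulative_event_risk(annual_risk, years, n_patients, seed=0):
    """Monte Carlo sketch: estimate the proportion of patients experiencing
    at least one event over a horizon longer than a trial observed,
    assuming (illustratively) a constant, independent annual event risk."""
    rng = random.Random(seed)
    events = sum(
        1 for _ in range(n_patients)
        if any(rng.random() < annual_risk for _ in range(years))
    )
    return events / n_patients

# Hypothetical inputs: a short trial estimated a 4% annual event risk;
# project the cumulative proportion over 10 years in 20,000 simulated patients.
projected = cumulative_event_risk(annual_risk=0.04, years=10, n_patients=20_000)
print(f"projected 10-year event proportion: {projected:.3f}")
```

A real simulation model would replace the constant-risk assumption with disease-progression equations calibrated against trial, registry, and biomarker data, which is what allows such models to replicate source studies at the level of clinical findings.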
To strengthen these various methods, participants suggested work to define the "state of the art" for their design, conduct, reporting, and validation; to improve the quality of data used; and to identify strategies to take better advantage of the complementary nature of the results obtained. As participants observed, these methods can enhance understanding of an intervention's value in many dimensions—exploring effects of variation (e.g., practice setting, providers, patients) and extending assessment to long-term outcomes related to benefits, rare events, or safety risks—collectively providing a more comprehensive assessment of the trade-offs between potential risks and benefits for individual patients.

It is also an infrastructure challenge. The efficiency, quality, and reliability of research require infrastructure improvements that allow greater data linkage and collaboration among researchers. Research networks offer a unique opportunity to begin to build an integrated, learning healthcare system. As the research community hones its capacity to collect, store, and study data, enormous untapped capacity for data analysis is emerging. Thus, the mining of large databases has become the focus of considerable interest

and enthusiasm in the research community. Researchers can approach such data using clinical epidemiologic methods—potentially using data collected over many years, on millions of patients, to generate insights on real-world intervention use and health outcomes. It was this potential that set the stage for the discussion.

PERSPECTIVES ON CLINICAL EFFECTIVENESS RESEARCH

Keynote addresses opened discussions during the 2-day workshop. Together the addresses and discussions provide a conceptual framework for many of the meeting's complex themes. IOM President Harvey V. Fineberg provides an insightful briefing on how clinical effectiveness research has evolved over the past two and a half centuries and offers compelling questions for the workshop to consider. Urging participants to stay focused on better understanding patient needs and to keep the fundamental values of health care in perspective, Fineberg proposes a meta-experimental strategy, advocating for experiments with experiments to better understand their respective utilities, power, and applicability, as well as some key elements of a system to support patient care and research. Carolyn M. Clancy, director of the Agency for Healthcare Research and Quality, offers a vision for 21st-century health care in which actionable information is available to clinicians and patients and evidence is continually refined as care is delivered. She provides a thoughtful overview of how emerging methods will expand the research arsenal and can address many key challenges in clinical effectiveness research. Emphasis is also given to the potential gains in quality and effectiveness of care, with greater focus on how to translate research findings into practice.

CLINICAL EFFECTIVENESS RESEARCH: PAST, PRESENT, AND FUTURE

Harvey V. Fineberg, M.D., Ph.D.
President, Institute of Medicine

An increasingly important focus of the clinical effectiveness research paradigm is the efficient development of relevant and reliable information on what works best for individual patients. A brief look at the past, present, and future of clinical effectiveness research establishes some informative touchstones on the development and evolution of the current research paradigm, as well as on how new approaches and directions might dramatically improve our ability to generate insights into what works in a clinical context.

Evolution of Clinical Effectiveness Research

Among the milestones in evidence-based medicine, one of the earliest examples of the use of comparison groups in a clinical experiment is laid out in a summary written in 1747 by James Lind detailing what works and what does not work in the treatment of scurvy. With 12 subjects, Lind tried to make a systematic comparison to discern what agents might be helpful to prevent and treat the disease. Through experimentation, he learned that the intervention that seemed to work best to help sailors recover most quickly from scurvy was the consumption of oranges, limes, and other citrus fruits. Many other interventions, including vinegar and sea water, were also tested, but only the citrus fruits demonstrated benefit. What is interesting about that experiment, and relevant for our discussions of evidence-based medicine today, is that it took the Royal Navy more than a century to adopt a policy to issue citrus to its sailors. When we talk about the delay between new knowledge and its application in clinical practice, we therefore have ample precedent, going back to the very beginning of systematic comparisons.

Another milestone comes in the middle of the 19th century, with the first systematic use of statistics in medicine. During the Crimean War (1853–1856), Florence Nightingale collected mortality statistics in hospitals and used those data to help discern where the problems were and what might be done to improve performance and outcomes. Nightingale's tables were the first systematic collection in a clinical setting of extensive data on patient outcomes that were recorded and then used for the purpose of evaluation.

It was not until the early part of the 20th century that statistics in its modern form began to take hold. The 1920s and 1930s saw the development of statistical methods and accounting for the role of chance in scientific studies.
R. A. Fisher (Fisher, 1953) is widely credited as one of the seminal figures in the development of statistical science. His classic work, The Design of Experiments (1935), focused on agricultural comparisons but articulated many of the critical principles in the design of controlled trials that are a hallmark of current clinical trials.

It would not be until after World War II that the first clinical trial on a medical intervention would be recorded. A 1948 study by Bradford Hill on the use of streptomycin in the treatment of tuberculosis was the original randomized controlled trial. Interestingly, the contemporary use of penicillin to treat pneumonia was never subjected to similar, rigorous testing—perhaps owing to the therapy's dramatic benefit to patients.

Over the ensuing decades, trials began to appear in the literature with increased frequency. Along the way they also became codified and almost deified as the standard for care. In 1962, after the thalidomide scandals,

FIGURE 1-4 Evidence levels—Grades of Recommendation, Assessment, Development and Evaluation (GRADE). (The figure relates quality-of-evidence levels (high, moderate, low, very low) to features of RCTs that lower confidence, such as study flaws, inconsistency, indirectness, sparse data, and publication bias, and to features of observational studies that raise it, such as very strong or consistent associations with no plausible confounders and dose-response gradients.)

Grading Evidence and Recommendations

A notable and exciting development is the GRADE (Grades of Recommendation, Assessment, Development and Evaluation) collaborative, whose goal is to promote a more consistent and transparent approach to grading evidence and recommendations (see www.gradeworkinggroup.org). This approach considers that how well a study is done is at least as important as the type of study it is. GRADE evidence levels, as summarized in Figure 1-4, suggest that randomized trials that are flawed in their execution should not be at the top of the pyramid in any hierarchy of evidence. Similarly, observational studies that meet the criteria shown in the figure (and perhaps others as well), and which are done very well, might in some instances be considered better evidence than a poorly conducted randomized trial. These standards are being adopted by the American College of Physicians, American College of Chest Physicians, National Institute for Clinical Excellence, and World Health Organization, among others.

All of us imagine a near-term future in which there is going to be much greater access to high-quality data. However, in order to take full advantage of that, we need to continue to advance work in improving methodological research. Why is this necessary? We need more comprehensive data to guide Medicare coverage decisions and to understand the wider range of outcomes. We need to address the gap when data from well-designed RCTs are either not available or incomplete.
Finally, there are significant

quality, eligibility, and cost implications of coverage decisions (e.g., consider implantable cardioverter defibrillators).

To help advance the agenda for improving methodology, a series of 23 articles on emerging methods in comparative effectiveness and safety research was published in October 2007 in a special supplement to the journal Medical Care. These papers are a valuable new resource for scientists who are committed to advancing comparative effectiveness and safety research, and this is an area in which AHRQ intends to continue to push.[2]

Approaches to Turning Evidence Into Action

The Agency for Healthcare Research and Quality has several programs directed at turning evidence into action. AHRQ's program on comparative effectiveness was authorized by Congress as part of the Medicare Modernization Act and funded through an appropriation starting in 2005. This Effective Health Care Program (EHCP) is essentially trying to produce evidence for a variety of audiences, based on unbiased information, so that people can make head-to-head comparisons as they endeavor to understand which interventions add value, which offer minimal benefit above current choices, which fail to reach their potential, and which work for some patients but not for others. The overarching goal is to develop and disseminate better evidence about the benefits and risks of alternative treatments, which is also important for policy discussions. The statute is silent on cost effectiveness, although it does say that the Medicare program may not use the information to deny coverage. Less clear is whether prescription drug plans can use EHCP information in such a way; again, the statute is silent.

The AHRQ EHCP has three core components. One is synthesizing existing evidence through Evidence-Based Practice Centers (EPCs), which AHRQ has supported since 1997.
The purpose is to systematically review, synthesize, and compare existing evidence on treatment effectiveness, and to identify relevant knowledge gaps. (Anyone who has ever conducted a systematic or even casual review knows that if you are searching through a pile of studies, inevitably you will have unanswered questions—questions that are related to but not quite the main focus of the particular search that you are doing.)

[2] All of the articles are available for free download at www.effectivehealthcare.ahrq.gov/reports/med-care-report.cfm or can be ordered as Pub. No. OM07-0085 from AHRQ's clearinghouse.

The second component is to generate evidence—to develop new scientific knowledge to address knowledge gaps—and to accelerate practical studies. To address critical unanswered questions or to close particular

OCR for page 51
 REDESIGNING THE CLINICAL EFFECTIVENESS RESEARCH PARADIGM research gaps, AHRQ relies on the DEcIDE (Developing Evidence to Inform Decisions about Effectiveness) network, a group of research partners who work under task-order contracts and who have access to large electronic clinical databases of patient information. The Centers for Education & Research on Therapeutics (CERTs) is a peer-reviewed program that con- ducts state-of-the-art research to increase awareness of new uses of drugs, biological products, and devices; to improve the effective use of drugs, bio- logical products, and devices; to identify risks of new uses; and to identify risks of combinations of drugs and biological products. Finally, AHRQ also works to advance the communication of evidence and its translation into care improvements. Many researchers will recall that our colleague John Eisenberg always talked about telling the story of health services research. Named in his honor, the John M. Eisenberg Clini- cal Decisions and Communications Science Center, based at Oregon Health Sciences University, is devoted to developing tools to help consumers, clini- cians, and policy makers make decisions about health care. The Eisenberg Center translates knowledge about effective health care into summaries that use plain, easy-to-understand, and actionable language, which can be used to assess treatments, medications, and technologies. The guides are designed to help people to use scientific information to maximize the benefits of health care, minimize harm, and optimize the use of healthcare resources. Center activities also focus on decision support and other approaches to getting information to the point of care for clinicians, as well as on making information relevant and useful to patients and consumers. The Eisenberg Center is developing two new translational guides, the Guide to Comparative Effectiveness Reviews and Effectiveness and Off- Label Use of Recombinant Factor VIIa. 
In April 2007, AHRQ also published Registries for Evaluating Patient Outcomes: A User's Guide, co-funded by AHRQ and the Centers for Medicare & Medicaid Services (CMS), the first government-supported handbook for establishing, managing, and analyzing patient registries. This resource is designed so that patient registry data can be used to evaluate the real-life impact of healthcare treatments, and it can truly be considered a milestone in growing efforts to better understand which treatments actually work best and for whom (Agency for Healthcare Research and Quality, 2008c).

Clearly, there are a variety of problems that no healthcare system is large enough or has sufficient data to address on its own. Many researchers envision creation of a common research infrastructure, a federated network prototype that would support the secure analysis of electronic information across multiple organizations to study risks, effects, and outcomes of various medical therapies. This would not be a centralized database—data would stay with individual organizations. However, through the use of common research definitions and terms, the collaborative would create a large network that would expand capabilities far beyond the capacity of any one individual system.

The long-term goal is a coordinated partnership of multiple research networks that provide information that can be quickly queried and analyzed for conducting comparative effectiveness research. There are enormous opportunities here, but to come to fruition the effort will take considerable difficult work upfront. In that regard, AHRQ has funded contracts to support two important models of distributed research networks. One model being evaluated leverages partnerships of a practice-based research network to study utilization and outcomes of diabetes treatment in ambulatory care. This project, led by investigators from the University of Colorado DEcIDE center and the American Academy of Family Physicians, is developing the Distributed Ambulatory Research in Therapeutics Network (DARTNet), using electronic health record data from 8 organizations representing more than 200 clinicians and over 350,000 patients (Agency for Healthcare Research and Quality, 2008a). The second model is established within a consortium of managed care organizations to study therapies for hypertension. This project, led by the HMO Research Network (HMORN) and the University of Pennsylvania DEcIDE centers (Agency for Healthcare Research and Quality, 2008a), will develop a "Virtual Data Warehouse" to assess the effectiveness and safety of different antihypertensive medications used by 5.5 to 6 million individuals cared for by six health plans.

Both projects will be conducted in four phases over a period of approximately 18 months, with quarterly reports posted on AHRQ's website.
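To make the federated model concrete, here is a minimal sketch of the core idea: each organization runs the same query against its own records under shared definitions, and only aggregate summaries—never patient-level data—leave the site. All names, record layouts, and functions in this sketch are invented for illustration; they are not drawn from DARTNet, the HMORN prototype, or any actual network's schema.

```python
# Sketch of a federated query: analysis travels to the data, not the
# reverse. Only aggregate counts are returned to the coordinating center.

from dataclasses import dataclass

@dataclass
class SiteSummary:
    site: str
    treated: int  # patients on the therapy of interest at this site
    events: int   # outcome events observed among those patients

def local_query(site: str, records: list, drug: str) -> SiteSummary:
    """Runs inside each organization; raw records never leave the site."""
    treated = [r for r in records if r["drug"] == drug]
    events = sum(1 for r in treated if r["event"])
    return SiteSummary(site, len(treated), events)

def pooled_event_rate(summaries: list) -> float:
    """The coordinating center sees and combines only the aggregates."""
    total_treated = sum(s.treated for s in summaries)
    total_events = sum(s.events for s in summaries)
    return total_events / total_treated if total_treated else 0.0

# Two hypothetical plans answer the same common query about one drug.
plan_a = local_query("plan_a", [
    {"drug": "thiazide", "event": True},
    {"drug": "thiazide", "event": False},
    {"drug": "beta_blocker", "event": False},
], "thiazide")
plan_b = local_query("plan_b", [
    {"drug": "thiazide", "event": False},
], "thiazide")

rate = pooled_event_rate([plan_a, plan_b])  # 1 event among 3 treated
```

In the real prototypes the shared element is the common data definitions and query protocol rather than any particular code; the point of the sketch is only that each site computes locally and the collaborative pools summaries.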
These reports will describe the design specifications for each network prototype; the evaluation of the prototype; research findings from the hypertension and diabetes studies; and the major features of each prototype in the format of a prospectus or blueprint, so that the model may be replicated and publicly evaluated.

In addition to the AHRQ efforts, others are also supporting activities in this arena. Under the leadership of Mark McClellan, the Quality Alliance Steering Committee at the Engelberg Center for Health Care Reform at the Brookings Institution is engaged in work to effectively aggregate data across multiple health insurance plans for the purposes of reporting on physician performance. Effectively, the plans will each produce information on a particular physician; a weighted average of that information will be computed and combined with the same information derived from Medicare data. The strategy is that data would stay with individual plans but would be accessed using a common algorithm. As recent efforts to aggregate data for the purposes of quality measurement across plans have found, this is truly difficult but important work.

Among other efforts, the nonprofit eHealth Initiative Foundation has started a research program designed to improve drug safety for patients. The eHI Connecting Communities for Drug Safety Collaboration is a public- and private-sector effort designed to test new approaches and to develop replicable tools for assessing both the risks and the benefits of new drug treatments through the use of health information technology. Results will be placed in the public domain to accelerate the timeliness and effectiveness of drug safety efforts. Another important ongoing effort is the Food and Drug Administration's work to link private- and public-sector postmarket safety efforts to create a virtual, integrated, electronic "Sentinel Network." Such a network would integrate existing and planned efforts to collect, analyze, and disseminate medical product safety information to healthcare practitioners and patients at the point of care. These efforts underscore the commitment by many in the research community to creating better data and linking those data with better methods to translate them into more effective health care.

Health Care in the 21st Century

We must make sure that we do not lose sight of the importance of translating evidence into practice. For all of our excitement about current and anticipated breakthroughs leading to a world of personalized health care in the next decade, the larger gain in terms of saving lives and reducing morbidity is likely to come from more effective translation. Researcher Steven Woolf and colleagues published interesting observations on this topic in 2005 (Figure 1-5) (Woolf and Johnson, 2005). They showed that if 100,000 patients are destined to die from a disease, a drug that reduces death rates by 20 percent will save 16,000 lives if delivered to 80 percent of the patients; increase the drug delivery to 100 percent of patients and you save an additional 4,000 lives.
To achieve that same gain through improved efficacy alone, with delivery still at 80 percent, you would need a drug that reduces death rates by 25 percent. Thus, in the next decade, translation of the scientific evidence we already have is likely to have a much bigger impact on health outcomes than breakthroughs coming on the horizon.

The clinical research enterprise has talked a lot about phase 1 and 2 translation research (T1 and T2). Yet we need to think about T3: the "how" of high-quality care. We need to transcend thinking about translation solely in terms of efficacy and think instead about translation as encompassing measurement and accountability, system redesign, scaling and spread, learning networks, and implementation and research beyond the academic center (Dougherty and Conway, 2008). Figure 1-6 outlines the three translational steps that form the 3T's road map for transforming the healthcare system. Figure 1-7 suggests a progression for the evolution of translational research.
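The Woolf and Johnson arithmetic can be written out directly. This is only a sketch of the published example; the function and variable names are illustrative, not taken from the paper.

```python
# Break-even arithmetic from Woolf and Johnson (2005), as described in
# the text: deaths averted = expected deaths x relative risk reduction
# x fraction of eligible patients actually receiving the drug.

def lives_saved(deaths_expected: int, risk_reduction: float,
                delivery_rate: float) -> float:
    """Deaths averted by a therapy at a given efficacy and delivery rate."""
    return deaths_expected * risk_reduction * delivery_rate

# 100,000 patients destined to die; the drug cuts mortality by 20%.
at_80_percent = lives_saved(100_000, 0.20, 0.80)   # 16,000 lives
at_100_percent = lives_saved(100_000, 0.20, 1.00)  # 20,000 lives

# Break-even efficacy: the relative risk reduction a better drug would
# need, still delivered to only 80% of patients, to match perfect
# delivery of the existing 20% drug.
break_even = at_100_percent / (100_000 * 0.80)     # 0.25, i.e., 25%
```

This is the break-even curve of Figure 1-5 in miniature: matching universal delivery through efficacy alone means raising the relative risk reduction from 20 to 25 percent.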

FIGURE 1-5 Potential lives saved through quality improvement—the "break-even point" for a drug that reduces mortality by 20 percent. The chart plots deaths averted with drug therapy against the relative risk reduction in mortality (20 to 35 percent) at 60, 80, and 100 percent access; efficacy must exceed the break-even points for an improved drug to be more beneficial than improved (100%) delivery. SOURCE: Woolf, S. H., and R. E. Johnson. 2005. The break-even point: When medical advances are less important than improving the fidelity with which they are delivered. Annals of Family Medicine 3(6):545-552. Reprinted with permission from American Academy of Family Physicians, Copyright © 2005.

FIGURE 1-6 The 3T's road map: from basic biomedical science, through clinical efficacy knowledge (T1) and clinical effectiveness knowledge (T2), to improved healthcare quality and value and population health (T3). Key T1 activity, to test what care works: clinical efficacy research. Key T2 activities, to test who benefits from promising care: outcomes research, comparative effectiveness research, and health services research. Key T3 activities, to test how to deliver high-quality care reliably and in all settings: measurement and accountability of healthcare quality and cost, implementation of interventions and healthcare system redesign, scaling and spread of effective interventions, and research in the above domains.
NOTE: T indicates translation. T1, T2, and T3 represent the three major translational steps in the proposed framework to transform the healthcare system. The activities in each translational step test the discoveries of prior research activities in progressively broader settings to advance discoveries originating in basic science research through clinical research and eventually to widespread implementation through transformation of healthcare delivery. Double-headed arrows represent the essential need for feedback loops between and across the parts of the transformation framework.
SOURCE: Journal of the American Medical Association 299(19):2319-2321. Copyright © 2008 American Medical Association. All rights reserved.

FIGURE 1-7 Evolution of translational research. 

FIGURE 1-8 Model for 21st-century health care: improving quality by promoting a culture of safety through value-driven health care. In information-rich, patient-focused enterprises, information and evidence transform interactions from reactive to proactive (benefits and harms); actionable information is available—to clinicians AND patients—"just in time"; and evidence is continually refined as a by-product of care delivery.

This area is clearly still under development and in need of more focused attention from researchers.

In closing, we can no doubt all agree that the kind of healthcare system we would want to provide our own care would be information rich but patient focused, one in which information and evidence transform interactions from the reactive to the proactive (benefits and harms). Figure 1-8 summarizes a vision for 21st-century health care. In this ideal system, actionable information would be available—to clinicians and patients—"just in time," and evidence would be continually refined as a by-product of healthcare delivery. The goal is not producing better evidence for its own sake, although the challenges and debates about how to do that are sufficiently invigorating on their own that we can almost forget what the real goals are. Achieving an information-rich, patient-focused system is the challenge that is at the core of our work together in the Value & Science-Driven Health Care Roundtable. Where we are ultimately headed, of course, is to establish the notion, discussed widely over the past several years, of a learning healthcare system. This is a system in which evidence is generated as a by-product of providing care and actually fed back to those who are providing care, so that we become more skilled and smarter over time.

REFERENCES

AcademyHealth. 2008. Health Services Research (HSR) Methods [cited June 15, 2008]. http://www.hsrmethods.org/ (accessed June 21, 2010).

Agency for Healthcare Research and Quality. 2008a. Developing a Distributed Research Network to Conduct Population-based Studies and Safety Surveillance 2009 [cited June 15, 2008]. http://effectivehealthcare.ahrq.gov/index.cfm/search-for-guides-reviews-and-reports/?pageaction=displayproduct&productID=150 (accessed June 21, 2010).

———. 2008b. Distributed Network for Ambulatory Research in Therapeutics [cited June 15, 2008]. http://effectivehealthcare.ahrq.gov/index.cfm/search-for-guides-reviews-and-reports/?pageaction=displayproduct&productID=317 (accessed June 21, 2010).

———. 2008c. Effective Health Care Home [cited June 15, 2008]. http://effectivehealthcare.ahrq.gov (accessed June 21, 2010).

———. 2008d. Evidence Report on Treatment of Depression-New Pharmacotherapies: Summary (Pub. No. 99-E013) [cited June 15, 2008]. http://archive.ahrq.gov/clinic/epcsums/deprsumm.htm (accessed June 21, 2010).

Atkins, D. 2007. Creating and synthesizing evidence with decision makers in mind: Integrating evidence from clinical trials and other study designs. Medical Care 45(10 Suppl 2):S16-S22.

DeVoto, E., and B. S. Kramer. 2006. Evidence-based approach to oncology. In Oncology: An Evidence-Based Approach, edited by A. E. Chang, P. A. Ganz, D. F. Hayes, T. Kinsella, H. I. Pass, J. H. Schiller, R. M. Stone, and V. Strecher. New York: Springer.

Dougherty, D., and P. H. Conway. 2008. The "3T's" road map to transform US health care: The "how" of high-quality care. Journal of the American Medical Association 299(19):2319-2321.

Fisher, R. A. 1953. The Design of Experiments. London: Oliver and Boyd.

Glasziou, P., I. Chalmers, M. Rawlins, and P. McCulloch. 2007. When are randomised trials unnecessary? Picking signal from noise. British Medical Journal 334(7589):349-351.

Kravitz, R. L., N. Duan, and J. Braslow. 2004. Evidence-based medicine, heterogeneity of treatment effects, and the trouble with averages. Milbank Quarterly 82(4):661-687.

Liang, L. 2007 (March/April). The gap between evidence and practice. Health Affairs 26(2):w119-w121.

The Lipid Research Clinics Coronary Primary Prevention Trial Results. I. 1984. Reduction in incidence of coronary heart disease. Journal of the American Medical Association 251(3):351-364.

The Lipid Research Clinics Coronary Primary Prevention Trial Results. II. 1984. The relationship of reduction in incidence of coronary heart disease to cholesterol lowering. Journal of the American Medical Association 251(3):365-374.

Lohr, K. N. 2007. Emerging methods in comparative effectiveness and safety: Symposium overview and summary. Medical Care 45(10):55-58.

McGrath, P. D., D. E. Wennberg, J. D. Dickens, Jr., A. E. Siewers, F. L. Lucas, D. J. Malenka, M. A. Kellett, Jr., and T. J. Ryan, Jr. 2000. Relation between operator and hospital volume and outcomes following percutaneous coronary interventions in the era of the coronary stent. Journal of the American Medical Association 284(24):3139-3144.

Rush, A. J. 2008. Developing the evidence for evidence-based practice. Canadian Medical Association Journal 178:1313-1315.

Sackett, D. L., R. B. Haynes, G. H. Guyatt, and P. Tugwell. 2006. Dealing with the media. Journal of Clinical Epidemiology 59(9):907-913.

Schneeweiss, S. 2007. Developments in post-marketing comparative effectiveness research. Clinical Pharmacology and Therapeutics 82(2):143-156.

Tunis, S. R., D. B. Stryer, and C. M. Clancy. 2003. Practical clinical trials: Increasing the value of clinical research for decision making in clinical and health policy. Journal of the American Medical Association 290(12):1624-1632.

Woolf, S. H., and R. E. Johnson. 2005. The break-even point: When medical advances are less important than improving the fidelity with which they are delivered. Annals of Family Medicine 3(6):545-552.