1
Introduction

Abstract: This chapter presents the objectives and context for this report, defines the key concepts used throughout the report, and describes the approach of the Institute of Medicine (IOM) Committee on Reviewing Evidence to Identify Highly Effective Clinical Services to undertaking the study. The committee was charged with recommending an organizational framework for assessing evidence on clinical effectiveness so that consumers, clinicians, professional specialty societies, payers, purchasers, and other decision makers have independent, valid information for making health care decisions. The central premise underlying the report is that decisions about the care of individual patients should be based on the conscientious, explicit, and judicious use of the current best evidence on the effectiveness of clinical services. The conceptual context is the continuum beginning with research evidence, moving to systematic review of the overall body of evidence, and then to interpretation of the strength of the overall evidence for developing evidence-based clinical practice guidelines. The report provides a general blueprint for a national clinical effectiveness assessment program (“the Program”) with responsibility for three fundamental processes: (1) setting priorities for evidence assessment, (2) assessing evidence (systematic review), and (3) developing (or endorsing) standards for evidence-based clinical practice guidelines.

In the early 21st century, despite unprecedented advances in biomedical knowledge and the highest per capita health care expenditures in the world, the quality and outcomes of health care vary dramatically across the United States (Fisher and Wennberg, 2003; Fisher et al., 2003a,b; McGlynn et al., 2003). The economic burden of constantly inflating health








care spending is weakening American industry's competitive edge in the global economy, and this burden is increasingly being transferred to consumers as they are held more financially at risk for the health care services that they use (Gabel et al., 2002; U.S. Government Accountability Office, 2006a,b; Webster, 2006). Enabling and incentivizing "consumer choice" is viewed by some as a potential market strategy to rationalize what most agree is a health care system plagued by overuse, underuse, and misuse (Schwartz, 1984; Wennberg, 2004). Yet even the most sophisticated health care consumer struggles to learn which care is appropriate for his or her circumstance and to obtain it at the right time (Berwick, 2003; Rettig et al., 2007; Wennberg, 2002).

With these trends in view, the Robert Wood Johnson Foundation (RWJF) asked the Institute of Medicine (IOM) to address problems in how the nation uses scientific evidence to identify the most effective clinical services. The IOM appointed the Committee on Reviewing Evidence to Identify Highly Effective Clinical Services in June 2006 to respond to RWJF's request and prepare this report. The 16-member committee included experts in clinical research, health care coverage, drug development, health care benefits selection (large employers and other purchasers), health care delivery, clinical guideline development, economics, statistical methods and epidemiology, consumer and patient perspectives, child health, preventive medicine, behavioral health, and ethics. Brief biographies of the committee members appear in Appendix G.

STUDY SCOPE

The committee was charged with recommending a sustainable, replicable approach to identifying and evaluating the clinical services that have the highest potential effectiveness. The charge specified three principal tasks:

(1) To recommend an approach to identifying highly effective clinical services across the full spectrum of health care services—from prevention, diagnosis, treatment, and rehabilitation, to end-of-life care and palliation

(2) To recommend a process to evaluate and report on evidence on clinical effectiveness

(3) To recommend an organizational framework for using evidence reports to develop recommendations on appropriate clinical applications for specified populations

The committee's initial deliberations focused on articulating its charge in a strategic work plan for the 18-month study period. The committee chose to focus on developing an organizational framework for a national

clinical effectiveness assessment program, referred to throughout the report as "the Program." The mission of the Program would be to optimize the use of evidence to identify effective health care services. Three functions would be central to this mission: setting priorities for conducting evidence assessments, conducting evidence assessments (systematic review), and developing (or endorsing) standards for trusted clinical practice guidelines. The objective of this report is twofold: first, to examine the scientific rationale for these three functions and, second, to recommend an organizational context for implementing the three functions.

The committee reviewed, and ultimately excluded, a number of topics that might be related to the charge, including cost-effectiveness, knowledge transfer and adherence to guidelines, program costs and sources of program funding, placement of the program (e.g., within a governmental or private-sector framework), patient values and preferences, legal issues, and technical methods underlying evidence assessment or guideline development.

The committee explored the relevance of cost and cost-effectiveness analysis (CEA) to the committee's charge over the course of several meetings. The committee decided not to make recommendations about the role of costs in evaluating clinical services for two reasons. First, in the United States, the role of cost in government health policy and coverage decisions, clinical guidelines, and practice measures is unresolved, albeit often debated (Congressional Budget Office, 2007; Medicare Payment Advisory Commission, 2007; Wilensky, 2006). Although CEA has been used for decades to estimate the relative value of alternative health interventions, particularly with respect to new prescription medications, most policy makers do not use it explicitly. Many policy makers believe information on cost-effectiveness has the potential to guide more efficient use of health care resources. The committee noted, however, that—regardless of the cost side of the equation—reliable cost-effectiveness analysis depends on high-quality evidence on effectiveness. In fact, the Medicare Payment Advisory Commission has recommended that before policy makers routinely employ CEA for decision making, they must address concerns about CEA methods, including how to assess the effectiveness of health services (Medicare Payment Advisory Commission, 2005). By this reasoning, high-quality comparative effectiveness research is a prerequisite to performing valid cost-effectiveness analyses. Second, RWJF, the sponsor of this study, urged the committee to limit its work to the non-cost issues related to determining the effectiveness of health care services. Following the completion of the IOM study, RWJF intends to fund additional research into how cost affects access to effective health care services (Lumpkin, 2006).

The committee also discussed at length whether the report should delve into issues related to knowledge transfer and adherence to clinical guidelines. Clearly, identifying effective health services is just one step toward

ensuring an effective health care system. There is little value in identifying effective services or developing evidence-based practice guidelines if the knowledge gained does not lead to higher quality health care delivery and improved patient outcomes. However, setting standards for best practices (e.g., through clinical guidelines) differs fundamentally from successfully implementing them through quality improvement projects, which take place at a local level.

STUDY METHODS

The committee deliberated during 5 in-person meetings and 14 telephone conferences between July 2006 and October 2007. As previously noted, during its early discussions, the members of the committee agreed to first develop a strategic work plan for organizing the study. This soon led to a primary focus on three processes deemed integral to identifying effective health care services.

Given the dynamic nature of the issues involved in the study, the committee decided to supplement its planned review of the relevant literature with expert testimony on current issues. It thus convened two public workshops. The first workshop, held in November 2006, focused on evidence generation, evidence synthesis, and evidence assessment of new health care technologies and new applications of existing technologies. The committee heard testimony from various experts, including the developers of health care technologies, government regulators, research scientists, and technology assessors, on their experiences with the use of positron emission tomography scanning for the diagnosis of Alzheimer's disease; pharmacotherapy with bevacizumab (Avastin) and ranibizumab (Lucentis) for age-related macular degeneration; and two technologies related to the early identification and treatment of colorectal cancer: the fecal DNA screening test and an assay to test toxicity for the chemotherapy agent irinotecan.

The second workshop, held in January 2007, focused on organizations that set priorities for developing systematic reviews, clinical practice guidelines, and practice standards. The committee heard testimony from senior representatives of the Agency for Healthcare Research and Quality (AHRQ), the U.S. Preventive Services Task Force (USPSTF), Consumers Union's Best Buy Drugs, the American Heart Association (in collaboration with the American College of Cardiology), the National Quality Forum, the National Committee for Quality Assurance, the Joint Commission, the American Medical Association (AMA)-convened Physician Consortium for Performance Improvement, UnitedHealthcare, the Cochrane Collaboration, the Blue Cross and Blue Shield Association Technology Evaluation Center (an Evidence-based Practice Center), Johnson & Johnson, the ECRI Institute, Genentech, and the Dartmouth-Hitchcock Department of Orthopedic Surgery. In addition to oral testimony, the experts provided written responses to the committee's questions. Appendix B provides further details on the public workshops.

CONTEXT FOR THIS REPORT

Conceptual Framework

The committee based its work on the central premise that decisions about the care of individual patients should be based on "the conscientious, explicit, and judicious use of current best evidence" (Sackett et al., 1996). This means that individual clinical expertise should be integrated with the best information from scientifically based, systematic research and should be applied in light of the patient's unique values and circumstances (Straus et al., 2005). Centering on the patient is integral to improving the quality of health care (IOM, 2001) and is also imperative if consumers are to take an active role in making informed health care decisions based on known risks and benefits. The committee also recognizes that health care resources are finite. Thus, setting priorities for the systematic assessment of the scientific evidence is essential.

What Is Evidence?

In the everyday sense, "evidence" is considered a collection of facts that ground one's belief that something is true (Dictionary.com, 2007). In searching for evidence that a health care service is highly effective, the notion of what constitutes evidence is more complex. It also depends on one's perspective. In a systematic review of the different views on the nature of evidence, Lomas and colleagues (2005) observed that scientists view evidence as knowledge that is explicit (codified and propositional), systematic (with transparent and explicit methods used to codify the evidence), and replicable. However, outside the research community, decision makers, such as patients, clinicians, health plan managers, and employers, see evidence as being more contextual. For the decision maker, scientific evidence demonstrates what works under ideal circumstances, but it has relevance only when it is adapted to a particular set of circumstances. Someone must interpret the evidence for it to be used to guide clinical decision making.

Who Is a Health Care Decision Maker?

The era of the physician as sole health care decision maker is long past. In today's world, health care decisions are made by multiple persons, individually or in collaboration, in multiple contexts for multiple purposes. Decision makers are likely to be the consumer choosing among health plans, patients or the patients' caregivers making treatment choices, payers or employers making health care coverage and reimbursement decisions, professional medical societies developing practice guidelines or clinical recommendations, regulatory agencies assessing new drugs or devices, and public programs developing population-based health interventions. Every decision maker needs credible, unbiased, and understandable evidence on the effectiveness of health care services.

Conceptual Context for the Study

The committee defined the conceptual context for this study as the continuum that begins with research evidence, moves to a scientific, systematic review of the overall body of evidence, and then proceeds to the interpretation of the strength of the overall evidence for the development of trusted clinical practice guidelines (Figure 1-1). The systematic review is an essential element of scientific inquiry into what is known and not known about what works in health care (Glasziou and Haynes, 2005; Helfand, 2005; Mulrow and Lohr, 2001; Steinberg and Luce, 2005). The strength of the evidence depends on the quality of the individual studies that make up the body of evidence, the combined number of participants and events observed in the relevant studies, the consistency of the findings of the relevant studies, and the magnitude of the observed effects (Higgins and Green, 2006; Khan et al., 2001; West et al., 2002).

What Is an Effective Clinical Service?

The terms "effectiveness" and "clinical effectiveness" refer to the extent to which a specific intervention, procedure, regimen, or service does what it is intended to do when it is used under real-world circumstances (Cochrane Collaboration, 2005; Last, 2001). Recently, numerous proposals have called for a large expansion in the generation of comparative effectiveness information (BCBSA, 2007a; Congressional Budget Office, 2007; The Health Industry Forum, 2006; IOM, 2007; Medicare Payment Advisory Commission, 2007; Wilensky, 2006). These proposals call for systems to compare the impacts of different options for caring for a medical condition (e.g., prostate cancer) for a defined set of patients (e.g., men at high risk of prostate cancer recurrence). The comparison may be between similar treatments, such as competing prescription medications, or between very different treatment approaches, such as surgery or radiation therapy. Or, the comparison may be between using a specific intervention and its nonuse (sometimes called "watchful waiting"). This report uses the terms

FIGURE 1-1 Continuum from research studies to systematic review to development of clinical guidelines and recommendations. [Flowchart: Research Studies (examples: randomized clinical trials; cohort studies; case control studies; cross-sectional studies; case series) → Systematic Review (identify and assess the quality of individual studies; critically appraise the body of evidence; develop qualitative or quantitative synthesis) → Clinical Guidelines and Recommendations.]
NOTE: The dashed line is the theoretical dividing line between the systematic review of the research literature and its application to clinical decision making, including the development of clinical guidelines and recommendations. Below the dashed line, decision makers and developers of clinical recommendations interpret the findings of systematic reviews to decide which patients, health care settings, or other circumstances they relate to.
SOURCE: Adapted from Systems to Rate the Strength of Scientific Evidence (West et al., 2002).

"effectiveness," "clinical effectiveness," and "comparative effectiveness" interchangeably. See Box 1-1 for other key terms that are referred to in the report.

BOX 1-1
Selected Terms Used in the Report

Experimental study—A study in which the investigators actively intervene to test a hypothesis. Controlled trials are experimental studies in which an experimental group receives the intervention of interest while a comparison group receives no intervention, a placebo, or the standard of care, and the outcomes are compared. In a randomized controlled trial, the participants are randomly allocated to the experimental group or the comparison group.

Observational or nonexperimental study—A study in which the investigators do not seek to intervene but simply observe the course of events. In cohort studies, groups with certain exposures or characteristics are monitored over time to observe an outcome of interest. In case-control studies, groups with and without an event or condition are examined to see whether a past exposure or event is more prevalent in one group than in the other. Cross-sectional studies determine the prevalence of a condition or an exposure at a specific time or time period. Case series describe a group of patients with a characteristic in common, for example, individuals undergoing a new type of surgery or the users of a new device.

Systematic review—A scientific investigation that focuses on a specific question and uses explicit, preplanned scientific methods to identify, select, assess, and summarize the findings of similar but separate studies. It may or may not include a quantitative synthesis of the results from separate studies (meta-analysis). In this report, the term "systematic review" is used to encompass reviews that incorporate meta-analyses as well as reviews that present the study results descriptively rather than inferentially.

Meta-analysis—The process of using statistical methods to combine quantitatively the results of similar studies in an attempt to allow inferences to be made from the sample of studies and applied to the population of interest.

Technology assessment—An assessment of the effectiveness of medical technologies that uses either single studies or systematic reviews.

SOURCES: Cochrane Collaboration (2005); Haynes et al. (2006); Last (2001); West et al. (2002).

Historical Context

This study occurs at a time when there is heightened interest in optimizing U.S. health care through the generation of new knowledge on the

effectiveness of health care services. As noted earlier, numerous stakeholders, policy makers, and government entities have proposed substantial new investment in comparative effectiveness research (America's Health Insurance Plans, 2007; BCBSA, 2007a; IOM, 2007; Medicare Payment Advisory Commission, 2007; Wilensky, 2006). These calls for the generation of evidence underscore the urgency of the concern that the nation's health care decision makers be able to discern which evidence is valid, for whom, and under what circumstances. Marked increases in the evidence base for health care decision making will inevitably bring a concomitant need for an increased capability for the synthesis and the interpretation of the evidence.

The recent efforts to expand comparative effectiveness research follow more than four decades of progress and setbacks in this area. Overall, there have been significant gains in the science of effectiveness research, from the adoption of randomized controlled trials in the 1960s to the introduction of technology assessment in the 1970s, the methodological advances of the 1980s, and the creation of the Cochrane Collaboration in the 1990s (Box 1-2). Along the way, various government entities and private organizations have been launched to perform or be responsible for clinical effectiveness research. Many of these initiatives have faltered because of inadequate funding or political conflicts with vested interests (Gray, 1992; Gray et al., 2003). This committee hopes that the nation now has the will to address the urgent need to bolster the U.S. health care system with a foundation built on research evidence and scientific methods.

ORIENTATION TO THE ORGANIZATION OF THE REPORT

This report provides a general blueprint for a national clinical effectiveness assessment program ("the Program"). The overall intent is to outline key Program functions and to recommend an overarching Program infrastructure. The following section describes the organization of the report and the objective of each chapter.

Chapter Objectives

This introductory chapter has described the objectives and context for this report, including the conceptual framework, key terminology, historical context, and methods used to perform the study. The subsequent chapters sequentially outline the building blocks of the Program, i.e., priority setting, assessing evidence (systematic review), and developing (or endorsing) standards for clinical practice guidelines. The final chapter explores how best to organize these three functions in an overarching Program with maximum potential to benefit patients and the health care system overall.

BOX 1-2
Selected Milestones in U.S. Efforts to Identify Effective Health Care Services

1930s
The U.S. Food and Drug Administration (FDA) is given authority to regulate the premarket review of new drugs for safety by the Federal Food, Drug, and Cosmetic Act (1938).

1960s
Technology assessment arises on the basis of the recognition that modern technology may have unintended, harmful consequences.
The Kefauver-Harris Drug Amendments expand the FDA's responsibilities to include evaluations of safety and effectiveness. Effectiveness must be proved by "substantial evidence" (1962).

1970s
Congress gives the FDA significant authority through the Medical Device Amendments to regulate the testing and marketing of medical devices to ensure their safety and efficacy (1976).
ECRI (now the ECRI Institute) publishes its first monthly publication dedicated to assessing medical technologies (1971).
Congress establishes the U.S. Office of Technology Assessment (OTA) (P.L. 92-484) to perform objective analyses of technologies, including health care services, to aid policy making (1972). (Congress eliminated funding for OTA in 1995.)
Wennberg and colleagues document wide variations in physician practices, making evident that the style of U.S. health care practice is likewise variable (1973).
Congress establishes the National Center for Health Care Technology (P.L. 95-623) in 1978 to conduct medical technology assessments related to Medicare coverage decisions. (The program was dissolved in 1981 after Congress cut its funding.)

1980s
RAND Corporation researchers document that large proportions of the procedures that physicians perform are inappropriate, as judged by evidence-based decision criteria.
The American College of Physicians initiates the Clinical Efficacy Assessment Project and begins publishing clinical guidelines (1981).
The Veterans Administration institutes a Technology Assessment Committee to make recommendations on priority technologies for assessment and appropriate methods for technology assessment (1984).
The Blue Cross and Blue Shield Association (BCBSA) establishes the Technology Evaluation Center to assess medical technologies through comprehensive reviews of clinical evidence (1985).

The Agency for Health Care Policy and Research (AHCPR) (now the Agency for Healthcare Research and Quality [AHRQ]) is created and given the responsibility for federal health services research by the Omnibus Budget Reconciliation Act of 1989 (P.L. 101-239). The agency's Center for Medical Effectiveness Research forms several Patient Outcome Research Teams to study the outcomes and costs of alternative treatments for specific clinical problems.
The Council of Medical Specialty Societies convenes a national meeting to promote guidelines and training programs for specialty societies and commissions the creation of a manual of evidence-based methods (1987).
Significant methodological advances enable the generation and use of evidence in medical decisions. These include decision trees, utility theory, Bayes theorem for analyzing diagnostic tests, mathematical models, cost-effectiveness analysis, clinical epidemiology, outcomes assessment, meta-analysis, and systematic review.
The U.S. Preventive Services Task Force (USPSTF) is convened in 1984 to evaluate research and issue guidelines for preventive interventions. It pioneers the use of comprehensive literature reviews and publishes the first Guide to Clinical Preventive Services in 1989.

1990s
AHCPR (now AHRQ) launches a program to create evidence-based guidelines (1990-1996).
The Cochrane Collaboration creates a network of organizations from 13 countries, including the United States, to promote evidence-based health care through the production of systematic reviews and clinical guidelines (1993).
Funding for AHCPR operations is seriously threatened in response to lobbying by a small group of orthopedic surgeons angered by a Patient Outcomes Research Team report on the treatment of back pain (1995-1996).
Congress eliminates funding for the Office of Technology Assessment (1995).
AHRQ establishes the Evidence-based Practice Centers (EPCs) program to produce reports on clinical evidence and technology assessments (1997).
AHRQ, the American Medical Association, and the American Association of Health Plans (now America's Health Insurance Plans) create the National Guideline Clearinghouse (1998).
Health plans, specialty societies, disease-based associations, and foundations create numerous programs that produce clinical guidelines.
The Centers for Medicare & Medicaid Services (CMS) establishes the Medicare Coverage Advisory Committee (now the Medicare Evidence Development and Coverage Advisory Committee) to provide objective assessments of the available evidence on the safety, efficacy, and clinical benefits of medical services or products for national coverage decisions (1998).

2000s
CMS introduces Coverage with Evidence Development to generate data on the utilization and impacts of services being considered for a national Medicare coverage decision. The overall objective is to improve the evidence base for providers' recommendations to Medicare beneficiaries (2005).
AHRQ creates the Effective Health Care Program, authorized by Section 1013 of the Medicare Prescription Drug, Improvement, and Modernization Act of 2003 (P.L. 108-173) (2005).
The Institute of Medicine establishes the Roundtable on Evidence-Based Medicine (2006).
BCBSA, America's Health Insurance Plans, the Medicare Payment Advisory Commission, and others propose substantial new investment in comparative effectiveness research.

NOTE: The USPSTF was modeled on the Canadian Task Force on the Periodic Health Examination, which the Canadian government created in 1976 to weigh the scientific evidence for and against using specific preventive services in asymptomatic populations (Canadian Task Force on Preventive Health Care, 2003).
SOURCES: Atkins et al. (2005); BCBSA (2007b); Canadian Task Force on Preventive Health Care (2003); CMS (2006); Congressional Research Service (2005); Eddy (2005); Gazelle et al. (2005); Gray et al. (2003); Helfand (2005); IOM (1985, 2006); Levin (2001); Steinberg and Luce (2005); USPSTF (2007).

Chapter 2, An Imperative for Change, documents the imperative for immediate action to change how the nation marshals clinical evidence and applies it to endorse the use of the most effective clinical interventions.

Chapter 3, Setting Priorities for Evidence Assessment, provides the committee's findings and recommendations on setting priorities for evidence assessment (systematic review) and describes key programmatic challenges in establishing a priority-setting process for the Program.

Chapter 4, Systematic Reviews: The Central Link Between Evidence and Clinical Decision Making, reviews how high-quality evidence assessment (systematic review) is integral to identifying effective clinical services and presents the committee's recommendations for ensuring high-quality evidence assessment. Key programmatic challenges are highlighted.

Chapter 5, Developing Trusted Clinical Practice Guidelines, presents the committee's findings and recommendations for developing (or endorsing) standards for trusted clinical practice guidelines. Key programmatic challenges are highlighted.

Chapter 6, Building a Foundation for Knowing What Works in Health Care, considers how the previous chapters' recommendations may be best implemented. It provides guiding principles, assesses three basic alternatives, and recommends a general organizational framework for the Program. Key programmatic challenges are highlighted.

REFERENCES

America's Health Insurance Plans. 2007. Setting a higher bar: We believe there is more the nation can do to improve quality and safety in health care. Washington, DC: America's Health Insurance Plans.
Atkins, D., K. Fink, and J. Slutsky. 2005. Better information for better health care: The Evidence-based Practice Center program and the Agency for Healthcare Research and Quality. Annals of Internal Medicine 142(12 Part 2):1035-1041.
BCBSA (Blue Cross and Blue Shield Association). 2007a. Blue Cross and Blue Shield Association proposes payer-funded institute to evaluate what medical treatments work best. http://www.bcbs.com/news/bcbsa/blue-cross-and-blue-shield-association-proposes-payer-funded-institute.html (accessed May 2007).
———. 2007b. What is the Technology Evaluation Center? http://www.bcbs.com/betterknowledge/tec/what-is-tec.html (accessed August 8, 2007).
Berwick, D. M. 2003. Escape fire: Designs for the future of health care. San Francisco, CA: Jossey-Bass.
Canadian Task Force on Preventive Health Care. 2003. CTFPHC history/methodology. http://www.ctfphc.org (accessed July 24, 2007).
CMS (Centers for Medicare & Medicaid Services). 2006. National coverage determinations with data collection as a condition of coverage: Coverage with evidence development. http://www.cms.hhs.gov/mcd/ncpc_view_document.asp?id=8 (accessed July 24, 2007).
Cochrane Collaboration. 2005. Glossary of terms in the Cochrane Collaboration. Version .. http://www.cochrane.org (accessed November 27, 2006).
Congressional Budget Office. 2007. Research on the comparative effectiveness of medical treatments: Options for an expanded federal role. Testimony by Director Peter R. Orszag before the House Ways and Means Subcommittee on Health. http://www.cbo.gov/ftpdocs/82xx/doc8209/Comparative_Testimony.pdf (accessed June 12, 2007).
Congressional Research Service. 2005. Technology assessment in Congress: History and legislative options. CRS report to Congress. Washington, DC: Library of Congress.
Dictionary.com. 2007. Results for "evidence." Dictionary.com Unabridged. http://dictionary.reference.com/browse/evidence (accessed October 2, 2007).
Eddy, D. M. 2005. Evidence-based medicine: A unified approach. Health Affairs 24(1):9-17.
Fisher, E. S., and J. E. Wennberg. 2003. Health care quality, geographic variations, and the challenge of supply-sensitive care. Perspectives in Biology and Medicine 46(1):69-79.
Fisher, E. S., D. E. Wennberg, T. A. Stukel, D. J. Gottlieb, F. L. Lucas, and E. L. Pinder. 2003a. The implications of regional variations in Medicare spending. Part 1: The content, quality, and accessibility of care. Annals of Internal Medicine 138(4):273.
———. 2003b. The implications of regional variations in Medicare spending. Part 2: Health outcomes and satisfaction with care. Annals of Internal Medicine 138(4):288.
Gabel, J. R., A. T. Lo Sasso, and T. Rice. 2002. Consumer-driven health plans: Are they more than talk now? Health Affairs w2.395.
Gazelle, G. S., P. M. McMahon, U. Siebert, and M. T. Beinfeld. 2005. Cost-effectiveness analysis in the assessment of diagnostic imaging technologies. Radiology 235(2):361-370.

Glasziou, P., and B. Haynes. 2005. The paths from research to improved health outcomes. ACP Journal Club 142(2):A8-A10.

Gray, B. H. 1992. The legislative battle over health services research. Health Affairs 11(4):38-66.

Gray, B. H., M. K. Gusmano, and S. Collins. 2003. AHCPR and the changing politics of health services research. Health Affairs w3.283.

Haynes, R. B., D. L. Sackett, G. H. Guyatt, and P. Tugwell. 2006. Clinical epidemiology: How to do clinical practice research. 3rd ed. Philadelphia, PA: Lippincott Williams & Wilkins.

The Health Industry Forum. 2006. Comparative effectiveness forum: Key themes. Washington, DC: The Health Industry Forum.

Helfand, M. 2005. Using evidence reports: Progress and challenges in evidence-based decision making. Health Affairs 24(1):123-127.

Higgins, J. T., and S. Green. 2006. Cochrane handbook for systematic reviews of interventions ..6 [updated September 2006]. The Cochrane Library, Issue 4, 2006. Chichester, UK: John Wiley & Sons, Ltd.

IOM (Institute of Medicine). 1985. Assessing medical technologies. Washington, DC: National Academy Press.

———. 2001. Crossing the quality chasm: A new health system for the 21st century. Washington, DC: National Academy Press.

———. 2006. Safe medical devices for children. Edited by M. J. Field and H. Tilson. Washington, DC: The National Academies Press.

———. 2007. Learning what works best: The nation's need for evidence on comparative effectiveness in health care. http://www.iom.edu/ebm-effectiveness (accessed April 2007).

Khan, K. S., G. ter Riet, J. Popay, J. Nixon, and J. Kleijnen. 2001. Stage II conducting the review: Phase 5 study quality assessment. In CRD Report Number . Edited by K. S. Khan, G. ter Riet, H. Glanville, A. J. Sowden, and J. Kleijnen. York, UK: NHS Centre for Reviews and Dissemination, University of York.

Last, J. M. 2001. A dictionary of epidemiology. New York: Oxford University Press.

Levin, A. 2001. The Cochrane Collaboration. Annals of Internal Medicine 135(4):309-312.

Lomas, J., T. Culyer, C. McCutcheon, L. McAuley, and L. Law. 2005. Conceptualizing and combining evidence for health system guidance. Ottawa, Ontario: Canadian Health Services Research Foundation.

Lumpkin, J. R. 2006. Presentation to the HECS Committee Meeting, July 2006. Washington, DC.

McGlynn, E. A., S. M. Asch, J. Adams, J. Keesey, J. Hicks, A. DeCristofaro, and E. A. Kerr. 2003. The quality of health care delivered to adults in the United States. The New England Journal of Medicine 348(26):2635-2645.

Medicare Payment Advisory Commission. 2005. Chapter 8: Using clinical and cost effectiveness in Medicare. In Report to the Congress: Issues in a modernized Medicare. http://www.medpac.gov/documents/June05_Entire_report.pdf (accessed June 2007).

———. 2007. Chapter 2: Producing comparative effectiveness information. In Report to the Congress: Promoting greater efficiency in Medicare. http://www.medpac.gov/documents/Jun07_EntireReport.pdf (accessed June 2007).

Mulrow, C., and K. Lohr. 2001. Proof and policy from medical research evidence. Journal of Health Politics, Policy and Law 26(2):249-266.

Rettig, R., P. Jacobsen, C. Farquhar, and W. Aubrey. 2007. False hope: Bone marrow transplantation for breast cancer. New York: Oxford University Press.

Sackett, D. L., W. M. C. Rosenberg, J. A. M. Gray, R. B. Haynes, and W. S. Richardson. 1996. Evidence based medicine: What it is and what it isn't. BMJ 312(7023):71-72.

Schwartz, J. S. 1984. The role of professional medical societies in reducing practice variations. Health Affairs 3(2):90-101.

Steinberg, E. P., and B. R. Luce. 2005. Evidence based? Caveat emptor! Health Affairs 24(1):80-92.

Straus, S. E., P. Glasziou, W. S. Richardson, and R. B. Haynes. 2005. Evidence-based medicine: How to practice and teach EBM. 3rd ed. London, UK: Churchill Livingstone.

U.S. Government Accountability Office. 2006a. Consumer-directed health plans: Small but growing enrollment fueled by rising cost of health care coverage. GAO-06-. Washington, DC: Government Printing Office.

———. 2006b. Employee compensation: Employer spending on benefits has grown faster than wages, due largely to rising costs for health insurance and retirement benefits. GAO-06- . Washington, DC: Government Printing Office.

USPSTF (U.S. Preventive Services Task Force). 2007. About USPSTF. http://www.ahrq.gov/clinic/uspstfab.htm (accessed July 28, 2007).

Webster, P. 2006. US big businesses struggle to cope with health-care costs. Lancet 367(9505):101-102.

Wennberg, J. E. 2002. Unwarranted variations in healthcare delivery: Implications for academic medical centres. BMJ 325(7370):961-964.

———. 2004. Perspective: Practice variations and health care reform: Connecting the dots. Health Affairs var.140.

West, S., V. King, T. Carey, K. Lohr, N. McCoy, S. Sutton, and L. Lux. 2002. Systems to rate the strength of scientific evidence. Evidence Report/Technology Assessment No. 7. (Prepared by the Research Triangle Institute-University of North Carolina Evidence-based Practice Center under Contract No. 90-97-00.) AHRQ Publication No. 0-E06. Rockville, MD: Agency for Healthcare Research and Quality.

Wilensky, G. R. 2006. Developing a center for comparative effectiveness information. Health Affairs w572.
