
Summary

INTRODUCTION AND OVERVIEW

The demand for better evidence to guide healthcare decision making is increasing rapidly for a variety of reasons, including the adverse consequences of care administered without adequate evidence, emerging insights into the proportion of healthcare interventions that are unnecessary, recognition of the frequency of medical errors, heightened public awareness and concern about the very high costs of medical care, the burden on employers and employees, and the growing proportion of health costs paid out of pocket (Fisher and Wennberg, 2003; Fisher et al., 2003a, 2003b; IOM, 2000, 2001, 2008a; McGlynn et al., 2003; Wennberg et al., 2002). Although nearly $2.5 trillion was spent in 2009 on health and medical care in the United States, only a very small portion of that amount—perhaps less than one tenth of 1 percent—was devoted to learning what works best in health care, for whom, and under what circumstances.

To improve the effectiveness and value of the care delivered, the nation needs to build its capacity for ongoing study and monitoring of the relative effectiveness of clinical interventions and care processes through expanded trials and studies, systematic reviews, innovative research strategies, and clinical registries, as well as to improve its ability to apply what is learned from such study through the translation and provision of information and decision support. Several recent initiatives have proposed the development of an entity to support expanded study of the comparative effectiveness of interventions. To inform policy discussions on how to meet the demand for more comparative effectiveness research (CER) as a means of improving the effectiveness and value of health care, the Institute of Medicine (IOM) Roundtable on Value & Science-Driven Health Care convened a workshop on July 30–31, 2008, titled Learning What Works: Infrastructure Required for Comparative Effectiveness Research. Box S-1 describes the issues that motivated the meeting’s discussions: the substantial and growing interest in activities and approaches related to CER; the lack of coordination of key activities, such as the selection and design of studies, synthesis of existing evidence, methods innovation, and translation and dissemination of CER information; shortfalls and widening gaps in the workforce needed in all areas of CER; the opportunities presented by the recent calls for expanded resources for work on the comparative effectiveness of clinical interventions; the growing appreciation of the infrastructure needed to support this work; and the need for a trusted, common venue to identify and characterize the need categories, begin to estimate the shortfalls, consider approaches to addressing the shortfalls, and identify priority next steps.

BOX S-1
Issues Motivating the Discussion

  1. Substantial demand for greater insights into the comparative clinical effectiveness of clinical interventions and care processes to improve the effectiveness and value of health care.
  2. Expanded interest and activity in the work needed—e.g., comparative effectiveness research, systematic reviews, innovative research strategies, clinical registries, coverage with evidence development.
  3. Currently fragmented and largely uncoordinated selection of studies, study design and conduct, evidence synthesis, methods validation and improvement, and development and dissemination of guidelines.
  4. Expanding gap in workforce with skills to develop data sources and systems, design and conduct innovative studies, translate results, and guide application.
  5. Opportunities presented by the attention of recent initiatives and the increasing possibility of developing an entity and resources for expanded work on the comparative effectiveness of clinical interventions.
  6. Growing appreciation of the importance of assessing the infrastructure needed for this work—e.g., workforce needs, data linkage and improvement, new methodologies, research networks, technical assistance.
  7. Desirability of a trusted, common venue to identify and characterize the need categories, begin to estimate the shortfalls, consider approaches to addressing the shortfalls, and identify priority next steps.

The goal of the workshop was to clarify the elements and nature of the needed capacity, solicit quantitative and qualitative assessments of the needs, and characterize them in a fashion that will facilitate engagement of the issues by policy makers. Two assumptions guided the discussions but were not explored as part of the workshop: that resources would be available to expand work on the comparative effectiveness of medical interventions and that, given recent public discourse on the need for a stronger focus on the work, a designated entity would be developed with a formal charge to coordinate the expanded work.

The workshop gathered leading practitioners in health policy, technology assessment, health services research, health economics, information technology (IT), and health professions education and training to explore, through invited presentations, the current and future capacity needed to generate new knowledge and evidence about what works best, including skills and workforce, data linkage and improvement, study coordination and result dissemination, and research methods innovation. Participants explored, in both qualitative and quantitative terms, the nature of the work required, the IT and integrative vehicles required, the skills and training programs required, the priorities to be considered, the role of public–private partnerships, and the strategies for immediate attention while considering the long-term needs and opportunities. Through the course of the workshop, a number of common themes and implications emerged. These are indicated below, along with a number of possible follow-up actions identified for Roundtable consideration.

Since the meeting, three events have occurred with significant implications for the infrastructure necessary for comparative effectiveness research: (1) the American Recovery and Reinvestment Act of 2009 (ARRA) included $1.1 billion for the conduct of CER; (2) formal assessments by the IOM and the federal government have recommended priorities for such research; and (3) the Patient Protection and Affordable Care Act of 2010 (ACA) established an independent Patient-Centered Outcomes Research Institute (PCORI). See Appendixes C, D, and E for additional background. Accordingly, some of the information has been updated as appropriate to bring the text current with 2011 circumstances.

Comparative Effectiveness Research and the
Roundtable on Value & Science-Driven Health Care

The IOM’s Roundtable on Value & Science-Driven Health Care provides a trusted venue for key stakeholders to work cooperatively on innovative approaches to the generation and application of evidence that will drive improvements in the effectiveness and efficiency of medical care in the United States. Participants seek the development of a learning health system that enhances the availability and use of the best evidence for the collaborative healthcare choices of each consumer and healthcare professional, that drives the process of discovery as a natural outgrowth of patient care, and that ensures innovation, quality, safety, and value in health care. As leaders in their fields, Roundtable members work with their colleagues to identify issues not being adequately addressed, determine the nature of the barriers and possible solutions, and set priorities for action. They marshal the energy and resources of the sectors represented on the Roundtable to work for sustained public–private cooperation for change.

This work is focused on the three major dimensions of the challenge:

  1. accelerating progress toward the long-term vision of a learning health system, in which evidence is both applied and developed as a natural product of the care process,
  2. expanding the capacity to meet the acute, near-term need for evidence of comparative effectiveness to support medical care that is maximally effective and produces the greatest value, and
  3. improving public understanding of the nature of evidence, the dynamic character of evidence development, and the importance of insisting on medical care that reflects the best evidence.

Roundtable members have set a goal that by the year 2020, 90 percent of clinical decisions will be supported by accurate, timely, and up-to-date clinical information and will reflect the best available evidence. To achieve this goal, Roundtable members and their colleagues work to identify priorities for action on those key issues in health care where progress requires cooperative stakeholder engagement. Central to these efforts is the Learning Health System series of workshops and publications that collectively characterize the key elements of a healthcare system that is designed to generate and apply the best evidence about the healthcare choices of patients and providers as well as identify barriers to the development of such a system and opportunities for progress.

Each meeting is summarized in a publication available through the National Academies Press. Workshops in this series include the following:

  • The Learning Healthcare System (July 20–21, 2006)
  • Judging the Evidence: Standards for Determining Clinical Effectiveness (February 5, 2007)
  • Leadership Commitments to Improve Value in Healthcare: Toward Common Ground (July 23–24, 2007)
  • Redesigning the Clinical Effectiveness Research Paradigm: Innovation and Practice-Based Approaches (December 12–13, 2007)
  • Clinical Data as the Basic Staple of Health Learning: Creating and Protecting a Public Good (February 28–29, 2008)
  • Engineering a Learning Healthcare System: A Look to the Future (April 28–29, 2008)
  • Learning What Works: Infrastructure Required for Learning Which Care is Best (July 30–31, 2008)
  • Value in Health Care: Accounting for Cost, Quality, Safety, Outcomes, and Innovation (November 17–18, 2008)
  • The Healthcare Imperative: Lowering Costs and Improving Outcomes (May, July, September, December, 2009)
  • Digital Infrastructure for the Learning Health System: The Foundation for Continuous Improvement in Health and Health Care (July, September, October, 2010)

This publication summarizes the proceedings of the seventh workshop in the Learning Health System series, which focused on the infrastructure needs—e.g., methods, coordination capacities, data resources and linkages, workforce—for developing an expanded and efficient national capacity for CER. A synopsis of the key points from each of the sessions is included in this chapter, with more detailed information on session presentations and discussions found in the chapters that follow. Sections of the workshop summary not specifically attributed to an individual are based on the presentations, background papers, and discussions associated with the workshop, and reflect the views of this publication’s rapporteurs, not those of the IOM Roundtable on Value & Science-Driven Health Care.

Day 1 featured two keynote speakers who provided a vision for developing an infrastructure that can contribute to an evidence base of what works best for whom, as well as a sense of some of the potential returns from health care driven by evidence (Chapter 1), and presentations by speakers asked to characterize the nature of the work (Chapter 2), the information networks (Chapter 3), and the talent (Chapter 4) needed to carry out that vision. Day 2 featured discussions focused on identifying priority items for implementation to meet current shortfalls and opportunities to build upon existing public–private partnership efforts (Chapter 5). Chapter 6 summarizes the final session’s discussion, which outlined key elements of a roadmap for progress, suggested some “quick hits” for immediate implementation, and identified opportunities to build needed support; this chapter also highlights common themes from the meeting’s discussions and suggestions on opportunities for follow-up actions by the Roundtable. An overview of the topics discussed in specific manuscripts is provided in Table S-1.

TABLE S-1 Overview of the Specific Aspects of Comparative Effectiveness Research (CER) Infrastructure Addressed in This Publication’s Manuscripts

Aspects addressed across the manuscripts: CER research methods and settings; clinical data development and use; health information technology; evidence review and synthesis; coordination and dissemination; workforce education and training; and international CER efforts.

Chapter 1
  • The Nation’s Need for Evidence on Comparative Effectiveness in Health Care: Learning What Works Best (J. Michael McGinnis et al.)
  • A Vision for the Capacity to Learn What Care Works Best (Mark B. McClellan)
  • The Potential Returns from Evidence-Driven Health Care (Gail R. Wilensky)

Chapter 2
  • The Cost and Volume of Comparative Effectiveness Research (Erin Holve and Patricia Pitman)
  • Intervention Studies That Need to Be Conducted (Douglas B. Kamerow)
  • Clinical Data Sets That Need to Be Mined (Jesse A. Berlin and Paul E. Stang)
  • Knowledge Synthesis and Translation That Need to Be Applied (Richard A. Justman)
  • Methods That Need to Be Developed (Eugene H. Blackstone et al.)
  • Coordination and Technical Assistance That Need to Be Supported (Jean R. Slutsky)

Chapter 3
  • Electronic Health Records: Needs, Status, and Costs for U.S. Healthcare Delivery Organizations (Robert H. Miller)
  • Data and Information Hub Requirements (Carol C. Diamond)
  • Integrative Vehicles Required for Evidence Review and Dissemination (Lorne A. Becker)

Chapter 4
  • Comparative Effectiveness Workforce—Framework and Assessment (William R. Hersh et al.)
  • Toward an Integrated Enterprise—The Ontario, Canada, Case (Sean R. Tunis et al.)

Chapter 5
  • Information Technology Platform Requirements (Mark E. Frisse)
  • Data Resource Development and Analysis Improvement (T. Bruce Ferguson, Jr., and Ansar Hassan)
  • Practical Challenges and Infrastructure Priorities for Comparative Effectiveness Research (Daniel E. Ford)
  • Transforming Health Professions Education (Benjamin K. Chu)
  • Building the Training Capacity for a Health Research Workforce of the Future (Steven A. Wartman and Claire Pomeroy)
  • Public–Private Partnerships (Carmella A. Bocchino et al.)

Chapter 6
  • The Roadmap—Policies, Priorities, Strategies, and Sequencing (Stuart Guterman et al.)

A white paper, authored by staff in 2007 and titled Learning What Works Best: The Nation’s Need for Evidence on Comparative Effectiveness in Health Care, provided important context for the workshop discussions. The executive summary of that white paper and the full manuscript are included in Chapter 1 and Appendix A, respectively. Appendix B includes evidence summaries of research questions identified and other materials relevant to discussion in a paper in Chapter 2. Appendixes C and D present the recommendations of two groups for priority studies in CER: Initial National Priorities for Comparative Effectiveness Research, an Institute of Medicine report; and the Federal Coordinating Council for Comparative Effectiveness Research Report to the President and Congress. Appendix E contains the portions of the ACA relevant to the structure, funding, and charge of PCORI. The workshop agenda, biographical sketches of the workshop participants, and a list of workshop attendees can be found in Appendixes F, G, and H, respectively.

COMMON THEMES

Common themes that emerged from the 2 days of discussion are summarized in Box S-2 and elaborated in the text that follows:

BOX S-2
Infrastructure Required for Comparative Effectiveness Research: Common Themes

  • Care that is effective and efficient stems from the integrity of the infrastructure for learning.
  • Coordinating work and ensuring standards are key components of the evidence infrastructure.
  • Learning about effectiveness must continue beyond the transition from testing to practice.
  • Timely and dynamic evidence of clinical effectiveness requires bridging research and practice.
  • Current infrastructure planning must build to future needs and opportunities.
  • Keeping pace with technological innovation compels more than a head-to-head and time-to-time focus.
  • Real-time learning depends on health information technology investment.
  • Developing and applying tools that foster real-time data analysis is an important element.
  • A trained workforce is a vital link in the chain of evidence stewardship.
  • Approaches are needed that draw effectively on both public and private capacities.
  • Efficiency and effectiveness compel globalizing evidence and localizing decisions.

  • Care that is effective and efficient stems from the integrity of the infrastructure for learning. The number of medical diagnostics and treatments available to patients and caregivers is increasing, but the knowledge about their effectiveness—in particular, their comparative effectiveness—is not keeping pace. This is in part a function of the rate of change, but it is also a product of capacity that is both underdeveloped and, as several participants noted, substantially fragmented, which leads to gaps, inefficiencies, and inconsistencies in the work. The accelerating rate of change in the interventions requiring effectiveness assessment compels a substantial shoring up in the level of effort, the nature of the effort, and the coordination of the effort in order to produce the necessary insights into the right care for different people under different circumstances.
  • Coordinating work and ensuring standards are key components of the evidence infrastructure. Several presentations highlighted the point that substantial activity is currently under way in effectiveness research, including work on comparative effectiveness, but the activities are fragmented and often redundant in both structure and function. The fact that the application of evidence lags behind its production is in part a function of the disparate and “siloed” approaches between and within organizations seeking and developing information. The notions of infrastructure for evidence development therefore also include the capacity for greater coordination in the setting of study priorities; the development of systematic decisions for the conduct of CER, systematic reviews, and guideline development; and the need to ensure the consistent translation of developed information. The identification of priority conditions, evaluation, and evidence gaps is needed in order to target limited resources, especially for high-cost or high-volume procedures and interventions.

  • Learning about effectiveness must continue beyond the transition from testing to practice. “The learning process cannot stop when the label is approved,” one meeting participant pointed out. Premarket testing for the safety and effectiveness of various interventions cannot assess the results for all populations or the circumstances of use and differences in practice patterns, so gathering information as interventions are applied in practice settings should represent a key focus in designing the infrastructure to learn which care is best. Local coverage decisions and private insurer use of coverage with evidence development approaches were cited as opportunities to learn as a part of the care process.

  • Timely and dynamic evidence of clinical effectiveness requires bridging research and practice. Although the historical insulation of clinical research from the regular delivery of healthcare services evolved to facilitate data capture and control for confounding factors, it may not adequately inform the real-world setting of clinical practice. With the prospect of enhanced electronic data capture at the point of care on real-world patient populations, statistical approaches to improve analysis, and increasing demand to keep pace with technologic innovation, this divide increasingly limits the utility of research results. Efforts under way to better engage health delivery organizations, practitioners, patients, and the community in research prioritization, conduct, and results dissemination should be supported and expanded.
  • Current infrastructure planning must build to future needs and opportunities. Research is often driven more by the methods than the questions. In fact, both are important, and infrastructure planning must account for both the key emerging healthcare questions and the key emerging CER opportunities. Emerging questions include those related to the management of multiple co-occurring chronic diseases of increasing prevalence in an aging population, the improved insights into individual variation relevant to both treatments and diagnostics, and the impact of innovation in shortening the lifecycle of any particular intervention. Emerging tools include innovations in trial design, the development of new statistical approaches to data analysis, and the development of electronic medical and personal health records.
  • Keeping pace with technological innovation compels more than a head-to-head and time-to-time focus. Much of the current discussion about CER has emphasized the need for more clinical trials and more head-to-head studies. Although there are numerous examples of diagnostic and treatment interventions for which such studies are needed, the notion of a research process that essentially offers periodic and static determinations is inherently limited. Especially with the rapid pace of change in the nature of interventions and the difficulty, expense, and time required to develop studies—and the challenges of ensuring the generalizability of results in the face of limitations of the traditional approach to randomized controlled trials (RCTs)—a first-order priority for effectiveness research is the establishment of infrastructure for a more dynamic, real-time approach to learning. Leveraging new tools, such as health information technology (HIT), should allow for a more networked and distributed approach to information sharing and evidence creation.

  • Real-time learning depends on HIT investment. It was noted that collecting data is the most time-intensive part of trials and studies, and IT is critical to streamlining this work. Moreover, it is the key to accelerated learning from broader-based clinical experience. We heard that “[t]he type of learning needed cannot be paper based.” The increasing complexity of the factors involved in understanding the effectiveness of clinical options under different circumstances requires a blend of database access and computing power that can only be provided by broadly applied HIT. Although not in itself sufficient to produce the information required for better medical care management, it is a necessity for the continuous improvement expected of a learning health system. A policy framework for privacy and security will be necessary to build and maintain public trust that information will be protected as it is shared.
  • Developing and applying tools that foster real-time data analysis is an important element. The scope and scale of evidence needs suggest that innovation is needed across the range of research methods, from making clinical trials faster and less expensive to moving beyond randomized trials to better address practical circumstances, using registries, observational databases, and other emerging data resources. If full advantage is to be taken of HIT, statistical tools and analytic algorithms that can be embedded in databases to allow real-time insights will be important. Similarly, tools are needed that can draw findings from databases built on different vendor platforms, using semantic technology to integrate currently disparate medical data, and that can support the next generation of statistical tools for the analysis of clinical data, including models that allow insights to be generated by virtual studies. (A minimal sketch of this kind of distributed, aggregate-only query appears after this list.)
  • A trained workforce is a vital link in the chain of evidence stewardship. As in many other domains, progress in CER will depend on the capacity to develop and maintain the broad and diversely skilled workforce needed. That factor was often mentioned as a determining element for progress in developing the learning health system as well. Given the pace of change in the number and variety of clinical interventions as well as in the tools and approaches to assessing them, there is a need to ensure that these developing opportunities are matched by the skills of the workforce. This includes training and education in the methodologies of research design, the translation of research, guideline development, and the maintenance and mining of clinical records. But it also includes giving attention to reorienting the education of frontline caregivers around their emerging responsibilities for access, interpretation, and discussion with patients of a dynamic evidence base, as well as helping to ensure the availability and integrity of the clinical data that shape conclusions on evidence.

  • Approaches are needed that draw effectively on both public and private capacities. Several times in the course of the meeting it was pointed out that although the total investment in CER in the United States is substantial, it is inefficient because of the absence of a vehicle for common priority setting and coordination of efforts and because the work on effectiveness done by private companies in product development and testing is usually not accessible to the broader community. In aggregate, private investment often far exceeds public investment in assessing a given intervention, but even when available, studies associated with an enterprise with a commercial stake may be viewed suspiciously. Several models are in development to establish public–private collaborative efforts to improve the efficiency and effectiveness of the work.
  • Efficiency and effectiveness compel globalizing evidence and localizing decisions. Two presentations featured international work, including the Cochrane Collaboration on evidence synthesis, and efforts in Ontario, Canada, to develop and apply insights about the comparative effectiveness of clinical interventions. Reference was made throughout the meeting to work going on elsewhere in the world and, in particular, to work at the National Institute for Health and Clinical Excellence in the United Kingdom. This brought clearly into play the need to ensure that, where possible, common work to assess an intervention’s clinical effectiveness—or collective work to assess the body of evidence—be collaborative and well coordinated across boundaries, while also being mindful that different cultural and policy environments may lead to different decisions at the local level.
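As referenced in the tools item above, the following is a minimal, hypothetical sketch (in Python; all schemas, codes, and counts are invented for illustration, not drawn from any actual network) of the distributed, aggregate-only query pattern such a networked infrastructure might use: each data holder maps its records to a shared vocabulary and returns only aggregate counts, so evidence can be pooled across vendor platforms without centralizing patient data.

from dataclasses import dataclass

@dataclass
class SiteResult:
    site: str
    treated: int   # patients on the index treatment at this site
    events: int    # outcome events among those treated patients

def local_query(records, treatment_code, outcome_code):
    """Runs inside each institution; only aggregate counts leave the site."""
    treated = [r for r in records if treatment_code in r["codes"]]
    events = [r for r in treated if outcome_code in r["codes"]]
    return len(treated), len(events)

# Hypothetical per-site records, already mapped to a shared code vocabulary.
site_records = {
    "site_a": [{"codes": {"rx:ace_inhibitor", "dx:stroke"}},
               {"codes": {"rx:ace_inhibitor"}}],
    "site_b": [{"codes": {"rx:ace_inhibitor"}},
               {"codes": {"rx:beta_blocker", "dx:stroke"}}],
}

results = []
for site, records in site_records.items():
    treated, events = local_query(records, "rx:ace_inhibitor", "dx:stroke")
    results.append(SiteResult(site, treated, events))

pooled_treated = sum(r.treated for r in results)
pooled_events = sum(r.events for r in results)
print(f"Pooled event rate among treated patients: {pooled_events}/{pooled_treated}")
# Prints 1/3; only two integers per site ever cross institutional boundaries.

The property this sketch illustrates is that patient-level records never leave the local system, which is one way the themes of distributed information sharing and public trust in data protection can be reconciled.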

Key Factors and Needs

Workshop speakers described a number of implications of the current state of play for the development of an infrastructure for CER (Box S-3). These included the following:

BOX S-3
Key Factors and Needs for Expanded Comparative Effectiveness Research Capacity

  • Several elements are involved in infrastructure development:
    o putting in place the physical capacity, i.e., the hardware;
    o developing the analytic tools and methods;
    o training the workforce needed;
    o establishing processes for efficient and effective operation; and
    o shaping the strategy for attention and phasing.
  • Strategies and priorities for infrastructure application include the following:
    o conduct of systematic reviews,
    o conduct of primary research,
    o clinical registry resources,
    o introduction of health information technology throughout practice,
    o fostering public and private collaboration, and
    o keeping focus on the utility and impact of a networked world.

  • Several elements are involved in infrastructure development. Developing the infrastructure for CER has at least five dimensions: (1) putting in place the physical capacity, i.e., the hardware; (2) developing the analytic tools and methods; (3) training the workforce needed; (4) establishing processes for efficient and effective operation; and (5) shaping the strategy for attention and phasing. Presentations at the meeting described and discussed in qualitative terms the needs and challenges in each of these dimensions and offered “opening bid” quantitative estimates on the needs for the IT infrastructure, as well as for investments in human capital. Refinements of these first approximations will be needed, as will additional clarity on the analytic tools, processes, and strategies for a stronger infrastructure for research into effective health care.

  • Strategies and priorities for infrastructure application. The elements noted above represent the functional dimensions of the infrastructure needed for effectiveness research. There are phasing considerations as well, driven in part by the ability and need to take actions even without additional resources and in part by the time required to set in motion the necessary activities. Suggestions for key strategies and priorities for progress included the following:

o  Conduct of systematic reviews. There is an immediate need to improve the conduct, coordination, and consistency of systematic reviews—a point that, in effect, echoed the recommendations of the 2008 IOM report Knowing What Works in Health Care: A Roadmap for the Nation.


o  Conduct of primary research. Similarly, the approach to primary research on effectiveness needs a more systematic means of determining priorities, better tools and more streamlined designs, and a deeper workforce bench to do the work.

o  Clinical registry resources. In making the transition to a pattern of real-time, continuous learning in health care—in effect, creating a beta approach to clinical data systems that generate learning—the technologies for clinical registries, and the field of registry development, maintenance, and improvement, will need to be strengthened.

o  Introduction of HIT throughout practice. In the area of IT development, the issues include installing appropriate hardware in virtually every clinical setting; incorporating into operating software design elements that are pegged to research activities and embedded analytic tools; incorporating design elements used in decision assistance; and training the required workforce to work with this technology.

o  Fostering public and private collaboration. The longer-term development needed to sustain the growth and improvement of the infrastructure will include the design of approaches that foster meaningful public and private collaboration in support of the research activities.

o  Keeping focus on the utility and impact of a networked world. Also important to guide strategy development in the long term are approaches designed to take advantage of the resources emerging in our increasingly networked world—the opportunities for which hints are provided by recent developments, such as the PatientsLikeMe Web site, the HMO Research Network, the registries of the Society of Thoracic Surgeons, and even information made available by such resources as Google and Wikipedia.

CONTEXT, PRESENTATION, AND DISCUSSION SUMMARIES

Background for workshop discussions was provided by an IOM staff-authored background brief that illustrates the case for expanded CER, provides an overview of current CER activities and needs, and briefly discusses relevant issues not under consideration at the workshop (e.g., financing and structure of a new entity to coordinate CER work). Workshop presentations focused on key infrastructure needs and identified components of and existing capacity for CER, provided qualitative and quantitative assessments of what is needed to meet the demand, and suggested options for strengthening and building upon existing infrastructure. Much of the discussion focused on the prioritization of these needs and how to develop the beginnings of a roadmap of specific immediate steps and priority actions needed to move from where we are to where we need to be. The background brief, workshop presentations, and meeting discussions are summarized below—with expanded discussion included in the main body of the text.

The Need and Potential Returns for Comparative Effectiveness Research

Enhancing the capacity for CER is not an end in itself but is rather a means to begin guiding the development of a healthcare system in which care is evidence driven and focused on providing care of value to individual patients. The staff-authored issue brief, provided as background for meeting discussions, and two presentations offered an important starting point for workshop discussions by summarizing current CER capacity and outlining a vision for—and suggesting the potential returns of—an evidence- and value-driven healthcare system.

The Nation’s Need for Comparative Effectiveness Research1

The nation’s capacity has fallen far short of its need for producing reliable and practical information about the care that works best. Medical-care decision making is now strained, at both the level of the individual patient and the level of the population as a whole, by the growing number of diagnostic and therapeutic options for which there is insufficient evidence to make a clear choice (IOM, 2008a). As reviewed in the background paper provided to workshop participants, these developments have fundamental implications for health prospects, and capturing and using these options effectively and efficiently will require a proportionate commitment to understanding their advantages and appropriate applications. The problem is one of both capacity investment and resource allocation. If only 1 percent of the nation’s healthcare bill were devoted to understanding the effectiveness of the care purchased, the total investment would be more than $20 billion annually. In contrast, even accounting for the support from all private and public sources, the aggregate national commitment to assessing clinical interventions is still likely well under 1 percent.2

_______________

1 At the request of the Roundtable’s sustainable capacity working group, Roundtable staff developed an Issue Overview on national capacity for CER (IOM, 2007). This paper was provided as part of the meeting briefing materials to inform workshop discussion and is summarized briefly here (S-16–S-19). The complete white paper can be found in Appendix A of this publication.

2 The American Recovery and Reinvestment Act of 2009 provided $1.1 billion of funds to the National Institutes of Health, Agency for Healthcare Research and Quality, and the Secretary of Health and Human Services for activities related to CER.
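To make these magnitudes concrete, a minimal back-of-the-envelope calculation (in Python), using the approximate figures cited in this summary rather than precise estimates:

# Figures are the approximations cited in this summary.
national_health_spending = 2.5e12  # 2009 U.S. health and medical care spending, dollars
current_share = 0.001              # "less than one tenth of 1 percent" (an upper bound)
benchmark_share = 0.01             # the 1 percent benchmark discussed in the white paper

print(f"Current evidence investment (upper bound): ${national_health_spending * current_share / 1e9:.1f} billion")
print(f"At a 1 percent commitment: ${national_health_spending * benchmark_share / 1e9:.1f} billion")
# Prints roughly $2.5 billion versus $25.0 billion, consistent with the
# text's "more than $20 billion annually" at a 1 percent commitment.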


Activities currently under way to assess the effectiveness of healthcare interventions are broad but underresourced and fall far short of the need (IOM, 2007). In addition to the contributions of industry through phase 3 and 4 trials, several government agencies support CER, including the Agency for Healthcare Research and Quality (AHRQ), which has a specific mandate and a small appropriation for CER. The total of all appropriations to all federal agencies—the National Institutes of Health (NIH), the Veterans Health Administration, the Department of Defense, the Centers for Medicare & Medicaid Services (CMS), the Food and Drug Administration (FDA), AHRQ, and the Centers for Disease Control and Prevention—for all health services research amounts to about $1.5 billion, only a modest portion of which is devoted to CER and which is far below the industry level (AcademyHealth, 2005). Additional work, also modest, is undertaken by certain of the larger healthcare delivery organizations. Evidence synthesis activity is supported by the insurance industry, professional societies, healthcare organizations, and government. AHRQ sponsors a network of 13 evidence-based practice centers that review literature and produce evidence reports, including comparative effectiveness reviews. Organizations interested in evidence reviews will often draw upon syntheses performed by several well-established technology assessment entities (IOM, 2008a).

The most pressing needs of clinicians and their patients center on the development of reliable studies on which to base their decisions. These needs have been characterized in various ways, and they can be grouped into the key areas indicated in Box S-4 (Buto and Juhn, 2006; Clancy, 2006; Health Industry Forum, 2006; Hopayian, 2001; Kupersmith et al., 2005; Rowe et al., 2006). The related key challenges are summarized in Table S-2.

To narrow the rapidly growing gap between the available evidence on clinical effectiveness and the evidence necessary for sound clinical decision making, various organizations and recent public articles have called for the creation of a new entity and a quantum increase—several billion dollars—for CER (IOM, 2008a; Kupersmith et al., 2005; Wilensky, 2005). The various approaches suggested for building the required capacity can be grouped into four categories according to the funding patterns for their support (Box S-5). Each of the approaches is based on an existing or recent model. Although presented as discrete models for discussion purposes, they are not mutually exclusive.

BOX S-4
Prominent Comparative Effectiveness Research Activities and Needs

  1. Studies of comparative effectiveness (“head to head”)
  2. Systematic reviews of comparative effectiveness
  3. Assessment of comparative value/cost effectiveness
  4. Coordinated priority setting and execution
  5. Improved study designs and research methods
  6. Better linkage of studies of efficacy, safety, and effectiveness
  7. Appropriate evidence standards consistently applied
  8. Consistent recommendations for clinical practice
  9. Guidance for coverage and funding
  10. Dissemination, application, and public communication

SOURCE: IOM, 2007.

TABLE S-2 Prominent Comparative Effectiveness Research Activities and Needs—Key Challenges

  Issue: Key Challenges
  Head-to-head studies: Scant resources; rapidly increasing need; comparison choice
  Systematic reviews: Few primary studies; inconsistent methods; uncoordinated
  Comparative value insights: Little agreement on metrics or role of costs; cost fluctuation
  Priority setting: Fragmentation; inefficiency; no mechanism for coordination
  Study designs and tools: Clinical trial time/cost/limits; large dataset mining methods
  Research life-cycle links: Efficacy–effectiveness disjuncture; postapproval surveillance
  Evidence standards: Standards not adapted to needs; inconsistency in application
  Practice guidance: Disparate approaches; conflicting recommendations
  Coverage guidance: Narrow evidence base; limited means for provisional coverage
  Application tools: Public misperceptions; incentive structures; decision support

SOURCE: IOM, 2007.

BOX S-5
Models for Enhancing Capacity

Incremental funding augmentations
  • Incremental model

Publicly funded entity
  • Executive branch agency model
  • Independent government commission model
  • Legislative branch office model

Privately funded entity
  • Operating foundation model
  • Investment tax credit cooperative model

Public–private funded entity
  • User fee public model
  • Federally funded research and development center public model
  • Independent cooperative model
  • Independent quasi-governmental authority model

SOURCE: IOM, 2007.

Other implementation considerations include those related to funding and program management. Suggestions for the funding mechanism range from a direct annual federal appropriation or a small set-aside from the Medicare Trust Fund to the structuring of proportionately matching contributions, including set-asides from Medicare fund expenditures, from private health insurance premiums, or from manufacturer research and development expenditures (Hopayian, 2001; Health Industry Forum, 2006; Kupersmith et al., 2005; Wilensky, 2005). There can be many variations on these themes, but the key concept is related less to the source of the funds invested than to the value of the return for the outcomes and efficiency of the nation’s health care.

Because of the challenges to increasing CER through a simple appropriation to an existing agency—the difficulty of marshaling an appropriation at a sufficient level, the agency’s lack of political independence, the limited ability to draw on other agencies—much of the recent discussion has focused on three of the independent models, often with blended public and private funding (Buto and Juhn, 2006; Kupersmith et al., 2005; Wilensky, 2005). As independent entities, each of these approaches assumes the establishment of a governing board composed of stakeholders and charged with priority setting, broad budget allocation, and fiduciary responsibility for execution of the program of activities. These approaches differ in the degree of insulation between the stakeholder priority setting and the conduct of the scientific studies as well as in the ways the studies would be managed, the involvement of existing agencies, and the reporting of results.

To this end, the ACA (2010) established the Patient-Centered Outcomes Research Institute (PCORI) as an independent nonprofit organization to assist in informing the health decisions of “patients, clinicians, purchasers, [and] policy-makers.” The ACA appropriated to the PCORI Trust Fund $10 million, $50 million, and $150 million for fiscal years 2010 through 2012, respectively. Additionally, $150 million plus $1 per Medicare Part A and Part B enrollee has been appropriated for fiscal year 2013, and $150 million plus $2 per Part A and Part B enrollee for each year from 2014 through 2019. As outlined in the Act, PCORI will set a national agenda for research priorities, fund entities that conduct priority research, improve clinical effectiveness research methods, and ensure transparency and broad dissemination of its findings. It will be overseen by a Governing Board composed of 19 members appointed by the head of the Government Accountability Office, as well as 2 ex officio representatives from the Agency for Healthcare Research and Quality and the National Institutes of Health. For more information on PCORI, see Appendix E.
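A minimal sketch of the appropriation schedule just described, expressed as a function (Python). It models only the amounts stated in this section, and the enrollee count in the example is a placeholder assumption, not an actual Medicare enrollment figure.

def pcori_trust_fund(fiscal_year, medicare_ab_enrollees=None):
    """Approximate PCORI Trust Fund appropriation in dollars, following
    only the ACA schedule summarized above (a sketch, not a complete
    accounting of Trust Fund revenues)."""
    fixed = {2010: 10e6, 2011: 50e6, 2012: 150e6}
    if fiscal_year in fixed:
        return fixed[fiscal_year]
    if fiscal_year == 2013:
        return 150e6 + 1 * medicare_ab_enrollees  # $1 per Part A/B enrollee
    if 2014 <= fiscal_year <= 2019:
        return 150e6 + 2 * medicare_ab_enrollees  # $2 per Part A/B enrollee
    raise ValueError("schedule covers fiscal years 2010 through 2019 only")

# 45 million enrollees is a placeholder assumption for illustration only.
print(f"FY2014: ${pcori_trust_fund(2014, 45e6) / 1e6:.0f} million")  # FY2014: $240 million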

A Vision for the Capacity to Learn Which Care Is Best

The growing support for CER represents an important first step in transforming health care. Mark B. McClellan, director of the Engelberg Center for Health Care Reform at the Brookings Institution, emphasized that as the infrastructure required to expand the nation’s capacity for CER is identified and prioritized, policy makers will need to consider how these elements can serve the longer-term goal of developing a learning health system. Efforts to improve the key infrastructure elements (data networks, methods, and workforce) should also consider how to best build upon current health system capacities. Key advances needed for these elements include supporting a virtual approach to linking databases through the development of needed standards and incentives; advancing innovative approaches to clinical trials to facilitate their conduct in real-world settings, as well as improved statistical and epidemiologic methods; and a focus on developing a broad, cross-disciplinary workforce with capabilities in biostatistics, epidemiology, decision analysis, health economics, health services research, and program evaluation. To take best advantage of the many efforts already under way, infrastructure is also needed to promote the sharing and learning from the diverse experiences of all stakeholders. Public–private partnerships are one possible approach to helping organizations share information and learn more quickly about what works best.

McClellan also suggested that for infrastructure development, form should follow function, and he identified four critical evidence gaps that need to be addressed by CER and a learning health system. First, to move beyond evaluating the average impact of a treatment in a population and toward targeted medicine, researchers need a better understanding of current care and how this might vary from patient to patient. Large epidemiologic datasets will be useful to develop the disease models or natural histories that provide such baselines for future evaluations. The development and use of these data resources have several implications for infrastructure, including the need to develop and implement complete standards for data collection, clinical trials, and electronic records.

Establishing the means to monitor the safety of medical therapies and products is another key evidence gap. Developing data networks and requisite methods of analysis will help to support the creation of a national, virtual infrastructure—as endorsed in the FDA Amendments Act of 2007—for monitoring product use, including adverse reactions. Such infrastructure may eventually serve as an important component of infrastructure for evidence generation—by supporting studies that compare the safety and effectiveness of treatments in different subgroups.

A third and related need is developing a reliable and relevant evidence base on the comparative effectiveness of treatment options to help physicians and patients make the best possible healthcare decisions. At present, conducting carefully randomized studies in real-world situations on practical treatment questions can be difficult as well as costly and time consuming. Moreover, by the time a large randomized trial is completed, the information may be outdated. The key challenge is to move beyond approaches that generate evidence about the overall average effect—in one population versus another—to the efficient development of information relevant to particular types of patients. In this respect, work is needed to determine limitations, methodological challenges, and needed improvements in data collection methods, as well as to develop agreement on the amount and type of evidence needed for decision-making purposes.
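To illustrate the point about moving beyond overall average effects, here is a small synthetic simulation (Python; the subgroups, baseline risk, and effect sizes are invented, not drawn from any study): a treatment that lowers risk in one subgroup and raises it in another can show an overall effect near zero, which is exactly the kind of heterogeneity that patient-level CER infrastructure is meant to surface.

import random

random.seed(0)

def simulate_patient(subgroup):
    """Hypothetical model: treatment lowers outcome risk by 0.10 in
    subgroup A and raises it by 0.10 in subgroup B (invented numbers)."""
    base_risk = 0.30
    effect = -0.10 if subgroup == "A" else 0.10
    treated = random.random() < 0.5
    risk = base_risk + (effect if treated else 0.0)
    outcome = random.random() < risk
    return subgroup, treated, outcome

patients = [simulate_patient(random.choice("AB")) for _ in range(100_000)]

def event_rate(group):
    return sum(outcome for _, _, outcome in group) / len(group)

for label, keep in [("overall", "AB"), ("subgroup A", "A"), ("subgroup B", "B")]:
    treated = [p for p in patients if p[0] in keep and p[1]]
    control = [p for p in patients if p[0] in keep and not p[1]]
    print(f"{label}: risk difference = {event_rate(treated) - event_rate(control):+.3f}")
# The overall risk difference is near zero even though the true effect
# is -0.10 in subgroup A and +0.10 in subgroup B.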

Finally, infrastructure development must aim to address the evidence gap related to understanding effective treatment strategies and policies. Current capacity cannot contend with the impending exponential growth in the complexity of medical decision making. Subtle differences in the management of chronic diseases and practice patterns affecting chronic disease management often result in broad variations in care delivered. In the absence of information on these technologies and strategies, care provided can be only marginally beneficial or even harmful. McClellan also noted that some suggest it should be possible to reduce costs in Medicare by 20 percent without consequences for patient outcomes if these variations are addressed. However, these practices often cannot be assessed in a simple RCT, but require study in real-world settings. Such studies could be very useful in closing the gap between what we know works and what is delivered in medical practice, as well as in understanding underlying issues related to the coordination and integration of care that constitute a major problem in the current healthcare system. Infrastructure needed to address this challenge should involve broad collaboration among stakeholders, as well as developing consensus on the best methodological approaches. A particular focus is needed on methods development for studies in real-world settings, including observational approaches.

The Potential Returns from Evidence-Driven Health Care

Interest in the potential of comparative clinical effectiveness information to help Americans learn to “spend smarter” is part of a drive toward the increased availability and use of evidence to guide medical practice. According to Gail R. Wilensky, senior fellow at Project HOPE (Health Opportunities for People Everywhere), the potential for better information to improve health outcomes and also to help moderate spending increases is enormous. To capture some of the potential savings that CER could bring, consideration needs to be given to which approaches, data resources, and analyses will be most useful in producing the information needed. Data will be available from many sources, and it will be important to find ways to identify and address issues related to study design limitations and biases, as well as to reduce the costs and time required for the collection of new prospective data for comparative effectiveness trials and studies. Meeting the anticipated need for expanded CER might begin with an initial investment of several hundred million dollars and then ramp up to $4–$6 billion a year. Several steps are critical to ensuring a return on this investment. First, a center should be established and charged with creating better information on comparative clinical effectiveness. Second, priority setting for comparative effectiveness analyses should focus on medical conditions in high-cost, high-volume areas as well as areas that are subject to substantial practice variation. It will also be important to consider issues of clinical relevance, disease burden, and the various subgroups that are particularly affected. Third, it is essential to recognize that all stakeholders need to be a part of the decision-making process, as a slowdown in spending rates will have broad effects across the healthcare system.

Wilensky suggested that an immediate infrastructure priority should be to develop a common national data source that captures what is known about the likely clinical outcomes of various treatments for relevant population subgroups. Moreover, although cost- and clinical-effectiveness information should be considered in reimbursement and even clinical decisions, Wilensky underscored the importance—for technical and political reasons—of keeping these analyses and the places where they are conducted separate. Finally, as important as it is to have information available on clinical and cost effectiveness, the potential gains will not be achieved unless the reimbursement system is changed to make better use of information to reward health care of value rather than just paying more for doing more. In this respect, she emphasized the importance of expecting and allowing for different players to use this information differently, the need to make CER information available to help guide reimbursement policies that reward good clinical outcomes, and the importance of taking advantage of the diversity in healthcare delivery by tying local coverage decisions to evidence development. She added that it will be particularly important to provide legislative authority to CMS to introduce what is known about clinical and cost effectiveness into its reimbursement decisions.

The Work Required

CER aims to determine what works best for whom under which circumstances to inform the healthcare decisions of patients, physicians, and policy makers. Developing information that is relevant and understandable to these end users requires the efficient conduct of a range of activities, including primary research, synthesis, and translation. Efficient use of resources depends on prioritization of research questions, coordination of disparate but related efforts, and advancing research methods and data resources. The presentations described in Chapter 2 discussed the nature of the specific types of the work required, clarified what is known about the current capacity, illustrated the opportunities to improve care presented by expanded investment, and offered initial suggestions about policies or activities for progress.

Cost and Volume of Current Comparative Effectiveness Research

As policy discussions about CER gather momentum, there continues to be a lack of awareness of the current scale of CER. Erin Holve, senior manager at AcademyHealth, presented the results of a survey of the costs and volume of current CER. The study was based on stakeholder interviews with research funders and researchers as well as a review of databases that track health research. In this work, CER was defined as an examination of the effectiveness and the risks and benefits of two or more healthcare services or treatments used to treat a specific disease (e.g., pharmaceuticals, medical devices, medical procedures, or other treatment modalities) in appropriate real-world settings. Results from both sources suggest there are currently more than 600 studies under way in the area. Three primary research categories were considered: head-to-head trials (including pragmatic trials), observational studies (including registry studies, prospective cohort studies, and database studies), and syntheses and modeling (including systematic reviews). Data from the interviews demonstrate a wide range in the cost of conducting CER, both across and within study designs, although costs clustered by study design (Table S-3). Interviewees emphasized a need for multidisciplinary training to expose researchers to a variety of methods in trials, observational studies, and syntheses. Finally, the study revealed an absence of clear definitions regarding the scope of comparative effectiveness as well as a limited understanding of the appropriate methods for conducting CER. These definitional and organizational issues may be an impediment to coordinating future CER activities.

TABLE S-3 Costs of Various Comparative Effectiveness Studies

Type of Study                                  Cost

Head to head
   Randomized controlled trials: smaller       $2.5m–$5m
   Randomized controlled trials: larger        $15m–$20m

Observational
   Registry studies                            $2m–$4m
   Large prospective cohort studies            $800k–$6m
   Small retrospective database studies        $100k–$250k

Synthesis
   Simulation/modeling studies                 $100k–$200k
   Systematic reviews                          $200k–$350k



Intervention Studies That Need to Be Conducted

To illustrate the multifaceted need for comparative effectiveness information on procedures, devices, pharmaceuticals, diagnostics, and health systems and to highlight some of the high-priority studies that might be needed, Douglas B. Kamerow, chief scientist at RTI International, presented results from a stakeholder work group convened to pilot a process for identifying candidate comparative effectiveness studies. The process resulted in the adoption of selection criteria, including the importance of the conditions being treated or prevented, the current availability of effective treatments or preventive interventions, lack of definitive knowledge about the relative effectiveness of available treatments, research plausibility, and study type heterogeneity. These criteria were used to select among candidates nominated in the following comparative effectiveness categories: diagnostic studies, drug–drug comparisons, health services/systems studies, preventive interventions, surgical studies, and treatment studies across modalities. Potential comparative effectiveness studies identified using this process are listed in Table S-4, and brief evidence reviews developed by Kamerow for each item are provided in Appendix B. Importantly, this exercise illuminated a number of challenges facing those who seek to prioritize the work needed. Key lessons learned included the existence of many opportunities for research that can make a difference in costs and outcomes; the importance of establishing an explicit and transparent process and of considering stakeholder perspectives and input throughout nomination, review, and selection; and the need for carefully defined research questions and the use of the full range of study designs and methods. Many of these issues and perspectives were similarly reflected in the report on CER priorities mandated under ARRA and conducted by the IOM, a summary of which is found in Appendix C.

Clinical Data Sets That Need to Be Mined

Reflecting on the opportunities to better use routinely captured electronic data to generate insights on benefit and safety, Jesse A. Berlin, vice president of pharmacoepidemiology at Johnson & Johnson, noted that the appropriate use of existing data, as well as creative new uses of existing data collection mechanisms, will be crucial to improving healthcare decision making. Reviewing the major strengths and limitations of currently available administrative data in addressing questions of comparative effectiveness and safety, he noted that most existing databases currently used in observational studies of pharmaceuticals were created for purposes other than research: allowing payers to track expenditures in order to manage costs (insurance claims databases, the largest and most prevalent type), to manage patients (electronic medical records), and to manage purchasing and capacity (facility-specific databases). These databases can provide relatively inexpensive and rapid access to clinical data and analyses of pharmaceutical exposures within a quantifiable source population, and these data reflect healthcare decisions and outcomes as they were actually made (versus the artificial constructs of an RCT). By virtue of reflecting actual, real-world clinical practice, databases offer an external validity that is greater than that of RCTs. However, databases can be limited by missing data (particularly on such confounders as smoking, height, weight, race, over-the-counter drugs, and alcohol consumption), incomplete follow-up (due to turnover in healthcare plans), and data quality issues (e.g., miscoding of diagnoses). Furthermore, the data reflect the healthcare delivery system and, as such, may fail to account for or capture healthcare visits outside of providers using a specific electronic health record (EHR) system, due to benefit design or other issues. The use of these data for research is also complicated by the difficulty of capturing the benefits of treatment (e.g., improvement in blood pressure, quality of life) compared to the relatively high capture of treatment risks, which are often also codified as “clinical diagnoses.” Regardless, little is understood about the magnitude of these benefits or risks, or about the perceptions of patients and healthcare providers regarding them.

TABLE S-4 The Comparative Effectiveness Studies Inventory Project Identified 16 Candidate Topics for Comparative Effectiveness Research

Study Topic | Study Type | Age Group | Condition
Treatment of attention deficit hyperactivity disorder in children: drugs, behavioral interventions, no prescription | Comparative effectiveness treatment studies across modalities | Children | Mental disorders
Treatment of acute thrombotic/embolic stroke: clot removal, reperfusion drugs | Comparative effectiveness treatment studies across modalities | Adults | Heart and vascular diseases
Treatment of chronic atrial fibrillation: drugs, catheter ablation, surgery | Comparative effectiveness treatment studies across modalities | Adults | Heart and vascular diseases
Treatment of chronic low back pain | Comparative effectiveness treatment studies across modalities | Adults | Neurological diseases
Gamma knife surgery for intracranial lesions vs. surgery and/or whole brain radiation | Comparative effectiveness treatment studies across modalities | Adults | Neurological diseases
Treatment of localized prostate cancer: watchful waiting, surgery, radiation, cryotherapy | Comparative effectiveness treatment studies across modalities | Adults | Cancer
Diagnosis and prognosis of breast cancer using genetic tests: human epidermal growth factor receptor 2 and others | Diagnostic studies | Adults | Cancer
Over-the-counter drug treatment of upper respiratory tract infections in children | Drug–drug and drug–placebo treatment studies | Children | Respiratory disorders
Drug treatment of depression in primary care | Drug–drug and drug–placebo treatment studies | Adults | Mental disorders
Drug treatment of epilepsy in children | Drug–drug and drug–placebo treatment studies | Children | Neurological diseases
Use of erythropoiesis-stimulating agents in the treatment of hematologic cancers | Drug–drug and drug–placebo treatment studies | Adults | Cancer
Outcomes of percutaneous coronary interventions in hospitals with and without onsite surgical backup | Health services/systems studies | Adults | Heart and vascular diseases
Screening hospital inpatients for methicillin-resistant Staphylococcus aureus infection | Preventive interventions | Adults | Infectious diseases
Tobacco cessation: nicotine replacement agents, oral medications, combinations | Preventive interventions | Adults | Preventive interventions
Prevention and treatment of pressure ulcers | Surgical studies | Adults | Dermatological diseases
Inguinal hernia repair: open vs. minimally invasive | Surgical studies | Adults | Surgical disorders

NOTE: Study topics are categorized by study type, age group, and condition.
SOURCE: Kamerow, 2009.


Designing targeted studies within databases is a promising direction for research. For example, special data collection screens might pop up on an in-office computer when a patient matching a specific set of criteria is under consideration. This idea could be extended to include the conduct of large, simple, randomized studies within the databases. The question is whether additional aspects of data collection can be tailored (as in primary data collection efforts) within the context of an existing data collection system. These ideas are not novel, but neither have they yet been widely adopted. Other ideas discussed included capturing data at the physician–patient interface and providing data back to clinicians. Feedback to providers could encourage them to enroll in clinical trials and could permit healthcare professionals to better understand their own treatment decisions and the impacts of those decisions.
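The pop-up screen idea can be pictured as a small hook inside the EHR. The following is a minimal sketch under invented criteria, field names, and a fabricated patient record; it is an illustration of the concept, not a description of any existing system.

```python
# A minimal sketch of the point-of-care trigger idea: when an opened chart
# matches pre-specified criteria, the system prompts for supplemental research
# data. The criteria, field names, and patient record are all invented.
CRITERIA = {"diagnosis": "type 2 diabetes", "min_age": 50}

def matches(patient: dict) -> bool:
    """Eligibility check a registry or trial might pre-specify."""
    return (patient["diagnosis"] == CRITERIA["diagnosis"]
            and patient["age"] >= CRITERIA["min_age"])

def on_chart_open(patient: dict):
    """Hook an EHR might call when a clinician opens a patient's chart."""
    if matches(patient):
        # A real system would render a data-capture screen here; this sketch
        # just names the supplemental fields a CER study might request.
        return {"prompt": "Patient eligible for CER registry",
                "collect": ["smoking_status", "bmi", "current_medications"]}
    return None

print(on_chart_open({"diagnosis": "type 2 diabetes", "age": 63}))
```

The same hook could, in principle, offer trial enrollment rather than supplemental data capture, which is how the idea extends to large, simple, randomized studies run within existing databases.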

Knowledge Synthesis and Translation That Need to Be Applied

Currently the United States lacks a single reliable source that people can use to evaluate the safety and effectiveness of medical treatments. In January 2008, the IOM published Knowing What Works in Health Care: A Roadmap for the Nation (IOM, 2008b), which explored the national capacity to use scientific evidence to identify highly effective clinical services. Richard A. Justman, national medical director at UnitedHealthcare, served on the report committee, and he discussed key findings related to the state of knowledge synthesis and translation and opportunities to scale up national capacity to meet the anticipated demand. Although multiple avenues are available today to help consumers, physicians, and others decide which treatments are safe and effective, all have significant limitations. Justman highlighted how the absence of a national comparative effectiveness architecture has led to an evidence base that is replete with gaps, duplications, and contradictions (Table S-5). For example, some systematic reviews of clinical evidence and some clinical practice guidelines lack scientific rigor, relying on a consensus of expert opinion rather than clinical evidence as the basis for their conclusions. The body of clinical evidence for some health services that consumers and physicians are interested in may be weak or totally lacking. Bias and conflict of interest on the part of experts further complicate the understanding of the conclusions that can be drawn from available clinical evidence. Finally, the multiple clinical guidelines available for the treatment of the same condition frequently make differing recommendations. The 2008 report urged Congress to direct the Secretary of Health and Human Services to designate a single entity to ensure the production of credible, unbiased information about what is known and what is not known about clinical effectiveness. It also recommended the appointment of a Clinical Effectiveness Advisory Board to oversee the program and the appointment of a Priority Setting Advisory Committee to identify high-priority topics. The report further prescribed the development of evidence-based methodological standards for systematic reviews, including a common language for characterizing the strength of evidence. It recommended that bias be minimized by balancing competing interests, publishing conflict-of-interest disclosures, and prohibiting voting by members with material conflicts.

Methods That Need to Be Developed

To contend with the growing scope and scale of clinical evidence needs, work is needed both to improve and refine current research methods and to develop innovative new approaches that ensure the development of efficient, timely, and relevant information for healthcare decision making. Although randomized clinical trials and meta-analyses of these trials provide the best evidence for use in comparative studies of the effectiveness of clinical interventions and care, it can be impossible, difficult, unethical, or prohibitively expensive to randomize all relevant factors. Eugene H. Blackstone, head of clinical investigations at the Cleveland Clinic Heart and Vascular Institute, presented five foundational methodologies that will accelerate movement from the current siloed approach to evidence generation to an approach that enables predictive and personalized medicine (Figure S-1).

Re-engineering clinical trials will require addressing six main pitfalls associated with traditional RCTs: (1) complexity, (2) data capture, (3) generalizability, (4) equipoise, (5) appropriateness, and (6) funding. For the many instances in which RCTs are not feasible or not sufficient to meet information needs, methods for conducting approximate randomized trials using balancing strategies and real-world observational clinical data have become increasingly common, although a number of their important features remain to be explored and understood. Moreover, many trials focus on early outcomes, or else they introduce medicines or devices that bring additional complications. Thus methods for longitudinal surveillance and long-term outcomes analysis—e.g., birth-to-death, patient-centric health records populated with discrete values for variables—are also needed.

Among the most promising methodologies emerging is the semantic representation of data. The elements of this methodology include the storage of patient data as nodes and arcs (graphs) that can seamlessly link all types of data across current medical silos, from genomics to outcomes; a rich ontology of medicine that permits natural-language queries of complex patient data without the need to understand the underlying data structure; the assembly of this ontology and the assertions that make it useful; and intelligent agents to assist in the discovery of unsuspected relationships and unintended adverse outcomes. An immediate focus should be on supporting a worldwide effort to develop the comprehensive formal ontology of medicine needed to implement semantic databases and knowledge bases.
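To make the nodes-and-arcs idea concrete, the following is a minimal sketch of storing and querying patient data as a graph, using the open-source rdflib Python library. The namespace, ontology terms, and patient records are hypothetical illustrations, not elements of any formal ontology discussed at the workshop.

```python
# A minimal sketch of "patient data as nodes and arcs," using the open-source
# rdflib library. The namespace, ontology terms (hasDiagnosis, prescribed,
# hadOutcome), and patient records are hypothetical illustrations.
from rdflib import Graph, Namespace, URIRef

EX = Namespace("http://example.org/onto/")
g = Graph()

p1 = URIRef("http://example.org/patient/p001")
g.add((p1, EX.hasDiagnosis, EX.AtrialFibrillation))
g.add((p1, EX.prescribed, EX.Warfarin))
g.add((p1, EX.hadOutcome, EX.Stroke))

p2 = URIRef("http://example.org/patient/p002")
g.add((p2, EX.hasDiagnosis, EX.AtrialFibrillation))
g.add((p2, EX.prescribed, EX.CatheterAblation))

# The kind of question an intelligent agent might pose against the graph:
# which treatments were given for atrial fibrillation, with what outcomes?
q = """
PREFIX ex: <http://example.org/onto/>
SELECT ?patient ?treatment ?outcome WHERE {
    ?patient ex:hasDiagnosis ex:AtrialFibrillation ;
             ex:prescribed   ?treatment .
    OPTIONAL { ?patient ex:hadOutcome ?outcome . }
}
"""
for row in g.query(q):
    print(row.patient, row.treatment, row.outcome)
```

Because the query is expressed against ontology terms rather than table schemas, the same question could in principle be asked across genomic, clinical, and outcomes data linked into the same graph, which is the point of the "seamless linking across silos" described above.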

Methods are then needed to transform the results of trials, approximate trials, and automated discovery from static publications into dynamic, patient-specific (“personalized”) medical decision support tools (simulation).

TABLE S-5 Duplicated Efforts by Selected Health Plans and Technology Assessment Firms, 2006

Health plans: UnitedHealthcare, Kaiser Permanente, Aetna, WellPoint. Technology assessment firms: Hayes, Inc.; Technology Evaluation Center; ECRI Institute.

Screening
•   Genetic testing to predict breast cancer recurrence
•   Proteomic testing for ovarian cancer
•   Virtual (computed tomography [CT]) colonoscopy

Disease management
•   Ambulatory blood pressure monitoring
•   Intermittent intravenous insulin therapy

Diagnosis
•   CT angiography for suspected coronary artery disease
•   Microvolt T-wave alternans
•   Wireless capsule endoscopy

Treatment
•   Brachytherapy for various cancers: breast, ovarian, and prostate cancer and brain tumors
•   Dysfunctional uterine bleeding and fibroids
•   Fallopian tube occlusion for permanent contraception
•   Growth factor–mediated lumbar spinal fusion
•   Intracoronary brachytherapy
•   Minimally invasive surgery for low back pain
•   Photodynamic therapy for Barrett’s esophagus and esophageal cancer
•   Vagus nerve stimulation for intractable depression

Devices
•   Artificial total disc replacement for lumbar and cervical spine
•   Cochlear implants
•   Total artificial heart
•   Total hip resurfacing arthroplasty

NOTE: Not all reviews are comprehensive assessments. Agency for Healthcare Research and Quality evidence-based practice centers have reviewed 5 of the 20 topics listed (ambulatory blood pressure monitoring, CT angiography, proteomic testing for ovarian cancer, spinal fusion for low back pain, and uterine fibroids). The Kaiser Permanente entries represent all Kaiser regions.
SOURCE: IOM, 2008a.

FIGURE S-1 Five foundational methodologies that need to be developed.

Although such methodologies are widely used for institutional assessment and ranking, they need to become clinically rich, easily used, real-time tools. The discrepancy between the “goodness of fit” of models to data and the minimization of prediction error needs to be addressed to enable accurate decision support. Algorithmic techniques, such as random forests–based methods, are intriguing and promise to fill the gap in accurately predicting a patient’s response to treatment, but they are still in their infancy. Moving beyond randomized trials to the real world, exploiting emerging semantic technology to integrate currently disparate medical data, using the knowledge generated for strategic decision support, and developing the next generation of statistical tools for the analysis of clinical data are but a few concrete examples of the methods that need to be developed to provide an infrastructure for learning which is the right treatment, for the right patient, at the right time.
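As a concrete illustration of the random forests idea, the sketch below fits a forest to synthetic patient data and then asks for the predicted response of one hypothetical patient under each of two treatment options. All covariates, data, and effect sizes are fabricated purely for illustration.

```python
# A minimal sketch of the random forests idea: predicting an individual
# patient's response under each treatment option. The covariates, synthetic
# data, and effect sizes are fabricated purely for illustration.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n = 1000

# Hypothetical covariates: age, ejection fraction, diabetes flag, treatment arm.
X = np.column_stack([
    rng.normal(65, 10, n),   # age (years)
    rng.normal(55, 8, n),    # ejection fraction (%)
    rng.integers(0, 2, n),   # diabetes (0/1)
    rng.integers(0, 2, n),   # treatment (0 = drug A, 1 = drug B)
])
# Synthetic outcome whose probability depends on covariates and treatment.
logit = -0.04 * (X[:, 0] - 65) + 0.05 * (X[:, 1] - 55) - 0.5 * X[:, 2] + 0.8 * X[:, 3]
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-logit))).astype(int)

forest = RandomForestClassifier(n_estimators=500, oob_score=True, random_state=0)
forest.fit(X, y)
print("out-of-bag accuracy:", round(forest.oob_score_, 3))

# Patient-specific prediction: the same 72-year-old under each treatment.
patient = np.array([[72, 48, 1, 0], [72, 48, 1, 1]])
print("P(response | drug A, drug B):", forest.predict_proba(patient)[:, 1])
```

The out-of-bag score here stands in for the fit-versus-prediction-error distinction noted above: the forest is evaluated on observations each tree never saw during training.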

Blackstone also offered some preliminary estimates of the resources needed to spur this methodology development.

Coordination and Technical Assistance That Need to Be Supported

AHRQ has played a leading role in promoting the evidence development, synthesis, and translation activities integral to CER. Jean R. Slutsky, director of AHRQ’s Center for Outcomes and Evidence, noted that CER as a concept and a reality has grown rapidly over the past several years. Most of this work has built upon an appreciation for the role of technology assessment, comparative study designs, and HIT in the gathering and dissemination of best evidence for clinical practice; however, the development of the infrastructure needed for an expanded national capacity for CER has received less attention. To plan for such capacity rationally and strategically, one must understand the range of organizations currently conducting CER activities as well as which functions might benefit from either centralized or local approaches. Slutsky described lessons learned from AHRQ’s work to support CER, outlined some practical realities of the current state of play, and suggested some priority areas in need of attention if the nation is to better meet the information needs of its diverse healthcare system.

Slutsky noted that priority CER needs include improvements aimed at supporting and training researchers; providing technical assistance in research design, conduct, and implementation; and developing capacities to prioritize, coordinate, fund, and engage stakeholders in CER activities. Training in research design and translation is particularly important to ensure that designs and protocols efficiently and effectively answer research questions and that findings are not used inappropriately and do not have unintended consequences. Because CER affects many different sectors (e.g., patients, industry, health plans), the research must be well designed, conducted in a fair and transparent manner, and given adequate funding and support. Provisions in the ARRA and the ACA hold promise in this respect. In addition, CER-focused public–private partnerships building on the work of other federal agencies (e.g., NIH, CMS coverage with evidence development, and the Department of Veterans Affairs) are beginning to emerge to address some of these issues. As AHRQ discovered in developing its Effective Health Care Program under Section 1013 of the Medicare Modernization Act, involving stakeholders early, listening to them, and keeping them involved throughout the process are critically important.

The Information Networks Required

The scale of efficiencies gained through improvements to methods, coordination, and prioritization of CER will be limited by the available capacity to capture, access, and share relevant data and information. The design and development of robust information networks, along with efforts to foster collaboration around common work, are critical aspects of CER infrastructure. This capacity, too, has been addressed in recent legislation: the Health Information Technology for Economic and Clinical Health (HITECH) Act, enacted as part of ARRA in 2009, allocates approximately $19 billion to promote the adoption and “meaningful use” of electronic health records, largely through incentive payments. Along these lines, the presentations summarized in Chapter 3 discuss key current issues and the capacity needed for networks to support the generation, synthesis, and application of evidence, as well as opportunities to support learning from clinical practice.

Information Technology Requirements

Robust, advanced clinical information systems (CIS)—including EHRs—are increasingly viewed as essential support for an evidence-based and learning health system. To provide policy makers with order-of-magnitude estimates of the new or additional CIS spending needed to speed broad adoption in care delivery organizations throughout the nation, Robert H. Miller, professor of health economics at the University of California, San Francisco, described current EHR adoption, future EHR capital and operating expenditure requirements, and prospects for EHR adoption in the $648 billion hospital sector and the $447 billion physician and clinical services sector (spending levels as of 2006). EHR capabilities and estimated adoption levels in hospital inpatient systems are indicated in Table S-6. Miller estimated that roughly $90 billion in new money may be needed over 8 years for robust hospital EHRs. Although this represents a 1.7 percent increase in total hospital spending, EHR adoption will likely continue to increase. There is substantial momentum in this sector, as health systems and larger hospitals increasingly see CIS as a cost of doing business, although public hospitals and unaffiliated hospitals with low or negative margins will likely lag behind.

Miller used rough estimates of the number of office-based physicians to develop an order-of-magnitude estimate of $40–$50 billion in new money that may be needed over 8 years for robust physician EHRs. This average increase of roughly 1–1.25 percent in physician services expenditure is feasible for most practices, and evidence suggests that the return on such an investment could be substantial for physician practices. Larger physician practices are adopting EHRs relatively rapidly, especially compared with solo/small groups (i.e., 10 physicians or fewer) and community health centers, for whom the business case is not perceived as favorable.
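As a rough arithmetic check on these order-of-magnitude figures, using the 2006 sector spending levels quoted above (Miller's own percentages may rest on somewhat different assumptions, such as growth in the spending base over the 8 years):

\[
\frac{\$90\text{ billion}/8\text{ years}}{\$648\text{ billion/year}} \approx 1.7\%,
\qquad
\frac{\$45\text{ billion}/8\text{ years}}{\$447\text{ billion/year}} \approx 1.3\%.
\]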

To achieve full EHR adoption, all types of healthcare delivery organizations will need to increase their CIS investments; however, more will be needed to achieve effective EHR use for evidence-based medicine.

TABLE S-6 Hospital Electronic Health Record Capabilities and Adoption Estimates (percentage of hospitals at each stage, 2008)

Stage 7 (0.1%): Medical record fully electronic; healthcare organization able to contribute continuity of care document as by-product of electronic medical record; data warehousing/mining

Stage 6 (1.0%): Physician documentation (structured templates), full clinical decision support, full Radiology Picture Archiving and Communication System (PACS)

Stage 5 (1.3%): Closed-loop medication administration

Stage 4 (1.9%): Computerized physician order entry, clinical decision support (clinical protocols)

Stage 3 (32.9%): Clinical documentation (flow sheets), clinical decision support system (error checking), PACS available outside radiology

Stage 2 (33.2%): Clinical data repository, controlled medical vocabulary, clinical decision support system inference engine, may have document imaging

Stage 1 (12.5%): Ancillaries—lab, radiology, pharmacy

Stage 0 (17.1%): All three ancillaries not installed

SOURCE: HIMSS Analytics, Hospital IT Expenses and Budgets Related to Clinical Sophistication: Market Findings from HIMSS Analytics. Chicago, IL: Health Information Management Systems Society, 2008.

Overall, CIS adoption will likely improve quality, but improvements are needed in EHR software, government and payer financial incentives, public performance reporting, EHR support services, and health information exchange.3

_______________

3 The American Recovery and Reinvestment Act (ARRA) of 2009 provides a total of $19 billion to promote the adoption and use of HIT, particularly EHRs. The HIT-specific ARRA provisions provide $2 billion to the Office of the National Coordinator for HIT—charged with creating a strategic plan for a nationwide interoperable health information system—and allocate $17 billion for financial incentives, through Medicare and Medicaid reimbursements, to physicians and hospitals that become “meaningful users” of EHRs. The focus on “meaningful use” reflects a recognition of the need not only to adopt HIT but also to employ HIT capabilities that improve health care through the “exchange and use of health information to best inform clinical decisions at the point of care” (HHS ONC, http://healthit.hhs.gov/portal/server.pt?open=512&objID=1325&parentname=CommunityPage&parentid=1&mode=2&in_hi_userid=10741&cached=true [accessed September 10, 2009]).

Data and Information Hub Requirements

The near-term aim of developing a comprehensive and expanded approach to CER is inherently challenging because of the lack of a controlled environment for assessing therapeutic options, the heterogeneity of patient characteristics, and the distributed nature of both the requests for and the sources of information. An evolution in approaches to data and information hubs is needed to meet these challenges. Although large databases and clinical registries offer immediate opportunities for learning what works in health care, Carol C. Diamond, managing director of the healthcare program at the Markle Foundation, argued that the greatest promise of HIT lies in its ability to enable networked analysis—quick and efficient learning via a networked, distributed approach to information sharing and evidence creation. To maximize this potential, four key challenges must be addressed: (1) clearly defining the ultimate goal; (2) being open to resetting definitions and assumptions about health data and research approaches; (3) articulating new, broadly accepted working principles based on 21st-century information paradigms; and (4) developing an information policy framework that broadly addresses public hopes and concerns.

These challenges should be thought of as a jumping-off point for envisioning what is needed to move to a distributed approach to research—one characterized by connectivity, networks, and feedback loops. Rather than clinicians relying solely on large databases, centralized research centers, and analyses performed outside of healthcare delivery that can take months or years, Diamond presented a scenario in which clinicians access, in real time, current research and evidence syntheses as well as information provided via local networks on factors relevant to treating a particular patient (e.g., an individual physician’s patient outcomes versus those of peers, community outbreaks, sensitivity patterns). Distributed analytic tools move research closer to practice by allowing clinicians and patients to quickly answer practical questions and make better decisions.
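One way to picture the networked, distributed analysis Diamond describes is the sketch below: a query runs locally at each participating site, and only de-identified aggregates travel back to the coordinating hub. The site names, records, and treatment labels are hypothetical.

```python
# A minimal sketch of networked, distributed analysis: the query runs locally
# at each site, and only de-identified aggregates travel to the coordinating
# hub. Site names, records, and treatment labels are hypothetical.
from dataclasses import dataclass

@dataclass
class SiteResult:
    site: str
    events: int   # adverse events among exposed patients
    exposed: int  # patients exposed to the treatment of interest

def local_query(site_name: str, records: list, treatment: str) -> SiteResult:
    """Runs inside each site's firewall; raw records never leave."""
    exposed = [r for r in records if r["treatment"] == treatment]
    events = sum(1 for r in exposed if r["adverse_event"])
    return SiteResult(site_name, events, len(exposed))

def pooled_rate(results: list) -> float:
    """The hub sees only counts, never patient-level data."""
    events = sum(r.events for r in results)
    exposed = sum(r.exposed for r in results)
    return events / exposed if exposed else float("nan")

site_a = [{"treatment": "drugX", "adverse_event": True},
          {"treatment": "drugX", "adverse_event": False}]
site_b = [{"treatment": "drugX", "adverse_event": False},
          {"treatment": "drugY", "adverse_event": True}]

results = [local_query("site_a", site_a, "drugX"),
           local_query("site_b", site_b, "drugX")]
print("pooled adverse-event rate for drugX:", pooled_rate(results))
```

Keeping patient-level records behind each site's firewall is what allows such networks to address the public hopes and concerns named in Diamond's fourth challenge.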

This paradigm shift challenges assumptions about health data and research approaches. Diamond cited several examples to illustrate what might be achieved if research is conducted in networked environments, if information is provided when and where it is needed, and if clinicians, researchers, and patients connect the silos of clinical care and clinical research. Examples include childhood cancer networks that continuously use data to evaluate outcomes in order to improve protocols and treatments; an international and national flu surveillance network (Distribute); and the first real-world, open and nonblinded, patient-driven trial of lithium for patients with amyotrophic lateral sclerosis—research driven by the Web community PatientsLikeMe. As a starting point for developing the information hubs needed, Diamond noted several examples of established distributed research models, including the National Cancer Institute’s Shared Pathology Information Network and Cancer Biomedical Informatics Grid, as well as AHRQ’s Distributed Research Network. Bringing distributed research networks to the scale needed to address national needs for CER will require attention to motives, standards, methods, and rules.

Integrative Vehicles Required for Evidence Review and Dissemination

The essential functions of any system dedicated to developing a robust evidence base for medical practice are synthesizing information derived from relevant trials and studies with insights emerging from clinical practice and ensuring that this information is continually updated. As clinical information systems are increasingly deployed, and as research increasingly draws upon connected and distributed data and information networks, the demand for synthesis work—to ensure that studies are appropriately reviewed, vetted, and incorporated into the evolving evidence base—will also grow. Lorne A. Becker, co-chair of the Cochrane Collaboration steering group, provided an overview of evidence synthesis development, coordination, and application in the United States and internationally and described opportunities to expand the nation’s capacity to meet the anticipated demand.

Synthesis provides the necessary link between knowledge generation and its application to medical practice by identifying gaps, helping to set the research agenda, assessing the quality of individual studies, and collecting and appraising the data. Currently, evidence syntheses vary in the methods used, in their complexity, and in the reproducibility of their results. Becker discussed large, complex evidence syntheses that assess evidence over a broad domain as well as systematic reviews with a more narrowly targeted focus; he also discussed clinical practice guidelines.

Becker noted several advantages and disadvantages of these approaches with respect to time, cost, usefulness to decision makers, and robustness of the information produced. These trade-offs suggest the need for a program that targets and supports the conduct of complex syntheses, as well as broader efforts to build a diffuse network of skilled producers to develop focused reviews. Currently, reviews in the United States are developed in a decentralized fashion involving both public and private entities. On a population basis, the United States contributes far fewer reviews than other nations (Figure S-2), suggesting an opportunity for increased U.S. involvement in synthesis and dissemination activities as well as for gains through greater international coordination. Becker highlighted several notable international collaborations, including the Joanna Briggs Institute, the Campbell Collaboration, a consortium of health technology assessors (e.g., the European Network for Health Technology Assessment), and the Guidelines International Network.

FIGURE S-2 Systematic review production by country and per capita (2004).
SOURCE: Adapted from Moher et al., 2007.

Becker suggested that international efforts focus on increasing the efficiency of evidence syntheses through improved coordination, perhaps with the formation of a registry of systematic reviews, and on fostering rapid progress in the development of methods and standards for the conduct, reporting, and assessment of evidence syntheses and guidelines. Such work will benefit all healthcare decision makers and will accelerate the development of key infrastructure elements needed for expanded CER capacity in the United States.

The Talent Required

A comprehensive program to meet current information needs requires more than an expansion of existing programs and infrastructure. New structures, systems, and elements of HIT will need to be integrated into current practice to help improve both research and care delivery. Furthermore, the capacity for prioritization, coordination, and conduct of CER will be increasingly interdisciplinary and will involve many professions and healthcare sectors, thus requiring greater attention to human capital development. Chapter 4 provides two perspectives on what the discipline of CER and its associated workforce might look like.

Comparative Effectiveness Workforce—Framework and Assessment

The purpose-oriented nature of CER and its focus on informing practice and policy decisions demand attention to how the required workforce is trained and developed. William R. Hersh, professor and chair of the department of medical informatics and clinical epidemiology at Oregon Health & Science University, and colleagues developed a framework for the CER workforce needed, made some preliminary estimates of the size of those needs, and proposed an agenda for further research. The heterogeneity of CER activities makes planning for its workforce needs challenging. Investigators and staff in CER come from many backgrounds—including clinical medicine, clinical epidemiology, biomedical informatics, biostatistics, and health policy—and work in a variety of settings, including academic units, university centers, contract research organizations, government, and industry. To simplify discussion, five key domains of CER activity were identified: (1) clinical epidemiology, (2) biomedical informatics, (3) health services research, (4) clinical guideline development and implementation, and (5) communications (Figure S-3). Several areas of significant overlap between domains were noted, including methods development and identifying information needs, suggesting the need for attention to interdisciplinary training and education. For each domain, the skill sets and competencies, training and education approaches, and issues related to expanding current capacity were reviewed.

The authors concluded that quantifying the needs of the overall workforce requires a better sense of the scale of expansion for the various CER activities (e.g., systematic reviews, trials, studies, guideline development, data mining). Although the education and training of the current workforce can be applied to many aspects of CER, the training and education needs for an expanded CER capacity will likely be substantial.

FIGURE S-3 Key activity domains for comparative effectiveness research. Workforce development will be critical to support the many primary functions within each of these domains as well as to foster the cross-domain interactions and activities identified (e.g., methods development, identifying information needs).

Toward an Integrated Enterprise—The Ontario, Canada, Case

An example of different workforce elements engaged in a system focused on developing and applying clinical effectiveness information was provided in an overview of an Ontario program aimed at ensuring that promising but unproven technologies are made available to patients for whom the risk–benefit ratio is favorable. Sean R. Tunis, director of the Center for Medical Technology Policy, and colleagues described a system that allows purchasers (primarily hospitals) to request that a health technology be reviewed by the Ontario Health Technology Assessment Committee (OHTAC). If, after completion of this assessment, there is insufficient information to recommend a coverage decision, OHTAC may request a “conditionally funded field evaluation.” These studies, led by government-funded, independent research entities, are designed to fill the evidence gaps that stand between policy makers and a coverage decision. Funding this research costs approximately $8–$10 million per year (about $500,000 per field evaluation) and requires the support of Ministry of Health staff as well as hospital and university investigators with a wide variety of expertise (epidemiologists, biostatisticians, physicians, health economists, health policy experts, health services researchers, etc.). Tunis noted that this direct and explicit link between decision makers and the CER entities facilitates timely research, keeps the work focused on information that satisfies the needs of decision makers, and allows for evidence-based technology diffusion.
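Taken together, the quoted figures imply a program throughput of roughly (a simple division, not a figure reported by Tunis):

\[
\frac{\$8\text{–}\$10\text{ million per year}}{\approx \$500{,}000\text{ per evaluation}} \approx 16\text{–}20\ \text{field evaluations per year}.
\]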

Although the U.S. healthcare system differs greatly from Ontario’s in size, complexity, and design, Ontario’s experience illustrates that a significant amount of research can be accomplished with very little spending if existing infrastructure is used wisely. Important lessons relevant to U.S. efforts to build CER capacity include establishing a stable funding source that, unlike the standard grant review cycle, can fund rapidly evolving research needs; ensuring a timely process focused on the needs of decision makers to increase the likelihood that the data generated by a study will be relevant; designing programs that are independent of government and industry, and ensuring a transparent decision-making process; increasing the efficiency and effectiveness of research by creating partnerships between universities and those conducting field evaluations; and leveraging Medicare’s influence on private payers to more broadly support coverage with evidence development. In addition, this analysis presents an opportunity to consider potential collaborative activities, such as international patient registries or standards of study design, which may help to globalize CER in the future.

Priorities for Implementation

Workshop discussion shaped an ambitious vision for the potential gains—in the efficiency, effectiveness, and value of health care delivered in the United States—that might be realized with a greater focus on and expanded capacity for clinical CER. Realizing this vision requires strategic and implementation priorities for the near and long term. Chapter 5 features a discussion of IT platforms, data resource and analysis improvements, the clinical research infrastructure, health professions training, and training capacity needs. Each paper summarizes suggestions presented and discussed at the workshop regarding staging and policies, key needs in the relevant area, and possible approaches to ramping up capacity. Also discussed are opportunities to take advantage of existing manufacturer, insurer, and public capacities through public–private partnership mechanisms.

Information Technology Platform Requirements

A reformulation of approaches to information systems will be essential to better capture and apply the clinical data needed to advance care and the evidentiary basis for practice. Mark E. Frisse, professor of biomedical informatics at Vanderbilt University, noted the need for a shift in informatics approaches to representing data, developing comprehensive systems, and integrating these systems into decision-making settings. IT infrastructure must be systemic, sustainable, relevant, and incremental in design and benefit; it must also be based both on principles that engender public trust and on explicit policies for use. Drawing upon his experiences with a Tennessee regional health information exchange (the Memphis Exchange), Frisse offered suggestions on IT platform requirements and approaches that will help realize significant societal benefit at a realistic marginal cost.

With proper design and integration, the current collection of databases, health record systems, health information exchanges, financing, workforce, policies, and governance can evolve into a system that can address a range of needs. Too often, the design of systems emphasizes administrative transactions and episodic care at the expense of recording data that can be used to drive care, support process improvement, and promote research. A clear framework—informed by effectiveness, quality, safety, and efficiency outcomes—is needed to prioritize and support a wide range of scientific, clinical, and policy aims. The Memphis Exchange demonstrated that trust and policy—rather than technology—are the primary barriers to an integrated IT platform; that approaches can be developed to contend with issues related to combining data from disparate sources, identifying and matching data, sharing data, and protecting confidentiality and privacy; and that loosely coupled data sets from disparate resources are amenable to supporting a wide range of research efforts. When tied to deidentification processes, these data could serve as a powerful resource for biosurveillance, public health research, quality improvement, and comparative effectiveness studies.

Information exchanges, however, are just one part of a larger HIT platform. The choice and effectiveness of care delivery technologies (e.g., EHRs) are critical. If properly designed and implemented, an interconnected system will return substantial benefits at marginal cost. National investment decisions that could simplify the integration of data across disparate systems include immediate acceleration of knowledge representations that could be applied to clinical use quickly (e.g., RxNorm, the Unified Medical Language System); decisions about the extent to which payment and administrative coding standards can reflect the disease states and contexts required of learning health systems (e.g., International Classification of Diseases [ICD-9], Systematized Nomenclature of Medicine, ICD-10); enforcement of a few selective standards (e.g., Logical Observation Identifiers Names and Codes, SCRIPT); promotion of efforts that make laboratory and medication history more portable in a secure and affordable way; and selection of a few simple quality initiatives that can guide improvement of any interventions enabled by IT. Broad adoption without coupling technologies to system improvements will not produce optimal outcomes. Trials are needed to test different approaches and to ensure that IT expenditures are made wisely.
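To illustrate why a few enforced coding standards matter for pooling data across systems, the sketch below maps two hypothetical local laboratory names onto a single shared code before analysis. LOINC code 2160-0 (creatinine in serum or plasma) is a real code, used here only as an example of the standard's granularity; the local names and records are invented.

```python
# A minimal sketch of terminology normalization across EHRs: local names map
# to one standard code so results become comparable. Local names are invented;
# LOINC 2160-0 (creatinine in serum or plasma) is used only as an example.
LOCAL_TO_LOINC = {
    "CREAT": "2160-0",             # hospital A's local lab mnemonic
    "Creatinine, serum": "2160-0", # hospital B's display name
}

def normalize(result: dict) -> dict:
    """Attach the standard code; unmapped names surface as None for review."""
    return {**result, "loinc": LOCAL_TO_LOINC.get(result["local_name"])}

print(normalize({"local_name": "CREAT", "value": 1.1, "units": "mg/dL"}))
print(normalize({"local_name": "Creatinine, serum", "value": 97, "units": "umol/L"}))
# Note that shared codes alone do not reconcile the differing units; unit
# conversion is a second, separate normalization step.
```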

Data Resource Development and Analysis Improvement

Data resources and analysis can be used to guide clinical decisions; yet, despite the many potential information resources (e.g., product developers, federal agencies, payers, practitioners, providers), a cogent framework for selecting and using these resources to ensure that care delivery is centered on patient needs is lacking. Moreover, the absence of clarity on how and for what purpose such information would be used impedes progress. Compounding these challenges is the wide variation in information resources available across provider groups. T. Bruce Ferguson from the East Carolina Heart Institute discussed the robust, procedure-focused clinical databases in the field of cardiology that have independently validated processes and outcomes linked to quality improvement. To remain effective in a practice-based, learning health system, these vertical, procedure-based clinical data will need the longitudinal, medical-condition context important for the development of quality comparative effectiveness information. Resource and analysis development work must translate into a dynamic, real-time learning infrastructure, including built-in feedback processes and a focus on the patient and the point of care.

Data resources are currently incomplete at both the patient and provider levels and are inadequate for learning about key gaps. The misalignment between resources and their use leads to conflicting and erroneous data and interpretations. Movement has been slow at three important levels of data resources: integrated health systems, electronic medical records, and national-level resources from providers and payers. A priority for data resource development is the accurate definition of data (both the information and context elements) to ensure appropriate use. Key needs include defining the type, source, and use of data; operationalizing data collection; defining and making changes to interoperation dynamics; and standardizing data use for comparative effectiveness. Advances at the national and local levels will be important, and the National Consortium of Clinical Databases’ effort to integrate databases from the Society of Thoracic Surgeons, the American College of Cardiology, and the American Heart Association is particularly promising. A better alignment of incentives for comparative effectiveness—both informational and financial—is needed, as is a better definition of the opportunity and value of clinical and research data for use in CER. Ferguson noted opportunities to link data collection with efforts to improve performance measurement and reporting at the national level and, at the local level, to enhance provider-level evaluation for quality improvement, benchmarking, and profiling.

Although robust methods for data analysis exist, they are often limited to specific databases, are procedure based rather than outcome based, and have limited applicability for point-of-care use. Analyses are also retrospective in scope and require expensive infrastructure, and their findings are often difficult to implement. Critical to improvement is the notion that analysis must be embedded into the data context infrastructure. New, patient-centric comparative effectiveness analyses are needed, as are approaches that account for multiple procedural options or assess risk over the duration of the medical condition. Tools for clinical point-of-care application of comparative effectiveness analyses, together with new analytic tools for CER, will affect the quality, effectiveness, appropriateness, and efficiency of care.

Practical Challenges and Infrastructure Priorities for Comparative Effectiveness Research

The process of developing and completing study protocols must be efficient if CER is to reach its full potential for improving medical care. Daniel E. Ford, director of the Johns Hopkins Institute for Clinical and Translational Research, reflected on key barriers to efficient clinical research and outlined the research infrastructure needed to improve the quality and timeliness of research. He noted that CER needs to become a common occurrence in the delivery of care in the United States and that developing the needed infrastructure will require ample and long-term support. Moreover, efficient and valuable research requires the support of multiple stakeholders, including patients, healthcare providers, health plans, and the research community. Limited participation by just one of these stakeholders can impede study progress and reduce the value of the overall investment.

To optimize the quality and value of CER, Ford suggested six priorities for infrastructure development: (1) establishing a process for timely consultation with all stakeholders—ideally, standing panels; (2) accelerating study initiation by streamlining both institutional review board and contracting mechanisms; (3) developing a standard policy on insurers’ coverage of services for individuals in clinical trials; (4) enhancing capabilities to conduct research in hospitals and practices outside the academic center—perhaps building on existing practice-based research networks; (5) developing stronger partnerships between researchers and healthcare systems; and (6) developing the workforce needed for CER teams—in particular, IT professionals, database developers and analysts, and biostatisticians who are expert in the analysis of cluster-randomized designs (see the sketch below) and sophisticated observational study methodologies.
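The cluster-randomized designs mentioned in the sixth priority randomize whole practices rather than individual patients. The minimal sketch below shows the allocation step, with invented clinic names; because every patient in a clinic inherits that clinic's arm, outcomes are correlated within clinics, which is why the specialized analytic expertise Ford mentions is needed.

```python
# A minimal sketch of cluster randomization: practices, not patients, are the
# unit of randomization. Clinic names are invented.
import random

practices = [f"clinic_{i:02d}" for i in range(1, 13)]

random.seed(42)                          # reproducible allocation
shuffled = random.sample(practices, k=len(practices))
half = len(shuffled) // 2
arms = {p: ("intervention" if i < half else "control")
        for i, p in enumerate(shuffled)}

for practice in sorted(arms):
    print(practice, "->", arms[practice])
# Every patient seen at a given clinic receives that clinic's assignment, so
# analyses must adjust for within-clinic correlation (e.g., mixed models, GEE).
```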

Transforming Health Professions Education

Health care is moving toward a patient-centered, evidence-based health management orientation. Computerization of health records, wider use of patient care registries, greater availability of tools that allow for tracking individuals as well as populations of patients, and information-savvy consumers will drive our current fragmented health system toward one that will emphasize greater accountability, transparency of information, and higher levels of performance. Benjamin K. Chu, Southern California regional president for Kaiser Foundation Health Plan and Hospitals, described how these changes will shape the future practice environment and suggested that health professions education should take place in environments that emulate current models of best care. Such an approach would encourage the effective use of new tools by teams of health professionals as well as use and improvement of approaches that achieve the best outcomes across the full continuum of care and patient needs.

The delivery system infrastructure needed to support the best performance of health professionals depends on clear expectations for high performance along defined and measurable dimensions of care, the adoption of appropriate IT tools that provide essential information to drive performance improvement, and payment systems that reward better outcomes. Key opportunities for progress include computer-assisted tools with sophisticated evidence-based decision-support protocols, combined with process changes and strict adherence to bundles of care; the ability to track gaps in preventive care and chronic disease management; and payment reform that emphasizes bundled payments for episodes of care and evidence-informed case rates, or capitation.

To correct gaps in care and to ensure safe and effective interventions, health professionals will increasingly have to work together in teams and share accountability for their patients’ clinical outcomes. Acute episodes of illness will require coordination of handoffs, patient safety protocols, and checklists and other interventions designed to minimize harm and to maximize benefit to patients. Chronic disease management and adherence to preventive measures that are known to be effective will become systemwide accountability requirements. The complexity of care and the huge burden placed on shorter physician–patient interactions involving a multitude of different clinicians will require that other health professionals as well as ancillary staff be used to bridge the gaps. Every touch point, enhanced with Web-based and other communication tools, will be an opportunity to maximize care. A new professionalism will build on the principles of lifelong learning, duty to patients, and devotion to finding the best outcomes, and it will emphasize teamwork and evidence-based care. Computerized simulation training will become a staple of health professions education. Team skills—the ability to lead, develop, and encourage the active contribution of other professionals in the clinical setting—will become an essential core of professionalism. Demonstrated competency both in clinical arenas and in the ability to work effectively with others will be required.

Building the Training Capacity for a Healthcare Workforce of the Future

Research holds the promise of finding and testing answers to the challenges that face U.S. health care, but traditional approaches are inadequate. Steven A. Wartman, president of the Association of Academic Health Centers, called for the development of a new kind of research infrastructure focused on health and health care that can guide and inform decision making. Such an approach would support research to discover, disseminate, and optimize the adoption of practices that advance the health of individuals and the public as a whole. In his discussion, Wartman suggested that the key to the needed changes is expanding the continuum of medical research to ensure that discoveries ultimately serve the public. This expansion would encompass all aspects of health, including biomedical, public health, and multidisciplinary research on the social and environmental determinants of health. Table S-7 outlines an approach to achieving this new research vision; among the most pressing needs is the development of a new cadre of researchers, clinicians, and health leaders—a workforce that includes, among others, health professionals, engineers, sociologists, urban planners, policy experts, and economists. The cross-cutting nature of academic health centers (AHCs) presents an unprecedented opportunity to build their capacity to foster interprofessional collaborative activity and to develop the needed health research teams. These teams may reside in new departments, institutes, and centers as typical academic silos give way to more horizontal integration.

TABLE S-7 An Approach to Achieving a New Vision for Health Research


New People and Skills

•   Multidisciplinary teams

•   Strategic faculty recruitment

•   Expansion and training of research support staff

•   New partners (e.g., industry, nongovernmental organizations, faith-based organizations, payers, government, public health organizations, diverse communities, patients, general public)

•   New venues (e.g., community-based research)

•   Training to provide new skills, including inter-professional training

•   Incentives within academia to support all types of health researchers (e.g., academic homes, revised promotion and tenure criteria)

New Infrastructure

•   Information technology investments (e.g., electronic health records, personal health records, regional health information organizations)

•   Biostatistics and data management support

•   Biorepositories

•   Streamlined clinical research approval processes

•   Efficient intellectual property policies

•   Links between academia, industry, and venture capitalists

New Investments and Incentives

•   Expanded funding for clinical, translational, and social health research by the National Institutes of Health, National Science Foundation, foundations, others

•   Identification of new funding sources, especially for T2 and T3 translational, behavioral, public health, and social health research

•   Increased organizational investment in translational research cores (e.g., informatics, clinical research nurses)

•   National coordination of research resources (e.g., informatics linkages, data sharing)


SOURCE: Wartman and Pomeroy, 2009.


Organizational and management trends taking place in the nation’s AHCs are remolding the ivory tower into a complex business enterprise. This transition is characterized by reorganization along nondisciplinary lines and by a management structure that, conceptually and operationally, better aligns the entire institution. To build the needed training capacity, AHCs will need to secure the commitment of their own leaders to expand “health research,” invest in new infrastructure (e.g., IT, data repositories, biorepositories), and support curricular and training innovations to develop multidisciplinary, multisector research teams. In addition, AHC leadership can drive this new vision of health research by calling for adequate and innovative funding mechanisms, providing the needed culture and infrastructure, and facilitating the partnerships with government, industry, and community groups that health research requires. It will be necessary to provide clear-cut career paths for health researchers, along with adequate and appropriate institutional resources. Key opportunities include the provision of academic homes for translational researchers, the development of appropriate recruitment packages, and revised criteria for promotion and tenure.

Many healthcare sectors—industry, community, and other nonacademic organizations—have important roles in facilitating fundamental change in medical research. The involvement of the community constituencies affected by research will increasingly be an essential component of health research, whether through contributing input into research priorities, helping to build community trust in research participation, or disseminating findings. Particularly critical are national policy makers, who can drive this transformation by endorsing the importance of health research in leveraging biomedical discoveries for health improvements; by providing adequate funding for the full range of health research needed, including workforce development; and by helping to address current barriers to research (e.g., Health Insurance Portability and Accountability Act procedures).

Public–Private Partnerships

A fundamental challenge in advancing CER is developing an infrastructure that is sufficiently robust to support and nurture productive relationships among stakeholders with different perspectives and organizational missions. Without a mechanism for bringing these parties to the same table, fundamental differences in institutional cultures can impede or even preclude stakeholder-to-stakeholder communication. Public–private partnerships can bridge these gaps. This mechanism not only creates space for collaboration—in which barriers to cooperation can be discussed and addressed—but also offers a structure and operational guidelines, typically tailored to a specific partnership by the participants, that help facilitate cooperative work. One value for participating entities is that they can learn more, and distribute new knowledge more quickly, in a collaborative environment. Public–private partnerships can thus help link some of health care’s disparate component elements and draw productively on the respective assets of participating stakeholders; some view them as fundamental building blocks in the development of the CER infrastructure. A panel discussion featuring the perspectives of health plan, federal government, and industry representatives considered current and planned public–private partnership efforts as well as how these efforts can be used more expansively to develop infrastructure for CER.

Carmella A. Bocchino, vice president for clinical affairs and strategic planning at America’s Health Insurance Plans, discussed several successful public–private partnerships in which health plans and federal agencies have partnered to create databases useful for identifying potential safety issues and opportunities to improve care and care delivery. An extension of these activities could contribute to the development of a national data system to serve as a central part of the nation’s health research infrastructure. The United States Renal Data System, a large national data registry for end-stage renal disease patients, offers a potential model for a more comprehensive national data registry. Research and surveillance networks, such as the HMO Research Network, the HMO Cancer Research Network, and the Vaccine Safety Datalink, demonstrate the potential of distributed data networks to help address national research and public health questions. Similar models, such as the National Data Aggregation Initiative (NDAI), which seeks to combine Medicare and private-sector data to generate physician performance measures, are being explored for quality measurement and reporting. While these initiatives demonstrate the value of developing the infrastructure and tools to aggregate and analyze data across populations, challenges remain. Agreement is needed on a shared methodology that can facilitate comparative analyses across the broad spectrum of current clinical research. Data systems should be designed to facilitate data mining as well as the identification and tracking of safety and effectiveness issues in real time. Progress will require the standardization and compilation of data from disparate sources, as well as the thoughtful and appropriate design of emerging data sources, such as EHRs, so that the data produced can help answer questions important to understanding clinical effectiveness. Establishing governance structures will also be a key challenge, as will developing approaches for sustainable funding of these types of research and contending with issues related to data ownership.
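
To make the distributed-network idea concrete, the sketch below is illustrative only: the record layout, function names, and the crude pooled risk ratio are assumptions for exposition, not a description of any of the networks named above. It shows how partner sites could keep patient-level data behind their own firewalls and share only aggregate counts with a coordinating center.

```python
# Illustrative sketch of a distributed data network query: each partner
# site keeps patient-level records locally and shares only aggregates.
# All names and structures here are hypothetical, for exposition only.
from dataclasses import dataclass

@dataclass
class PatientRecord:          # simplified local record in a common data model
    exposed: bool             # received the intervention of interest
    event: bool               # experienced the outcome of interest

def local_aggregate(records: list[PatientRecord]) -> dict[str, int]:
    """Runs inside a site's firewall; only counts leave the site."""
    return {
        "exposed": sum(r.exposed for r in records),
        "exposed_events": sum(r.exposed and r.event for r in records),
        "unexposed": sum(not r.exposed for r in records),
        "unexposed_events": sum((not r.exposed) and r.event for r in records),
    }

def pooled_risk_ratio(site_summaries: list[dict[str, int]]) -> float:
    """Coordinating center pools aggregates and computes a crude risk ratio."""
    totals = {k: sum(s[k] for s in site_summaries) for k in site_summaries[0]}
    risk_exposed = totals["exposed_events"] / totals["exposed"]
    risk_unexposed = totals["unexposed_events"] / totals["unexposed"]
    return risk_exposed / risk_unexposed

# Example: two sites report aggregates; no patient-level data are shared.
site_a = local_aggregate([PatientRecord(True, True), PatientRecord(True, False),
                          PatientRecord(False, False), PatientRecord(False, True)])
site_b = local_aggregate([PatientRecord(True, False), PatientRecord(False, False)])
print(pooled_risk_ratio([site_a, site_b]))
```

Production networks layer a shared common data model, confounding adjustment, and governance on top of this basic keep-data-local, share-aggregates pattern, which is what makes the standardization and governance challenges noted above so central.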

Rachel Behrman, associate commissioner for clinical programs and director of the Office of Critical Path Programs at the FDA, summarized two public–private partnerships housed in the FDA: the Critical Path Initiative and the Sentinel Initiative. The Critical Path Initiative seeks to modernize the way in which FDA-regulated products, including drugs, biological products, and medical devices, are developed, evaluated, and manufactured. The Sentinel Initiative is intended to establish a national integrated electronic structure and approach for monitoring medical product safety. These initiatives have focused on several key issues that require collaborative engagement, including research methods and data analysis tools that ensure the production of timely, reliable, and secure information, as well as governance structures and policies that meet stakeholder needs while also putting appropriate safeguards into place. Specifically, questions related to data access, use, and stewardship need to be resolved. With respect to a CER infrastructure, attention should initially focus on developing mechanisms for priority setting, sustainable financing, and collaboration governance, as well as on data transparency, so that the conduct and reporting of analyses result in high-quality information. Contending with issues related to proprietary data and patentable tools and processes will also be essential to progress.

William Z. Potter discussed two public–private partnerships, the Biomarkers Consortium and the Alzheimer’s Disease Neuroimaging Initiative (ADNI), that have productively linked pharmaceutical companies, government agencies, and other stakeholders. The Biomarkers Consortium, which aims to speed the development of biological markers in support of drug development, preventive medicine, and medical diagnostics, demonstrates the need for careful delineation of specific areas of research focus that protect the individual interests of consortium members. Areas of collaboration were carefully selected, and research was conducted in precompetitive spaces to ensure that the work would achieve the common goals of advancing human health and improving patient care; speeding the development of medicines and therapies for the detection, prevention, diagnosis, and treatment of disease; and making project results broadly available to the entire research community. The Biomarkers Consortium had to address issues related to data quality, study design variation, and data sharing, and a project on placebo response was described to illustrate how such work can inform discussions on needed improvements. ADNI demonstrates that infrastructure can be developed to foster cross-sector communication and work; underlying the project’s initial, promising results is ADNI’s ability to adequately address data transparency issues. Key barriers identified as relevant to the CER infrastructure included the need for internal industry champions to drive collaborative work; the need to cover the costs of full-time equivalent staff and data management; skepticism among industry, NIH, and academic leadership about the value of such partnerships; and variable legal opinions on intellectual property and medicolegal risks.

Moving Forward

Although expanding CER capacity offers many potential gains for health care, the scale of the needed transformation is also large and spans all healthcare sectors. A long-term strategy must appropriately incorporate existing infrastructure, prioritize and sequence needs, engage all stakeholders, and build sustained, cross-sector support. The final workshop session discussed key considerations for such a strategy—roadmap elements, quick hits, and opportunities to build support. The final chapter includes a synthesis of this session’s discussion, a review of common themes heard at the workshop, and a number of possible follow-up actions to be considered for ongoing multistakeholder involvement through the IOM Roundtable on Value & Science-Driven Health Care.

The Roadmap—Policies, Priorities, Strategies, and Sequencing

Stuart Guterman, senior program director for the Commonwealth Fund’s Program on Medicare’s Future, outlined six broad areas discussed during the workshop that should be considered in the development of policies and strategies: data, methods, workforce, organization, translation, and financing. Discussion addressed clear end goals for each area, priority needs within and between categories, and key actors or existing infrastructure that could help initiate the needed activities. Suggested goals for these areas included developing the capacity to produce relevant data, ensuring the maximal value of those data through integration and system linkages, and making data and information available to appropriate users when and where needed; developing research approaches that meet the needs of CER end users; educating a cadre of professionals from across healthcare sectors who are trained to use tools and techniques for developing and applying comparative effectiveness information; prioritizing and coordinating across the many organizations engaged in various aspects of evidence development—primary research, synthesis, translation—to enable more efficient information production; moving from evidence to evidence-based decision making; and securing sufficient and sustained funding to establish and support CER and its application as an integral part of the U.S. healthcare system.

Quick Hits—Things That Can Be Done Now

Actions that can be undertaken immediately will be essential to help accelerate progress by demonstrating in the near term the benefits of expanded CER. W. David Helms, president and CEO of AcademyHealth, noted several opportunities for collaborative efforts by stakeholders to lay the groundwork for a national CER capacity—advocating for congressional action to establish a platform for CER, increasing federal funding for CER, articulating the case for CER, examining models for an expanded national capacity, and educating state policy representatives and Medicaid officials about the potential of and needs for CER. He noted that work can also begin immediately to build up the needed workforce. The many other recommendations for immediate action offered by session respondents and throughout the workshop were also summarized. Subsequent to this meeting, Congress increased the national capacity for CER with the establishment, in the ACA of 2010, of the Patient-Centered Outcomes Research Institute, previously described.

Building Support

Although it builds on many existing activities and much existing infrastructure, an enhanced focus on CER represents a shift in the nation’s approach to clinical research and practice. Most healthcare stakeholders view CER as an important element of health reform, but additional work is needed to build support among the public and policy makers for the needed investments and the potential returns from CER. An open discussion session on this topic was led by Mary Woolley, from Research!America, who noted four fundamental requirements for building support: (1) having clarity on the ultimate goal, (2) understanding the target audience, (3) ensuring that all stakeholders are involved, and (4) understanding the context. This framework suggests several key opportunities to build support for the expanded development and use of CER: framing the many infrastructure needs in simple terms that make sense to all stakeholders, including the public and policy makers; tailoring communications to the interests and concerns of different stakeholders; and engaging in clear communication and crisp, well-tested messaging. Finally, she noted that communication should not be unidirectional but should be structured to fully engage all stakeholders involved in infrastructure building. The session also summarized suggestions offered by workshop participants for possible goals, for opportunities to better engage consumers and patients, and for research that might better inform communications.

Issues for Possible Roundtable Follow-Up

Throughout the course of discussions, a number of items were identified as candidates for follow-up attention by the Roundtable on Value & Science-Driven Health Care:

  • Better characterization of the elements of the infrastructure: Build on the work sponsored by the Roundtable on workforce needs and IT infrastructure, continue to improve the initial estimates and pursue similar assessments related to requirements for new analytic tools and methods, establish processes for efficient and effective operation of the fields of work, and shape the strategy for attention and phasing. Include examples of effective work at the institutional level.
  • Clarification of the nature of the “prework” needed for a more systematic approach to the necessary RCTs: Even though a more practical portfolio of research approaches is essential, the RCT remains the standard of rigor required in certain circumstances. The most effective deployment of RCTs requires attention to the criteria indicating the need for an RCT, the issues and priorities to be assessed, the best structure of the research questions, and improved approaches to trial design, conduct, and data collection.

  • More focus on the infrastructure needed for guideline development, implementation, and evaluation: Several issues could be productively engaged, including transparency and collaboration across professional groups on improving consistency in the methods, standards, rules, and participants in guideline development and approaches to implementation.
  • Share meeting discussions with organizational stakeholders in elements of the infrastructure: Examples given included the National Quality Forum; the Association of American Medical Colleges; the Association of Academic Health Centers; the Quality Improvement Program and CMS/Department of Health and Human Services, in the context of the development of the 10th quality improvement organization statement of work; the American Hospital Association Quality Forum; the International Society for Pharmacoeconomics and Outcomes Research; and provider groups.
  • Devote additional attention to data stewardship issues: Because the basic resource for effectiveness research is the clinical data system, the Roundtable needs to catalyze more discussion on the integrity of this resource, including issues of maintenance, privacy, and data ownership.
  • Identify possible incentives: Look at how subsidies and reimbursement regulations can stimulate increased use of HIT in medical care, in the application of evidence, and in the development of evidence.
  • Expand engagement of the business case and demand function for infrastructure investment: Give additional attention to the economic or business case that would help employers appreciate the investment and its necessity for improving value from health care; the case for more attention by states; the case for deployment of the personal health record to drive more patient–provider interaction; and work on the consequences of not investing.
  • More focus on the issues of strategies and infrastructure for implementing findings on effectiveness: Since evidence is virtually useless if not applied, the Roundtable could give more attention to understanding the infrastructure needs for effective guideline implementation.
  • Sponsor discussions on training and health professions education reorientation: With greater appreciation for team-based, networked information stewardship roles by caregivers, the health professions groups should be recruited for collaborative consideration of the training implications.
  • Provide information on the Roundtable’s Web site: The resources of the workshop presentations and discussions should be posted on the Web site—slides, links, and speaker contact information.

REFERENCES

AcademyHealth. 2005. Placement, Coordination, and Funding of Health Services Research within the Federal Government. In AcademyHealth Report, September 2005. http://www.academyhealth.org/files/publications/placementreport.pdf (accessed September 3, 2010).

Buto, K., and P. Juhn. 2006. Can a center for comparative effectiveness information succeed? Perspectives from a health care company. Health Affairs 25(6):w586-w588.

Clancy, C. M. 2006. Getting to “smart” health care. Health Affairs 25(6):w589-w592.

Fisher, E. S., and J. E. Wennberg. 2003. Health care quality, geographic variations, and the challenge of supply-sensitive care. Perspectives in Biology and Medicine 46(1):69-79.

Fisher, E. S., J. E. Wennberg, T. A. Stukel, D. J. Gottlieb, F. L. Lucas, and E. L. Pinder. 2003a. The implications of regional variations in Medicare spending. Part 1: The content, quality, and accessibility of care. Annals of Internal Medicine 138(4):273.

Fisher, E. S., J. E. Wennberg, T. A. Stukel, D. J. Gottlieb, F. L. Lucas, and E. L. Pinder. 2003b. The implications of regional variations in Medicare spending. Part 2: Health outcomes and satisfaction with care. Annals of Internal Medicine 138(4):288.

Health Industry Forum. 2006. Comparative Effectiveness Forum: Executive summary. http://healthforum.brandeis.edu/meetings/materials/2006-30-Nov./ExecBrief.pdf (accessed July 20, 2010).

Hopayian, K. 2001. The need for caution in interpreting high quality systematic reviews. British Medical Journal 323(7314):681-684.

IOM (Institute of Medicine). 2000. To err is human: Building a safer health system. Washington, DC: National Academy Press.

———. 2001. Crossing the quality chasm: A new health system for the 21st century. Washington, DC: National Academy Press.

———. 2007. Learning what works best: The nation’s need for evidence on comparative effectiveness in health care. Washington, DC: The National Academies Press.

———. 2008a. Knowing what works in health care: A roadmap for the nation. Washington, DC: The National Academies Press.

———. 2008b. Learning what works: Infrastructure required for comparative effectiveness research. Washington, DC: The National Academies Press.

Kamerow, D. 2009. Comparative effectiveness studies inventory project. Washington, DC: A commissioned activity for the IOM Roundtable on Value & Science-Driven Health Care.

Kupersmith, J., S. Sung, M. Genel, H. Slavkin, R. Califf, R. Bonow, L. Sherwood, N. Reame, V. Catanese, C. Baase, J. Feussner, A. Dobs, H. Tilson, and E. A. Reece. 2005. Creating a new structure for research on health care effectiveness. Journal of Investigative Medicine 53(2):67-72.

McGlynn, E. A., S. M. Asch, J. Adams, J. Keesey, J. Hicks, A. DeCristofaro, and E. A. Kerr. 2003. The quality of health care delivered to adults in the United States. New England Journal of Medicine 348(26):2635-2645.

Moher, D., J. Tetzlaff, A. C. Tricco, M. Sampson, and D. G. Altman. 2007. Epidemiology and reporting characteristics of systematic reviews. PLoS Medicine 4(3):e78.

Rowe, J. W., D. A. Cortese, and J. M. McGinnis. 2006. The emerging context for advances in comparative effectiveness assessment. Health Affairs 25(6):w593-w595.

Wartman, S., and C. Pomeroy. 2009. Building the training capacity: Implementation priorities. In Learning what works: Infrastructure required for comparative effectiveness research. Washington, DC: The National Academies Press.

Wennberg, J. E., E. S. Fisher, and J. S. Skinner. 2002. Geography and the debate over Medicare reform. Health Affairs Web Exclusive:w96-w114.

Wilensky, G. 2006. Developing a center for comparative effectiveness information. Health Affairs 25(6):w572-w585.
