Suggested Citation:"5 Implementation Priorities." Institute of Medicine. 2011. Learning What Works: Infrastructure Required for Comparative Effectiveness Research: Workshop Summary. Washington, DC: The National Academies Press. doi: 10.17226/12214.

5


Implementation Priorities

INTRODUCTION

Significant gains in the efficiency, effectiveness, and value of health care delivered in the United States are possible with a greater system focus on developing and applying insights on what works best for whom. The near-term needs for an expanded and broadly supported capacity for comparative effectiveness research (CER) include infrastructure for the requisite work (e.g., methods, technical support, coordinating capacities), information networks, and workforce. Identification of the highest-priority implementation needs will guide strategic and coordinated development of needed capacity. Consideration is also needed of how infrastructure development might best build upon existing capacity. Papers in this chapter focus on five key areas for work: (1) information technology (IT) platforms, (2) data resource and analysis improvement, (3) clinical research infrastructure, (4) health professions training, and (5) building the training capacity. Each paper offers suggestions for prioritization and staging of policies, as well as possible approaches to increasing the scale of activities. Also discussed are opportunities to take advantage of existing manufacturer, insurer, and public capacities through public–private partnership.

The first three papers focus on developing information acquisition and exchange tools as well as the research approaches essential to speeding evidence development. Based on his experiences developing a regional health information exchange in Tennessee (the Memphis Exchange), Mark E. Frisse of Vanderbilt University suggests several implementation priorities for the development of an IT platform that will realize significant societal benefit at a realistic marginal cost. With appropriate design and integration, the current collection of databases, health record systems, health information exchanges, financing, workforce, policies, and governance can evolve into a system that addresses a range of needs in care delivery, process improvement, and research. T. Bruce Ferguson from the East Carolina Heart Institute discusses clinical database work in the field of cardiology and identifies key opportunities to apply data resource and analysis infrastructure toward the development of dynamic, real-time learning systems centered on the patient and decisions at the point of care. Finally, Daniel E. Ford of Johns Hopkins University discusses opportunities to improve the efficiency and effectiveness of clinical research by streamlining and standardizing processes and policies, increasing investments in practice-based networks, and training and retaining research support personnel. Two papers focus on the workforce at the front lines of evidence application and development—health professionals and clinical researchers. Benjamin K. Chu from Kaiser Permanente describes changes to the healthcare delivery system that will shape the future practice environment and illustrates how training and practice environments for health professions education should seek to emulate and improve upon current models of best care. Steven A. Wartman of the Association of Academic Health Centers describes a needed expansion of medical research to a multidisciplinary approach that addresses all aspects of health. He offers suggestions on how the training capacity might be developed to accelerate a shift to research focused on the discovery, dissemination, and optimized adoption of practices that advance the health of individuals and the public.

This chapter concludes with discussion highlighting opportunities to take best advantage of existing infrastructure elements—such as data resources, expertise, and technology platforms. Speaking from key sector perspectives, Carmella A. Bocchino from America’s Health Insurance Plans, Rachael E. Behrman from the Food and Drug Administration (FDA), and William Z. Potter from Merck Research Laboratories, discuss how public–private partnerships can create needed space for cross-sector collaboration around common areas of interest and expertise.

INFORMATION TECHNOLOGY PLATFORM REQUIREMENTS

Mark E. Frisse, M.D., M.Sc., M.B.A., Professor of Biomedical Informatics, Vanderbilt University

Overview

The overarching intent of this publication is to better understand the requirements necessary to transform our fragmented healthcare infrastructure into a learning health system. This system must be structured in a way that draws on the best evidence, delivers the best value, adds to learning throughout the system of care, leads to improvements in the nation’s health, and ensures that “each patient receives the right care at the right time” (IOM, 2007, 2008).

Where IT platform requirements are concerned, with thought and cautious action it is possible to realize the aims of a learning health system through an evolution of our current collection of databases, health record systems, health information exchanges, financing, workforce, policies, and governance. Properly designed and integrated, the composite system would be able to address a wide range of needs at a manageable marginal cost for each. Without thoughtful attention to ends and means, however, the status quo may trade short-term expedience for impeded long-term progress.

A recent report by the National Research Council provides some guidance. Among the principles for change espoused in this report is the assertion that health technologies should “record available data so that today’s biomedical knowledge can be used to interpret them to drive care, process improvement, and research” (NRC, 2009). All too often, the design of current systems emphasizes administrative transactions and episodic care at the expense of other priorities. Data are often embedded into specific applications and not represented in a way that clarifies their context or allows reinterpretation as both our analytic techniques and our needs change (NRC, 2009).

An Infrastructure Framework

IT platforms should be based on a clear framework that enables progress toward a wide range of scientific, clinical, and policy aims, while allowing for these aims to evolve over time. The framework should be guided by the analysis and prioritization of initiatives according to their value, difficulty, and requirements for data sharing. The framework should identify potential outcomes according to their impact on effectiveness, quality, safety, and efficiency. In practice, this framework would provide a means of assembling governance, policy, technology, and processes into a series of components that work with one another and that can evolve incrementally over time toward the primary goal of supporting and improving our ability to create and use healthcare knowledge. Such an infrastructure focuses on components that must be assembled to realize specific outcomes. It is these components that should be the focus of activity. Instances of component collections—including various forms of electronic health records (EHRs), personal health records, and health information exchanges—should be viewed not as monolithic products but instead in terms of what their components contribute separately and collectively to meeting a specific clinical need.

There are many discrete components and functions, including digital connectivity, source identification, data integrity checking, record location, data aggregation, audits, data collections, and computer–human interfaces. A system is composed of multiple instances of each component (e.g., databases and record locator services) originating in a diverse array of local and national settings and designed for different primary purposes. Each instance of a component can in theory be funded through different means and managed under different governance and operational controls. Each component’s means of representing data can differ as long as two characteristics are met: (1) ways to combine data in order to achieve practice aims must be implemented, and (2) original data elements must be maintained in their original format and, to the greatest extent possible, coupled with the context in which they were obtained.
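One way to make this component model concrete is to treat each capability (record location, auditing, data aggregation, and so on) as a small interface with many independently governed instances; a "system" is then just a composition of instances. The following sketch is purely illustrative — the class names, patient keys, and governance labels are hypothetical, not drawn from any actual exchange.

```python
from typing import Protocol

class Component(Protocol):
    """Minimal shape shared by all infrastructure components
    (record locators, audit logs, data aggregators, ...)."""
    name: str         # human-readable instance name
    governance: str   # who operates and controls this instance

class RecordLocator:
    """One component type: maps a patient key to the places that
    hold records for that patient (no master index, no globally
    unique patient identifier is ever minted)."""
    def __init__(self, name: str, governance: str) -> None:
        self.name = name
        self.governance = governance
        self._index = {}

    def register(self, patient_key: str, location: str) -> None:
        self._index.setdefault(patient_key, []).append(location)

    def locate(self, patient_key: str) -> list:
        return self._index.get(patient_key, [])

# Two instances of the same component type, funded and governed
# separately, as the text describes:
vault_a = RecordLocator("Hospital A locator", governance="Hospital A")
vault_b = RecordLocator("Regional locator", governance="Exchange board")
```

The point of the sketch is that nothing forces the two instances to share storage, funding, or operational control; only the interface is common.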

What unites the disparate instances of components and creates a true system is a clear separation of data from application, a retention of source and context, and a common minimal set of governance structures and policies that address appropriate uses, performance, financing, and responsibility. Governance, policy, and standards are coordinated only to the minimal extent necessary to achieve results, to gain trust, to demonstrate value, and to support incremental progress. System value is recognized not through successful implementation but rather through the impact the system and its components have on measurably improved outcomes.

Lessons from Memphis

The work necessary for developing a regional health information exchange in Memphis, Tennessee (the Memphis Exchange), demonstrates the feasibility of applying these principles and the practicality of this approach. The Memphis Exchange is based on technologies and practices in use for over a decade at the Vanderbilt University Medical Center and described elsewhere (Stead, 2006; Stead and Starmer, 2007). This system produces short-term system-based results, supports incremental improvements, and fosters evolutionary change (Frisse et al., 2008; Johnson et al., 2008). Many lessons have been learned during its 3 years of use and operation.

First, trust and policy—not technology—are the primary barriers to realizing a desired IT platform. Developing data-sharing agreements governing use and oversight was arguably the most challenging initial task. This effort was accelerated considerably by efforts made through the Markle Foundation’s Connecting for Health initiative (Connecting for Health, 2006).

Second, information from many different systems and encoded in many different acceptable standards can be combined inexpensively. These data are “liquid” and are not tied to a specific application but instead to a source, a context, and a unique individual. Each clinical or administrative data element is “wrapped” with a meta-level tag that provides a general description while the original data element—in whatever format it is received—is retained. Currently, the exchange receives data from multiple systems at over 20 major healthcare institutions. Some data elements—like laboratory results—can be presented in a uniform format using Logical Observation Identifiers Names and Codes (LOINC) (Porter et al., 2007). Such an approach can be generalized and can provide intermediate results while the long-term process of standards convergence takes place.
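The "wrapping" idea described above can be sketched as follows. The field layout, parser, and schema here are illustrative assumptions, not the Memphis Exchange's actual message format; the one real detail is that LOINC code 2345-7 denotes serum/plasma glucose.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional

@dataclass
class WrappedElement:
    """A clinical data element retained in its original format,
    'wrapped' with a meta-level tag describing its source,
    context, and general kind."""
    raw: str          # original payload, byte-for-byte as received
    source: str       # originating institution or system
    context: str      # how/where the element was obtained
    kind: str         # general description of the element
    received_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())
    normalized: Optional[dict] = None  # optional uniform view (e.g., LOINC-coded)

def wrap_lab_result(raw_segment: str, source: str) -> WrappedElement:
    """Hypothetical parser for a pipe-delimited lab segment of the
    form 'LOINC|value|units'; the raw segment is kept untouched."""
    code, value, units = raw_segment.split("|")
    return WrappedElement(
        raw=raw_segment,
        source=source,
        context="laboratory result message",
        kind="laboratory result",
        normalized={"loinc": code, "value": value, "units": units},
    )

# LOINC 2345-7 is the code for serum/plasma glucose:
elem = wrap_lab_result("2345-7|98|mg/dL", source="Hospital A")
```

Because `raw` is never altered, the element can be reinterpreted later as analytic techniques and needs change, which is the property the text emphasizes.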

Third, identification and matching of data can be achieved with a degree of precision if attention is devoted to measuring performance using a “gold standard” data set of 5,000 to 10,000 patients. Such a matching approach is not a master patient index in a traditional sense because no unique patient identifier is generated and linkages are represented as data clusters rather than as absolute mappings.
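Measuring matching performance against such a gold-standard set might look like the following sketch; the scoring function and record identifiers are hypothetical, not the exchange's actual evaluation code. Pairs are compared order-insensitively, and no unique patient identifier is generated, consistent with the cluster-based approach described above.

```python
def evaluate_matching(predicted_links, gold_links):
    """Score a record-linkage routine against a hand-curated 'gold
    standard' set of same-patient record pairs. Each pair becomes a
    frozenset so ordering within a pair is irrelevant."""
    predicted = {frozenset(pair) for pair in predicted_links}
    gold = {frozenset(pair) for pair in gold_links}
    true_positives = len(predicted & gold)
    precision = true_positives / len(predicted) if predicted else 0.0
    recall = true_positives / len(gold) if gold else 0.0
    return precision, recall

# Hypothetical record identifiers, for illustration only:
gold = [("rec1", "rec2"), ("rec3", "rec4")]
predicted = [("rec2", "rec1"), ("rec3", "rec5")]
precision, recall = evaluate_matching(predicted, gold)  # 0.5, 0.5
```

Against a curated set of 5,000 to 10,000 patients, precision and recall computed this way give the quantitative footing the text calls for.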

Fourth, perceptions of ownership matter more than the questions of physical locality often embodied in the “centralized vs. decentralized” debate. In the Memphis Exchange, each participating institution publishes its data to its own “vault.” A vault in this context is a logical database that may be housed in a central or distributed cluster of databases. What is important is that each institution providing data maintains control of its data until they are combined and used to treat an individual patient. When data are used, actual use is recorded in logs, and efforts to assure nonrepudiation are enforced. Our contention is that no system is completely centralized and that many significant queries can only be answered through a collection of loosely coupled systems.

Fifth, confidentiality and privacy can be achieved through an essentially all-or-nothing “opt in” or “opt out” decision made at each institution. The primary focus of our confidentiality efforts is on developing a network of trust that is heavily audited and rigorously enforced. This approach ensures that the only individuals examining data are those who have rights to do so (by law or consent). Selective restrictions on particular data types, drugs, or disorders are not easily manageable and cannot be absolutely enforced unless all free-text documents are excluded. Unfortunately, these text documents (e.g., transcribed medical histories) often provide the most meaningful information both for patient care and for chart review.

Finally, based on the Vanderbilt experience, loosely coupled data sets from disparate resources seem capable of supporting a wide range of research efforts. Using technologies and methods similar to those of the Memphis Exchange, Vanderbilt researchers have developed a deoxyribonucleic acid (DNA) biobank linked to phenotypic data derived from the Vanderbilt EHR (Roden et al., 2008). Employing an opt-out consent model, these researchers have developed a statistically de-identified mirror image of the electronic medical record (EMR) called a “synthetic derivative.” These records are linked to DNA extracted from discarded blood samples. In one test, the de-identification algorithm removed 5,378 of the 5,472 identifiers, with an error rate for complete Health Insurance Portability and Accountability Act (HIPAA) identifiers of less than 0.1 percent. The aggregate error rate—which includes any potential error, including non-HIPAA items, partial items, and items that are not inherently related to identity—was 1.7 percent. The ability of these de-identification procedures to discover and suppress identifiers was sufficient for institutional review boards to judge the research done with this system to be consistent with an Office of Human Research Protections “nonhuman subjects” designation.
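The reported aggregate rate follows directly from the counts given, as a quick check shows. (The sub-0.1 percent figure for complete HIPAA identifiers concerns a subset of the residual items and cannot be rederived from these totals alone.)

```python
total_identifiers = 5472   # identifiers present in the test corpus
removed = 5378             # identifiers the algorithm suppressed

residual = total_identifiers - removed        # 94 potential errors of any kind
aggregate_error_rate = residual / total_identifiers
# aggregate_error_rate ≈ 0.0172, i.e., the 1.7 percent reported above
```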

It should be possible to apply such a process equally well to health information exchanges or other ways of accessing information from disparate sources. Such applications will be powerful tools in biosurveillance, public health research, quality improvement, and comparative effectiveness studies.

Applicability to Information Technology Platform Requirements

This approach is very affordable. The total operational costs for a region of 1 million people are under $3 million a year. Even with the additional expense incurred by increasing connectivity to smaller care settings and enhancing data-analytic capabilities, the overall cost would be less than $5 million, or $5 per capita per year. This expense should be compared with the region’s overall healthcare expenditures, estimated at $7.4 billion per year, or $7,400 per capita. Thus the expense would amount to less than 0.07 percent of per capita healthcare expenditures. Because the costs are largely offset by reductions in duplicate testing, efficiencies in quality metrics, public health reporting, and other functions, the costs that could be allocated to knowledge management and development of a learning health system are insignificant by almost any measure. Extrapolating to a population of 350 million, our cost estimates ($1.7 billion) are less than estimates provided in Chapter 3 of this publication, but our cost models may be based on different assumptions (Miller, 2008).
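The per capita arithmetic above can be checked directly; the figures below simply restate the estimates in the text (the chapter's $1.7 billion national figure implies a per capita cost slightly below the $5 upper bound used here).

```python
population = 1_000_000          # region served by the exchange
annual_cost = 5_000_000         # upper-bound operating cost, USD per year
per_capita_cost = annual_cost / population        # $5 per person per year

per_capita_health_spend = 7_400  # regional health spending per person per year
share = per_capita_cost / per_capita_health_spend  # ≈ 0.00068, under 0.07 percent

national_population = 350_000_000
national_cost = per_capita_cost * national_population  # $1.75 billion at the $5 bound
```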

The Role of Electronic Health Records

The Memphis Exchange is but one part of a larger health information technology (HIT) platform. Clearly, the choice and effectiveness of care delivery technologies (such as EHRs) are critical. Using Miller’s estimates, marginal annual operating expenditures (per capita per year) would be in the range of $50 (Miller, 2008). As expected, the costs for systems to deliver the details of care exceed the cost estimates for integrating EHRs into a broad IT platform. EHR costs will likely be offset by efficiencies or driven by other practice imperatives, so the question is not so much what a system costs but the extent to which such a system improves practice performance and the extent to which it can send and receive data from other sources to achieve desired results. If the systems are properly designed, their marginal cost to achieve broader aims is very low.

If properly designed, a connected system offers quite substantial marginal benefit, and the marginal cost of creating it (in the context of overall healthcare technology costs or healthcare expenditures generally) can be very low. Thus the greatest risk to realizing great benefit at low financial and societal cost is likely to be the inclination to create monolithic, overengineered systems that promise more than they can deliver.

Additional Initiatives and Decisions

Some national investment decisions could simplify the integration of data across disparate systems. Although the Memphis Exchange demonstrates that much can be done without the monolithic standardization efforts and privacy initiatives espoused by many, much more can and must be done to make this experience more broadly applicable. Among the most valuable steps that could be taken are the following:

  • immediate acceleration of knowledge representations that could be quickly applied to clinical use (e.g., RxNorm, the Unified Medical Language System);
  • decisions about the extent to which payment and administrative coding standards (e.g., International Classification of Diseases [ICD]-9, ICD-10, Systematized Nomenclature of Medicine) can reflect the disease states and contexts required of learning health systems;
  • enforcement of a few—and only a few—selective standards (e.g., LOINC, SCRIPT);
  • promotion of efforts that make laboratory and medication histories more portable in a secure and affordable way; and
  • selection of a few simple, high-quality initiatives that can guide improvement of any interventions enabled by IT (Frisse, 2006).

Focused trials with immediate findings are essential to ensure that IT expenditures are made wisely. Proposed legislation to accelerate the adoption of HIT does not assure an optimal outcome. Applying more funds to technologies that are not coupled to system improvements may help, may hurt, or may do both.1

_______________

1 U.S. Senate Committee on Finance. 2009. American Recovery and Reinvestment Act of 2009.


DATA RESOURCE DEVELOPMENT AND ANALYSIS IMPROVEMENT

T. Bruce Ferguson, Jr., M.D., Chairman,
Department of Cardiovascular Sciences, East Carolina Heart Institute and Brody School of Medicine at East Carolina University; and Ansar Hassan, M.D., Ph.D., Brody School of Medicine at ECU

Overview

Enormous challenges face U.S. healthcare stakeholders if the 2020 goal of the Roundtable on Value & Science-Driven Health Care—that 90 percent of clinical decisions will be supported by accurate, timely, and up-to-date clinical information that reflects the best available evidence—is to be met. Among the most complex of these challenges is the question of which data, and which data analyses, will be used to drive those clinical decisions. What must be assembled going forward is knowledge about the comparative effectiveness of (1) diagnostics and treatments, (2) the providers choosing and administering those diagnostics and treatments, and (3) the direct value and benefit of both to individual patients. Within the context of CER, and using cardiovascular disease as an example, this paper addresses the data resource development and data analysis improvement necessary for the migration of health care toward these 2020 goals.

Data as Knowledge

Despite a multiplicity of potential information resources, there is no cogent framework for selecting and using these resources. Within cardiovascular disease, each of the major stakeholder groups has independently developed, financed, and extensively used data generated from systems that are mostly perceived to be proprietary. These data types include the following:

  • Data from the medical product (pharmaceutical and device) companies, which are incentivized to collect safety and efficacy data from pivotal randomized clinical trials (RCTs) for FDA approval of their technologies. The knowledge generated from these studies is critical to the regulatory process. Because equipoise is necessary to randomize patients, particularly in noninferiority trial designs, this body of knowledge is scientifically valid but limited in its applicability to overall care delivery evaluation of effectiveness. Controversy surrounds the application of these trial findings to patients beyond the trial design and beyond the FDA labeling for the technologies or pharmaceuticals. Underinvestment in postmarket data collection and analysis, except as required for physician and hospital reimbursement (e.g., Centers for Medicare & Medicaid Services [CMS] Pay with Evidence Development program), has generated an important data void in our healthcare system (Bach, 2007).

  • Healthcare data available from the public domain and through federal agencies such as CMS, Centers for Disease Control and Prevention (CDC), Agency for Healthcare Research and Quality (AHRQ), and the Social Security Administration require analytical expertise and may be expensive. These data provide knowledge on the administrative, financial, and quality characteristics of care delivery based on claims and administrative data that may be somewhat limited in describing actual clinical care delivery.
  • Payers have developed robust administrative and claims-based proprietary systems that extend up to—but as yet do not include—whether a patient actually ingested the medication that was prescribed and filled. These systems are relatively unique in that they give a longitudinal documentation of care with data, some of which have been risk adjusted. These data provide knowledge about longitudinal care processes delivered by multiple providers but are confined to specific payer groups for defined periods of time.
  • Practitioners in cardiovascular disease have developed robust clinical observational databases, such as the Society of Thoracic Surgeons’ National Adult Cardiac Surgery Database (Ferguson et al., 2002), the American College of Cardiology Foundation’s National Cardiovascular Data Registry (ACCF, 2008), and the American Heart Association’s Get with the Guidelines (Giugliano and Braunwald, 2007). In addition, regional databases, such as the Northern New England Cardiovascular Consortium (Malenka et al., 2005) and the New York State Cardiac Surgery and Percutaneous Coronary Intervention Registries, have been collecting data for over 15 years. These clinical registries have developed methods to describe risk-adjusted outcomes that, along with processes of care, describe care delivery specific to the procedure-based episode of care. They have independently validated the processes and outcomes of care that are linked to quality improvement. These systems provide knowledge about those care episodes that is clinically relevant but limited in its scope.
  • Providers have also devoted considerable effort to the development of guidelines to direct clinical care (ACC, 2008). This is a resource-intensive effort, and much of the data available for guideline development falls short of class I data. The knowledge contained in the guidelines represents what expert consensus suggests should be done in clinical scenarios that fit into the guideline construct; however, this may limit their usefulness in comparative effectiveness analyses. More recently, the specialty societies have developed guidelines for appropriateness of care, which may become more useful (Douglas et al., 2008).

The fifth stakeholder group—patients and their families—desires, in part, that this knowledge be integrated so that care delivery centered on the needs and medical conditions of the patient is always available. This requires knowledge about processes and, preferably, risk-adjusted outcomes of care, as well as administrative and financial data. It cannot be accomplished by using data from just one stakeholder’s system or by employing just one type of knowledge data.

Figure 5-1 illustrates the reason for this. For a patient with a medical condition for which there are two potentially applicable therapies, clinical trials data are unlikely to differentiate between the two therapies because of trial design issues (panel A). A more accurate representation of potential therapeutic effectiveness for that patient is derived from the pool of “application” data, or knowledge gained from data describing the ongoing application of health care to patients. In fact, this is the data domain in which most patients and providers reside, and it represents the real challenge regarding data resources and data analysis for comparative effectiveness.

A slightly different way of looking at this is represented in panel B of Figure 5-1. Wennberg et al. (2002, 2007) have described a recommendation for Medicare reform based upon three categories of medical services and their direct links to healthcare spending in the Medicare program. In fact, the majority of health care delivered is either preference- or supply-sensitive care, for which the knowledge driving decisions comes from application data. For example, based on National Adult Cardiac Surgery Database data, over 75 percent of U.S. patients currently undergoing coronary artery bypass grafts (CABGs) would not have been eligible for enrollment in the surgical arms of the major randomized trials of percutaneous coronary interventions (PCIs) vs. CABGs (Taggart, 2006), while at the same time an estimated 70 percent of drug-eluting stent (DES) use in this country is presumed to be “off-label” (Tung et al., 2006). In terms of comparative effectiveness between these two therapies, a recent systematic review of PCIs vs. CABGs by an AHRQ-sponsored evidence-based practice center excluded observational analyses from the principal meta-analysis of trials, which concluded that survival at 10 years was similar between the two therapies (Bravata et al., 2007).
FIGURE 5-1 Panel A shows the hypothetical relationship between information generated from RCT data and application data (data generated through the application of health care to patients) on two different therapeutic interventions. As a result of trial design and equipoise for randomization, an outcome such as mortality is unlikely to be measured as discernibly different. Over time, however, application data may highlight differences in that outcome. Some controversy exists as to whether data from RCTs are appropriate for making decisions in the application data space, and vice versa. Panel B relates this construct to the utilization of medical services as described by Wennberg et al. (2002). The majority of service utilization is in the preference- and supply-sensitive categories; these activities fall under the application data categorization and constitute the primary target area for comparative effectiveness research going forward.
NOTE: RCT = randomized controlled trial.

Until recently, data from RCTs of PCIs with or without DESs vs. CABGs had not demonstrated any difference in outcomes at 1- or 5-year follow-up (Daemen et al., 2008; Hlatky et al., 2004). If a patient met the enrollment criteria for the trials, these data could be applied and the care characterized as “effective.” In contrast, multiple large observational analyses have consistently demonstrated an increasingly significant survival benefit from CABGs as compared with PCIs, beginning at 1 year postintervention (Smith et al., 2006). From the perspective of a patient whose medical condition places him or her in the “gray box,” the recommendation for therapy A vs. therapy B would be based on preference- or supply-sensitive care considerations and application data. Recently, the Synergy between PCI with Taxus and Cardiac Surgery (SYNTAX) “all comers” trial of PCIs with DESs vs. CABGs demonstrated a mortality difference similar to that seen in the large observational studies. This example illustrates the complexity of the data requirements for a comparative effectiveness study. It also emphasizes the need for data resources that come from all stakeholders and the need to take all stakeholders’ perspectives into account. Califf and colleagues (2007) have also emphasized the need for partnership development among these stakeholders to address the cardiovascular disease epidemic. Their argument that an ongoing risk–benefit balance of a technology needs to be derived in part from its ongoing use by providers is important, because the information from this use becomes a component of comparative effectiveness analyses in the learning health system of the future.

Current Demand Shortfalls for Data Resources and Data Analysis

Data Resources

It is important that the framework for assessing the current demand shortfalls in data resources and analysis for a learning health system be synchronous with the framework needed to turn these shortfalls into solutions. This in turn emphasizes principles outlined by the Institute of Medicine (IOM) in 2006 regarding the nature of healthcare information, which must become better aligned with the IOM’s six aims for health care and with the context of comparative effectiveness.

With respect to data resources, these demand shortfalls can be categorized into structure, administration (process), and organization (Figure 5-2). As outlined above, the available data fall short in providing patient-level data that are complete across the medical condition for that patient, as defined by Porter and Teisberg (2006). The provider-level data necessary to address quality of care delivery are also incomplete, both per patient and across the medical condition. As outlined by Califf and others, the data infrastructure is not designed as a resource to generate data where gaps in information for comparative effectiveness exist (in part, the application data in Figure 5-1), namely health policy and quality improvement (QI) research, postmarket evaluation, and effectiveness (both medical and financial) on an ongoing basis.

FIGURE 5-2 An analysis of current demand shortfalls in terms of potential comparative effectiveness data resources and data analysis. Individual points are discussed in the text. Each of these levels relates directly to moving the comparative clinical effectiveness and learning health system agenda forward.
NOTE: HIPAA = Health Insurance Portability and Accountability Act; IH = international health; POC = point of care.

Administratively, the current demands have highlighted a number of obstacles as well. There are substantial regulatory and HIPAA privacy issues that limit or even prohibit data sharing across the patient’s medical condition. In terms of financial support for these data resources, information collection and analysis processes by providers for quality improvement have not been supported as a recognized practice expenditure worthy of specific reimbursement. Significant proprietary investments in data resources have resulted in few incentives for collaborative data use among and across these stakeholders.

In terms of organization, there remains an important disconnect between data resources and data uses: uses are often not defined and specified in the resources themselves, and this disconnect sometimes produces conflicting and erroneous data and interpretations from these otherwise important resources. Finally, we have been slow to recognize that there are in fact at least three important levels of data resources that must be used fully in order to move this agenda forward. Integrated health systems, with major financial commitments to electronic medical record (EMR) systems, bring a unique and important experience to the table, but one that is still very limited in its applicability to most provider systems (James, 2007). The national-level resources from providers and payers have a much broader potential impact. In addition, it is important to recognize that local and regional resources are making investments in the learning health system; these entities may be able to address certain of these data resource and analysis issues without some of the obstacles and shortfalls present at the other levels.

Data Analysis

Analysis shortfalls can be grouped into structure, administration (process), and implementation (outcome) categories (Figure 5-2). In terms of structure, robust analyses that are available from clinical data sets such as those of the Society of Thoracic Surgeons (STS) and the American College of Cardiology (ACC) are now largely confined to those data sets, which mostly capture data from procedural (“vertical”) but not longitudinal (“horizontal”) episodes of care. These robust systems have developed risk models to facilitate cross-site comparisons of outcomes and within-site comparisons of observed vs. expected (or predicted) outcomes, using national-level populations to generate and validate the risk models. These models, however, are procedure based and are not based on medical outcomes, nor do they incorporate increasingly important additional data such as epidemiologic, socioeconomic, and long-term survival data. Finally, a majority of these analyses have been driven by a health policy and clinical research agenda because of the design of the data system; they have had limited applicability for patient-centric point-of-care use. The use of administrative data sets for outcomes analysis, with the limited clinical information available, has been a challenge (Hall et al., 2007; Krumholz et al., 2006).
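The observed-vs.-expected comparison used by these registries can be shown in miniature: each patient’s expected event probability comes from a logistic risk model generated from national data, and a site’s observed/expected (O/E) ratio divides its observed event count by the sum of those expected probabilities. The sketch below is purely illustrative; the coefficients and patient values are invented and do not come from any actual registry model.

```python
import math

# Hypothetical national risk-model coefficients (illustrative only).
INTERCEPT = -3.2
COEFS = {"age": 0.04, "diabetes": 0.6, "prior_mi": 0.8}

def expected_risk(patient):
    """Predicted event probability from the logistic risk model."""
    z = INTERCEPT + sum(COEFS[k] * patient[k] for k in COEFS)
    return 1.0 / (1.0 + math.exp(-z))

def oe_ratio(patients, observed_events):
    """Observed/expected ratio for one site; values above 1 suggest worse-than-predicted outcomes."""
    expected = sum(expected_risk(p) for p in patients)
    return observed_events / expected

site_patients = [
    {"age": 70, "diabetes": 1, "prior_mi": 0},
    {"age": 55, "diabetes": 0, "prior_mi": 1},
]
print(round(oe_ratio(site_patients, observed_events=1), 2))  # prints 1.0
```

The same arithmetic underlies cross-site comparisons: because every site’s expected count is computed from one national model, O/E ratios are comparable across sites with different case mixes.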

For data analysis administration, there continues to be a temporal discontinuity between the data sets and the analyses, with all of these analyses being retrospective in scope. Overall, where these large data set analyses are concerned, analysis activities have required rather expensive infrastructure to manage the collection and analysis of the data. Finally, in part because of the structural nuances of the data sets, cross-platform analyses (clinical + clinical, clinical + administrative, clinical + financial) have so far been difficult to accomplish.

The implementation shortfalls highlight the principle that outcomes are more important to patients than the structure or process of care. The fact that these analyses and their outcomes are not generally available is an important concern in a learning health environment. It is important not only to learn what the most effective care is, but also to be able to make and apply that decision as close to the point of care as possible. For example, risk models that directly compare, for patients, the effectiveness of therapeutic options have not yet been developed, although work in this area is beginning (Ferguson, 2008; Singh et al., 2008). Financial cost and effectiveness data need to be part of this point-of-care implementation.

Clinical comparative effectiveness assessment as part of a learning health system will, to a varying degree, affect each of these resource and analysis demand shortfall issues. The stress that these demands place on existing data resources is substantial. The possible opportunities for migrating to more operationally sustainable platforms in the future become somewhat clearer when coupled with the IOM criteria for future health information.

An Overview of Next Steps

Data Resource Development

The source of much of these data in the future will be the EMR infrastructure, which is still mostly site specific. However, the focus of this paper is on the resource development steps that are critical but generic to a functional data infrastructure for CER going forward.

First, it will be necessary to better define the type, source, and use of data for comparative effectiveness (Figure 5-3). This includes the classic Donabedian triad, although in this case outcomes include both clinical and financial data. In addition, data and metrics for efficiency, effectiveness, and appropriateness need to be available. In terms of data sources, the use of administrative, financial, and clinical data, including both RCT and observational information, needs to be agreed upon, as does the resource sharing among the components of these data sources. Finally, there needs to be substantive agreement on how the data from these resources will be used, with the correct data applied to the correct use. Indeed, much of these data are already available; resource development efforts need to focus on how to operationalize data collection, how to make change and interoperation dynamic, and how to standardize data use for comparative effectiveness analyses. In examining how to address the integration and incorporation of data sources for CER, it becomes clear that progress can be made at all three levels of healthcare delivery systems, as suggested above. At the integrated health system level, the extent to which this is possible is largely defined by individual system architecture. At the national and local levels, different resources and opportunities are available at one level but not the other; however, success at either or both levels moves the agenda forward. Integration of data at the patient level across administrative and clinical data platforms can be, and has been, accomplished locally. At the national level, the National Consortium of Clinical Databases (NC2D) is examining how the Society of Thoracic Surgeons, the ACC, and the American Hospital Association clinical database activities can address these integration and incorporation challenges from the data resource perspective, much the same way that these societies have partnered to create guidelines for clinical care and appropriateness. Other national-level data integration projects are under way, facing different sets of challenges with respect to privacy and technology than perhaps exist at the local level.

FIGURE 5-3 A relational “map” suggesting major areas of data resource development for comparative effectiveness studies.
NOTE: ACC = American College of Cardiology; AHA = American Hospital Association; CE = comparative effectiveness; CMS = Centers for Medicare & Medicaid Services; EMR = electronic medical record; IH = international health; PCI = percutaneous coronary intervention; QI = quality improvement; RCT = randomized controlled trial; SES = socioeconomic status; SSNDI = Social Security National Death Index; STS = Society of Thoracic Surgeons.
SOURCE: Ferguson, T. B., IOM workshop presentation, July 30-31, 2008.
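The patient-level integration of administrative and clinical data described above is, at its simplest, deterministic record linkage on a shared patient identifier. The following minimal sketch illustrates that idea; all field names and values are hypothetical, and real linkage would additionally need probabilistic matching and privacy safeguards.

```python
# Minimal deterministic record linkage: join clinical and administrative
# records on a shared patient identifier. All fields are illustrative.
clinical = [
    {"patient_id": "P001", "procedure": "CABG", "ejection_fraction": 45},
    {"patient_id": "P002", "procedure": "PCI", "ejection_fraction": 60},
]
administrative = [
    {"patient_id": "P001", "total_charges": 91000, "length_of_stay": 7},
    {"patient_id": "P002", "total_charges": 38000, "length_of_stay": 2},
]

def link_records(clinical_rows, admin_rows, key="patient_id"):
    """Merge rows sharing the same identifier into one patient-level record."""
    admin_by_id = {row[key]: row for row in admin_rows}
    linked = []
    for row in clinical_rows:
        match = admin_by_id.get(row[key])
        if match:  # keep only patients present in both sources
            linked.append({**row, **match})
    return linked

merged = link_records(clinical, administrative)
print(merged[0]["procedure"], merged[0]["total_charges"])  # prints "CABG 91000"
```

The resulting patient-level records combine clinical detail with cost and utilization data, which is exactly the cross-platform view that single-source data sets cannot provide.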

A third important area for data resource development is the need to better align incentives for CER, both informational and financial. At the integrated health system level, this alignment would depend on both architecture and resource factors. Again, differential opportunities between national and local settings become apparent. Currently, both privacy and funding issues for these resource development activities are very difficult at the national level. As mentioned, specialty society efforts in QI, funded by provider contributions to support and participate in society-led databases, have yet to be recognized as valid practice expenses despite their substantive contributions to improved quality of care (Ferguson et al., 2003). On the other hand, the national platform allows for the creation of new and important incentives and rewards for alignment; this has implications for the research, clinical care evaluation, and clinical-plus-financial data agendas going forward. It is in this area where the differentiation between national and local activities might perhaps yield the greatest early development achievements. First and foremost, the local level is where comparative effectiveness implementation and results will be the most patient-centric. Potential assets include the facts that privacy issues are currently far less complex at the local level than at the national level and that the opportunity for pilot projects to demonstrate feasibility is substantial. Potential liabilities include the fact that local infrastructure expertise and information resources may be limited, although as healthcare systems move into the EMR environment, this is less likely to be the case. Additionally, funding for these activities remains an issue to be addressed, because they are an expensive investment. Better alignment of these incentives for CER will reduce the overall cost while making sustainable comparative effectiveness studies a part of everyday clinical care delivery.

An additional data resource development area is to better define the opportunity and value of clinical and research data (part of the application data in Figure 5-1) for use in CER. The value of these data is currently referenced to administrative data, largely through the major payer resource mechanisms, which are ubiquitous and applicable to all providers. This is not meant to diminish the importance or utility of administrative data, but only to acknowledge that they likely have limited usefulness for the evaluation of comparative effectiveness.

At the national level, this comparative effectiveness agenda ultimately must intersect with the design principles for a national system for performance measurement and reporting (IOM, 2006). It is in part through this intersection that the comparative effectiveness agenda and six healthcare aims articulated by the IOM in 2001 (IOM, 2001) can be pursued simultaneously. Specifically, in each of the areas a comprehensive, longitudinal, and patient-centric measurement can be linked to clinical and research data resources. In addition, shared accountability can be linked to guidelines for clinical care and appropriateness of care.

At the local level substantial opportunity exists as well. Provider-level evaluation for quality improvement, benchmarking, and profiling can be most easily extended to the shared accountability criterion at this level. Migration from data with a provider-centric focus to assembling patient-level data across the entire medical condition is beginning to occur at the local level, while meeting the comprehensive, longitudinal, and patient-centric parameters outlined above. Finally, the local level provides for increased system flexibility for change. With these anticipated dynamic developments in data resources, the integration between clinical and financial data can keep pace with the annual financial reassignment process. The data resource infrastructure must equally be dynamic and rapidly amenable to changes in data definitions and measurement specifications. Perhaps most importantly, it is at this local level where a patient-centric healthcare value can be measured, where value is the quality of patient outcomes relative to the dollars expended (Porter and Teisberg, 2007). Again, from the patient’s perspective, comparative effectiveness evaluation is most important and has its greatest impact at this level.


Data Analysis Improvement

As important as the data resource development process will be, the evolution of data analysis will be a key feature for making the comparative effectiveness agenda operational over the long term. As mentioned, most robust data analyses available today are by definition retrospective; they involve harvesting, aggregating, and then analyzing existing data. As useful as this information can be, there remains too great a gap between these analysis outcomes and point of care.

The first challenge is to incorporate these new data resource developments into analyses (Figure 5-4). Thus at the national level, long-term clinical and financial outcomes analyses are of critical importance. Additionally, the data resource developments will require the generation of new risk models to assess outcomes.

At the local level, the drive to integrate information across the medical condition will in turn drive new analysis tools for these integrated data sets that can be managed with the local level of expertise. This will include specific integration of data sets that move these local analyses beyond the national clinical or administrative analyses toward being more patient-specific. For example, Figure 5-5 illustrates an analysis integrating 3 years of National Adult Cardiac Surgery Database clinical data with National Death Index data for long-term outcome and with ZIP code data defining socioeconomic status for those patients operated upon at the East Carolina Heart Institute between 2005 and 2007. Likewise, regional data-sharing arrangements, such as the Virginia Cardiac Surgery Quality Improvement project sharing adult cardiac surgery clinical and financial data, can be highly productive (Grover, 2008).
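The kind of local integration just described, clinical records linked to death-index follow-up and ZIP-code-based socioeconomic status, can be approximated in miniature with a hand-rolled Kaplan-Meier estimator stratified by socioeconomic group. All patient records below are invented for illustration; a real analysis would use a validated survival library and far larger linked data sets.

```python
def kaplan_meier(times, events):
    """Kaplan-Meier survival estimates; events[i] is 1 for death, 0 for censoring.

    Returns a list of (time, survival probability) points at each event time.
    """
    data = sorted(zip(times, events))
    at_risk = len(data)
    survival, curve = 1.0, []
    for t, death in data:
        if death:  # survival drops only at event times
            survival *= (at_risk - 1) / at_risk
            curve.append((t, survival))
        at_risk -= 1  # both deaths and censored patients leave the risk set
    return curve

# Hypothetical linked records: (follow-up years, death indicator, SES stratum).
patients = [
    (1.0, 1, "high_poverty"), (2.5, 0, "high_poverty"), (3.0, 1, "high_poverty"),
    (4.0, 1, "low_poverty"), (5.0, 0, "low_poverty"), (5.5, 0, "low_poverty"),
]
for stratum in ("high_poverty", "low_poverty"):
    subset = [(t, e) for t, e, s in patients if s == stratum]
    times, events = zip(*subset)
    print(stratum, kaplan_meier(list(times), list(events)))
```

Separating the curves by poverty stratum mirrors the Figure 5-5 design: survival differences across ZIP-code groups become visible only after the three data sources are linked at the patient level.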

One of the most important developments in data analysis and improvement will be new patient-centric comparative effectiveness analyses. Current outcome risk models are procedure specific and generated from national data. While these are important metrics with which to evaluate risk and effectiveness, they have limited direct applicability to any individual patient in a particular healthcare setting affected by site-specific care practices and local provider influences. At the national level, the challenge will be to develop comparative effectiveness risk models that account for multiple procedural options. The integration of clinical data resources, such as the NC2D initiative, is a critical step in this analysis development, because these comparative effectiveness risk models cannot be developed and tested on single-center or local-site data. An additional challenge at the national level will be to develop models for assessing risk over the duration of the medical condition, beyond the specific intervention-based episodes of care. Contributions from progress at the local level in these developments may prove extremely useful.


FIGURE 5-4 A similar relational map suggesting major areas of data analysis improvement for comparative effectiveness.
NOTE: CE = comparative effectiveness; IH = International Health; POC = point of care; SES = socioeconomic status.


FIGURE 5-5 An analysis of survival following adult cardiac surgery from a single institution in a patient population operated on between July 2002 and July 2007 is shown. Periprocedural clinical data from the Society of Thoracic Surgeons National Cardiac Database, Social Security Administration Death Index data for long-term mortality outcomes, and U.S. census data for socioeconomic status based on the ZIP code in which the patients resided were linked at the patient level. The covariates were separated by the percent of population with a ZIP code at or below the poverty line from all ZIP codes within eastern North Carolina. This relatively simple analysis highlights the ability to integrate data at the local level.
NOTE: CI = confidence interval; HR = hazard ratio.

At the local level, the challenge will be to incorporate these new comparative effectiveness risk models into local data systems. Importantly, this local analysis capability incorporates site-specific and local provider effects into the comparative effectiveness dialogue between the patient and his or her providers, a key component to informed patient choice (Weinstein et al., 2007) and shared decision making (King and Moulton, 2006).


The next step in this analysis improvement, then, is to develop tools for clinical point-of-care application of comparative analysis. The argument for this can be distilled as follows: for comparative effectiveness analyses to substantively affect the quality of care, they (1) must encompass preference- and supply-sensitive care practices, (2) must be available at the point of care, and (3) must be usable for multidisciplinary decision making prior to selecting the best therapeutic option for that patient. These structure (multidisciplinary approach) and process (comparative effectiveness risk models) evolutions will drive the comparative effectiveness process one step closer to true patient-centricity. This in turn creates an absolute requirement to move beyond the retrospective analysis structure used for current analyses of both clinical and administrative data sets. To accomplish this, the analysis engine needs to be embedded in the meta-layer architecture of the data repository, and a selected portfolio of straightforward but useful clinical comparative effectiveness analyses must be continuously generated and available for review in a dashboard model (Figure 5-6). By design these analyses are focused at the comparative effectiveness level and are not structured to compete with or replace the larger, robust data set analyses.
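At its core, the point-of-care dashboard described above reduces to computing, for the patient at hand, each therapeutic option's predicted complication risk and expected cost, and then ranking the options. The sketch below is a deliberately simplified illustration of that idea; the models, coefficients, and costs are all hypothetical, not taken from any actual risk calculator.

```python
# Hypothetical per-procedure risk models and average costs; in a real
# dashboard these would come from national and site-specific data.
PROCEDURES = {
    "A": {"baseline_risk": 0.04, "age_slope": 0.002, "cost_per_case": 52000},
    "B": {"baseline_risk": 0.06, "age_slope": 0.001, "cost_per_case": 31000},
}

def dashboard(patient_age):
    """Return predicted complication risk and expected cost for each option."""
    rows = {}
    for name, model in PROCEDURES.items():
        # Linear age adjustment around a reference age of 60 (illustrative).
        risk = model["baseline_risk"] + model["age_slope"] * (patient_age - 60)
        rows[name] = {"risk": round(risk, 3), "cost": model["cost_per_case"]}
    return rows

options = dashboard(patient_age=72)
# Pick the option with the lowest predicted risk; break ties on cost.
best = min(options, key=lambda k: (options[k]["risk"], options[k]["cost"]))
print(options, "-> choose", best)
```

Because the comparison is recomputed from the patient's own covariates, the same dashboard can favor different procedures for different patients, which is precisely the patient-centricity the text argues for.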

Importantly, the implementation of this approach at the local level allows for point-of-care application of these analyses in the context of the patient’s medical condition. This again brings the comparative effectiveness analysis one step closer to patient-centricity. Continuously updated dashboard feedback gives providers the tools they need at the point of care to make therapeutic decisions prospectively based on comparative effectiveness (Figure 5-7).

FIGURE 5-6 Panel A shows a snapshot from the Society of Thoracic Surgeons (STS) Web-based online risk calculator and the data that can be generated based on this national analysis. Panels B and C show how national-level information can be brought to the local level by comparing predicted risk of morbidity or mortality at this level, influenced by site- and surgeon-specific variables, with this national risk assessment. In addition, the predicted cost of these major outcomes is illustrated. Panel B shows this analysis for surgeon #1, while Panel C shows the same analysis for surgeon #3, both of whom operate at the same institution. Panels B and C show somewhat different predicted outcomes for different categories of patient risk (gray circles), with the size of each gray circle representing cost. The dark circle highlights data for patient A. Cost-per-surgeon data for each of these risk categories are shown in each lower panel. All of the information shown is based on real clinical and financial data that have been merged together; online, these three panels constitute a portion of a dynamic dashboard.
SOURCE: Analysis and presentation courtesy of G. Sziraczky, ARMUS Corporation.

Perhaps the last step in data analysis improvement is to develop new analytical tools for CER. This will be necessary as the new data resource developments get incorporated into clinical effectiveness studies. Health policy and outcomes research will need to validate the short- and long-term value of comparative effectiveness analyses in driving change in care delivery. At both the national and local levels, these analytical tools will affect quality, effectiveness, appropriateness, and, at the local level, efficiency of care. The research agenda will need to document and validate how the learning health system component based on comparative effectiveness affects these outcomes on an ongoing basis.

Conclusions

A broad array of infrastructure development must occur to transition to a learning health system. Critical to this effort will be the data resource development and data analysis improvement issues addressed here. Three key principles can perhaps be established based upon this analysis. The first is that this resource development and analysis improvement process must translate into infrastructure that is appropriate for dynamic, real-time availability for learning. This in turn will require the incorporation of a much broader array of data resources into the learning infrastructure and into comparative effectiveness studies than has been used in the past. Better definition of the type, source, and use of these data resources is needed, along with the public–private partnerships necessary to create the scope of data resources and infrastructure required for comparative effectiveness work. The second principle is that real-time learning will require feedback processes to be built into the research development and analysis improvement strategies. This includes data and analysis feedback to all major stakeholders, in part as a return on their investment in the infrastructure development for comparative effectiveness. The third principle is that comparative effectiveness is at its optimal usefulness when applied with a patient-centric focus at the point of care. Tools that foster real-time analysis will be an important development. These tools will be embedded in these data resources to allow real-time insights into care delivery and will be used during a shared decision-making process prior to selecting the optimal therapeutic option for a specific patient.

In aggregate, much progress has been made already. Addressing the components of data resource development and data analysis improvement outlined here will further move the agenda forward to meet the 2020 goal.



FIGURE 5-7 The dashboard concept is extended to illustrate what a potential point-of-care comparative effectiveness dashboard might look like. Based on randomized controlled trial and application data, procedures A and B are felt to be at clinical equipoise in terms of therapeutic benefit overall. This dashboard brings the comparative effectiveness analysis down to the level of patient A. In this hypothetical example, procedure B would be the option of choice for patient A based both on percentage of complications and on cost compared to procedure A.
SOURCE: Analysis and presentation courtesy of G. Sziraczky, ARMUS Corporation.


PRACTICAL CHALLENGES AND INFRASTRUCTURE PRIORITIES FOR COMPARATIVE EFFECTIVENESS RESEARCH

Daniel E. Ford, M.D., M.P.H., Vice Dean for Clinical Investigation, Johns Hopkins School of Medicine

Overview

Myriad challenges associated with the conduct of effectiveness clinical trials—particularly RCTs in practice settings—will likely reduce the immediate impact of any expanded funding for comparative effectiveness studies. This type of research is very difficult to do; compared with analyzing existing data, completing an RCT of effectiveness across multiple practice sites represents a quantum difference in effort, time, and resources. This paper briefly describes some of the key challenges to the efficiency of clinical research, based on the experiences of the average investigator and of patients asked to join a research study; reflects on some of the additional challenges associated with CER; and offers suggestions on priorities for research infrastructure improvement that might help improve the overall efficiency of clinical research.

Practical Challenges

Johns Hopkins University’s current clinical research activities include approximately 3,500 active protocols, roughly 1,000 new protocols a year, and about 700 protocols using investigational drugs. Johns Hopkins is one of several academic medical centers with this high volume of clinical research. The National Institutes of Health (NIH) created the Clinical and Translational Science Awards (CTSA) program as one way for academic centers to consolidate existing funding and to add new funding to promote the quality and efficiency of translational research. At Johns Hopkins we have created the Institute of Clinical and Translational Research with the goal of “connecting science to people.” Through our CTSA program, Johns Hopkins is collecting the data to transform the clinical research enterprise—to help define and promote what it takes with respect to manpower, efficiency, and improving the value of research.

Over the past 10 years, FDA-regulated clinical trials have moved precipitously from U.S. academic centers to community hospitals and emerging international centers. Reasons include the increasing length of time it takes to have study protocols activated and difficulties recruiting participants. The length of time it takes to meet recruitment goals in many studies is also a concern. For example, the average time from the first application proposing a clinical trial to being ready to enroll the first patient in the National Cancer Institute Eastern Cooperative Oncology Group network is now about 800 days for phase 2 and 3 trials (Dilts et al., 2008). These barriers appear to be greater in the United States than in emerging countries, and they come on top of the higher costs of conducting research in the United States. Efficient completion of clinical studies is especially important for CER. These studies lose their value if the practice environment changes, the costs of the interventions change (e.g., a medication becomes generic), or new treatments become the de facto standard of care. Because of differences in health status, previous use of treatments, and context of healthcare delivery, comparative effectiveness studies conducted outside of the United States may not be generalizable to the United States without careful analysis.

Why are individuals in the United States less likely to enroll in human subjects research than individuals in some other countries? There is no one reason, and evidence is generally lacking. Most patients in the United States have health insurance and do not need to enroll in clinical studies to receive treatment for their medical conditions. The voluntary nature of joining a research study is emphasized in the United States. Consent forms are becoming longer and addressing more contingencies. One issue specific to the United States is the multiplicity of insurance carriers that pay for Americans’ care. There is no standard approach to how these insurance carriers view support for clinical research, which leads to confusion for Americans in several respects. AHRQ has just posted a technology assessment report called “To What Extent Do Changes in Third-Party Payment Affect Clinical Trials and the Evidence Base?” (AHRQ, 2009). The review finds that there is very limited evidence available to determine whether health insurance policies affect enrollment in clinical trials. Interviews with investigators found that device studies were more susceptible to problems with enrollment based on insurance status. For this discussion, the report’s statement on clinical effectiveness studies is important:

For later-phase research—notably comparative effectiveness of existing therapies and studies of off-label uses of approved therapies—the impact of payment policy may be greater, but is not well defined. No entity assumes full responsibility for research costs, and plans to co-share expenses are in their infancy. Thus, in areas lacking sufficient evidence, especially regarding products that are already on the market, there is currently no consensus on who should pay for the evidence-generating research. (AHRQ, 2009)

Since evidence concerning insurance barriers to participating in clinical research is limited, I will discuss some examples that would be particularly central to many comparative effectiveness studies. Let us consider a common comparative effectiveness study that compares two approved treatments that are covered by most insurers. Patients would have the following choice: They could choose one of the approved treatments that are already covered by their health insurance. If they went this route, they would know what their copayments would be and would not be concerned that any toxicity resulting from the treatment might not be covered by their health insurance. On the other hand, suppose they were asked to consider enrolling in a research study comparing the treatment they chose to another, equally appropriate treatment. They might have questions about the costs of treatment at the time of consent and generally would be told to call their insurance carriers. If they called their insurance carriers to discuss the financial implications of enrolling in a study, it is very likely they would be told their insurance does not cover research. This simple answer is technically correct, but it is misleading. In general, the study budget has been set up in such a way that the true research component of the study is paid for by the sponsor of the study. The expectation is that the health insurer will still pay for whatever treatment provided in the trial it would have paid for even if there were no clinical trial. However, in CER it is possible that new models of funding research studies will be created. If a study is comparing a cheap, generic medication to a new, expensive medication, who will pay for the study medications? Will the medication copayment be standardized for all research participants in the protocol or allowed to vary depending on the insurance carrier? If the study team has to negotiate this with every insurance carrier a patient might have, the study will become too cumbersome and slow to be of much value.
If both medications are provided free of charge this would not reflect the effect of copayments on adherence in the real world. Even if the study protocol were designed to allow standard copayments, it would be difficult to provide accurate information regarding the financial responsibilities to a potential research participant. The copayment may depend on the time of year when the patient is joining the study and whether limits for copayments have been reached at some time during the study.

Another important issue for research participants is who would pay if there are adverse events or toxicities related to a research intervention. It is important to note that when academic centers agree to conduct studies funded by commercial sponsors, the sponsors are generally required to pay for any adverse events associated with the study interventions. In contrast, studies funded by the NIH do not have any mechanism to pay for injuries related to the study intervention. Most consent forms for federally funded studies include a statement that the participant or his or her insurance company is responsible for any injuries that result from the study intervention. It is no wonder that patients in the United States would have second thoughts about taking on the risks associated with joining studies. In the typical phase 1 study, patients may be willing to take on more risk to be randomized to a research intervention that is not currently broadly available but may represent their last chance. A study comparing two currently available treatments may not offer the same perceived advantage to make the risks acceptable.

What can be done to simplify the financial implications of joining a comparative effectiveness study? There needs to be some way to create a model for research support that reduces the number of insurers with which a study team must interact. The CMS Clinical Trial Policy, based on the 2000 National Coverage Decision, has been useful in simplifying the process, and an important first step would be for all insurance companies to agree to follow this policy. The policy allows a single coverage decision so that people who join a trial do not need to get clearance from their individual insurance carriers. Studies that have had formal peer review by a federal agency are considered to have scientific value, and CMS agrees to cover services normally delivered for that clinical condition. For example, while not paying for an investigational drug, CMS will pay for the administration of that drug. Finally, CMS agrees to pay for evaluation of the toxicities associated with an investigational intervention.

Our experience at Johns Hopkins is that an increasing number of insurance companies do cover the associated costs in clinical trials when they are contacted by experienced staff from our insurance coverage office. However, approximately 15 percent of the patients who have already agreed to join a clinical trial are not cleared to join by their insurers. In most cases this is not the required policy of the insurer but rather a decision by the individual's employer not to cover participation in a clinical trial. National data on coverage of clinical trials by insurers would be valuable, but they are not readily available.

We need to make sure that patients, researchers, healthcare providers, healthcare systems, insurers, and study sponsors are all enthusiastic about participating in CER. Without the support of each group, CER is unlikely to reach its promise of informing patients and providers about best practices at the time they need to make their decisions.

Infrastructure Priorities

1. Process in Place for Getting Timely Consultation from All Stakeholders

The results of CER have implications for patients, patient families, healthcare providers, payers, and the manufacturers of healthcare treatments. While it is important to seek the perspectives of all stakeholders before designing and interpreting comparative effectiveness studies, the process must be streamlined. The added value of each additional consultation as the study design is finalized should be measured. Ideally, representative standing panels should be available for timely consultation.

2. Streamline Initiation of Study Through Both Institutional Review Boards and Contracting Mechanisms

Institutional review boards (IRBs) and their required reviews are frequently cited as a barrier to timely starts of studies. Multicenter trials may need to get approval from multiple IRBs. In response to these delays, many have suggested that central IRBs have advantages. Central IRBs can be more efficient in creating the consent form and conducting the initial review. However, protection of human subjects involves much more than crafting the consent form. Local oversight is needed for the more frequent issues related to human subjects protection, including training research teams and monitoring their competence, properly consenting participants, recognizing adverse events in a timely manner, and supervising investigational drug services. IRBs are responsible for the conduct of the entire study, not just approving the consent form. Supervision by a central IRB that has little ability to monitor and implement local corrective action plans does not seem as desirable as supervision by a local IRB. On the other hand, IRBs have to be given appropriate resources to operate efficiently. At Johns Hopkins there are now five separate IRBs that all meet on a weekly basis, supported by an electronic IRB application and tracking system. Each IRB member is paid to serve on the IRB, and there are 25 additional staff to support the IRBs. With this level of support, the IRBs can provide quality reviews, with most approvals coming in less than 30 days.

Common and expected issues related to IRB and contracting review should be examined, and policies should be created to address them. For example, institutions should have uniform policies on when they will allow a practice to be covered by their IRB. The requirements for training and supervision should not be reinvented for each study. Contracts should have standard policies on indemnification and the collection of biospecimens in study protocols.

3. Standard Policy on Insurer’s Coverage of Services for Individuals in Clinical Trials

As described above, patients need to be confident that there will be no financial penalty if they receive their care in the context of a clinical trial. The easiest solution would be for all insurers to accept the policies developed by CMS. This would eliminate the need to get the insurer's approval before a participant enrolls in a study. The approval step is one more barrier for patients deciding whether they want to enter a clinical trial.

4. Enhancing the Research Capability in Hospitals and Practices Outside the Academic Center

CER includes observational studies, evidence synthesis, and randomized controlled trials (RCTs). A priority for supporting practice-based clinical trials is increasing the capacity of practice-based research networks. There are now over 100 practice-based research networks, but many do not have all of the components needed to complete research efficiently. At Johns Hopkins we have started the Johns Hopkins Clinical Research Network. Sponsors using this network will be assured that all staff are trained to Johns Hopkins standards, that a single point of contracting is available, that only one IRB will complete the primary review with timely communication to the other IRBs, and that Johns Hopkins Web-based research IT will be accessible throughout the practices in the network.

Practice-based research networks do not need buildings or equipment to function. They do need stable funding for the people who can organize and enhance communication between the practice-based practitioners and researchers. Practice-based research networks need funds to create the contractual agreements and quality assessments that allow research to be conducted efficiently. While accountability for productivity is necessary, the costs associated with recruiting multiple practices anew for each study are considerable. Small amounts of funding to sustain practice networks would be extremely valuable as they would help minimize the costs of recruiting and training practices.

5. Stronger Partnerships Between Researchers and Healthcare Systems

For some research questions, the best approach is to randomize by provider or healthcare setting. Healthcare systems need to more carefully consider the possibility of randomization as they roll out new programs. At the same time, researchers have to realize that healthcare organizations have their own timelines and cannot wait indefinitely before they begin implementation of new programs. Delays in initiating studies caused by the need for multiple submissions for funding are particularly damaging for clinical effectiveness studies. More rapid grant cycles may be needed to increase the likelihood that healthcare organizations and researchers are able to work together on more rigorous evaluation of new healthcare interventions.

6. Need for More Research Staff

While the focus is often on principal investigators when discussing research personnel capabilities, research now requires a team much larger than the principal investigator alone. For CER, research coordinators are needed who are experts in recruiting and retaining research participants. IT professionals with expertise in Web-based data entry and tracking systems for community practices are in short supply, as are analysts who are expert in preparing large administrative data sets and assisting with statistical analysis. Another need is for biostatisticians who are expert in the analysis of cluster RCT designs and in sophisticated methods for assessing causal relationships in observational studies.

Concluding Observations

Efficient, valuable CER requires the enthusiastic support of multiple stakeholders, including patients, healthcare providers, healthcare plans, and the research community. Unfortunately, if any one of these stakeholders participates only in a limited way, a study will not progress, and the value of the research will be diminished. CER infrastructure needs to provide long-term support so that research becomes a routine part of the delivery of care in the United States.

TRANSFORMING HEALTH PROFESSIONS EDUCATION

Benjamin K. Chu, M.D., M.P.H., President,
Kaiser Foundation Health Plan and Hospitals
Southern California Region

Overview

In the health professions we enter our respective fields because we want to improve the lives of our patients. We are taught first to do no harm. We are trained to use our knowledge and our devotion to lifelong learning to relieve suffering and improve the health of those we care for. We hold sacred our duty and our responsibility to our patients.

As professionals, we try to learn from the experience of those who preceded us. We apply their knowledge and their experience about what treatments have been effective for our patients. Nonetheless, even with this ingrained dedication to the principles of professionalism and the years of devoted study of existing knowledge, much of how we practice medicine is determined by the finite, cumulative set of experiences we gather from one-on-one interactions with our patients. Furthermore, on a community level, our health system operates on the principle that the sum of all those one-on-one interactions with well-trained professionals will lead to a healthier community.

Critical reviews of the performance of our health system against many measures of community as well as individual health outcomes point out substantial gaps (Schoen et al., 2006). Few informed observers would disagree that the U.S. healthcare system is experiencing a profound crisis characterized by skyrocketing costs; inconsistent, suboptimal care; and decreasing access to care (Crosson, 2005). We spend significantly more on health care per capita than other industrialized countries, yet adults in the United States receive only about half of recommended care (McGlynn et al., 2003). In its landmark Crossing the Quality Chasm report (IOM, 2001), the IOM stated that “the current systems cannot do the job. Trying harder will not work. Changing systems of care will.” The report envisions a delivery system capable of delivering care that is safe, effective, patient-centered, timely, efficient, and equitable while meeting six challenges:

  1. evidence-based care processes;
  2. effective uses of IT;
  3. knowledge and skills management;
  4. development of effective teams;
  5. coordination of care across patient conditions, services, and settings over time; and
  6. use of performance and outcome measurement for continuous quality improvement and accountability.

This IOM Roundtable on Value & Science-Driven Health Care is focused on identifying the infrastructure changes needed to help the nation fulfill this vision. While there are clearly gaps in key arenas critical to supporting a learning health system driven toward achieving the best health outcomes for patients, workshop participants also pointed to steady progress. Substantial investments in IT with decision support and patient care registry capabilities, international progress on clinical systematic reviews, and growing experience with new consumer-oriented Web-based tools that build on social networking capabilities illustrate a dynamic healthcare environment striving to put in place the key elements for success. It is also a system undergoing intense scrutiny as objective measures that can be used to define progress and success are developed and reported.

Transforming Health Professions Education

Transforming health professions education is less about training “informationalists”—comparative effectiveness and health services researchers and data analysts—than it is about creating training environments that encourage the effective use of these new tools by teams of physicians and other health professionals in order to achieve the best outcomes across the full continuum of care and over the lifetimes of our patients. It should not be a surprise that a system of training emphasizing individual responsibility and professionalism, but lacking systematic tools to verify effectiveness, would result in inconsistent performance. Without these tools, there is simply no way to know about gaps in care and no way to address them systematically. It is now possible to measure outcomes and to track and evaluate strategies for achieving better ones. In moving to address gaps in the infrastructure for evidence-based medical practice, perhaps a more important issue is the motivation and capability of the delivery system to use this infrastructure to achieve best performance. To facilitate diffusion of evidence-based practices it will be important to (1) establish clear expectations for high performance along clear and measurable dimensions of care, (2) encourage adoption of appropriate IT tools that provide essential information to drive performance improvement, and (3) align our payment systems to value better outcomes.

A health system driven toward achieving the best outcomes for patients, without the often conflicting goal of maximizing income, should be one that demands and supports a robust infrastructure to optimize care using the best available evidence. Organizing to meet these expectations will force health systems to address structural gaps, including reexamining the roles and responsibilities of the full range of health professionals in pursuit of a common goal of performance excellence.

Setting Expectations

The Commonwealth Fund has devoted a considerable portion of its efforts to defining and advocating for a “high-performing health system” through the work of its Commission on a High-Performing Health System. Its report cards define gaps in system-level performance. Its systematic efforts to highlight the attributes and accomplishments of health systems that strive for and achieve high performance set a clear bar for the U.S. healthcare system (The Commonwealth Fund, 2009). The Institute for Healthcare Improvement's clarion call to save 100,000 lives and to reduce harm for 5 million patients, combined with evidence-based tools and collaborative efforts to help achieve these goals, clearly sets benchmark expectations for high performance (McCannon, 2007). Collaborative efforts to improve ambulatory care outcomes (Landon et al., 2007), the CMS/Premier Hospital Quality Demonstration Project (Premier and CMS, 2007), and a variety of pay-for-performance efforts also set high expectations for quality outcomes, in addition to helping to define successful practices for achieving high performance. Public reporting of results on larger numbers of increasingly relevant health outcomes, combined with continued efforts to demonstrate successful strategies and practices, should create an environment in which systematic application of the evidence base in care delivery becomes the norm rather than the exception. Well-informed and self-advocating patients using robust Web-based resources could accelerate this transformation.

Adopting the Right Information Systems

Reconfiguring our health system to use evidence to optimize outcomes is nearly impossible in the world of paper records. For many, the hope for better performance lies with widespread adoption of electronic health records (EHRs). But as others in this Roundtable have pointed out, adoption of EHRs alone is not sufficient. Without evidence-based decision-support instruments, registries, panel management, and other tools, and without the attitudinal, cultural, and process changes necessary to use them effectively, the challenge cannot be met.

A high-performing 21st-century healthcare system will require coordination of care among many components. Interoperable health information technology (HIT) with a full suite of evidence-based decision-support tools, care registries, and panel and population analytic capabilities is a key driver of better outcomes. Adoption of HIT has progressed slowly and unevenly among hospitals and health centers (AHA, 2007; National Health Policy Forum, 2008). Getting to a high-performing 21st-century health system will require training future health professionals in environments that use these tools. Training programs have an obligation to create an environment that models the best care possible. Health professional schools, academic health centers, and health professions accreditation bodies should define the minimum standards for HIT needed to support a high-performing health system, and a timetable for obtaining this core infrastructure should be established. Consideration should be given to a suggestion raised in this symposium that funding for health professions education, particularly Medicare graduate medical education payments or their continuation, be explicitly tied to helping training programs gain access to these tools. Special funding might be needed to help safety-net training sites gain access to them.

Training program certification should also be increasingly tied to health systems that can demonstrate effective use of evidence-based medicine (EBM), teamwork, and continuous learning to push for measurable outcomes of quality care. Inpatient sites should demonstrate robust, performance-driven programs to improve patient safety. Primary care training should allow trainees to deliver team-based, data- and evidence-informed care in a setting that fosters coordination of care, preventive care, and optimal management of chronic conditions.

Payment Reform

Fee for service, the prevailing payment model in health care, concentrates effort on single units of interaction, often centered on an episode of illness. Payments are tied to visits, procedures, tests, or some other unit of care; they are not directly tied to desired or reasonable outcomes. Complications of care are reimbursed as additional necessary units of service, while care coordination and case management are assumed but not specifically reimbursed. As a consequence, the financial incentives in our health system favor more units of care regardless of the firmness of the evidence base for that care; they do not necessarily support better coordinated or managed care.

To put health care on solid evidence-based footing, financial incentives need to be aligned with reasonable expectation of the best possible outcomes. Medicare has already pushed to reduce or eliminate payments for errors in management and avoidable complications. The Medicare Payment Advisory Commission has proposed experimenting with bundled payments for an “episode of care” (MedPAC, 2008). Others have proposed evidence-informed case rate methodologies to bundle payments for the management of illnesses. Such bundled payments would include built-in payments adjusted for complexity and some fixed level of complications to encourage better care (de Brantes and Rastogi, 2008). Risk-adjusted comprehensive payments have been suggested as a way to encourage comprehensive, multidisciplinary, and well-coordinated primary care (Goroll et al., 2007). And, of course, full-risk capitation has been the financial model for the fully integrated healthcare system at Kaiser Permanente.

Needed Systems Changes

The practice of medicine is complex and has become increasingly so. There has been an explosion of medical knowledge, specialization, and sophisticated procedures that can yield remarkable results; an overall shift from acute illnesses to more difficult-to-manage chronic illnesses; and the welcome proliferation of effective preventive care strategies. A recent New England Journal of Medicine report reviewed the complexity of coordinating care throughout our nation's healthcare systems and noted that “it would take a physician 7.4 hours per working day to provide all recommended preventive services to a typical patient panel, plus 10.6 hours per day to provide high-quality long-term care.” Meeting these demands, as well as coordinating care between providers and ensuring seamless transitions from one setting to another, will require increasing dependence on multidisciplinary teams; better and perhaps larger organizations of health services that depend on interoperable EHRs; and robust decision-support and panel/population care management tools (Bodenheimer, 2008).

There has been a great deal of attention paid to the concept of the medical home and its potential to improve care for all (Davis and Schoenbaum, 2007). Broadly defined, a medical home is “a physician-directed practice that provides care that is ‘accessible, continuous, comprehensive, and coordinated and delivered in the context of family and community’” (AAP/ACP/AOA, 2007). Other features of this model include care coordination and integration facilitated by registries and HIT, the use of evidence-based decision support, and engagement in quality improvement (QI) activities (Rittenhouse et al., 2008). Several studies suggest that the use of medical homes leads to improvements in care and decreased resource use (Arvantes, 2007; Paulus et al., 2008).

Team-based, evidence-informed processes have been at the core of the success of the patient safety and QI efforts spurred by collaborations sponsored by the Institute for Healthcare Improvement, the Joint Commission, and the Premier Hospital Quality Demonstration Project, among others. Interdisciplinary accountability for adherence to evidence-based protocols, bundles of safety practices, and checklists to ensure reliability serve to bolster these efforts and have yielded encouraging results. Care transitions are also amenable to team efforts to help patients avoid complications and rehospitalizations (Coleman, 2006).

Evidence-Based Medicine at Kaiser Permanente

At Kaiser Permanente, we embrace the expectation of high-quality performance on behalf of our patients. We now have real-time information on quality processes and outcomes that can serve as a guide and a measure of effective improvement efforts. We also operate under a financial capitation model that encourages better outcomes. These conditions have encouraged changes in our delivery system that have implications for changing roles for healthcare personnel. Here is an example.

Like a number of large health systems across the country, Kaiser Permanente has invested heavily in IT to give the system the full capabilities of a high-performing, learning health system. The organization is approaching the final stages of implementing an EHR with Web-based capabilities that tremendously facilitate communication of information with patients and among a host of health professionals in both outpatient and inpatient settings. We have developed sophisticated registries of patients with a variety of conditions and have deployed evidence-based decision-support tools and panel and population management aids. Not surprisingly, these tools highlight variations in practice and outcomes even in a system that has prided itself on adherence to best-practice protocols for many years. In a world of paper charts, where the unit of clinical activity is the patient encounter, we simply did not know what we did not know. Now, tools that allow our system to track performance down to the practitioner level have given us the ability to tailor efforts to achieve better outcomes. Often, this has required remodeling care delivery.

For example, as we developed the capacity to track a portfolio of preventive interventions and chronic disease control measures for our population of almost 3.3 million people in Southern California against evidence-supported standards, we steadily and dramatically improved, but then we reached a performance plateau. As a primary care physician-based system we depended on our increasingly harried primary care physicians to achieve the desired outcomes. Computer-generated outreach reminders and a host of other systemic strategies continued to fall short of our expected goals.

Using our electronic database and population care management tools, we looked at our patients who had large gaps in over 10 key measures covering age-appropriate preventive care and chronic disease management. To our surprise, in this population of patients who were suboptimally managed according to our guidelines, three out of five received their care at Kaiser not through their primary care physicians but through a host of specialty interactions. In any given year a primary care physician could at most address the gaps in only 40 percent of these patients. To reach the other 60 percent would require a systemwide effort that would have to involve specialty areas. Further, we realized that depending solely on physicians would not solve the problem.

As a result, in Southern California we have launched an ambitious redesign of our ambulatory care setting, emphasizing a “proactive office encounter.” This effort involves every member of the healthcare team in both primary care and specialty areas to address these gaps in care. Gaps in recommended care are identified for patients weeks in advance of their scheduled visits. Receptionists and professional and ancillary staff attempt to address these gaps before, during, and after the visit. In addition, physicians are encouraged to work with staff to address gaps identified through our panel management tools. Frontline staff enthusiasm and accountability for helping patients achieve better outcomes have been noticeable in the past year as performance has rapidly improved.

Our experience in the inpatient setting is similar to that of other organizations that have embraced the challenge to improve patient safety in hospitals. Dramatically reducing infections, preventing falls and avoidable complications, and ensuring optimal management of patients who present with serious illnesses require the same team focus and devotion to systematic adherence to proven protocols and bundles of care. After reviewing the results for the use of team-based simulation training in high-risk areas, Kaiser Permanente has begun to roll out a comprehensive effort using computerized human patient simulators, with some encouraging results (Draycott et al., 2006). Further, to optimize care after hospitalization and to prevent unnecessary rehospitalizations, it has become increasingly clear that managing the transitions in care is a key component. Intensive case management, home health, home monitoring, and other approaches have helped us to reduce rehospitalization rates for chronic illnesses.

Finally, with the availability of our database tools and analytic capabilities, we have expanded our research unit to capitalize on the available information for observational studies of care and to conduct health services research. The goal is to tap into the richness of this information to develop even more effective strategies.

Conclusion

In summary, health care is moving toward a patient-centered, evidence-based health management orientation. Computerization of health records, wider use of patient care registries, greater availability of tools that allow for tracking individuals as well as populations of patients, and information-savvy consumers will drive our current fragmented health system toward one that emphasizes greater accountability, transparency of information, and higher levels of performance. Computer-assisted tools with sophisticated evidence-based decision-support protocols, combined with process changes and strict adherence to demonstrated, cost-effective bundles of care, can lead to safer and better care. Gaps in preventive care and chronic disease management can be easily tracked. To correct gaps in care and to ensure safe and effective interventions, physicians and other health professionals will increasingly have to work together in care teams and share accountability for their patients’ clinical outcomes. Acute episodes of illness will require coordination of handoffs, patient safety protocols and checklists, and other interventions designed to minimize harm and maximize benefit to our patients. Chronic disease management and adherence to known effective preventive measures will become systemwide accountability requirements. The complexity of care and the huge burden placed on ever shorter physician–patient interactions with a multitude of different clinicians will require that other health professionals and ancillary staff be used to bridge the gaps. Every touch point, enhanced with Web-based and other communication tools, will be an opportunity to maximize care.

Health professionals in a high-performing health system will rely on a new professionalism that builds on the principles of lifelong learning, duty to our patients, and devotion to finding the best outcomes for them, but they will also emphasize an obligation to optimize teamwork and to ensure that care is firmly grounded in the best evidence of effectiveness. Health professionals learn through experience in taking care of patients. To create a healthcare workforce with the skills to expertly use EBM to achieve high levels of population health, it will be necessary to create settings that allow trainees to emulate and improve on the best models. Healthcare settings that strive for high levels of performance will inevitably move toward these more effective team models of care.

BUILDING THE TRAINING CAPACITY FOR A HEALTH RESEARCH WORKFORCE OF THE FUTURE

Steven A. Wartman, M.D., Ph.D., M.A.C.P., President, Association of
Academic Health Centers; and
Claire Pomeroy, M.D., M.B.A., Vice Chancellor, Human Health Sciences,
School of Medicine, University of California at Davis

Overview

“Institutions must transcend traditional boundaries to generate new ideas and technologies…. And link science with policy and governance to frame questions and foster social change.” (Bawa et al., 2008)

“If medicine is to fulfill her great task, then she must enter the political and social life.” (Rudolf Virchow, 19th-century physician)

A national consensus is emerging that the U.S. healthcare system needs a fundamental retooling of its purpose and function. This conclusion is driven mainly by the high cost of the current system in the context of significant variations in the basic health parameters that reflect the well-being of American society. For example, the current healthcare delivery paradigm strongly incentivizes acute care and the provision of expensive, cutting-edge drugs and technologies to the insured. Missing in this model is a strong focus on facilitating access, incentivizing quality, prioritizing preventive care, and ensuring community-wide public health. When this is coupled with the large numbers of underinsured and uninsured citizens, the result is a fragmented and dysfunctional system that creates wide disparities in health and well-being across the population (Wilper et al., 2008).

What is not often heard in the calls for change is the urgent need to develop a new kind of research infrastructure focused on health and health care that can guide and inform decision making during this time of needed and anticipated change in the health system. This requires the development of the evidence base for clinical practice in order to ensure that the health care delivered is both effective and optimal. The importance of enhancing and supporting CER is therefore critical and fundamental to a reformed health system. Indeed, a recent IOM Roundtable workshop has emphasized the kind of infrastructure that is necessary in order to learn which care is best (IOM, 2008).

Revisioning the Medical Research Enterprise as a Necessary Tool to Implement Health System Change

Research holds the promise of finding answers to the challenges that face health care in the United States today. But the traditional approaches to research are inadequate to discover and define the innovations needed. If improved health is to be provided for all Americans, a vision for a new kind of medical research is needed. Above all, this new kind of research seeks to discover, disseminate, and optimize the adoption of practices that advance the health of individuals and the public as a whole.

Key to this new paradigm is the principle of expanding the continuum of medical research to extend from basic discovery to community-wide health innovations in order to ensure that discoveries ultimately serve the public. Already there have been calls to reevaluate the emphasis and resource allocation afforded to the various types of research (Dougherty and Conway, 2008) (Figure 5-8).

Specifically, there has been increasing recognition that medical research investment must be expanded to support more applied research. For example, the creation of the Clinical and Translational Science Center (CTSC) program by the NIH was a dramatic call for transformation of the national medical research enterprise (CTSA, 2008).

However, this new, expanded vision of medical research remains incomplete. It is increasingly recognized that health is determined only in part by the actual delivery of health care; there are other important determinants of health that are much broader, including behavioral factors, genetic variability, and, perhaps most prominently, the social determinants of health (Marmot and Wilkinson, 2006; Tarlov and St. Peter, 2000) (Figure 5-9).

The emerging field of the social determinants of health emphasizes the fact that factors such as socioeconomic status, education, job security, access to societal resources, social support, and social empowerment are powerful determinants of the health status of a community, even more so than the specifics of healthcare payment and delivery systems. Indeed, it can be argued that the emphasis in the United States on a mostly biomedical research and healthcare delivery model—as opposed to one based more on social and environmental determinants—has contributed to less than optimal health statistics, growing disparities, and spiraling healthcare costs.

FIGURE 5-8 Expanding the research continuum.
SOURCE: Dougherty and Conway, 2008.

FIGURE 5-9 Determinants of health.
SOURCE: Marmot and Wilkinson, 2006.

The importance of the social determinants of health is highlighted in the new “Healthiest Nation” campaign conducted by the CDC. Director Gerberding introduced the campaign by saying, “People are talking about healthcare reform, but they’re not really talking about health” (Rubin, 2008). The recognition of this broader approach to health solutions is also embraced by the World Health Organization through its new Commission on the Social Determinants of Health (WHO, 2008).

Thus, although our national vision for medical research is appropriately expanding to include basic science, clinical, and T1–T3 translational research, a new, even broader view is needed. This broader view encompasses exploration of all the determinants of health, including those beyond the traditional realms of biomedical research. Let the term health research be used to describe studies that address all aspects of health, including biomedical research, public health research, and multidisciplinary research on the social and environmental determinants of health. This broader paradigm of health research is essential to ensuring that our research agenda leads to better health for all.

Approach to Achieving a New Research Vision

Responding to the urgent need for a new approach to research that explores all the facets of health requires fundamental changes in the research enterprise:

  • First, this broader view of health research must be supported and facilitated. The fundamental science of translational and multidisciplinary approaches must be established so as to be respected and valued in the overarching research and academic communities. That is, new disciplines, methodologies, and applications must strive to be perceived as equivalent in intellectual value and importance to basic biomedical science.

  • Second, new resources must be allocated to provide funding and infrastructure for the new paradigm of health research, including expanded investment in basic science, clinical, translational, public health, and multidisciplinary social research.
  • Third, new types of health research teams must be created; this workforce development will require training future researchers from a wide range of backgrounds and equipping them with new skills to conduct this research, as well as developing new partnerships and expanded venues in which to conduct health research.

The goal of this new vision is to enhance the ability of research to benefit the health of our communities more directly and more efficiently than has so far been the case. This will require a change in the way people think about and invest in research (Table 5-1).

Workforce Development for the New Research Paradigm and the Role of Academic Health Centers

This new paradigm of health research requires fundamental change. Central to this change will be the development of a new cadre of researchers, clinicians, and health leaders who have the expertise and skills to put the essential vision of this new approach into effect. To expand medical research to include translational aspects (T1, T2, T3) and to adequately address the broader definition of health (using behavioral, public health, and social determinants approaches), the health research workforce must be redefined to include individuals with diverse backgrounds and skill sets (Table 5-2). Indeed, the complexity of the challenges facing healthcare delivery today requires true multidisciplinary teams whose members bring multifaceted perspectives to finding health solutions. The new health research workforce must include not only health professionals of all types, but also engineers, sociologists, urban planners, policy experts, economists, and more. The importance of this has been pointed out in a call for a model in which “society would engage all of its facets—not just medicine and public health—in the collective act of preventing disease” (Woolf, 2008).

Academic health centers (AHCs)—consisting of schools of medicine, one or more other health professions schools, and an ownership or affiliated relationship with a teaching hospital or health system—are major players in the development of health researchers at all levels (AAHC, 2008a). Their cross-cutting nature, exemplified by their multiple missions of research, education, and clinical care, suggests that they are well positioned to play leadership roles in creating this transformation. They have long served as a test bed for innovation, and they have a social mission that extends beyond purely business imperatives. They also often have university locations and affiliations that provide access to a wide range of potential collaborators from fields not traditionally located in health schools.

TABLE 5-1 An Approach to Achieving a New Research Vision

New People and Skills

•   Multidisciplinary teams

•   Strategic faculty recruitment

•   Expansion and training of research support staff

•   New partners (e.g., industry, nongovernmental organizations, faith-based organizations, payers, government, diverse communities, patients, the general public)

•   New venues (e.g., community-based research)

•   Training to provide new skills, including interprofessional training

•   Incentives within academia to support all types of health researchers (e.g., academic home, revised promotion and tenure criteria)

New Infrastructure

•   Information technology investments (e.g., electronic health records, regional health information organizations, personal health records)

•   Biostatistics and data management support

•   Biorepositories

•   Streamlined clinical research approval processes

•   Efficient intellectual property policies

•   Links between academia, industry, and venture capitalists

New Investments and Incentives

•   Expanded funding for clinical, translational, and social health research by the National Institutes of Health, National Science Foundation, foundations, and others

•   Identification of new funding sources, especially for T2 and T3, behavioral, public health, and social health research

•   Increased organizational investment in translational research cores (e.g., informatics, clinical research nurses)

•   National coordination of research resources (e.g., informatics linkages, data sharing)

In addition, organizational and management trends taking place within the AHC enterprise are fostering new types of institutional integration and alignment (Wartman, 2008), which are highly supportive of new interprofessional research models. AHCs have the ability to test and disseminate new approaches to health care and also have access to the expertise of multidisciplinary teams. Consistent with their institutional mission of promoting societal health, they can train the next generation of researchers and clinicians in this new paradigm. They are able to look beyond their institutional walls to build multisector teams that can carry out and disseminate this new research approach with the overarching goal of improving healthcare delivery and the nation’s health parameters.

TABLE 5-2 Workforce Development to Support Health Research

Academic Health Centers and Universities

•   Commitment of leaders to train and support health researchers

•   Create multidisciplinary teams of health professional researchers as mentors and role models

•   Facilitate training and inclusion of a broad range of other professionals on health research teams and as mentors and role models

•   Provide needed infrastructure to support training of and research opportunities for all types of health researchers

•   Develop innovative curricula and training programs to develop health researchers

•   Provide academic homes and satisfying career paths with appropriate incentives, rewards, and promotion for health researchers

•   Cross-mission planning to leverage clinical investments to support health research training and research opportunities

•   Streamline and update policies (e.g., intellectual property and clinical contracting) to support trainees and other health researchers

Industry, Community, and Other Nonacademic Partners

•   Industry support for training, including funding, venues, and mentors

•   Community partnerships for training sites, research venues, and study partnerships

•   Government and NGO support for training, including funding, venues, and mentors

•   Enhanced public understanding of the importance of supporting the training of health researchers

•   Increased philanthropic support for training a broad range of health researchers

National Policy Leaders

•   Enhanced federal funding of and innovative mechanisms for training health researchers

•   Policies and programs that stimulate transdisciplinary training and career opportunities for health researchers

•   Removal of bureaucratic regulations that inhibit health research training and career opportunities

To do so, AHCs will need to ensure the following:

  • commitment of their own leaders to drive the expanded research approach,
  • investments in new infrastructure (e.g., information technology, data, biorepositories) to support training and health research opportunities, and
  • curricular and training innovation to develop multidisciplinary, multisector research teams with the skills required for this broader paradigm of health research.

Leaders of AHCs are in a strong position to contribute to the development of a workforce that can drive this new vision of health research. By lending their voices to the call for adequate and innovative funding mechanisms, these leaders can advance their institutional mission of improving health in their communities. Further, by ensuring that their organizations provide the culture and infrastructure needed to support new research approaches, they can create opportunities for health research. They can facilitate the partnerships with government, industry, and community groups needed for health research. Finally, academic health leaders can commit to and facilitate training the future workforce of health researchers.

Workforce development for health research requires reaching across disciplines within the medical field and beyond. Researchers (and clinicians and educators) from different subspecialties within medical research institutions must interact beyond their disciplinary silos. As encouraged by the CTSC programs, health professionals from medicine, nursing, pharmacy, dentistry, veterinary medicine, and allied health fields can come together to form research teams. And these teams should extend to colleagues in other fields ranging from engineering to policy to humanities and beyond.

These new multidisciplinary teams must also look beyond the walls of academia. Creating partnerships with a variety of nonacademic constituencies who can contribute to defining workforce needs and providing varied training venues will be critical to the success of this new vision, as discussed below.

Curriculum innovation is central to creating a workforce with the skills and experience to succeed in this new paradigm. Key to this innovation will be interprofessional training. If researchers are to work together in teams, they must first learn in teams. Innovative programs that bring various health profession educational programs together (e.g., nurses and physicians) are being developed at several AHCs but need to become more common. Programs such as the Howard Hughes Medical Institute’s “Med Into Grad” program expose Ph.D. basic science trainees to clinical experiences so that they can understand the potential impact of their future work on improving health (HHMI, 2009).

This new breed of health researchers will require specialized training. Programs designed for scholars at all levels—medical, nursing, dental, pharmacy, and other health profession students; residents, fellows, and other advanced trainees; and junior (and senior) faculty—should provide the specialized skills needed for health research. While the training must be tailored to individual needs, opportunities to learn study design, research ethics, informatics, leadership skills, business skills, and more should be made available. Clinical trainees will desire exposure to core research resources, such as gene sequencing, specialized assays, and animal models, while basic science trainees will seek out exposure to clinical settings in order to provide context for their work. Programs available through the NIH (K–30, K–12, M.D.–Ph.D.), several foundations, and others are beginning to meet these needs but must be adequately funded and made available to more trainees.

But this curriculum innovation cannot be confined to graduate and professional education. Additional priorities include

  • K–12 outreach programs that expose young people to the joys of translational research,
  • staff training for all members of research teams,
  • training of community partners, and
  • public awareness campaigns that emphasize the range of health research impacts.

AHCs must provide clear career paths for health researchers who pursue these newer types of research, as they have for traditional biomedical researchers. AHCs must value clinical and translational researchers and actively recruit them and work to retain them by providing research and scholarly opportunities. Under this new research paradigm, clinical and translational research cannot be considered an “add-on” activity for busy clinicians who may have no special training in research. Clinical and translational researchers who have specific education and expertise in translational research must be strategically recruited to support the research agenda of the AHC. An even greater challenge will be to provide career paths and appropriate rewards to researchers from nonmedical disciplines who participate in health research.

The creation of an academic home for this new cadre of faculty has been endorsed by the CTSC program and addressed in a variety of ways at institutions across the country. Recruitment packages for translational researchers that provide the resources, environment, and protected time for faculty success are essential. The redesign of promotion and tenure criteria to recognize and reward contributions to clinical, translational, and social research as well as to basic science investigation is critical. Finally, reasonable salaries and job security must be available to all health researchers.

AHCs also must assume responsibility for ensuring that adequate and appropriate institutional resources are available to health researchers during their training years and later, when they become faculty members. Traditional research support services and policies in most academic institutions were generally designed to support basic science investigators. Now, it is important to design systems that also support clinical, translational, and social health researchers. In many cases, this will mean expanded administrative support for attracting clinical research grants, proposal preparation, intellectual property advice, conduct of clinical trials, data management, statistical support, financial monitoring, and more. Systems geared to support and incentivize research, rather than manage risk, need to be prioritized by institutional leadership.

AHCs must therefore invest in new research infrastructure. Ideally, this would involve “one-stop shopping,” user-friendly research support offices that reduce the bureaucratic barriers too often encountered by faculty and trainees. Just as academic institutions have made major investments in basic science infrastructure over the years, AHC leaders should consider directing dollars to support expanding research IT services and informatics, data repositories, biorepositories, IT links with other institutions, clinical research units, and others. In addition, leadership endorsement and investment in programs that bring researchers together with community and business leaders can enhance the research enterprise.

AHCs are uniquely positioned to do cross-mission planning that can expand the capacity for all types of health research. By examining clinical investments as they are made, AHCs can direct incremental investments toward promoting research opportunities. For example, many institutions are making large investments in EHRs. By proactively ensuring that these systems are designed to support health research, the total investment can be less than if the clinical and research systems were purchased separately or the clinical system were retrofitted to accommodate research needs. As an example, one institution has increased its capacity for community-based research by extending its telemedicine patient care network to serve as a backbone for community-based research projects (Nesbitt et al., 2006). Similarly, as institutions create community outreach programs, they should consider how clinical outreach can support and be supported by research initiatives in these same locales.

Overall, workforce development at AHCs will be most effective when institutional values and priorities are aligned with the goals of this new research paradigm. By making it clear that the clinical mission is improved by health research, by committing to educate a broad range of health researchers, and by ensuring that these academic missions are in turn strengthened by high-quality clinical programs, AHCs can align and leverage resources across their tripartite mission.


Workforce Development: The Role of Industry, Community, and Other Nonacademic Organizations

Academic–industry partnerships are critical to the successful translation of discoveries. While academia has traditionally focused on basic science discoveries and industry has focused on transitioning innovations to the market, the gap known as the “valley of death” must be bridged in innovative ways. Industry can play a key role in workforce development as well as in health research, especially translational studies.

Industry should serve as a strong partner in research workforce development. Joint training programs in which graduate students, postdoctoral fellows, professional trainees, and faculty spend time in both industry and academic settings allow access to the expertise available in private companies. Other options include industry sites for internships or research trials, didactic or “hands-on” teaching by industry leaders, and direct industry funding of workforce development. Similar joint initiatives can be developed with government units and nongovernmental service organizations.

Successful academic–industry partnerships are central to the ability of institutions to provide opportunities and funding to both trainees and faculty throughout their careers. By funding workforce development and ongoing career opportunities for health researchers, including those at AHCs, industry can make an investment in their future workforce and potential colleagues and collaborators for health research.

An essential component of successful health research is the involvement of the community constituencies affected by the research. The growing interest in community-based participatory research recognizes the importance of the public’s involvement in research (Jones and Wells, 2007). Components include community input into setting research priorities, approaches to building trust for community participation in research, and joint academic–nongovernmental organization (NGO) leadership of research, with dissemination of progress and findings to the community. To determine the true efficacy of new medical innovations, it is essential that research be conducted not only in the ivory towers of academic research institutions but also in real-world community settings.

The inclusion of community partners, including industry, private healthcare providers, health maintenance organizations (HMOs), payers, schools, churches and other faith-based organizations, NGOs, government, and others will require ongoing trust building, communication, and openness to new research approaches.

The public has consistently expressed strong support for medical research, especially research that benefits patients in clearly understandable ways. Public messaging is key to sustaining that broad public support for biomedical research, as well as to enhancing understanding of the broader goals of health research, and must be the responsibility of all. One part of this message is that declining clinical reimbursements to AHCs, especially those that assume a significant “safety net” role, represent a significant threat to the ability to develop the workforce needed for health research and to conduct the research. Messages should include appeals for appropriate societal funding for health research workforce development and also for philanthropic support of training and new types of research.

Workforce Development—The Role of National Policy Makers

This fundamental transformation in medical research to encompass all aspects of health research and the development of a trained workforce dedicated to such research can only happen with support from national policy makers. Decision makers should

  • endorse the importance of health research as essential to leveraging the biomedical discoveries in which society has invested and improving the health of the country;
  • provide adequate funding for the full range of health research, including appropriate resources for the needed workforce development, through a combination of public and private sources; and
  • institute regulations and laws that facilitate health research and remove existing barriers to such research.

While a careful examination of allocations to the NIH for basic, clinical, and translational medical research is an essential part of this process, equally important will be a discussion of how public health and social determinants research will be funded. It is vital that the nation’s basic science research infrastructure continue its impressive scientific progress, but particular attention to adequate funding of the CTSCs and of workforce training programs is urged. Chronic underfunding of health outcomes, epidemiologic, quality improvement, public health, preventive, and other clinical and translational research at the NIH and other agencies (AHRQ, CDC) must be reversed. A new federal entity whose mission is to foster all aspects of health research could conceivably be created. Regardless, it will be important to identify funding mechanisms that support health research and encourage transdisciplinary approaches reaching out to nontraditional partners in other fields. Innovative funding mechanisms should be considered for health research, including the possibility of contributions from industry (e.g., pharmaceutical, devices, hospital, HMOs, payers). A strong business case can be made for this since the investment
will likely yield innovations that provide value by identifying cost-effective and high-quality practices.

In addition, policy makers should support regulatory and legal changes that stimulate the translation of discoveries to the market. Bureaucratic barriers that slow progress should be removed while ensuring that the safety of the public and trial participants is not compromised. For example, HIPAA procedures could be modified to facilitate identification of clinical research subjects while maintaining protections (AAHC, 2008b). A new approach that reaches beyond the traditional constraints of academia and private industry is essential to achieving the ambitious goal of health research to truly improve the health of our nation.

Finally, demonstrable linkages must be developed between the findings of health research and patient care delivery. These linkages, in our opinion, must go beyond the development of guidelines; rather, they should be integrated into clinical care through a combination of quality measures and payment and reimbursement policies. In so doing, a self-perpetuating cycle of improvement would be built into the nation’s healthcare system.

Policy makers, academic institutions, industry, and community organizations must work together to ensure that the potential benefits of health research are fully realized and that the research workforce is optimized. Only through such cooperation can the vision of improved health for all be realized.

Conclusions

The nation needs to vigorously engage in a new health research agenda in order to achieve the goal of a healthier future for the United States and the global community. To accomplish this, a new research paradigm is needed that supports the full range of health research to ensure that new discoveries benefit patients. The adoption of this new research paradigm will require

  • an expanded health research workforce with diverse perspectives and skills in which people work together in multidisciplinary teams and in venues throughout the community; and
  • a health research workforce supported and incentivized by both academic institutions and other organizations with improved infrastructure and increased resources.

This ambitious but essential vision requires that health workforce issues be made a policy priority for the nation (AAHC, 2008c) and that academic and patient care delivery structures be redesigned to support the new paradigm of health research. Ultimately, the impact of this vision will depend
upon our political and social will to invest in this new type of research, and then to successfully link the new discoveries to the actual delivery of health care and design of health policy.

PUBLIC–PRIVATE PARTNERSHIPS

Overview

Consideration of how best to take advantage of existing infrastructure—e.g., data resources, expertise, technology platforms—will be important in developing and implementing priority infrastructure elements. Public–private partnerships are seen as an effective way to link some of health care’s disparate elements and to draw productively on the respective assets of participating stakeholders. These partnerships offer an approach to developing, supporting, and nurturing productive relationships among stakeholders who come to CER from different perspectives and with diverse motivations, bridging gaps and removing barriers to cooperation. Absent such a mechanism for bringing relevant parties to the same table, fundamental differences in institutional cultures can impede or even preclude stakeholder-to-stakeholder communication. Public–private partnerships not only create a space in which collaboration can safely take place, but also offer a structure and operational guidelines, typically tailored to a specific partnership by the participants, that help foster and facilitate cooperative work. The papers here report briefly on public–private partnerships from the perspectives of health plans, the federal government, and industry. Panelists were asked to describe relevant current or planned public–private partnership efforts and suggest ways that similar efforts could help build the infrastructure for CER.

Health Plans

Carmella A. Bocchino, R.N., M.B.A., Executive Vice President of Clinical Affairs and Strategic Planning, America’s Health Insurance Plans

Health plans are strongly committed to working with stakeholders in both the public and private sectors to develop tools and other resources and programs to help ensure that patients and providers have the information they need on safety, effectiveness, and value to make sound healthcare decisions. Toward this end, and often in partnership with federal agencies, health plans have created comprehensive databases that can be mined to identify potential safety problems as well as opportunities to improve care and care delivery. Many public–private partnerships have focused on providing information to individual agencies or health plans to
explore particular questions in depth, and emerging partnerships seek to broaden the scope and scale of these initial efforts. Briefly reviewed below are examples of lessons learned from such efforts that illustrate the potential of public–private partnerships as key building blocks for the infrastructure required for expanded CER. Key challenges, barriers, and opportunities are also discussed.

Health plans consistently collect data and use it to provide clinicians and patients with timely information that is important for point-of-care decision making. Unfortunately, the United States wastes millions of dollars annually on medical treatments that may not work. In addition, when evidence exists but is not implemented, there are both human and financial consequences. Several examples from Kaiser Permanente illustrate how these data can be effectively harnessed to generate insights that improve care. A recent study of hip replacements (Meier, 2008), conducted by Kaiser, yielded important information about which devices work best for whom, information that Kaiser was able to quickly share across its network of physicians. Similarly, a case-control study of Kaiser’s data on the cardiovascular effects of cyclooxygenase-2 (COX-2) inhibitors yielded results that have been used by the FDA and others to develop policy and practice guidelines about the use of COX-2 inhibitors. A logical extension of these examples is to find opportunities to draw upon data and share information more broadly, and many have called for the development of a national data system as a central part of the nation’s health research infrastructure. Several current data-sharing efforts are outlined below to illustrate the potential.

Data-Sharing Efforts

The U.S. Renal Data System (USRDS) was established over 20 years ago to help Medicare, the major payer for renal dialysis and transplantation, collect data on the end-stage renal disease patient population and assess potential quality and safety problems as well as program costs. Funded by the National Institute of Diabetes and Digestive and Kidney Diseases at the NIH and CMS, the USRDS provides a means for organizations to share data sets and to collect, analyze, and distribute information on the end-stage renal disease population in the United States. In practice, this data system has served as an effective, proven national data registry that assesses trends in mortality, end-stage renal disease rates, treatment modalities, and morbidity. With the increased calls by clinicians and policy makers alike for a more comprehensive, national data registry, the USRDS merits attention as a possible model to move this effort forward.


Research and Surveillance Networks

There are several examples of health plans working together to create research and surveillance networks. These collaborations include work with the government and private sectors to address important research and public health concerns. The research and surveillance work conducted by these groups provides benefits to the health plans and to the community as a whole.

Among the notable health-plan community efforts to advance the development of needed clinical evidence is the HMO Research Network (HMORN). This umbrella organization, established in 1993, is a consortium of 15 HMOs with formal, recognized research capabilities. Distributed geographically, the participating HMOs collaborate to develop and implement common study designs, share standardized data, and provide learning for all participants. The power of combining the HMO data sets is that it gives each organization much richer, more reliable information than any one HMO could gather by itself. The work of the HMORN has impacted healthcare delivery through improved decision making and real-time learning across the different plans and has led to peer-reviewed studies on postmarketing surveillance and drug safety, population-based chronic care improvement models, and surveillance of acute diseases, such as rapid detection of potential environmental or biological threats. The HMORN has partnered with several federal agencies on projects that illustrate the utility of data linkages and networks of research resources for improving the nation’s health. These efforts have led to the creation of joint projects under the HMORN umbrella, including the Cancer Research Network (CRN) and the Centers for Education and Research on Therapeutics, and have expanded the breadth of collaboration to include work with academic medical researchers.

The HMO CRN, an expansion of the HMORN, is a consortium of 14 research centers based in integrated healthcare delivery organizations. The CRN works with the National Cancer Institute to assist with the mining of very large data sets and focuses on the characteristics of patients, clinicians, communities, and health systems that lead to the best possible outcomes in cancer prevention and care. Multidisciplinary intervention research addresses cancer prevention, early detection, treatment, survivorship, surveillance, and end-of-life care. These models also provide some insights into how these partnerships can be effectively leveraged to improve health research.2

_______________

2 See http://crn.cancer.gov (accessed September 8, 2010) for project list.

The Vaccine Safety Datalink (VSD) was established in 1990 as a collaborative effort between the National Immunization Program of the CDC, leading vaccine researchers, and eight large HMOs. Designed to monitor and evaluate vaccine safety, the VSD is a large, linked-data model that allows real-time analysis of vaccination and medical records of more than 6 million people for the purposes of conducting objective, population-based vaccine safety research. The VSD employs a distributed data model whereby the scope of work is decided in advance and only those data deemed necessary to support the agreed-on scope of the research are culled from available data sets and aggregated. Confidentiality of individual medical information is protected as only the data necessary to assess vaccine safety or adverse events is collected and aggregated. The VSD is the largest component in the CDC’s vaccine safety surveillance efforts. It has also pioneered research methods for conducting this work, and it serves as the gold-standard model for the development of other similar surveillance networks. VSD is a valued resource that allows researchers at the CDC and health plans to conduct studies that provide information about the short- and long-term effects of specific vaccines on various populations. Rather than relying on reports from vaccine manufacturers or solely on a passive reporting system to identify possible safety issues, VSD offers a rich data resource that can be accessed quickly, by CDC and vaccine investigators employed by health plans, to continually monitor vaccine safety. These data enable both rapid analyses of specific vaccines and large-scale, retrospective studies of people who have experienced unusual or severe adverse reactions to vaccines. VSD provides a comprehensive data resource enabling researchers to examine virtually all patient health events during the time a patient is enrolled in a health plan. Moreover, if a patient moves to another health plan, an effort is made to continue the patient’s enrollment in the program.
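The distributed model just described can be sketched in a few lines of code. The following is a hypothetical illustration, not actual VSD software; the record fields, the specific counts, and the function names are all assumptions made for the example. Each site computes only the counts agreed on in advance, and the coordinating center pools those site-level summaries.

```python
from dataclasses import dataclass

@dataclass
class SiteRecord:
    # Minimal illustrative record; real VSD data are far richer.
    vaccinated: bool
    adverse_event: bool

def site_aggregate(records):
    """Run at each health plan: return only the pre-agreed counts,
    never the underlying row-level medical records."""
    return {
        "n": len(records),
        "vaccinated": sum(r.vaccinated for r in records),
        "events_vaccinated": sum(r.vaccinated and r.adverse_event for r in records),
        "events_unvaccinated": sum((not r.vaccinated) and r.adverse_event for r in records),
    }

def pool(site_summaries):
    """Run at the coordinating center: combine the site-level counts."""
    keys = ["n", "vaccinated", "events_vaccinated", "events_unvaccinated"]
    return {key: sum(s[key] for s in site_summaries) for key in keys}
```

Because only aggregate counts leave each site, confidentiality of individual medical information is preserved by construction, which is the property the distributed design relies on.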

VSD research and data are often used to inform the decisions of the Advisory Committee on Immunization Practices and, as such, provide guidance and direction for public health policy in the United States and the rest of the world. The VSD infrastructure has also provided the opportunity for research on both the safety and effectiveness of many vaccines.

The VSD model also offers ample flexibility as new information needs emerge. For example, when concern was raised about a potential association between meningococcal vaccine and Guillain-Barré syndrome (GBS), the CDC needed to conduct a rapid study to determine whether the association was real. For this particular request, the data on 6 million lives in the VSD data set were not sufficient for the analysis needed. In response, America’s Health Insurance Plans (AHIP) organized five additional large national health plans to provide a data set population of 60 million people. Initial analysis of this larger data set determined that there was not a causal link between the meningococcal conjugate vaccine (MCV) and GBS. AHIP, in collaboration with six health plans and with funding provided by the
vaccine manufacturer, has continued this surveillance work on MCV and GBS using a protocol that mirrors the methodology of the VSD.

This experience demonstrates the considerable potential inherent in public–private partnerships around the analysis of large data sets to answer questions important to public health, and the particular importance of developing and using standard methodologies for these efforts.

Quality Improvement Efforts

While these partnerships have focused on safety, similar distributed data models are being developed for use in quality measurement and reporting efforts. Supported by a grant from the Robert Wood Johnson Foundation, the AHIP Foundation and the Quality Alliance Steering Committee at the Brookings Institution are currently working on the National Data Aggregation Initiative (NDAI) to develop and implement a standard methodology for aggregating data across multiple health plans for provider performance measurement and reporting. One of the goals of NDAI is to combine private-sector and Medicare data to generate physician performance measures. To achieve this goal, the AHIP Foundation and the Quality Alliance Steering Committee have been working closely with CMS and AHRQ to align the NDAI with the Generating Medicare Physician Quality Performance Measurement Results project.

Like the VSD model, the NDAI will be based on a distributed data model that involves retention of protected health information at the health plans and transmission of measure results and provider identification data to a hub at the AHIP Foundation for aggregation, provider matching, specialty assignment, and reporting. The AHIP Foundation and the Quality Alliance Steering Committee are seeking to implement a distributed data model that is flexible and can accommodate various types of measures in future iterations that rely on a variety of data sources, such as laboratory, registry, and EMR data.
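A rough sketch of the hub-side step may help make the model concrete. This is hypothetical code, not part of the NDAI; the use of an NPI as the matching key and the numerator/denominator fields are assumptions for the example. Each plan submits only per-provider measure counts, and the hub matches providers across plans and computes pooled rates.

```python
from collections import defaultdict

def aggregate_at_hub(plan_submissions):
    """Combine per-provider measure counts submitted by multiple plans.
    Protected health information stays at the plans; only counts keyed
    by a shared provider identifier reach the hub."""
    totals = defaultdict(lambda: {"numerator": 0, "denominator": 0})
    for submission in plan_submissions:
        for result in submission:
            provider = totals[result["npi"]]  # provider matching on the shared key
            provider["numerator"] += result["numerator"]
            provider["denominator"] += result["denominator"]
    # Report a pooled rate for each provider with a nonzero denominator.
    return {
        npi: t["numerator"] / t["denominator"]
        for npi, t in totals.items()
        if t["denominator"]
    }
```

Pooling across plans matters because a single plan often sees too few of a given provider’s patients to measure performance reliably.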

One of the main advantages of the NDAI will be the ability of various stakeholders, aided by the implementation of a standard methodology, to conduct regional and national comparisons of provider quality and to analyze regional variations in quality, much as the Dartmouth Atlas does. In addition, the distributed data model envisioned under the NDAI holds promise for other efforts, such as the FDA Sentinel Initiative.

The FDA, as discussed elsewhere in this chapter, is exploring opportunities to harness other forms of data, such as those derived from EHRs. Initial discussions of a consolidated industry approach to work with the FDA have included national, regional, and local health plans; the Reagan-Udall Foundation; and other stakeholders.


Challenges and Lessons Learned

The initiatives discussed above underscore the inherent value in developing the infrastructures and tools to aggregate and analyze these data across populations. Collectively, they demonstrate that although there are many different types of health information available within health plan data sets, approaches have been developed to standardize these data so that they can be brought together to answer specific questions relevant to decisions faced by clinicians and patients. These efforts also potentially provide initial building blocks toward a national infrastructure for long-term safety and effectiveness research.

As a first step, it will be necessary to come to agreement on a shared methodology that can facilitate comparative analyses. A great deal of clinical research currently exists, but in forms that impede comparison across studies. An additional need is to build a data system infrastructure that makes it possible not only to mine existing data, but also to identify and track potential issues in safety and effectiveness in health care. For example, when a Massachusetts plan noticed a spike in hospitalization rates across all of its plans at a particular time of year, it was able to monitor this pattern over 2 years and work with the local public health agencies to confirm that the increase in hospitalizations was occurring during the influenza season. These data then contributed to a study to examine whether influenza complications were driving those admissions or whether the increase in hospitalizations could be attributed to patients not vaccinated for influenza. The Massachusetts experience points again to the power of the data available in the aggregated data models described above, in the sense that there is a capability both to bring the data to the questions and to bring the questions to the data. “Bringing data to the questions” helps close the evidence gap and provides valuable information to patients and clinicians.

A significant challenge facing these efforts has been the standardization of data and the need to adjust for many factors when only administrative data (vs. clinical data) are available. As health plans develop and use more robust EHRs, they are learning how to compile these clinical data, but an increasingly central issue is that EHRs have not been created to produce the data needed to answer questions important to understanding quality or clinical effectiveness. For example, many EHRs do not enable researchers to cull data related to the performance of providers. As physician offices and practice sites are reengineered to support the most effective use of EHRs, it will be necessary to ensure that those records are designed with the inherent capacity to produce the information needed—not only for performance evaluation and quality improvement, but also for research. Without thoughtful and appropriate design, EHRs will not capture the
information needed to advance the challenging work of analyzing and filling gaps in evidence.

A second challenge concerns governance. Essential to the sustainability of these efforts is a governance structure with appropriate roles for government and other stakeholders. Key questions pertaining to the governance model need to be considered, such as funding for the establishment and long-term sustainability of the infrastructure, study priorities, use of aggregated data, and secondary use of these data beyond the initial scope of work. Health plans, working as part of collaboratives, have produced several different governance structures, both formal and informal, that could serve as models for the Sentinel Network and other developing partnerships.

A third challenge has to do with the tensions that often emerge when the insurance sector partners with academic institutions on specific clinical research projects. Plans vary widely in their engagement with research, but a common challenge has emerged: although clinical research is generally regarded as a public good and the NIH is well supported by public funds, the kinds of research, such as comparative analyses, that are important to improving health care are not generally funded in this manner. Discussion is needed on sustainable funding approaches for these types of research.

Finally, the conduct of clinical research or trials implies the collection of data that are intended to be combined with other data and publicly shared. That is, the data are intended to be used to help educate and to provide lessons that can be drawn from what the evidence says or does not say. By the same logic, clinical research data are not meant to be reserved for use within a specific institution for its own QI and learning, without public release. A related challenge concerns the ownership of data. Patient advocates argue that data rightfully belong to patients, and numerous discussions are ongoing about not contributing data to research or QI efforts without patient consent. A strong case can also be made, however, that these collected data are a public good that should be used for research, scientific investigation, and filling the evidence gap. More attention is needed to articulating the case for these data as a public good, in that their use can ultimately result in better patient care.

Public–Private Partnerships and Comparative Effectiveness Infrastructure Development

Lessons learned from efforts to promote and develop shared data resources suggest several immediate priorities for comparative effectiveness infrastructure development, including developing standardized methodologies and ensuring data transparency. To develop greater national-level
capacity for comparative effectiveness work, additional focus will be needed on prioritizing key issues for research, on developing sustainable funding mechanisms, and on how to best build upon existing data sources, organizations, and initiatives.

As discussed elsewhere in this publication, clinical registries are providing an increasingly rich resource for clinical data, and there may be an expanded role for registries as part of this work. Although currently a source of very good information, registry data are not usually shared broadly.

Also important to consider are the organizations engaged in the analysis of these data. Several existing programs are engaged in work vital to the CER agenda. For example, the Blue Cross/Blue Shield Association Technology Evaluation Center (TEC) program has, for some time, conducted studies to assess the effectiveness of select technologies. The TEC program has developed criteria for these assessments, and these criteria and their reports are made publicly available. The Institute for Clinical and Economic Review (ICER) is a recently established academic comparative effectiveness initiative based at the Massachusetts General Hospital’s Institute for Technology Assessment. Informed by the priorities of payers, clinicians, and patients, and based at arm’s length from coverage decision-making structures in government and the private sector, ICER links clinical effectiveness and comparative value in a rating system that is consistent and rigorous, yet flexible enough to be directly useful to multiple decision makers. ICER is positioned to analyze the strength of evidence and the value of technologies and to provide additional information to both consumers and employers about how technologies, drugs, and devices that show high value can most effectively be moved into the marketplace. A related component of this assessment includes the cost–benefit analysis of these new applications. Many drugs or technologies may not have sufficiently high value to be considered without greater cost sharing on the part of consumers who want them.

Federal Agencies

Rachel E. Behrman, M.D., M.P.H., Associate Commissioner for Clinical Programs and Director of the Office of Critical Path Programs, Food and Drug Administration

Providing opportunities for key stakeholders, such as patients, researchers, and other members of the public sector, to work together with regulators and other government agencies on issues of common interest has been critical to progress in many areas of health care. The development of drugs to treat human immunodeficiency virus/acquired immunodeficiency
syndrome was one of the FDA’s early experiences with collaboration. The combined efforts of government, industry, and activists demonstrated that broad stakeholder engagement often accelerates progress in the development and availability of therapeutics. This experience also demonstrated that such development programs cannot be expected to address all relevant questions—a circumstance that was anticipated and accepted by all stakeholders. Although much was learned about therapies as companies pooled their data, and although the work led to the development of different paradigms of drug approval, the effort did not provide all the answers about long-term outcomes.

This paper reviews key lessons learned from several existing public–private partnerships and offers suggestions concerning how these efforts might inform or contribute to expanded capacity for comparative effectiveness work.

The FDA and Public–Private Partnerships

Two FDA initiatives, the Critical Path Initiative (CPI) and the Sentinel Initiative, a collaboration being launched under the CPI, continue the tradition of collaboration with key stakeholders and seek to address some of the limitations identified in earlier projects. Each initiative has the potential to foster broader collaboration among stakeholders, and initial work suggests several key areas for future work.

The Critical Path Initiative

The FDA’s CPI is aimed at stimulating and facilitating a national effort to modernize the sciences through which FDA-regulated products are developed, evaluated, and manufactured. The initiative was launched in March 2004 with the release of the report Innovation/Stagnation: Challenge and Opportunity on the Critical Path to New Medical Products (FDA, 2004).3 The report diagnosed the scientific reasons for the recent decrease in the number of innovative medical products submitted for approval—a decrease that was puzzling in light of recent advances in the biomedical sciences and disappointing from a public health perspective. The report noted the rising complexity and unpredictability of medical product development and called for a concerted effort to modernize the scientific tools (e.g., in vitro tests, computer models, qualified biomarkers, innovative study designs) and harness the potential of bioinformatics technologies to evaluate and predict the safety, effectiveness, and manufacturability of candidate medical products. The report also called for a national effort to identify specific critical path activities that, if carried out, would help modernize the critical path sciences. The goal of this effort is to reap the expected public health benefits of the biomedical advances of the 21st century.

_______________

3 See http://www.fda.gov/oc/initiatives/criticalpath/whitepaper.html (accessed September 8, 2010).

In March 2006, the FDA released the Critical Path Opportunities List (FDA, 2006).4 Developed with extensive public input, this list describes the areas of greatest opportunity for improvement in the product development sciences and provides 76 concrete examples of how new scientific discoveries—in such fields as genomics and proteomics, imaging, and bioinformatics—could be applied during medical product development to improve the accuracy of the tests used to predict the safety and efficacy of investigational medical products.

The CPI, designed fundamentally to help foster collaboration among stakeholders, is one of the FDA’s top priorities. The agency is building on its unique position to partner with other federal agencies, patient groups, academic researchers, industry, and other stakeholders to identify areas ripe for improvement and to coordinate, develop, and disseminate solutions to the scientific hurdles that are impairing the efficiency of developing, evaluating, and manufacturing FDA-regulated products.

A number of collaborations have been formed under the CPI, including a public–private partnership co-founded by the FDA and Duke, called the Clinical Trials Transformation Initiative, which has the goal of modernizing the clinical trial enterprise. Other collaborations, involving, for example, the NCI, the NIH, the Juvenile Diabetes Research Foundation International, the Critical Path Institute, and industry partners, are working on a range of projects, including the development of an artificial pancreas, the use of imaging in cancer drug development, warfarin dosing, standards development, and bioinformatics projects.5

A collaboration of particular relevance to the EBM effort was announced by the FDA in May 2008. As discussed in more detail in the following section, the Sentinel Initiative has the potential to directly inform any effort to monitor product use, including benefits and risks, and ultimately might be useful in comparative analyses.

Sentinel Initiative

The Sentinel Initiative has the ultimate goal of developing and implementing the Sentinel System—a national, integrated, electronic framework and approach for monitoring medical product safety. The framework is envisioned initially to be a distributed network in which data holders will retain data and provide reports in response to queries from interested parties. Sophisticated links between involved parties are assumed as a necessary component of the broad collaboration. This national electronic framework would enable the FDA (and ultimately perhaps others) to query remote data sources (with the consent and permission of the data owners), using appropriate security and privacy safeguards, for specific information about marketed medical products. The Sentinel Initiative will respond in part to the FDA Amendments Act of 2007 (FDAAA), which calls for active postmarket safety surveillance and analysis. The congressional mandate contained in the FDAAA provides an initial focus for the Sentinel System on medical product safety; however, as envisioned, the Sentinel System could ultimately provide a basic infrastructure for all FDA-regulated products.

_______________

4 See www.fda.gov/oc/initiatives/criticalpath/reports/opp_list.pdf (accessed September 8, 2010).

5 For a list of projects launched during 2007 either by the FDA or with FDA participation, see FDA’s CPI Web page at http://www.fda.gov/oc/initiatives/criticalpath/report2007.pdf (accessed September 8, 2010).

The Sentinel Initiative is in the very early stages of development. The FDA’s first step has been to create a broad public forum for the exploration of relevant issues, and a number of short-term contracts have been awarded to research specific aspects of the initiative (e.g., who should be involved, what kinds of privacy and security concerns are of particular importance, how such a partnership should be governed). The immediate goal of the initiative is to develop a public–private partnership, hosted by a nonprofit organization, that would oversee the implementation of the system. As mentioned elsewhere in this publication, there are many examples of successful efforts to build and analyze shared data resources around specific interests, and, as the Sentinel report explains, a number of collaborations are also under way that will directly inform Sentinel.6 However, the infrastructure that will support the Sentinel System is envisioned to be a sustained and comprehensive national data resource that is broadly available to many stakeholders.
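The distributed network envisioned for the Sentinel System can be illustrated with a brief sketch. All names, fields, and numbers below are hypothetical, invented for illustration; they do not represent an actual Sentinel interface. The essential idea is that each data holder retains its patient-level records locally and returns only aggregate counts in response to a coordinator’s query.

```python
from dataclasses import dataclass

@dataclass
class ExposureRecord:
    """One patient-level record; stays at the originating site."""
    patient_id: str
    product: str
    adverse_event: bool

class DataHolder:
    """A site (e.g., an insurer or health system) that retains its own data."""
    def __init__(self, name, records):
        self.name = name
        self._records = records  # patient-level data never leaves the site

    def answer(self, product):
        """Respond to a query with aggregate counts only."""
        exposed = [r for r in self._records if r.product == product]
        events = sum(1 for r in exposed if r.adverse_event)
        return {"site": self.name, "exposed": len(exposed), "events": events}

def distributed_query(holders, product):
    """A coordinator combines per-site aggregates into one summary."""
    answers = [h.answer(product) for h in holders]
    return {
        "exposed": sum(a["exposed"] for a in answers),
        "events": sum(a["events"] for a in answers),
        "sites": answers,
    }

# Two hypothetical sites responding to a single safety query.
site_a = DataHolder("Site A", [ExposureRecord("p1", "drug_x", True),
                               ExposureRecord("p2", "drug_x", False)])
site_b = DataHolder("Site B", [ExposureRecord("p3", "drug_x", False),
                               ExposureRecord("p4", "drug_y", True)])
summary = distributed_query([site_a, site_b], "drug_x")
```

Because only site-level aggregates cross organizational boundaries in this design, patient-level records remain with the data holder, which is one way a distributed approach can ease the privacy concerns noted above.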

It is important to note that the Sentinel System will augment, but not replace, current FDA activities. The FDA will continue efforts to modernize and optimize its systems for spontaneous reporting. These systems and processes will remain an important part of the agency’s postmarket surveillance systems. The Sentinel System is envisioned as an important new resource for efficiently detecting meaningful safety signals and investigating important questions about medical products—a key component of the FDA’s work.

Key Challenges and Lessons Learned

The Sentinel Initiative drives important fundamental changes and introduces important innovations. However, significant issues remain to be resolved. For example, research methods and data analysis tools will need to be developed to ensure the production and validation of timely, reliable, and secure information. The distributed network approach of the Sentinel Initiative addresses some concerns about patient privacy, but other challenges remain. It is imperative to engage parties that collect, aggregate, and market data and to illustrate the critical need and business case for a sharper focus on outcomes research to improve the nation’s health. For these issues, developing the appropriate governance structures and policies will be critical. Another challenge will be to ensure that the infrastructure developed considers and meets the needs of all parties while putting appropriate safeguards into place. Questions related to data access, use, and stewardship will have to be resolved.

_______________

6 FDA’s Sentinel Report is available at http://www.fda.gov/oc/initiatives/advance/reports/report0508.pdf (accessed September 8, 2010).

These activities highlight many issues that will also be of central importance in the development of infrastructure for comparative effectiveness. Priority setting is critical in order to provide a common focus for all stakeholders as well as to identify key opportunities to develop small, smart pilot projects. Financing is a continual challenge, particularly given that infrastructure development is a long-term and expensive proposition. Continued attention is also needed to the governance of collaborations. A fourth and crucial area for work is data transparency. Progress in these areas is needed to ensure that analyses are conducted and reported responsibly and to avoid the development of unvetted, low-quality information. Finally, issues about how to handle proprietary data and patentable tools or processes will remain key areas of importance for all potential participants.

Public–Private Partnerships and Comparative Effectiveness Infrastructure Development

Public–private partnerships will be critical for the successful development of a national infrastructure for expanded CER as part of the IOM EBM effort. As with the Sentinel Initiative, the government alone cannot take the nation where it needs to be with respect to health. The FDA has focused on partnering with others because collaboration provides the best opportunity for substantial engagement by key stakeholders on issues of common interest and, therefore, a greater likelihood of success. The IOM effort is a large and complex project, and no single entity has the expertise, the resources, or the energy to carry it out alone. In addition, it will be important to create a nimble infrastructure that can respond to dynamic and evolving research needs. Such an effort will require the engagement and participation of all sectors across the healthcare system. A government-only approach, possibly relying on legislation, may only slow progress.

Lessons learned from the Sentinel Initiative may be very useful for the IOM effort. Of particular benefit might be small collaborative pilots, similar to those under way as part of the Sentinel Initiative, that are making use of existing large databases to identify and test the tools and processes that will be needed to perform postmarket monitoring. Similar tools will be needed for comparative evidence analyses. Additional considerations that may be useful as the CER project evolves (or as pilots are identified that could inform the project) include the following:

  • What specific tools need to be developed?
  • What are the specific goals of a particular collaboration?
  • How should specific projects or tasks be prioritized? And who should be tasked with setting priorities?
  • Which stakeholders would be most beneficial to and interested in a particular collaborative project?
  • Which organization or organizations can best take the lead on a specific project?
  • How can needed short- and long-term resources be obtained?
  • How can research results be made available to the community without undermining proprietary or patent interests?
  • How do specific collaborations contribute to the larger effort?
  • What time frames can realistically be set for short- and long-term goals?

Forming partnerships will require careful vetting by all parties so that everyone involved has confidence in the successful operation of the partnership. Each partnership formed adds to the collective understanding of what it takes to make a partnership succeed. However, each partnership will be different, raising new questions and unique hurdles.

Health Product Developers

William Z. Potter, M.D., Ph.D., Vice President, Franchise
Integrator Neuroscience, Merck Research Laboratories

Two examples of public–private partnerships that have productively linked industry, government, academia, and other stakeholders to address issues of common concern in health care are the Biomarkers Consortium (BC) and the Alzheimer’s Disease Neuroimaging Initiative (ADNI). This paper briefly describes the processes of developing and sustaining these partnerships, as well as some of the key lessons learned that can inform the development of infrastructure for expanded CER. Some suggestions for priority areas for work and opportunities for greater engagement by the health product developer sector are also discussed.


Biomarkers Consortium

The BC, founded in 2006, was established to advance the discovery, development, and approval of biological markers to support new drug development, preventive medicine, and medical diagnostics. The consortium is a major public–private biomedical research partnership with broad participation from stakeholders across the health enterprise, including government, industry, academia, and patient advocacy and other nonprofit private-sector organizations. In addition to the Foundation for the NIH, founding members include the NIH, the FDA, and the Pharmaceutical Research and Manufacturers of America. Other partners in the consortium include CMS and the Biotechnology Industry Organization.

Imperative to a successful partnership is the careful delineation of specific areas of research focus that protect the individual interests of consortium members. After some discussion, the consortium organizations agreed to work together to accelerate the identification, development, and regulatory acceptance of biomarkers in four areas: cancer, inflammation and immunity, metabolic disorders, and neuroscience. Additional goals of the consortium include conducting joint research in “precompetitive” areas with partners that share a common interest in advancing human health and improving patient care, speeding the development of medicines and therapies for the detection, prevention, diagnosis, and treatment of disease, and making project results broadly available to the entire research community.

An example from neuroscience illustrates another key to the consortium’s success. As an initial focus, the group looked at the placebo response, a fundamental issue of common concern to all stakeholders. An important question for the field is the relative efficacy of antidepressants, but even the efficacy of antidepressants versus placebo is often unclear. Consider the physician, or any other caregiver, who diagnoses and would like to treat a patient for depression. Trial results demonstrate that the placebo response is often enormously variable, ranging from some 20 percent up to as much as 60 percent, even in large trials with 100 to 150 patients per arm. These findings raise significant questions about the validity of the data, the study designs, or the diagnoses. Healthcare providers are interested in developing and using their data to clarify the quality of treatment and to determine the best possible course of care, but health product manufacturers also have an intense competitive interest in such data—particularly in improving the quality of data in this space and the analyses needed to inform critical healthcare decisions.

The BC addressed this set of issues by creating a metadata set. As outlined in the Foundation for the NIH’s Consortium Placebo Data Sharing proposal, ideal characteristics for implementation include identical study design; extensive characterization of each subject (e.g., more than FDA requirements); data elements stored in standard, easily shared data systems; and appropriate informed consent. Such ideals are of course difficult to realize regularly on a macro level, and the consortium decided to focus initially on an area in which common public and private study design were likely: antidepressant trials conducted since the introduction of selective serotonin reuptake inhibitors. Around this focus, the consortium has initiated several collaborative efforts, including a Depression Rating Scale Standardization Team (DRSST), a Placebo Response Collaborative Study Group, the National Institute of Mental Health Placebo Database Workshop, and placebo databases from Alzheimer’s disease trials.
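The value of storing trial arms in a common, easily shared schema can be seen in a minimal sketch. The trial identifiers, field names, and numbers below are invented, chosen only to mirror the 20 to 60 percent placebo-response range cited above; once arms share one schema, per-trial rates and their spread fall out of a few lines of code.

```python
# Invented placebo-arm summaries from three hypothetical antidepressant
# trials that share one schema (trial id, arm size, responder count).
placebo_arms = [
    {"trial": "T1", "n": 120, "responders": 30},
    {"trial": "T2", "n": 150, "responders": 90},
    {"trial": "T3", "n": 100, "responders": 40},
]

def response_rates(arms):
    """Per-trial placebo response rate."""
    return {a["trial"]: a["responders"] / a["n"] for a in arms}

def pooled_rate(arms):
    """Overall response rate across the pooled arms."""
    total_n = sum(a["n"] for a in arms)
    return sum(a["responders"] for a in arms) / total_n

rates = response_rates(placebo_arms)
spread = max(rates.values()) - min(rates.values())  # 0.25 up to 0.60
```

With real data, standardized elements and consistent study designs, as the proposal envisions, are what make this kind of cross-trial comparison meaningful rather than an artifact of incompatible definitions.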

Discussions leading to the development of these projects began in 2000, and the group is beginning to put the needed infrastructure in place through the Foundation for the NIH. Many of the lessons learned from these discussions will help to accelerate the development of infrastructure for CER work. Key barriers include the need for an internal champion within each company to advance a proposal; the costs of full-time-equivalent staff and data management; skepticism among industry, NIH, and academic leadership that valuable insights can be gained; and variable legal opinion as to intellectual property and medicolegal risks.

Alzheimer’s Disease Neuroimaging Initiative

Another noteworthy public–private partnership is the ADNI. Started in 2004, this large research project seeks to define the rate of progression of mild cognitive impairment and Alzheimer’s disease in order to develop improved methods for clinical trials in this area and also to provide a large database that will improve the design of treatment trials. It is hoped that the project will provide information and methods that will help lead to effective treatments and prevention efforts for Alzheimer’s disease. The project has funding from the National Institute on Aging, the National Institute of Biomedical Imaging and Bioengineering, Pharmaceutical Research and Manufacturers of America, and several foundations.

ADNI brings together organizations from the public and private sectors, including government agencies, corporations, consumer groups, and other stakeholders, to work collaboratively to determine the right tools to understand the efficacy and effectiveness of drugs for Alzheimer’s disease. Participants in the initiative collaborate via an infrastructure that, while complex, enables cross-sector communication and work and has produced promising initial results. For example, both complex clinical data and intricate brain imaging data are now readily accessible using the Web, in close to real time. Anyone who is interested in developing ways to look at complex data can mine these data, and this approach has begun to return remarkable findings. Underlying the success of this partnership is how it addresses the important issue of data transparency. Through different portals, the data are available both to researchers and, with unprecedented access, to the general public.

Given that researchers can manage data of this complexity with existing tools in the realm of Alzheimer’s disease, similar applications are likely for other data sets. An important lesson from this work is that real data can be made accessible, in real time, in the public domain, and can yield useful results. More broadly, the success of this project underscores and justifies the benefits of a consortium approach, particularly when the scientific methodologies employed are adequately rigorous and the questions are sufficiently important.

Public–Private Partnerships and Comparative Effectiveness Infrastructure Development

As a national infrastructure for CER is being developed, leadership will be needed from the federal government to develop the incentive structures, through legislation and regulation, that are important to advance issues related to data standards and data sharing; however, despite the important “pull” provided by legislation, there are opportunities for immediate work that do not require legislation. Some possible focus areas include

  • engagement of industry leadership (e.g., identifying and encouraging industry champions; fostering collaboration of industry, NIH, and academic leadership around common issues and concerns),
  • making the case for broad stakeholder participation around key questions and issues,
  • developing national research priorities, and
  • establishing methods and collaborative agreements for data collection and use.

REFERENCES

AAHC (Association of Academic Health Centers). 2008a. The academic health center: Evolving organizational models. http://www.aahcdc.org/policy/reddot/AAHC_Evolving_Organizational_Models.pdf (accessed September 13, 2008).

———. 2008b. HIPAA creating barriers to research and discovery. http://www.aahcdc.org/policy/reddot/AAHC_HIPAA_Creating_Barriers.pdf (accessed September 15, 2008).

———. 2008c. Out of order, out of time: The state of the nation’s health workforce. Washington, DC: AAHC. http://www.aahcdc.org/policy/AAHC_OutofTime_4WEB.pdf (accessed September 5, 2008).

AAP/ACP/AOA (American Academy of Pediatrics and American College of Physicians and American Osteopathic Association). 2007. Joint principles of the patient-centered medical home. http://www.medicalhomeinfo.org/Joint%20Statement.pdf (accessed August 5, 2010).


ACC (American College of Cardiology). 2008. Clinical statements/guidelines. http://www.acc.org/qualityandscience/clinical/statements.html (accessed September 15, 2008).

ACCF (American College of Cardiology Foundation). 2008. NCDR: National Cardiovascular Data Registry. http://www.ncdr.com/WebNCDR/ANNOUNCEMENTS.ASPX (accessed September 15, 2008).

AHA (American Hospital Association). 2007. Continued progress: Hospital use of information technology. http://www.aha.org/aha/content/2007/pdf/070227-continuedprogress.pdf (accessed September 10, 2008).

AHRQ (Agency for Healthcare Research and Quality). 2009. Horizon scan: To what extent do changes in third-party payment affect clinical trials and the evidence base? http://www.cms.hhs.gov/determinationprocess/downloads/id67aTA.pdf (accessed September 20, 2008).

Arvantes, T. 2007. North Carolina seeks expansion of primary care program. http://www.aafp.org/online/en/home/publications/news-now/government-medicine/2 (accessed September 15, 2008).

Bach, P. B. 2007. Coverage with evidence development. In The learning healthcare system: Workshop summary, edited by L. Olsen, D. Aisner, and J. M. McGinnis. Washington, DC: The National Academies Press. Pp. 39-45.

Bawa, K. S., G. Balachander, and P. Raven. 2008. A case for new institutions. Science 319(5860):136.

Bodenheimer, T. 2008. Coordinating care—A perilous journey through the health care system. New England Journal of Medicine 358(10):1064-1071.

Bravata, D. M., A. L. Gienger, K. M. McDonald, V. Sundaram, M. V. Perez, R. Varghese, J. R. Kapoor, R. Ardehali, D. K. Owens, and M. A. Hlatky. 2007. Systematic review: The comparative effectiveness of percutaneous coronary interventions and coronary artery bypass graft surgery. Annals of Internal Medicine 147(10):703-716.

Califf, R. M., R. A. Harrington, L. K. Madre, E. D. Peterson, D. Roth, and K. A. Schulman. 2007. Curbing the cardiovascular disease epidemic: Aligning industry, government, payers, and academics. Health Affairs 26(1):62-74.

Coleman, E. A. 2006. The care transition intervention results of a randomized controlled trial. Archives of Internal Medicine 166:1822-1828.

The Commonwealth Fund. 2009. Commission on a high-performance health system. http://www.commonwealthfund.org/About-Us/Commission-on-a-High-Performance-HealthSystem.aspx (accessed August 5, 2010).

Connecting for Health. 2006. The Connecting for Health common framework. http://www.connectingforhealth.org (accessed February 23, 2008).

Crosson, F. J. 2005. The delivery system matters. Health Affairs 24(6):1543-1548.

CTSA (Clinical and Translational Science Awards). 2008. Clinical and Translational Science Awards. http://www.ctsaweb.org/ (accessed September 13, 2008).

Daemen, J., E. Boersma, M. Flather, J. Booth, R. Stables, A. Rodriguez, G. Rodriguez-Granillo, W. A. Hueb, P. A. Lemos, and P. W. Serruys. 2008. Long-term safety and efficacy of percutaneous coronary intervention with stenting and coronary artery bypass surgery for multivessel coronary artery disease: A meta-analysis with 5-year patient-level data from the arts, ERACI–II, MASS–II, and SOS trials. Circulation 118(11):1146-1154.

Davis, K., and S. Schoenbaum. 2007. Medical homes could improve care for all. http://www.commonwealthfund.org/Content/From-the-President/2007/Medical-Homes-CouldImprove-Care-for-All.aspx (accessed August 5, 2010).

de Brantes, F., and A. Rastogi. 2008. Evidence-informed case rates: Paying for safer, more reliable care. New York: The Commonwealth Fund.


Dilts, D. M., A. Sandler, S. Cheng, J. Crites, L. Ferranti, A. Wu, R. Gray, J. MacDonald, D. Marinucci, and R. Comis. 2008. Development of clinical trials in a cooperative group setting: The Eastern Cooperative Oncology Group. Clinical Cancer Research 14(11):3427-3433.

Dougherty, D., and P. H. Conway. 2008. The “3T’s” road map to transform U.S. health care: The “how” of high-quality care. Journal of the American Medical Association 299(19):2319-2321.

Douglas, P. S., B. Khandheria, R. F. Stainback, N. J. Weissman, E. D. Peterson, R. C. Hendel, M. Blaivas, R. D. Des Prez, L. D. Gillam, T. Golash, L. F. Hiratzka, W. G. Kussmaul, A. J. Labovitz, J. Lindenfeld, F. A. Masoudi, P. H. Mayo, D. Porembka, J. A. Spertus, L. S. Wann, S. E. Wiegers, R. G. Brindis, M. R. Patel, M. J. Wolk, and J. M. Allen. 2008. ACCF/ASE/ACEP/AHA/ASNC/SCAI/SCCT/SCMR 2008 appropriateness criteria for stress echocardiography: A report of the American College of Cardiology Foundation Appropriateness Criteria Task Force, American Society of Echocardiography, American College of Emergency Physicians, American Heart Association, American Society of Nuclear Cardiology, Society for Cardiovascular Angiography and Interventions, Society of Cardiovascular Computed Tomography, and Society for Cardiovascular Magnetic Resonance endorsed by the Heart Rhythm Society and the Society of Critical Care Medicine. Journal of the American College of Cardiology 51(11):1127-1147.

Draycott, T., T. Sibanda, L. Owen, V. Akande, C. Winter, S. Reading, and A. Whitelaw. 2006. Does training in obstetric emergencies improve neonatal outcome? BJOG: An International Journal of Obstetrics and Gynaecology 113(2):177-182.

FDA (Food and Drug Administration). 2004. Innovation/stagnation: Challenge and opportunity on the critical path to new medical products. Washington, DC: Department of Health and Human Services.

———. 2006. Critical path opportunities list. Washington, DC: Department of Health and Human Services.

Ferguson, T. B., Jr. 2008. On the evaluation of intervention outcome risks for patients with ischemic heart disease. Circulation 117(3):333-335.

Ferguson, T. B., Jr., B. G. Hammill, E. D. Peterson, E. R. DeLong, and F. L. Grover. 2002. A decade of change—Risk profiles and outcomes for isolated coronary artery bypass grafting procedures, 1990-1999: A report from the STS National Database Committee and the Duke Clinical Research Institute. Society of Thoracic Surgeons. Annals of Thoracic Surgery 73(2):480-489; discussion, 489-490.

Ferguson, T. B., Jr., E. D. Peterson, L. P. Coombs, M. C. Eiken, M. L. Carey, F. L. Grover, and E. R. DeLong. 2003. Use of continuous quality improvement to increase use of process measures in patients undergoing coronary artery bypass graft surgery: A randomized controlled trial. Journal of the American Medical Association 290(1):49-56.

Frisse, M. E. 2006. Comments on return on investment (ROI) as it applies to clinical systems. Journal of the American Medical Informatics Association 13(3):365-367.

Frisse, M. E., J. K. King, W. B. Rice, L. Tang, J. P. Porter, T. A. Coffman, M. Assink, K. Yang, M. Wesley, R. L. Holmes, C. Gadd, K. B. Johnson, and V. Y. Estrin. 2008. A regional health information exchange: Architecture and implementation. American Medical Informatics Association Annual Symposium Proceedings 212-216.

Giugliano, R. P., and E. Braunwald. 2007. The year in non-ST-segment elevation acute coronary syndrome. Journal of the American College of Cardiology 50(14):1386-1395.

Goroll, A. H., R. A. Berenson, S. C. Schoenbaum, and L. B. Gardner. 2007. Fundamental reform of payment for adult primary care: Comprehensive payment for comprehensive care. Journal of General Internal Medicine 22(3):410-415.

Grover, F. L. 2008. The bright future of cardiothoracic surgery in the era of changing health care delivery: An update. Annals of Thoracic Surgery 85(1):8-24.


Hall, B. L., M. Hirbe, B. Waterman, S. Boslaugh, and W. C. Dunagan. 2007. Comparison of mortality risk adjustment using a clinical data algorithm (American College of Surgeons National Surgical Quality Improvement Program) and an administrative data algorithm (Solucient) at the case level within a single institution. Journal of the American College of Surgeons 205(6):767-777.

HHMI (Howard Hughes Medical Institute). 2009. Med into Grad Initiative: Integrating Medical Knowledge into Graduate Education. http://www.hhmi.org/grants/institutions/medintograd.html (accessed September 13, 2008).

Hlatky, M. A., D. B. Boothroyd, K. A. Melsop, M. M. Brooks, D. B. Mark, B. Pitt, G. S. Reeder, W. J. Rogers, T. J. Ryan, P. L. Whitlow, and R. D. Wiens. 2004. Medical costs and quality of life 10 to 12 years after randomization to angioplasty or bypass surgery for multivessel coronary artery disease. Circulation 110(14):1960-1966.

IOM (Institute of Medicine). 2001. Crossing the quality chasm: A new health system for the 21st century. Washington, DC: National Academy Press.

———. 2006. Performance measurement: Accelerating improvement. Washington, DC: The National Academies Press.

———. 2007. The learning healthcare system: Workshop summary. Washington, DC: The National Academies Press.

———. 2008. Learning healthcare system concepts v.2008: Annual report of the Roundtable on Evidence-Based Medicine. Washington, DC: The National Academies Press.

James, B. 2007. Feedback loops to expedite study timeliness and relevance. In The learning healthcare system: Workshop summary, edited by L. Olsen, D. Aisner, and J. M. McGinnis. Washington, DC: The National Academies Press. Pp. 152-162.

Johnson, K. B., C. S. Gadd, D. Aronsky, K. Yang, L. Tang, V. Estrin, J. K. King, and M. Frisse. 2008. The Midsouth eHealth Alliance: Use and impact in the first year. American Medical Informatics Association Annual Symposium Proceedings 333-337.

Jones, L., and K. Wells. 2007. Strategies for academic and clinician engagement in community-participatory partnered research. Journal of the American Medical Association 297(4):407-410.

King, J. S., and B. W. Moulton. 2006. Rethinking informed consent: The case for shared medical decision-making. American Journal of Law and Medicine 32:429-501.

Krumholz, H. M., Y. Wang, J. A. Mattera, L. F. Han, M. J. Ingber, S. Roman, and S. L. Normand. 2006. An administrative claims model suitable for profiling hospital performance based on 30-day mortality rates among patients with an acute myocardial infarction. Circulation 113(13):1683-1692.

Landon, B. E., L. S. Hicks, A. J. O’Malley, T. A. Lieu, T. Keegan, B. J. McNeil, and E. Guadagnoli. 2007. Improving the management of chronic disease at community health centers. New England Journal of Medicine 356(9):921-934.

Malenka, D. J., B. J. Leavitt, M. J. Hearne, J. F. Robb, Y. R. Baribeau, T. J. Ryan, R. E. Helm, M. A. Kellett, H. L. Dauerman, L. J. Dacey, M. T. Silver, P. N. VerLee, P. W. Weldner, B. D. Hettleman, E. M. Olmstead, W. D. Piper, and G. T. O’Connor. 2005. Comparing long-term survival of patients with multivessel coronary disease after CABG or PCI: Analysis of BARI-like patients in northern New England. Circulation 112(9 Suppl.): I371-I376.

Marmot, M., and R. G. Wilkinson. 2006. Social determinants of health, 2nd ed. Oxford, UK: Oxford University Press.

McCannon, C. J. 2007. Miles to go: An introduction to the 5 million lives campaign. Joint Commission Journal on Quality and Patient Safety 33(8):447-484.

McGlynn, E. A., S. M. Asch, J. Adams, J. Keesey, J. Hicks, A. DeCristofaro, and E. A. Kerr. 2003. The quality of health care delivered to adults in the United States. New England Journal of Medicine 348(26):2635-2645.

MedPAC (Medicare Payment Advisory Commission). 2008. Report to the Congress: Reforming the delivery system. http://www.medpac.gov/documents/Jun08_EntireReport.pdf (accessed September 14, 2008).

Meier, B. 2008. A call for a warning system on artificial joints. The New York Times, July 29. http://www.nytimes.com/2008/07/29/health/29iht-29hip.14863139.html (accessed September 15, 2008).

Miller, R. 2008. The information networks required: Information technology requirements. Presented at the Institute of Medicine Workshop Learning What Works: Infrastructure Required for Comparative Effectiveness Research. Washington, DC, July 30, 2008.

National Health Policy Forum. 2008. Health information technology adoption among health centers: A digital divide in the making. http://209.85.173.104/search?q=cache:obn8UTmh9Y4J:www.nhpf.org/pdfs_bp/BPHealth (accessed September 13, 2008).

Nesbitt, T. S., S. L. Cole, L. Pellegrino, and P. Keast. 2006. Rural outreach in home telehealth: Assessing challenges and reviewing successes. Telemedicine Journal and eHealth 12(2):107-113.

NRC (National Research Council). 2009. Computational technology for effective health care: Intermediate steps and strategic directions. Washington, DC: The National Academies Press.

Paulus, R. A., K. Davis, and G. D. Steele. 2008. Continuous innovation in health care: Implications of the Geisinger experience. Health Affairs 27(5):1235-1245.

Porter, J. P., J. Starmer, J. King, and M. E. Frisse. 2007. Mapping laboratory test codes to LOINC for a regional health information exchange. American Medical Informatics Association Annual Symposium Proceedings 1081.

Porter, M. E., and E. O. Teisberg. 2006. Redefining health care: Creating value-based competition on results. Boston, MA: Harvard University Press.

———. 2007. How physicians can change the future of health care. Journal of the American Medical Association 297(10):1103-1111.

Premier and CMS (Centers for Medicare & Medicaid Services). 2007. Centers for Medicare & Medicaid Services (CMS)/Premier Hospital Quality Incentive Demonstration project: Project findings from year two. http://www.premierinc.com/quality-safety/tools-services/p4p/hqi/resources/hqi-whitepaper-year2.pdf (accessed September 13, 2008).

Rittenhouse, D. R., L. P. Casalino, R. R. Gillies, S. M. Shortell, and B. Lau. 2008. Measuring the medical home infrastructure in large medical groups. Health Affairs 27(5):1246-1258.

Roden, D. M., J. M. Pulley, M. A. Basford, G. R. Bernard, E. W. Clayton, J. R. Balser, and D. R. Masys. 2008. Development of a large-scale de-identified DNA biobank to enable personalized medicine. Clinical Pharmacology & Therapeutics 84(3):362-369.

Rubin, R. 2008. CDC launches campaign to make the USA a healthier nation. USA Today 4D.

Schoen, C., K. Davis, S. K. How, and S. C. Schoenbaum. 2006. U.S. health system performance: A national scorecard. Health Affairs 25(6):w457-w475.

Singh, M., B. J. Gersh, S. Li, J. S. Rumsfeld, J. A. Spertus, S. M. O’Brien, R. M. Suri, and E. D. Peterson. 2008. Mayo clinic risk score for percutaneous coronary intervention predicts in-hospital mortality in patients undergoing coronary artery bypass graft surgery. Circulation 117(3):356-362.

Smith, P. K., R. M. Califf, R. H. Tuttle, L. K. Shaw, K. L. Lee, E. R. Delong, R. E. Lilly, M. H. Sketch, Jr., E. D. Peterson, and R. H. Jones. 2006. Selection of surgical or percutaneous coronary intervention provides differential longevity benefit. Annals of Thoracic Surgery 82(4):1420-1428; discussion, 1428-1429.

Stead, W. 2006. Providers and EHR as a learning tool. In The learning healthcare system: Workshop summary, edited by L. Olsen, D. Aisner, and J. M. McGinnis. Washington, DC: The National Academies Press. Pp. 268-275.

Stead, W., and J. M. Starmer. 2007. Beyond expert based practices. In Evidence-based medicine and the changing nature of health care, edited by M. B. McClellan, J. M. McGinnis, E. G. Nabel, and L. M. Olsen. Washington, DC: The National Academies Press. Pp. 95-105.

Taggart, D. P. 2006. Thomas B. Ferguson lecture: Coronary artery bypass grafting is still the best treatment for multivessel and left main disease, but patients need to know. Annals of Thoracic Surgery 82(6):1966-1975.

Tarlov, A. R., and R. F. St. Peter. 2000. The society and population health reader: A state and community perspective (Vol. 2). New York: New Press.

Tung, R., S. Kaul, G. A. Diamond, and P. K. Shah. 2006. Narrative review: Drug-eluting stents for the management of restenosis: A critical appraisal of the evidence. Annals of Internal Medicine 144(12):913-919.

Wartman, S. A. 2008. Toward a virtuous cycle: The changing face of academic health centers. Academic Medicine 83(9):797-799.

Weinstein, J. N., K. Clay, and T. S. Morgan. 2007. Informed patient choice: Patient-centered valuing of surgical risks and benefits. Health Affairs 26(3):726-730.

Wennberg, J. E., E. S. Fisher, and J. S. Skinner. 2002. Geography and the debate over Medicare reform. Health Affairs (Suppl. Web Exclusive):w96-w114.

Wennberg, J. E., E. S. Fisher, J. S. Skinner, and K. K. Bronner. 2007. Extending the p4p agenda, part 2: How Medicare can reduce waste and improve the care of the chronically ill. Health Affairs 26(6):1575-1585.

WHO (World Health Organization). 2008. Social determinants of health. http://www.who.int/social_determinants/en/ (accessed September 13, 2008).

Wilper, A. P., S. Woolhandler, K. E. Lasser, D. McCormick, D. H. Bor, and D. U. Himmelstein. 2008. A national study of chronic disease prevalence and access to care in uninsured U.S. adults. Annals of Internal Medicine 149(3):170-176.

Woolf, S. H. 2008. The power of prevention and what it requires. Journal of the American Medical Association 299(20):2437-2439.
