Suggested Citation:"3 The Information Networks Required." Institute of Medicine. 2011. Learning What Works: Infrastructure Required for Comparative Effectiveness Research: Workshop Summary. Washington, DC: The National Academies Press. doi: 10.17226/12214.

3


The Information Networks Required

INTRODUCTION

The scale of efficiencies that might be gained through developing prioritization and coordinating capacities and improving the methods used for comparative effectiveness research (CER) will be limited by the infrastructure available to support the capture, access, and sharing of relevant data and information. Design and development of robust information networks, and efforts to foster collaboration around common work, will therefore be a critical aspect of creating the infrastructure for expanded CER—necessary for the generation and application of evidence alike, as well as for providing opportunities to support learning from clinical practice. In addition to the federal efforts to increase the adoption and use of electronic health records as described previously, many organizations have developed such capacities, and drawing upon these and other resources through systematic, linked, and coordinated networks would greatly enhance the nation’s fundamental capacity to generate evidence. Papers included in this chapter describe what was known about capacity in 2008, give a rough estimate of the necessary capacity, and offer initial suggestions on policies or activities for progress. These issues are considered in more depth in the Institute of Medicine (IOM) workshop summary publication on The Digital Infrastructure for the Learning Health System: The Foundation for Continuous Improvement in Health and Health Care (IOM, 2011).

Clinical information systems (CISs)—including electronic health records (EHRs)—hold particular promise, given their emerging prominence at the nexus of clinical research, clinical practice, and decision making. Appropriately designed EHRs not only serve as a means for practitioners to access best practices and evidence guidelines, but they also capture a broad array of information important to the diagnosis and treatment of individual patients. To provide policy makers with “order of magnitude” estimates of the spending needed to speed broad adoption of CISs in care delivery organizations throughout the nation, Robert H. Miller from the University of California at San Francisco describes current EHR adoption, future EHR capital and operating expenditure requirements, and prospects for EHR adoption in the hospital and in physician and clinical services sectors.

Work is also needed to develop the technical capacity, methods, standards, and policies for the efficient exchange of information from EHRs and other data sources (e.g., administrative databases, clinical registries) and to disseminate evidence syntheses and other resources to guide practice. Although large databases and clinical registries offer immediate opportunities for learning what works in health care, Carol C. Diamond from the Markle Foundation argues that the greatest promise of health information technology (HIT) lies in its ability to enable quick and efficient learning via a networked and distributed approach to information sharing and evidence development. To maximize this potential, approaches to data and information hubs will need to evolve to address four key challenges: (1) clearly defining the ultimate goal; (2) being open to resetting definitions and assumptions about health data and research approaches; (3) articulating new, broadly accepted working principles based on 21st-century information paradigms; and (4) developing an information policy framework that broadly addresses public hopes and concerns. Diamond illustrates how these challenges are a jumping-off point for moving to a distributed approach to research—one characterized by connectivity, networks, and feedback loops.

Finally, an essential function of any system dedicated to developing a robust evidence base for medical practice is the synthesis of information derived from relevant trials, studies, and insights emerging from clinical practice. As data resources and networks expand, demand will also grow for synthesis work to ensure studies are appropriately reviewed, vetted, and incorporated into the evolving evidence base. Lorne A. Becker from the Cochrane Collaboration provides an overview of current approaches to evidence review, synthesis, coordination, and dissemination—internationally and within the United States—and offers some suggestions on key opportunities for expanding capacity to meet the anticipated demand.


ELECTRONIC HEALTH RECORDS: NEEDS, STATUS, AND COSTS FOR U.S. HEALTHCARE DELIVERY ORGANIZATIONS

Robert H. Miller, Ph.D., Professor of Health Economics in Residence,
University of California at San Francisco, Institute for Health & Aging

Introduction

Implementing ubiquitous evidence-based medicine (EBM) requires robust CISs, especially EHRs. As a result, policy makers want to understand what EHR capabilities are needed, the extent to which they have been implemented, the likely costs of further adoption and maintenance, and the prospects for full implementation of those capabilities.1 Despite keen policy-maker interest, as of mid-2008 few “global” cost estimates for ubiquitous CISs in the healthcare delivery system had been generated (CBO, 2008), in large part because the U.S. healthcare system is so large and diverse, while usable CIS cost and benefit data have been so scarce.

In 2006, of the $2.1 trillion in total healthcare expenditures, the $648 billion hospital sector and $447 billion “physician and clinical services” sector were the most intensive users of CISs, incurring the bulk of CIS capital and operating expenses (Catlin et al., 2008). Other, smaller healthcare delivery system sectors that had less intensive CIS use together accounted for another $400 billion or so in expenditures in 2006. Using National Health Expenditure Accounts terminology, these sectors included dental services, “other” professional and personal healthcare services, nursing home care, and home health care. Healthcare sectors least relevant to this analysis accounted for $600 billion in spending; they included retail outlet sales of medical products, administration and government public health, research, construction, and equipment.

Because this report is derived from a presentation given in July 2008, it does not include a description or analysis of recent economic developments or of the effects of the 2009 economic stimulus legislation on CIS adoption and cost. It does include a handful of updated references for studies that were in manuscript form in July 2008 and were subsequently published.

Methods

For this overview, two main types of data sources were used: (1) data from peer-reviewed articles and non-peer-reviewed reports, and (2) interview data from research conducted on behalf of the 2007–2008 California Governor’s Health Information Technology Financing Advisory Commission, for which researchers obtained information on CIS adoption, CIS business case and value proposition, and ability to finance CISs from large health systems, rural hospitals, public hospitals, medical groups, independent practice associations, and community health centers—albeit only for one state, California.

_______________

1 Personal health records constitute a separate set of capabilities that consumers could use to view and act on data from various sources, including from the EHRs that hospitals and physician organizations use.

For the July 2008 IOM workshop, the author generated rough order-of-magnitude estimates of new or additional spending on CISs needed to implement EBM, over and above current spending on CIS capital projects and operating costs. Such ballpark cost estimates could be useful to policy makers who want to know whether the likely new spending on CISs is closer to (say) $50 billion than it is to $500 billion, whether most healthcare delivery system organizations can afford the new CIS spending, and what public policies are needed to achieve ubiquitous CIS adoption. Given that any CIS cost estimates would be rough, we aimed to create cost estimates that were more likely to err on the high side than on the low side—if conservative (worst-case) estimates of CIS costs were “manageable” for delivery system organizations, then actual CIS costs would likely be even more manageable.

Hospital Sector Clinical Information Systems

What’s Needed and What’s Been Adopted

Table 3-1 contains a brief description of hospital CIS (EHR) capabilities and adoption, using the stages-of-adoption schema developed by the Healthcare Information and Management Systems Society (HIMSS) and data that HIMSS obtained for early 2008 (HIMSS Analytics, 2008). The schema shows a hierarchy of CIS adoption, with organizations at a higher stage of adoption typically having the capabilities found at lower stages. Most hospitals had new or old (i.e., “legacy”) stage 1 ancillary systems that manage basic information on radiology orders, laboratory orders, and pharmacy prescriptions. The most basic systems depend on orders that providers first write out by hand and that are made electronic at some point prior to test/prescription processing.

Stage 2 CIS capabilities can pull patient data together from many (often isolated and disparate) information systems into a central data repository, which enables managers to generate reports and enables providers and staff to more easily view demographic, test result, prescription, and other data. Stage 3 capabilities enable improved data presentation and capture, some checking for errors in prescription and test ordering, as well as digital imaging.

TABLE 3-1 Hospital Electronic Health Record Capabilities and Adoption Estimates (percentage of hospitals at each stage, early 2008)

Stage 7 (0.1%): Medical record fully electronic; healthcare organization able to contribute continuity of care document as by-product of electronic medical record; data warehousing/mining

Stage 6 (1.0%): Physician documentation (structured templates), full clinical decision support, full Radiology Picture Archiving and Communication System (PACS)

Stage 5 (1.3%): Closed loop medication administration

Stage 4 (1.9%): Computerized physician order entry, clinical decision support (clinical protocols)

Stage 3 (32.9%): Clinical documentation (flow sheets), clinical decision support system (error checking), PACS available outside radiology

Stage 2 (33.2%): Clinical data repository, controlled medical vocabulary, clinical decision support system inference engine, may have document imaging

Stage 1 (12.5%): Ancillaries—lab, radiology, pharmacy

Stage 0 (17.1%): All three ancillaries not installed

SOURCE: HIMSS Analytics, 2008.

As of 2007, CIS capabilities at stage 4 and beyond were still relatively rare—according to HIMSS data, less than 5 percent of hospitals had computerized physician order entry (CPOE), where the ordering physician (rather than support staff) did the data entry. CPOE systems are considered more likely to affect decision making than simpler ordering systems, as they can generate patient safety/quality reminders and alerts when physicians enter data at the point of care, rather than when staff enter order data later in the ordering process. At the far end of the spectrum are the least implemented capabilities, such as “closed loop” medication administration (with bar coding), physician documentation, and robust capability for health information exchange.
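For readers who want to check the figure quoted above, the stage shares in Table 3-1 can be tallied directly. The short sketch below is illustrative only; it simply sums the HIMSS Analytics percentages reproduced in the table.

```python
# Illustrative tally of the HIMSS Analytics (2008) stage shares reproduced in Table 3-1.
stage_share = {  # percent of hospitals at each EMR adoption stage, early 2008
    7: 0.1, 6: 1.0, 5: 1.3, 4: 1.9,
    3: 32.9, 2: 33.2, 1: 12.5, 0: 17.1,
}

# Hospitals with CPOE and beyond (stage 4 or higher).
cpoe_or_higher = sum(pct for stage, pct in stage_share.items() if stage >= 4)
print(f"Stage 4+ (CPOE and beyond): {cpoe_or_higher:.1f}% of hospitals")  # 4.3%, i.e., under 5 percent
print(f"All stages sum to {sum(stage_share.values()):.1f}%")              # 100.0%
```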

American Hospital Association (AHA) data from 2006 also provided information on CIS adoption in U.S. hospitals, although they likely overstated adoption because executives and staff in hospitals with advanced CISs were more likely to respond to a survey on CIS adoption than were respondents in hospitals without advanced CISs (AHA, 2007a). The AHA survey findings indicated that 66 percent of hospitals had results viewing for lab and radiology, and between 46 percent and 66 percent of hospitals had lab/radiology/pharmacy order entry by staff; however, in only 10 percent of hospitals did more than half of physicians routinely use CPOE capabilities for medications ordering. Similarly, the survey results showed some progress in implementing CIS capabilities with built-in EBM rules: for example, 31 percent of hospitals provided real-time drug alerts at the point of order data entry (by staff or providers) and 37 percent provided “back-end” (not real-time) drug alerts, but only about 10 percent of hospitals offered providers suggested clinical guidelines and pathways for patient care.

Order-of-Magnitude Cost Estimates

Generating even rough estimates of future CIS costs throughout the U.S. healthcare delivery system is a perilous endeavor due to a lack of high-quality evidence about CIS adoption and cost. Nevertheless, it is possible to show how rough CIS capital and operating cost estimates could be generated and to describe the pitfalls of any one estimate.

A crude estimate of hospital sector CIS capital costs can be created by multiplying the number of staffed U.S. community hospital beds by the estimated cost per hospital bed of implementing robust CIS capabilities, and then subtracting the proportion of the CIS capital cost already incurred. If one were to take the CIS cost-per-bed estimate of $57,000 found in a 2005 RAND report (which the RAND researchers believed was very rough) and increase it to $100,000 in order to account for inflation and to create a distinctly conservative (high) bias in the cost estimate (Girosi et al., 2005), robust CISs in all U.S. hospitals would cost $90 billion in capital costs, given about 800,000 community hospital beds and perhaps another estimated 100,000 hospital beds with similar characteristics in federal and state hospitals (AHA, 2007b).

Hospitals already have spent some portion of this hypothetical $90 billion in hospital sector CIS capital cost; however, how much they’ve spent is unclear due to limitations in evidence on the CIS adoption, cost, and spending. Suppose that hospital-sector organizations have already incurred 25 percent of the capital cost of robust CISs on average, and so need to incur nearly $70 billion in additional capital expenditures (75 percent of the hypothetical $90 billion). We assume an 8-year time horizon for achieving robust CISs for nearly all hospitals, since implementing CISs can take years, even in a large health system with substantial information systems staffing, while U.S. hospitals and health systems are at varying stages of implementing CISs. In that case, CIS capital spending per year would amount to $8.5 billion on average.2

Hospitals already are spending some portion of the hypothetical $8.5 billion per year on new CIS capabilities—but again, we don’t know how much. For example, while one 2008 survey projected that U.S. hospitals would spend about $10 billion per year on all HIT-related capital projects, it provided no estimates of spending only on CISs or only on new CIS capabilities (rather than on replacements for old capabilities) (HIMSS Analytics, 2008).

To show how calculating “new” CIS expenditures might work, suppose for example that CISs accounted for half of the $10 billion per year in hospital HIT-related capital projects, and that new CIS capabilities (not just replacement of old capabilities) accounted for 60 percent of that $5 billion per year. In that case, hospitals would already be spending about $3 billion of the $8.5 billion per year needed for CISs, leaving about $5.5 billion per year in additional “new” CIS capital expenditures, or about $42 billion over 8 years.3 In one of many simple simulations, and assuming that each dollar of new capital spending creates $0.25 in new operating expenses per year, hospitals would incur about $48 billion in additional operating costs over the 8-year period, for a total of about $90 billion in new CIS expenditures—about $11 billion in total CIS spending per year.
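To make the chain of assumptions explicit, the hospital-sector arithmetic above can be laid out as a short script. This is a minimal sketch, assuming the illustrative figures stated in the text: $100,000 of capital per bed, roughly 900,000 beds, 25 percent of the capital already incurred, an 8-year horizon, about $3 billion per year of existing spending on new CIS capabilities, and $0.25 per year of operating cost for each dollar of cumulative new capital.

```python
# Back-of-the-envelope estimate of new hospital-sector CIS spending, following the
# illustrative assumptions in the text. Dollar amounts are in billions unless noted;
# these are ballpark assumptions, not measured costs.

cost_per_bed = 100_000      # conservative capital cost per staffed bed ($)
beds = 900_000              # ~800,000 community beds plus ~100,000 federal/state beds
total_capital = cost_per_bed * beds / 1e9             # ≈ $90B total capital cost

share_already_incurred = 0.25                          # assumed share already spent
remaining_capital = total_capital * (1 - share_already_incurred)  # ≈ $67.5B ("nearly $70B")

years = 8
capital_per_year = remaining_capital / years           # ≈ $8.4B/yr ("about $8.5B")

existing_new_cis_per_year = 3.0   # assumed: half of ~$10B HIT capital, 60% of that on new capabilities
new_capital_per_year = capital_per_year - existing_new_cis_per_year   # ≈ $5.4B/yr
new_capital_total = new_capital_per_year * years       # ≈ $43B ("about $42 billion")

# Operating costs: each dollar of cumulative new capital is assumed to add $0.25 per year
# in operating expense, so operating costs grow as the new capital stock accumulates.
operating_total = sum(0.25 * new_capital_per_year * y for y in range(1, years + 1))  # ≈ $49B

total_new = new_capital_total + operating_total        # ≈ $92B ("about $90 billion")
per_year = total_new / years                           # ≈ $11B/yr

hospital_sector_2006 = 648                             # NHE hospital-sector spending, 2006
print(f"New capital: ${new_capital_total:.0f}B; new operating: ${operating_total:.0f}B")
print(f"Total new CIS spending: ${total_new:.0f}B over {years} years (~${per_year:.0f}B/yr)")
print(f"Roughly {per_year / hospital_sector_2006:.1%} of 2006 hospital spending per year")
```

Run as written, the sketch lands near the $90 billion total and the roughly 1.7 to 1.8 percent annual share of hospital-sector spending discussed below.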

Given the generally poor quality of the evidence behind these assumptions, a much lower (e.g., $50 billion) or much higher (e.g., $130 billion) amount over 8 years is equally plausible for the hospital sector. The important point is that these amounts can be seen as a rough, order-of-magnitude range for additional hospital CIS spending.

Prospects for New Clinical Information System Deployment

Is the needed CIS spending over 8 years “feasible” for the U.S. hospital sector? Relative to 2006 hospital spending, $90 billion in new CIS spending would translate into an average 1.7 percent increase in hospital spending per year. Assuming no financial benefit from CIS investment, and given median hospital net margins of around 5 percent in 2007 (MedPAC, 2008), such CIS spending might be feasible for many hospitals only if some major construction projects were delayed. Obviously, such spending would not be feasible for financially weaker hospitals or if hospital margins deteriorated due to a difficult economy.

_______________

2 To keep the exposition simple, we do not include spending to replace some of the additional CIS hardware (a shrinking part of total CIS expenditure) and ignore interest and discount rates for future costs and benefits.

3 In fact, the total “new” CIS costs would be somewhat higher, since any new CIS capital expenditures create new capital replacement (depreciation) costs and new operating costs—for software maintenance, additional information systems, clinical staffing, and so on.

Stakeholders care about “net” CIS costs—that is, total CIS costs less cost savings and new revenues. From the hospital’s perspective, the good news is that some CIS-related financial return is likely, since hospitals will use EHRs to create some efficiency savings in operations and to generate some revenues from higher reimbursement coding and from new services—benefits that could potentially make a substantial contribution toward paying for new CIS costs. Any government subsidies would contribute an additional amount. Moreover, despite a likely unfavorable measurable financial return on investment (as of mid-2008), many executives and boards—especially in larger hospital systems and larger nonsystem hospitals—appeared to view advanced CISs as a cost of doing business. That is, while a CIS investment might not be justified based on a measurable return-on-investment analysis, many health system leaders see it as a necessary expense in order to compete successfully in a marketplace that increasingly will compare organizations by the quality of care provided (Miller et al., 2009a).

Physician Practice Clinical Information Systems

What’s Needed and What’s Been Adopted

Physician offices typically can use two different types of information systems. The best chronic disease management systems (CDMSs) for chronic and preventive care (e.g., for diabetics, asthmatics, women needing cervical cancer screening) use electronic data from billing, scheduling, registration, and lab systems, plus manually entered data, to create paper patient data summaries and reminders for visits, along with lists of patients needing services and provider performance reporting. In this setting, CDMS software coexists with the paper medical record. Simpler CDMS software imports no electronic data. While such systems are useful, policy attention has focused on ambulatory care EHR software that typically includes a suite of capabilities that physicians can use in day-to-day care, including electronic viewing, documenting, prescribing, lab order entry, care reminders, and messaging, as well as the capabilities found in CDMS software (see Table 3-2). Here we focus only on EHRs.

TABLE 3-2 Systems and Capabilities in Chronic Disease Management Systems and Electronic Health Records

Chronic Disease Management Systems (for chronic/preventive care patients): Best products use electronic data from billing, scheduling, registration, and lab systems, plus some manually inputted data; the paper chart is kept.
  Patient data summaries (paper): Provide relevant data at the point of care
  Reminders (paper): Appear on the paper data summaries
  Lists of patients needing services: Permit outreach to patients overdue for tests or visits
  Provider performance reporting: Enables managers and providers to understand quality improvement performance

Electronic Health Records: Replace the paper chart; the best products also replace chronic disease management systems.
  Prescribing: Permits drug–drug/allergy interaction alerts; reduces input errors
  Lab ordering: Reduces input errors
  Documenting: Best products have templates for types of patients
  Messaging with providers: Improves provider communication
  Messaging with patients: Improves patient–provider communication; best products enable patients to view data, order prescriptions, and make appointments
  Plus the chronic disease management capabilities:
  Patient data summaries: Provide relevant data during the visit; enable customizable views
  Reminders: Typically built into documenting and ordering
  Lists of patients needing services: Permit outreach to patients overdue for tests or visits
  Provider performance reporting: Enables managers and providers to understand quality improvement performance

SOURCE: Derived from Miller et al., 2009b.

Order-of-Magnitude Cost Estimates

A rough estimate of overall “new” CIS capital spending on physician EHRs can be created by multiplying the number of active office-based physicians by the EHR capital cost per physician, subtracting EHR costs already incurred (based on an estimate of physician EHR adoption), and adding new operating and depreciation costs over time. In 2005–2006, between 310,000 and 500,000 physicians practiced in office settings (depending on the data source), while the EHR capital cost per physician was around $40,000.4 If only 10 percent of the higher estimate of office-based physicians had robust EHRs, the total EHR capital cost over 8 years would be around $20 billion, or $2.5 billion per year, plus some hardware replacement cost. Since physicians have already spent some funds on EHR capital projects and continue to spend on EHR capital expenses, “new” spending on EHRs might amount to about $15 billion over an 8-year period.

_______________

4 The lower physician estimate is based on data from E. Hing and C. Burt, Characteristics of Office-Based Physicians and Their Medical Practices: United States, 2005–2006 (Hyattsville, MD: National Center for Health Statistics, 2008), while the higher estimate is based on D. Smart, Physician Characteristics and Distribution in the U.S.: 2006 Edition (Chicago, IL: American Medical Association, 2006).

Using the same approach for office-based physician practices as for hospitals, and taking the higher office-physician estimate, an order-of-magnitude estimate for new physician EHR spending would amount to roughly $40 billion to $50 billion over 8 years, including new operating expenses—or about a 1 percent to 1.25 percent average increase in physician-services sector expenditures per year over the 8 years.
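The physician-sector arithmetic can be sketched the same way. This is a minimal sketch, assuming the higher physician estimate (500,000 office-based physicians), roughly $40,000 of EHR capital per physician, about 10 percent already equipped, and the same cumulative operating-cost rule of thumb as the hospital example; it lands near the low end of the $40 billion to $50 billion range quoted above, which also allows for hardware replacement and other costs not modeled here.

```python
# Back-of-the-envelope estimate of new physician-sector EHR spending, using the
# illustrative figures in the text. Dollar amounts are in billions unless noted.

physicians = 500_000            # higher estimate of active office-based physicians
cost_per_physician = 40_000     # approximate EHR capital cost per physician ($)
share_without_ehr = 0.90        # assume only ~10% already have robust EHRs

capital_total = physicians * cost_per_physician * share_without_ehr / 1e9  # ≈ $18B ("around $20B")

years = 8
capital_per_year = capital_total / years                                   # ≈ $2.3B/yr

# Same cumulative operating-cost rule of thumb as in the hospital-sector sketch.
operating_total = sum(0.25 * capital_per_year * y for y in range(1, years + 1))  # ≈ $20B

total_new = capital_total + operating_total            # ≈ $38B (text: roughly $40B-$50B)

physician_sector_2006 = 447                            # NHE "physician and clinical services," 2006
annual_share = total_new / years / physician_sector_2006
print(f"Capital ≈ ${capital_total:.0f}B, operating ≈ ${operating_total:.0f}B, "
      f"total ≈ ${total_new:.0f}B over {years} years (~{annual_share:.1%} of sector spending/yr)")
```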

Here, too, CIS cost over 8 years is a feasible expenditure for most physician practices, even in a worst-case scenario of no offsetting savings or increased revenues or subsidies. In fact, some evidence suggests that the financial return to office-based physicians could be substantial (even without subsidies), which could greatly reduce the net CIS expenditure figure (Miller et al., 2009a).

Prospects for New Clinical Information System Deployment

Large medical groups have been implementing EHRs at a good pace (DesRoches et al., 2008), because some groups face a favorable CIS business case, and because some large groups consider CISs a cost of doing business for reasons similar to the hospital sector. Solo/small groups (i.e., 10 physicians or fewer) were adopting EHRs more slowly, because the EHR business case was not perceived as favorable enough to physician practice owners, EHRs were disruptive and stressful to implement, and adequate technical support was typically hard to find. The pace of CIS implementation for all types of practices should increase to some extent, due to anticipated or actual patient pressure and greater reimbursement rewards for EHR-enabled performance, and (possibly) some subsidies from hospitals seeking to bind physicians to their organizations. Obviously, any new government subsidies or new support services could substantially increase EHR adoption. In the physician sector, absent any special CIS subsidies, financially weaker organizations would fall behind in CIS adoption, a special concern when some of those organizations also serve the disadvantaged and underserved.



Is the Clinical Information System Expense Worthwhile?

Again, these estimates of new CIS costs for hospitals and physicians are very rough. They are intended as order-of-magnitude estimates to put the overall potential cost in the perspective of the overall spending in the hospital and physician healthcare sectors.

Will benefits justify the substantial cost for hospital and physician EHRs? We know that simply implementing EHR capabilities doesn’t mean they will be used for EBM. Much out-of-the-box EHR software lacks easy-to-use and useful evidence-based templates with reminders and alerts, reports on patients needing services, and reports on provider performance; meanwhile, the information systems staff expertise that hospitals and large groups can tap to compensate for software limitations has been unavailable to the majority of physicians in solo and small groups. Moreover, without special performance incentives, many healthcare providers won’t use even easy-to-use and useful software since practicing EBM requires difficult changes in workflow and sometimes additional staff. Even if providers did use the software, it could take years for comparative effectiveness researchers to obtain truly comparable data from many different organizations’ practice settings. Obtaining such data requires promulgating precise definitions of measures and methods of obtaining data, but enforcing such standards would be especially difficult to achieve given the wide variation in EHR and billing software, physician documentation and data validation practices, and data from health information exchange—all of which could affect the quality of CER measures.

Increasing EHR use and especially EHR use for quality improvement will depend on a series of substantial changes in out-of-the-box EHR software, government and payer financial incentives, public performance reporting, EHR support services, and improved health information exchange. While each can contribute to increasing EHR adoption and especially use for quality improvement, appropriate financial (dis-)incentives and public reporting are the most important policy carrots and sticks that can encourage providers to practice EBM.

DATA AND INFORMATION HUB REQUIREMENTS

Carol C. Diamond, M.D., Ph.D., Managing Director,
Healthcare Program, Markle Foundation

Overview

The vision set forth by the IOM’s The Learning Healthcare System is compelling, and it has been clearly articulated in that workshop summary (IOM, 2007). In a learning health system,

  • the “best evidence for the collaborative healthcare choices of patient and provider” is generated and applied,
  • the “process of discovery [is] a natural outgrowth of patient care,” and
  • the system encourages and ensures “innovation, quality, safety, and value in health care.”

Today’s healthcare system falls very short of realizing these goals. As outlined in the workshop summary, the healthcare system continues to suffer from escalating costs, poor quality and outcomes compared with other industrialized nations, large variations and inconsistencies in the delivery of evidence-based care, and a chronic failure to apply even the currently available research and evidence base to the actual care that is delivered. While research and innovation are rapidly accelerating the development of new treatments and diagnostics, progress in using and applying evidence in healthcare decision making lags far behind.

HIT holds great promise for accelerating both the needed research and its dissemination in order to bring about a learning health system. Indeed, it is arguable that the greatest promise of HIT lies in its ability to enable networked analysis—that is, rapid learning via a networked and distributed approach to information sharing and evidence development about what works and what does not work in clinical care. To maximize this potential, it is critical to address four key challenges. The following things must be done:

  1. Clearly define the ultimate goal.
  2. Be open to resetting definitions and assumptions about health data and research approaches.
  3. Articulate new, broadly accepted working principles based on 21st-century information paradigms.
  4. Develop an information policy framework that broadly addresses public hopes and concerns.

Clearly Defining the Ultimate Goal

What is the goal? First and foremost, the goal should be to generate and use information to improve actual healthcare decision making for the many and varied participants in health care, from patients and clinicians at the point of clinical decisions to policy makers charged with creating new financing approaches to public health experts responsible for detecting emerging public health risks.

Too often clinical research efforts fall short of these critical end goals. For example, a great deal of time and money is allocated to discussing the specific technologies, data collection efforts, methodologies, and analytic protocols and standards, as well as to controlling for confounders. Rarely is the same effort or priority placed on putting “research into practice” (Woolf, 2008). All too often no one considers what it is that will have the biggest impact on the quality of everyday health decisions.

What is the alternative? Since 2002 the Markle Foundation has brought together over 100 leading experts in the fields of research, medicine, policy, and business in a public–private collaborative called Connecting for Health to tackle the thorny and practical issues involved in getting critical information into wider use throughout the healthcare system. As a way of painting a picture of the goals we are all working toward, we created a series of hypothetical future scenarios that demonstrate the power of networked analysis (Connecting for Health, n.d.a).5

In one example, we describe a physician in a small practice in the suburbs. In preparation for his visit with his next patient, Theresa, he clicks on a button and is able to access information that tells him the patient is coming in to determine whether she might need to switch to a new diabetes medication. At the same time, the information system he uses allows him to compare his own patients’ outcomes to the outcomes of his peers’ diabetic patients. He uses an information network to get the answers that he needs.

The patient in this scenario also has a respiratory infection. The literature suggests that a particular antibiotic is recommended, but taking into account recent community outbreaks and their sensitivity patterns, the system helps the physician determine that a different antibiotic will likely produce better results.

In this future scenario, the clinician’s decision making is based upon networked analysis of research evidence and local information in real time. Using these approaches, a different kind of “research” emerges. Instead of relying solely on the assembly of research data in large databases in centralized research centers where the data are analyzed over months and years by scientists outside of everyday healthcare delivery, networked information and distributed analytic tools make it possible for clinicians and patients to answer their real, practical questions in real time in order to make better decisions.

This is a very different paradigm from the one people live with today, and it is a reminder to challenge assumptions about health data and research approaches.

_______________

5 For more information see www.connectingforhealth.org/connectivity/ (accessed September 8, 2010).


Be Open to Resetting Definitions and Assumptions
About Health Data and Research Approaches

A comprehensive approach to clinical CER is inherently challenging today because of the lack of a controlled environment for assessing therapeutic options, the heterogeneity of patient characteristics, and the distributed nature of both the requests for, and the sources of, information. These challenges should encourage us to think in new terms. How can these “problems” become the jumping-off point for a new and much more distributed approach to research that is characterized by connectivity and networks and not just databases?

The current world of biomedicine is best characterized as a collection of information “islands” containing an unprecedented explosion of data, with more peer-reviewed articles published every year. The problems with the field lie not so much in the generation of new data or evidence but rather in the slow and uneven dissemination of innovation and information. While the findings may be cutting edge, the approach for sharing them and getting them into practice has been likened to “methods recognizable to Gutenberg” (Buetow, 2008).

Looking around at the environment today, it can be seen that most efforts are focused on data collection, data cleaning, and then compartmentalizing the data across a highly fragmented system. This approach is slow and costly and often falls short. Very often the data needed for a particular research question are hard to collect, of uneven quality, and ultimately poorly suited to provide the answers sought.

The current model also places a huge burden on providers because they are being asked to supply the same information to different requestors in different ways. This results in redundant repositories of information created in response to many different research questions or purposes. The model is inefficient in that it is difficult to repurpose or reuse specific pieces of data. Furthermore, large aggregate data sets create greater privacy and security risks. Most important to consider, however, is that the model lacks what is essential: connectivity and feedback loops. Only with networks to connect the fragmented knowledge base and the capacity to use feedback loops will it be possible to meet the goals of a learning health system: to get information when and where it is needed to make better decisions.

What does it look like when the gap between clinical research and clinical delivery is overcome? Childhood cancer is often used as an example of an area where the silos of clinical care and research have been connected. Clinicians and researchers are part of a unique community that has been able to use clinical data continuously to evaluate outcomes to improve protocols and treatments. The result has been a dramatic improvement in treatment and survival rates. Today 75 to 90 percent of children 10 years old and younger with cancer are in a formal protocol (NCI, 2008a). This compares to only 3 percent of adults with cancer who are in formal protocols (Murthy et al., 2004). The result has been a significant improvement in childhood cancer survival rates over the past decade and a half.

So what makes this community different? It is not that they were early adopters of HIT per se. Rather, it is that the gap between researchers and clinicians has been narrowed, both in terms of the lag between evidence creation and its use in clinical care and in terms of the blurring of the line between the roles of researcher and clinician.

But what if the lines were to be blurred even further? What would it mean if not just clinicians but also patients were involved in driving research? What if patients could bring their own very real and pressing questions and unique information about treatments, symptoms, and disease progression to networked health information? There are examples of this already happening today. The Web community PatientsLikeMe is an example of a highly evolved patient social networking site that actively conducts research (PatientsLikeMe, 2008a). According to the site’s founder, the site was “built to accelerate the transfer of knowledge about what works and what doesn’t” (PatientsLikeMe, 2008b). Its community includes more than 1,600 amyotrophic lateral sclerosis (ALS) patients, which is twice the number of patients in the largest ALS trial ever conducted. While patients openly discuss a wide range of symptoms and side effects, which may include some very personal information, the most important point is that this is a research community. The data collection is highly structured, with patients using validated tools to collect a significant amount of data. The participants on the site have come together to share their data because they believe that, by doing so, they will learn more about their conditions in return. They are involved in the first real-world, open, and nonblinded patient-driven trial of the use of lithium for ALS.

It is fascinating to see what patient involvement and activation can do for research. For instance, one of the ALS research scales in use was not appropriate for patients with a forced vital capacity below 50 percent. A patient suggested a change to the scale, which was accepted, and the scale was modified so that it could better accommodate patients with low forced vital capacity.

Observing this network and seeing its potential makes it clear how valuable it can be to begin to shift current ways of thinking about data collection and patient involvement, and it encourages an opening of the aperture on how we think about data creation and collection.


Articulate New, Broadly Accepted Working Principles
Based on 21st-Century Information Paradigms

As illustrated in the examples above, it is important to acknowledge and leverage the characteristics of the 21st-century environment in which the needs for sharing and accessing information are increasingly distributed. As consumers, physicians, and others increasingly use the Internet to create, access, and use health information, the traditional paradigms are changing dramatically.

Connecting for Health has articulated a set of first principles that can serve as a guide or set of working principles to characterize the current environment (Connecting for Health, n.d.b).6 These principles suggest that as the need for information is increasingly distributed, it follows that it should be possible to leave the data distributed as well. The professional and ancillary sources of data—or, to use a network term, the “nodes”—are becoming more sophisticated in terms of analytic capabilities. It should not always be assumed that knowledge creation starts with collecting the data from the source. New paradigms should assess the merits of pushing the question to the data across a network and only analyzing the answers centrally. It is important to see research as occurring in a connected environment where clinicians, researchers, and patients are networked and where the strict roles of researcher and clinician are necessarily blurred.

A compelling example of a distributed network being used for national and international flu surveillance is the Distribute model.7 This model uses summarized counts of influenza-like-illness (ILI) syndrome reported from existing syndromic surveillance systems. Rather than centralizing the data collection or requiring standardized definitions across data sources, the system relies on local definitions of ILI and collects only aggregate counts of ILI by age band. The result is an ability to track flu trends more quickly and cost effectively than ever before. During the 2007–2008 flu season, the total weekly volume of the 8 regions participating in the current Distribute network (about 250,000–350,000 visits per week) was comparable to the volume of total visits in the nationwide sentinel reporting system (about 200,000–400,000 visits per week) funded and operated for decades by the Centers for Disease Control and Prevention (CDC). Furthermore, the very low risk of a privacy breach has encouraged voluntary participation by both national and international participants.

This example demonstrates what can be achieved when a project focuses on only the minimum information required to inform public safety. By reducing the time and effort required to centrally collect, clean, and analyze data from multiple sources, and by reducing privacy risks that can discourage participation, a distributed model allows for information to be shared with decision makers—from local public health departments to local hospitals or school systems—more quickly and cost effectively.

_______________

6 For more information see www.connectingforhealth.com/resources/first_principles.pdf (accessed September 8, 2010).

7 See http://www.syndromic.org/index.php (accessed August 5, 2010).
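To make the “push the question to the data” idea concrete, the sketch below shows, in miniature, a Distribute-style flow: each site applies its own local definition of influenza-like illness to its own visit records and reports only aggregate counts by age band, and the hub simply sums those aggregates. The site data, age bands, and case-definition functions are hypothetical illustrations, not part of the actual Distribute system.

```python
# Minimal illustration of distributed, aggregate-only surveillance in the spirit of the
# Distribute model described above: sites keep raw visit records locally, apply their own
# local ILI definitions, and share only counts by age band. All names and data are hypothetical.
from collections import Counter
from typing import Callable, Dict, List

AGE_BANDS = ["0-4", "5-17", "18-44", "45-64", "65+"]

def age_band(age: int) -> str:
    """Map an age in years to a reporting band."""
    if age < 5: return "0-4"
    if age < 18: return "5-17"
    if age < 45: return "18-44"
    if age < 65: return "45-64"
    return "65+"

def local_ili_counts(visits: List[dict], is_ili: Callable[[dict], bool]) -> Dict[str, int]:
    """Runs locally at each site: apply the site's own ILI definition, return only aggregates."""
    counts = Counter(age_band(v["age"]) for v in visits if is_ili(v))
    return {band: counts.get(band, 0) for band in AGE_BANDS}

def hub_combine(site_reports: List[Dict[str, int]]) -> Dict[str, int]:
    """Runs at the hub: sum the aggregate counts; no patient-level data ever leaves a site."""
    total = Counter()
    for report in site_reports:
        total.update(report)
    return dict(total)

# --- hypothetical usage: two sites with deliberately different local ILI definitions ---
site_a_visits = [{"age": 7, "fever": True, "cough": True}, {"age": 30, "fever": False, "cough": True}]
site_b_visits = [{"age": 70, "fever": True, "cough": False}, {"age": 3, "fever": True, "cough": True}]

report_a = local_ili_counts(site_a_visits, lambda v: v["fever"] and v["cough"])
report_b = local_ili_counts(site_b_visits, lambda v: v["fever"])

print(hub_combine([report_a, report_b]))  # {'0-4': 1, '5-17': 1, '18-44': 0, '45-64': 0, '65+': 1}
```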

Develop an Information Policy Framework That
Broadly Addresses Public Hopes and Concerns

The purpose of this meeting is to conceive a new body or mechanism to address CER. Many suggest that government has a critical role to play in funding and establishing such a center. But to establish a powerful and distributed network to improve health and health care, the investment should focus on four things: motives, standards, methods, and rules.

The motive to share information is critical, and at present research communities still remain silos of information and data. Clearly government has an opportunity to link research funding to a requirement to participate in a more collaborative, distributed, and networked approach. The standards will determine how to make information shareable and useable without having to first collect it in one place. New methods are needed—and are emerging—for doing research in a more networked and distributed way. Examples of distributed models used in clinical, quality, and CER efforts show that networked models for handling composite data analysis can be flexible enough to address a range of research questions. For example, the Shared Pathology Information Network was a research initiative of the National Cancer Institute (NCI) designed to allow cancer researchers access to a virtual database to locate appropriate human tissue specimens across pathology laboratories and institutions (NCI, 2004). The model allows authorized researchers greater access to data from multiple institutions while preserving local control of the data by those institutions. The distributed research network is an example of a distributed model for comparative effectiveness research.8 In this model, supported by the Agency for Healthcare Research and Quality (AHRQ), multiple types of information sources, including administrative data, EHRs, inpatient data, and disease registries, are leveraged across a federated system that allows for composite data analysis without requiring the aggregation of all the raw data in a single centralized database. Lastly, the Cancer Biomedical Informatics Grid, another project of the NCI, is an attempt to create a networked system that allows multiple users from the cancer community to access large data sets in standard formats without creating a centralized database of all of the raw data (NCI, 2008b).

_______________

8 See http://sites.google.com/site/phgrid/Distributed-Research-Network (accessed August 5, 2010).
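The federated pattern described above, composite analysis without centralizing raw data, can be sketched in a few lines: each site computes summary statistics (counts, sums, and sums of squares) for its own patients on each treatment, and the coordinating center pools those summaries into overall estimates. The data model, treatment names, and statistics below are hypothetical illustrations of the general idea, not the protocol of the AHRQ distributed research network or any other specific system.

```python
# Hypothetical sketch of federated composite analysis: sites share only summary statistics
# (n, sum, sum of squares) per treatment group; the hub pools them into overall means and a
# crude treatment comparison. This illustrates the pattern, not any specific network's protocol.
from dataclasses import dataclass
from math import sqrt
from typing import Dict, List

@dataclass
class GroupSummary:
    n: int           # number of patients in the group at this site
    total: float     # sum of the outcome (e.g., change in a lab value)
    total_sq: float  # sum of squared outcomes (lets the hub recover a pooled variance)

def summarize_site(outcomes_by_treatment: Dict[str, List[float]]) -> Dict[str, GroupSummary]:
    """Runs locally at each site; raw patient-level outcomes never leave the site."""
    return {
        rx: GroupSummary(len(xs), sum(xs), sum(x * x for x in xs))
        for rx, xs in outcomes_by_treatment.items()
    }

def pool(summaries: List[Dict[str, GroupSummary]]) -> Dict[str, tuple]:
    """Runs at the hub: combine per-site summaries into pooled n, mean, and SD per treatment."""
    pooled = {}
    for site in summaries:
        for rx, s in site.items():
            n, t, tsq = pooled.get(rx, (0, 0.0, 0.0))
            pooled[rx] = (n + s.n, t + s.total, tsq + s.total_sq)
    result = {}
    for rx, (n, t, tsq) in pooled.items():
        mean = t / n
        var = max(tsq / n - mean * mean, 0.0)
        result[rx] = (n, mean, sqrt(var))
    return result

# --- hypothetical usage: two sites, two treatments ---
site_1 = summarize_site({"drug_A": [1.2, 0.8, 1.0], "drug_B": [0.5, 0.7]})
site_2 = summarize_site({"drug_A": [0.9, 1.1], "drug_B": [0.6, 0.4, 0.5]})

for rx, (n, mean, sd) in pool([site_1, site_2]).items():
    print(f"{rx}: n={n}, pooled mean={mean:.2f}, SD={sd:.2f}")
```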


Finally, and most critically, new rules are needed to promote public trust in information sharing. The Connecting for Health common framework offers a 21st-century privacy approach (Connecting for Health, 2008a). Since 2002 the Markle-led Connecting for Health collaborative has brought together key organizations from all sectors to develop a common approach to information policies. Our own efforts have demonstrated the value and importance of establishing information policies, rules, and technologies that satisfy the following three requirements:

  1. core privacy principles,
  2. sound network design, and
  3. oversight and accountability.

Core Privacy Principles

The nine core privacy principles, summarized below, are based on U.S. Fair Information Practices. Meaningful safeguards will be achieved by using both policy and technology tools to achieve the core privacy principles and by ensuring that these nine principles are applied together.

  1. Openness and transparency: Policies for information use and sharing are clearly communicated to participants.
  2. Purpose specification: The purpose of the data collection effort is clearly specified and narrowly suited to the need.
  3. Collection limitation and minimization: Only data needed for specified purposes are collected and shared.
  4. Use limitation: Data are used only for the agreed upon and stated purposes.
  5. Individual participation and control: Individuals can find out what data have been collected and who has access, exercise meaningful control over data sharing, have access to information about them, request corrections, and see audit logs.
  6. Data integrity and quality: Mechanisms are in place to ensure the data are relevant, accurate, complete, and up to date.
  7. Security safeguards and controls: Tools and mechanisms are in place to ensure that data are secured against breaches, loss, or unauthorized access and improper authentication.
  8. Accountability and oversight: Mechanisms and accountable parties are established for monitoring compliance with policies and procedures for handling a breach.
  9. Remedies: Mechanisms for handling complaints and remedies for affected parties are established in the event of a breach.

Sound Network Design

Sound network design helps ensure that information is protected while it is shared. Sound network design should do the following:

  • Incorporate technical tools that facilitate trusted use: audit, access, authorization, authentication, and accuracy.
  • Promote technological choices that limit the potential for abuse and mitigate the risks of large breaches, including distributed architecture and use of de-identified information.
  • Enable interoperability and flexibility, supporting a diversity of applications, using secure, open Web standards.
  • Support and encourage networked approaches to information sharing that are consumer accessible, including through the Internet and mobile devices.

Oversight and Accountability

HIT efforts must establish oversight and accountability, including critical governance and enforcement mechanisms. These mechanisms should do the following:

  • Include all affected in the development of approaches and policies.
  • Ensure that the framework and its attributes are adopted.
  • Include clear mechanisms of enforcement appropriate to the specific activity, such as through contractual agreements or regulatory mechanisms.
  • Designate responsibility for monitoring and oversight.

These attributes have guided the work in developing detailed policies and technology approaches for health information exchange and for services that enable consumers to access their own health information. The Connecting for Health common framework has been developed and adopted by providers, insurers, e-health companies, consumer groups, and privacy experts (Connecting for Health, n.d.c).9

The public understands the opportunity but needs to trust the system in order to fully participate. While individuals do not want their data to be misused, our most recent survey shows that three-fourths of those surveyed see the value in sharing their personal information to look for disease outbreaks or to improve information for research (Connecting for Health, 2008b). Patients won’t accept an either/or proposition: safeguard my data or use it to improve my health. They, like us, urgently want (and need) both.

_______________

9 For more information see www.connectingforhealth.org/resources/CCEndorser.pdf (accessed September 8, 2010).

INTEGRATIVE VEHICLES REQUIRED FOR EVIDENCE REVIEW AND DISSEMINATION

Lorne A. Becker, M.D., Emeritus Professor, SUNY Upstate Medical
University, and Co-Chair, Cochrane Collaboration Steering Group

Overview

To implement what the IOM terms a learning health system, in which the most effective clinical practices reflecting the best available evidence are naturally embedded in patient care, the necessary infrastructure must be established to efficiently develop and disseminate knowledge about what works best in health care. Evidence to support clinical decision making can come from multiple sources, but primary studies, whether randomized trials, cohort studies, case studies, cross-sectional studies, or studies using other designs, rarely provide adequate answers to questions of clinical effectiveness as individual pieces of evidence. Research studies comparing the effectiveness of different treatment strategies must be combined, using valid methods, into evidence syntheses that show the combined results of all relevant research on a given topic to inform healthcare decisions.

Many countries have well-established national mechanisms for producing evidence syntheses. In addition, a growing number of international collaborative efforts have been developed to work together on various components of the evidence synthesis process in a way that spreads the load, reduces duplication, and benefits all. This paper outlines some of the opportunities available for the United States in these international collaborative activities.

Types of Evidence Synthesis

Many different approaches have been taken in designing evidence syntheses—with significant variations in the methods used, their complexity, and the reproducibility of their results. Several examples exist in the United States. AHRQ conducts and supports CER through its Effective Health Care program. Part of this program involves evidence-based practice centers (EPCs) established (1) to facilitate the synthesis of knowledge from data generated by a network of research organizations and (2) to then translate that knowledge into patient-targeted information. AHRQ also sponsors the U.S. Preventive Services Task Force, a private panel of experts in primary care and prevention that reviews evidence and generates recommendations for clinical preventive services. Other federal programs include the Centers for Medicare & Medicaid Services (CMS) Medicare Evidence Development and Coverage Advisory Committee, established to conduct reviews of clinical effectiveness to advise on medical topics under evaluation at CMS; the Drug Effectiveness Review Project, a public–private collaboration reporting on the comparative effectiveness of drugs within and between drug classes; and the National Institutes of Health (NIH) Consensus Development Program conferences that convene independent panels of experts to collect information and develop consensus statements on a clinical topic selected by NIH staff. The Drug Effectiveness Review Project and the Consensus Development Program both consider the evidence reviews from EPCs in developing their reports.

In the private sector, Blue Cross and Blue Shield Association’s Technology Evaluation Center assesses clinical effectiveness for both public and private entities and is a designated EPC; the ECRI Institute, another EPC, is nonprofit and conducts cost-effectiveness analyses and technology assessments for both public and private health-sector organizations; and Hayes, Inc., is a for-profit organization that develops technology assessments for healthcare organizations and networks.

In virtually all of these U.S. examples, the focus is on relatively complex evidence syntheses, which attempt to synthesize evidence over a broad domain. By contrast, the majority of evidence syntheses produced in the United States and elsewhere are systematic reviews with a much more narrowly targeted focus. As an example, a focused systematic review might examine the effectiveness of a single intervention, such as inhaled corticosteroids (Nannini et al., 2007) or influenza vaccination (Poole et al., 2006), in preventing episodes of chronic obstructive pulmonary disease (COPD). Complex reviews, such as those produced by the organizations listed above, typically take on a much broader question that can only be addressed by synthesis of evidence about multiple interventions of different sorts. For example, the evidence synthesis prepared for the U.S. Preventive Services Task Force, in examining the question of whether asymptomatic individuals should be screened with spirometry to detect undiagnosed COPD (Lin et al., 2008), outlined eight different “key questions” addressing diverse issues that included prevalence and risk factors for COPD, accuracy of spirometry, smoking cessation rates, and the effectiveness and potential harms of a wide range of interventions for those individuals newly diagnosed as having COPD. Many of the individual key questions were themselves sufficiently complex to require a complex rather than a focused evidence synthesis—an example being the single key question addressing the effectiveness of pharmacologic treatments, oxygen therapy, or pulmonary rehabilitation in reducing morbidity and mortality. A practice guideline is typically broader still, addressing and making recommendations about screening, diagnosis of symptomatic individuals, and management at all levels of the disease (Qaseem et al., 2007).

Because of their complexity and costs, relatively few of these complex evidence syntheses are produced each year; only 167 were produced in 2006, for example (IOM, 2008). Clinical practice guidelines are produced in much larger quantities, with much less consistency in their quality or methodological rigor. Hundreds of new or revised guidelines are added each year to www.guideline.gov, with a variety of organizations involved in their production, including government agencies, professional organizations, and health systems.

Annual production of focused systematic reviews is much larger than that of the more complex syntheses. In contrast to the roughly 167 complex syntheses produced in the United States each year, it has been estimated that approximately 2,500 systematic reviews were published in 2004, and this number is growing rapidly each year (Moher et al., 2007). Authors from the United States contributed about a quarter of these systematic reviews. On a per capita basis, however, the United States contributes fewer reviews than nations such as New Zealand, Australia, the Netherlands, the United Kingdom, and Canada. This suggests both an opportunity for increased U.S. involvement and the potential for significant gains through greater international coordination.
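
To make the normalization concrete, the short sketch below works through the arithmetic. The U.S. review count is derived from the figures above (roughly one quarter of the approximately 2,500 reviews published in 2004); the population figure is approximate, and "Country X" is a purely hypothetical comparator used only to show how per capita normalization changes the comparison, not data from Moher et al. (2007).

```python
# Worked arithmetic for the per capita comparison described above.
# "Country X" and its figures are hypothetical placeholders, and the U.S.
# population figure is approximate.
reviews = {"United States": 2500 * 0.25, "Country X (hypothetical)": 60}
population_millions = {"United States": 293, "Country X (hypothetical)": 4}

for country, n in reviews.items():
    rate = n / population_millions[country]
    print(f"{country}: {n:.0f} reviews, {rate:.1f} per million population")
# Even a much smaller absolute output can translate into a higher per capita rate.
```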

In addition to their narrow focus, systematic reviews tend to be selective about the types of research that they include. Approximately 60 percent of the articles reviewed by Moher et al. (2007) included only evidence from randomized controlled trials (RCTs). A further 25 percent included data from quasi-randomized studies or non-RCTs, and only 12 percent included data from cohort or case-control studies or studies with other observational designs. The focus on randomized trials is primarily based on concerns about methodological quality and the risks of bias that are inherent in studies that employ other research designs.

Because the RCT is the study design least susceptible to bias (particularly selection bias), the methods for finding and evaluating risks of bias in systematic reviews based on RCTs have been well studied, and the best approaches to dealing with these risks have been outlined in detail (Higgins and Green, 2008). There is much less agreement on the most appropriate methods for combining results from studies with other designs—primarily because of the difficulty in assessing the probability and magnitude of bias. Oxman et al. (2006) have summarized the types of approaches used by various organizations in addressing this issue in the preparation of clinical guidelines and concluded that “as the range of study designs that are included is broadened, an increasing amount of work is required to derive decreasingly reliable estimates of the effects of interventions.”

Advantages of Focused Systematic Reviews

While focused systematic reviews are sometimes seen as less useful for decision makers because of their narrow scope and reliance on randomized trials, they have a number of advantages over more complex reviews. Because of their focus, systematic reviews are less expensive and require less effort to produce than complex evidence syntheses, which involve review and processing of larger bodies of literature and often need a larger team that includes individuals with a broader set of methodological skills and content expertise. Systematic reviews may also have an advantage over complex syntheses in their generalizability, since complex reviews are more likely to include consideration of factors such as costs, availability, and other issues that vary from setting to setting.

Because evidence syntheses of all sorts may become invalidated when new evidence appears, there is general recognition of the need for periodic updating of these documents. This is a larger problem for complex syntheses than for focused systematic reviews, for two reasons. First, because they summarize more evidence across a broader area of content, complex reviews are at higher risk that an important new study will appear that is relevant to one of the many topics they include; they are therefore likely to require updating more frequently than focused systematic reviews. Second, because of their complexity, updating is likely to be a more difficult task, since changes in the evidence for one of the subquestions of the synthesis may have important implications for other subquestions or for their combined interpretation.

Because of their narrow focus, methodological issues in systematic reviews have been subjected to much greater study than the methods for more complex syntheses. As noted above, much of this work has focused on reviews that combine results from RCTs, although methods development continues for the incorporation of other designs, particularly for questions that may not be well addressed using RCTs alone. Thus focused systematic reviews that restrict themselves to RCTs are likely to give the least biased estimates of the effects of the interventions studied and to offer the greatest ability to assess and estimate the possible effects of a variety of risks of bias. Focused reviews that extend their scope to studies other than RCTs are more susceptible to bias, and complex reviews that combine different study designs to address a variety of related questions are the most susceptible to bias and have the least developed methods.

In contrast to complex reviews, for which funding is available from a variety of sources, there is very little grant or other funding available in the United States to support the production of focused systematic reviews. The relatively large number of reviews produced despite this handicap reflects the fact that they are increasingly recognized as valid pieces of scholarly work. Systematic reviews are frequently compact enough to be submitted as journal articles; those that are published are often highly cited. Some journals are de-emphasizing the traditional narrative review article in favor of this more scientific and disciplined approach to synthesis. In addition, the recent recommendation that planning for new research studies should always begin with identification of an up-to-date systematic review, or performance of one if no such review is available (Clarke et al., 2007), may further accelerate the production of systematic reviews.

Priority Setting

Because of their complexity and costs, a prioritization process will clearly be needed to direct the efforts of those producing complex evidence syntheses, so that the limited number that can be funded address the most appropriate topics; any such process will of necessity exclude many important but lower-priority questions. However, even individuals with lower-priority conditions, or disorders requiring less expensive interventions, will be best served by clear knowledge of the available evidence about comparative effectiveness. For the many decision makers (patients, clinicians, policy makers, and others) who need evidence on questions not covered by the small available set of complex evidence syntheses, focused systematic reviews can fill important gaps. In fact, with the rapid growth in the production of systematic reviews, it is intriguing to speculate that it may someday be possible to find all relevant high-quality studies in the literature summarized using this technique. An initial analysis of the number of systematic reviews needed to synthesize the evidence from all RCTs has already been completed (Mallett and Clarke, 2003).

Thus the need for prioritization is very different when viewed from the perspective of focused systematic reviews than when considering complex evidence syntheses. For the latter, it is important to concentrate resources on a relatively small set of high-priority, large-impact questions of comparative effectiveness. For systematic reviews, however, the priority is to create a comprehensive resource that provides syntheses that cover as much as possible of the existing evidence terrain. A second priority for those producing systematic reviews needs to be the coordination of their efforts, so that individuals wishing to find a relevant evidence synthesis will be able to do so without being confused by the availability of multiple overlapping and possibly conflicting systematic reviews.

Inclusion of Focused Systematic Reviews in Complex Reviews

Another advantage of focused systematic reviews is their ability to serve as building blocks for guidelines or more complex evidence syntheses. Because they restrict themselves to narrowly focused questions and tend to exclude broader contextual issues, their results may be less tied to a specific country, population, or care setting. Results of a focused systematic review can then be combined with additional information from a variety of different sources (using complex syntheses, guidelines, or a variety of other techniques) to address broader questions—thus fitting with the strategy of “globalize the evidence, localize the decision” outlined by Eisenberg (2002).

This approach of using systematic reviews as building blocks is explicitly used by many producers of guidelines and other complex evidence syntheses. Detailed recommendations for the process of finding, evaluating, and incorporating systematic reviews have been made based on the experiences of producing complex evidence syntheses in U.S. EPCs (Whitlock et al., 2008).

A recent survey by the Appraisal of Guidelines for Research and Evaluation (AGREE) Collaboration (Burgers et al., 2003) found that most guideline developers make use of systematic reviews in their guideline development process. In their series of background papers commissioned by the World Health Organization (WHO) to improve the use of research evidence in WHO guidelines and other products, Oxman and colleagues (2006) have also noted the widespread expectation that systematic reviews should be used to inform the development of guidelines and have produced recommendations about how this should be done.

Even if an existing review does not address the exact question needed for the complex synthesis, or if it is in need of updating, the detailed specification of search strategies and results included in high-quality systematic reviews, along with their critical appraisals of the studies included in the review, can give developers of complex reviews a head start and decrease the time needed to produce their reviews.

A Combined Approach to Focused and Complex Evidence Syntheses

Given these advantages, plans to increase support for evidence syntheses in the United States should recognize the need for both components—a targeted program of complex syntheses supported in a very directive fashion, accompanied by more general efforts to build a diffuse network of skilled producers of focused systematic reviews that can be used as building blocks for guidelines and complex syntheses. A number of countries have taken this approach and have developed mechanisms to actively support the production of both types of syntheses.

As an example, in the early 1990s an explicit decision was made in the United Kingdom to set up two different types of centers to address these two functions (Sheldon and Chalmers, 1994). The National Health Service (NHS) Centre for Reviews and Dissemination was charged with either proactively commissioning or carrying out focused reviews for the NHS. The UK Cochrane Centre was given the task of participating in an international collaboration to build, maintain, and disseminate a database of focused systematic reviews of RCTs and to keep these reviews up to date. In subsequent years, a number of other countries (including Canada and Australia) have adopted a similar approach, funding activities of the Cochrane Collaboration as a mechanism for producing focused systematic reviews through an organized international effort, in addition to setting up other mechanisms for the production of more complex syntheses.

The Cochrane Collaboration

The Cochrane Collaboration is an international network with the aim of improving healthcare decision making by producing, and regularly updating, systematic reviews that synthesize the results of controlled clinical trials. Approximately half of the 16,000 people in over 90 countries who work with the Cochrane Collaboration are review authors, but many additional roles are filled by organized subgroups of the collaboration known as “entities,” of which there are four different types. Cochrane review groups provide the editorial role for Cochrane reviews and support authors in a variety of ways, including their search for evidence to include in the review; each review group focuses on a particular area of health (e.g., colorectal cancer, infectious diseases, schizophrenia, tobacco addiction). Cochrane Centres and their branches support Cochrane contributors and entities located in their geographic and linguistic area—providing coordination, training, help with translations, and networking. Methods groups are made up of individuals interested in advancing the still young and rapidly evolving science of research synthesis; these groups develop methodological standards and advise the collaboration on how the validity and precision of systematic reviews can be improved, with each group focusing on a specific area such as statistics, adverse effects, bias, or information retrieval. Networks (or “fields”) focus on dimensions of health care other than specific health problems, such as the setting of care (e.g., primary care), the type of consumer (e.g., older people), or the type of intervention (e.g., vaccines).

The majority of funding for Cochrane activities is directed to these entities—thus supporting the infrastructure required to produce reviews. Cochrane authors for the most part volunteer their time or find their own funding to support the authoring of their reviews. Funding for Cochrane entities comes from a large variety of national governments, international governmental and nongovernmental organizations, universities, hospitals, private foundations, and personal donations. The collaboration has made the decision not to accept funding from conflicted organizations, such as pharmaceutical companies, and this is clearly spelled out in an organization-wide policy limiting the use of funds from corporate sponsors. Although much of the funding comes from national government sources, or from nongovernment funders based in a single country, the funds are used to support efforts in other countries as well. For example, Cochrane review groups based in the United Kingdom each provide support for authors in multiple countries in their area of content focus.

The editorial process for Cochrane reviews is quite different from that used by medical journals. It has been designed not only to ensure high-quality systematic reviews but also to build the skills of authors, and it has thus served to increase the workforce available to perform syntheses of this type. The process involves frequent iterative interactions between authors and editors at every stage of review production, beginning with the selection of a topic. No review can begin until the title has been approved by the relevant Cochrane review group. This avoids the duplication of effort that would result from different teams unknowingly working on Cochrane reviews with overlapping topics and ensures that the planned scope of each review fits well with others in the Cochrane Library. Authors must next submit a protocol outlining in detail the approach they will take and the methods they will use for their review. Cochrane editors send this protocol to peer reviewers who have content, methodological, and statistical expertise and who provide authors with detailed feedback. Completed reviews are again sent for peer review and are often extensively edited in response to reviewers’ comments. Because “involving and supporting people of different skills and backgrounds” is one of the collaboration’s key principles, involvement of relatively inexperienced authors is not discouraged.

Because Cochrane reviews have multiple authors, experienced reviewers can be mentors for their less experienced coauthors. In some cases, this mentorship process has been formally supported. One example is the human immunodeficiency virus (HIV)/acquired immunodeficiency syndrome (AIDS) Mentoring Programme (Oliver et al., n.d.). This collaborative project of the South African Cochrane Centre and the Cochrane HIV/AIDS review group (CRG) was established in 2000, when the CRG found that few of its HIV/AIDS systematic reviews were relevant to sub-Saharan Africa and identified a need for ongoing support for first-time authors from the region. The effort has been quite successful, with 20 authors having received mentorship. Three reviews have been published in the Cochrane Library, 12 are in progress, and the initiative has now been extended to include authors from South Asia.

The primary product of the collaboration is the Cochrane Database of Systematic Reviews (CDSR).10 From a small beginning in 1995 when it contained only 36 full systematic reviews, this database has grown with each issue and now contains almost 4,000 systematic reviews covering the whole range of healthcare interventions. Initially, Cochrane reviews addressed only questions of effectiveness of interventions, using primarily evidence from controlled clinical trials. While this is still largely true, many Cochrane reviews now also include results of studies using other designs, such as controlled before/after and interrupted time series designs, for questions that have not been studied using randomized trials. Because of the international scope of the Cochrane Collaboration, the reviews cover a broad range of topical areas, with applicability for both developing and developed countries. Beginning in 2008, the database was expanded beyond its initial coverage of only systematic reviews of interventions and now includes reviews of diagnostic test accuracy (Leeflang et al., 2008) and reviews that synthesize research on issues relevant to systematic review methodology.

The Cochrane approach of producing a coordinated database of focused systematic reviews using an international collaborative process has a number of advantages and has been an effective way to build capacity for evidence synthesis in the countries participating in this effort. In addition to the mechanisms for author development noted above, the Cochrane process has contributed to capacity building by advancing systematic review methods, developing tools to help in the evidence synthesis process, and forging important partnerships with universities and other academic institutions.

Working in a Cochrane methods group provides opportunities for methodologists to further develop the methods used in systematic reviews and also provides them with a large set of reviews and protocols to serve as a substrate for their research. Detailed guidance on systematic review production from these groups has been incorporated into two Cochrane handbooks (Diagnostic Test Accuracy Working Group, 2009; Higgins and Green, 2008). The groups have also developed the Cochrane Methodology Register,11 which is continuously updated and now includes more than 11,000 citations to journal articles, book chapters, conference proceedings, conference abstracts, and reports of ongoing methodological research. The aim of the register is to include all published reports of empirical methodological studies that could be relevant for inclusion in a Cochrane methodology review, along with comparative and descriptive studies relevant to the conduct of systematic reviews of healthcare interventions.

_______________

10 Available from http://www.thecochranelibrary.com (accessed June 15, 2009).

11 Available at http://www3.cochrane.org/access_data/cmr/accessDB_cmr.asp (accessed June 16, 2009).

In order to facilitate the production of high-quality systematic reviews by a widely dispersed international group of authors, the collaboration has developed a number of tools to help authors identify studies for inclusion in their reviews, use appropriate and standardized methods in conducting their reviews, and produce their reviews in the format specified for CDSR.

The literature identification tool is the Cochrane Central Register of Controlled Trials (CENTRAL). Each Cochrane review group maintains a database of relevant studies that includes references to both published and unpublished reports. These individual study registers are assembled quarterly into CENTRAL, which is then published as part of the Cochrane Library to make it available for broader public use by health researchers and others wishing to perform evidence syntheses. Approximately two-thirds of the references in CENTRAL are derived from specially designed searches of MEDLINE and the Excerpta Medica Database (EMBASE). The remainder consist of references uncovered by authors or by organized efforts of Cochrane review groups, centers, or fields to find additional studies through activities such as handsearching of journals or conference proceedings, follow-up of references from other studies, or contact with trialists or others who may have knowledge of additional studies not included in MEDLINE or EMBASE. A recent assessment showed that searching beyond CENTRAL identified only a very small number of additional trials (Royle and Milne, 2003).
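
In outline, the quarterly assembly described here is a merge-and-deduplicate operation over many contributed registers. The sketch below is an illustration of that idea only; the field names and the crude matching rule are assumptions made for the example, not the collaboration's actual software or schema.

```python
# Simplified, hypothetical illustration of merging review-group registers into a
# single central register. The matching key (normalized title plus year) is a
# deliberately crude stand-in for real record-linkage methods.
def merge_registers(registers):
    """Merge lists of trial records, keeping one copy of each apparent duplicate."""
    merged = {}
    for register in registers:
        for record in register:
            key = (record["title"].strip().lower(), record.get("year"))
            merged.setdefault(key, record)  # keep the first copy seen
    return list(merged.values())

group_a = [{"source": "MEDLINE search", "title": "Trial of intervention A", "year": 1999}]
group_b = [{"source": "handsearch", "title": "Trial of intervention A ", "year": 1999},
           {"source": "EMBASE search", "title": "Trial of intervention B", "year": 2001}]
print(len(merge_registers([group_a, group_b])))  # 2 unique trial reports
```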

The collaboration’s authoring tool is a complex piece of software known as RevMan. The software has been continually refined over many years. It is structured to guide authors through the appropriate steps in conducting and writing up their review, and links are provided to the relevant section of the Cochrane handbooks at each step. RevMan also incorporates a number of statistical tools, developed in conjunction with the Cochrane methods groups, that allow authors to perform meta-analyses of their data in a standardized way.
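
As a concrete illustration of the kind of standardized statistics such tools provide, the sketch below carries out a conventional inverse-variance, fixed-effect meta-analysis. It illustrates the general method only, not RevMan's internals, and the study estimates are hypothetical placeholders rather than data from any real review.

```python
# Minimal sketch of inverse-variance, fixed-effect pooling, a standard approach
# implemented (alongside many others) by meta-analysis software. The study
# estimates below are hypothetical.
import math

# Each tuple: (effect estimate on the log odds ratio scale, standard error)
studies = [(-0.35, 0.20), (-0.10, 0.15), (-0.28, 0.25)]

weights = [1 / se**2 for _, se in studies]            # inverse-variance weights
pooled = sum(w * est for (est, _), w in zip(studies, weights)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))
low, high = pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se

print(f"Pooled OR: {math.exp(pooled):.2f} "
      f"(95% CI {math.exp(low):.2f} to {math.exp(high):.2f})")
```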

In addition to serving as essential infrastructure components for the collaboration, these resources have been made freely available and are now in widespread use in the production of evidence syntheses by others. For example, the majority of systematic reviews include a search of CENTRAL as one method of identifying studies, many reference the methods outlined in one of the Cochrane handbooks, and a large number are prepared using RevMan.

Many of the aims and activities of the Cochrane Collaboration fit well with the missions of universities and academic institutions. This has led to a number of fruitful collaborations. Most Cochrane editorial groups and many centers and fields are located in university settings or have close ties with specific university departments or other units. Both parties benefit from this sort of collaboration—the Cochrane entity is supported and finds colleagues and collaborators within the university, while university faculty not directly employed by the Cochrane entity become linked with an international network of individuals with specialized methodological and content expertise.

The ability of the Cochrane Collaboration to simultaneously increase the available number of systematic reviews and to improve and build the infrastructure required to support production of evidence syntheses of all types has led several countries to build support for the collaboration into their budgets. The majority of this support has been directed toward funding of Cochrane infrastructure (for example, support for Cochrane Review Groups) and has not been tied to the production of systematic reviews on specific topics. The United Kingdom has been the leader in this regard. Cochrane activities in the United Kingdom have been funded continuously since 1992, and the funders continue to regard funding the infrastructure that supports production of methodologically sound systematic reviews, on topics chosen by review authors, as an important component of their support for evidence synthesis.

The United States has provided support to some Cochrane groups as well, using a variety of funding mechanisms. One of the first Cochrane editorial groups to be formed, the Neonatal Review Group, has had funding from the National Institute of Child Health and Human Development for the support of its infrastructure since its inception. This has allowed the preparation and continuous updating of a classified bibliography of virtually all reports of randomized trials in the field of neonatology and of systematic reviews (incorporating meta-analysis) of the results of this body of research.12 This group currently has 253 completed reviews, with 65 reviews in progress. The HIV/AIDS review group has had support from the CDC Global AIDS Program (2008–present) and the National Institute of Mental Health (2007–2008) and has produced 52 reviews, with 51 reviews in progress. The Prostatic Diseases and Urologic Cancers Group has had support from the Veterans’ Administration (1998–2003) and the National Institute of Diabetes and Digestive and Kidney Diseases (2005–present) and has produced 29 reviews, with 23 in progress. The Cochrane Eyes and Vision Group (CEVG), started in 1997, has 73 completed reviews and 54 reviews in progress. The CEVG U.S. Satellite, with support from the National Eye Institute from 2002 to 2009, has produced 18 completed reviews to date and has 36 additional reviews in progress.

One of the most active of the collaboration’s Fields/Networks is the U.S.-based Complementary Medicine Field,13 which is supported by the National Center for Complementary and Alternative Medicine. Unlike Cochrane Review Groups, Fields do not have a direct editorial role in the production of Cochrane reviews but identify health issues of importance to specific populations and/or intervention types and support CRGs in their production of relevant reviews in a variety of ways. One important function of Fields is their contribution of trials to CENTRAL, and the Complementary Medicine Field has been a major contributor in this regard, with a database that includes over 7,000 reports of clinical trials. The field has had a very active role in the identification, translation, and critical appraisal of reports of complementary medicine interventions published in Chinese journals and has also assembled an organized list of all of the Cochrane reviews addressing complementary medicine interventions.

_______________

12 The Cochrane Neonatal Group. Available at http://neonatal.cochrane.org (accessed December 14, 2008).

Advantages of a Collaborative International Approach to Evidence Synthesis

The example of the Cochrane Collaboration demonstrates the many advantages of an organized international approach to the production of evidence syntheses, and the benefits it brings in terms of prioritization, methods development, and capacity development. This section explores these issues in more detail and also notes other international collaborative approaches, some of which involve more complex evidence syntheses.

Prioritization

Because of the many disparate stakeholders involved and the large variations in morbidity, resources, and other factors from country to country, it is difficult to conceptualize an international process that could adequately represent all of the relevant perspectives in selecting the relatively small number of questions to be addressed by complex evidence syntheses. However, as noted above, in the case of focused systematic reviews, prioritization is more about coverage of the entire evidence terrain and coordination of efforts to avoid duplication. Thus a model in which local actors produce focused evidence syntheses, in accord with their own local priorities but also in a coordinated and collaborative fashion, can result in a comprehensive set of syntheses that addresses multiple disparate priorities and that can serve as building blocks for more specific analyses performed at a local level.

_______________

13 Available at http://medschool.umaryland.edu/integrative/cochrane_about.asp (accessed September 8, 2010).

Methods Development

An international approach can be helpful in developing the rigorous methods needed for evidence syntheses of all sorts. The required methodological expertise is frequently not available within a single country or region, and many of the issues are sufficiently complex that only a handful of individuals around the globe have a good grasp of them and are at the leading edge in their development. In addition to the work done within the Cochrane Collaboration, international collaborative groups have advanced the science of evidence synthesis in a number of ways.

One example is the family of standards for reporting of research studies and of evidence syntheses. The complete set has been assembled by the Enhancing the Quality and Transparency of Health Research (EQUATOR) Network—an international initiative that seeks to enhance the reliability of medical research literature by promoting transparent and accurate reporting of research studies.14 The Quality of Reporting of Meta-analyses (QUOROM) guidelines, recently renamed the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA), address reporting of systematic reviews of randomized trials (Moher et al., 2000), while the Meta-Analyses and Systematic Reviews of Observational Studies guidelines address reporting of systematic reviews of observational studies (Stroup et al., 2000).

A second international collaborative activity has addressed methods for assessing the quality of evidence and the strength of evidence-based recommendations in a standardized way. Such methods have been developed by groups such as the Grading of Recommendations Assessment, Development, and Evaluation (GRADE) Working Group (Guyatt et al., 2008).

Additional Examples of International Collaboration in Evidence Synthesis

A number of groups other than the Cochrane Collaboration have now begun to organize evidence syntheses using an international collaborative model. The Joanna Briggs Institute publishes a library of systematic reviews relevant to nursing.15 The Campbell Collaboration, an international research network modelled after the Cochrane Collaboration, produces systematic reviews of the effects of social interventions in areas such as education, crime and justice, and social welfare (Campbell Collaboration, n.d.). Some groups have had a narrower content focus, such as the consortium of guideline development organizations and professional societies formed to produce joint guidelines for the management of COPD (Schünemann et al., 2009).

_______________

14 See http://www.equator-network.org/about-equator (accessed June 12, 2009).

15 See http://www.joannabriggs.edu.au/pdf/JBI_LibSR_info.pdf (accessed June 12, 2009).

In other cases the international efforts have been directed at organizing a standard format for specific components of complex evidence syntheses so as to allow portions to be shared and used in other countries or settings. Developers of health technology assessments in the European Union have taken this approach. Through a consortium organization, the European Network for Health Technology Assessment, with 63 partners from 32 countries, they are developing a “core model” that defines 9 different domains of a health technology assessment and standard elements for each domain.16 The model currently under development addresses only medical and surgical interventions, but a similar effort directed at diagnostic technologies is planned for the future. Similar work on standardization is being undertaken by the Guidelines International Network (GIN)—an international not-for-profit association of organizations and individuals involved in the development of clinical practice guidelines. GIN has defined a minimum data set that should be included in all evidence tables summarizing interventions (GIN, 2009) and is in the process of formulating a second to address evidence tables relating to diagnostic test accuracy. The aim is to have the data in these tables presented in a consistent format that would allow guideline developers to use the efforts of others in developing their own evidence tables.
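
To illustrate what a shared minimum data set for evidence tables could make possible, the sketch below defines a record with fields of the general kind such standards address. The field names and values are hypothetical assumptions for the example; they are not the actual GIN minimum data set or the EUnetHTA core model.

```python
# Hypothetical sketch of a standardized evidence-table row that one guideline
# developer could hand to another. Field names and values are illustrative only.
from dataclasses import dataclass, asdict

@dataclass
class EvidenceTableRow:
    comparison: str        # e.g., "intervention vs. comparator"
    outcome: str
    n_studies: int
    n_participants: int
    effect_estimate: str   # e.g., "RR 0.85 (95% CI 0.75 to 0.96)" -- invented value
    certainty: str         # e.g., a quality-of-evidence rating such as "moderate"
    comments: str = ""

row = EvidenceTableRow(
    comparison="intervention vs. placebo",
    outcome="disease exacerbations",
    n_studies=3,
    n_participants=1200,
    effect_estimate="RR 0.85 (95% CI 0.75 to 0.96)",
    certainty="moderate",
)
print(asdict(row))  # a consistent structure that other developers could reuse
```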

Future Directions

While the examples just listed show some beginning steps toward international integrated vehicles for evidence synthesis, there is much still to be done. Some of the additional needs include continued advancement in methods, particularly for complex evidence synthesis or syntheses involving designs other than RCTs; continued improvements in the quality of evidence syntheses; and improved coordination so as to decrease unnecessary duplication of effort.

Methods for focused systematic reviews that combine data from controlled clinical trials are well advanced. There is much less agreement, however, on the most appropriate methods for combining results from studies with other designs—primarily because of the difficulty in assessing the probability and magnitude of bias in these studies. Methods development for complex reviews and guidelines is even less advanced. As noted, international collaborative efforts have already made some beginnings in this area. On a national level, both the Centre for Reviews & Dissemination in the United Kingdom and AHRQ, through its EPCs in the United States, have made major contributions. Both have produced publications that address some of the methodological issues involved in complex syntheses (AHRQ, 2008; CRD, 2009).

_______________

16 See http://www.eunethta.net/upload/Founding%20Partners/EUnetHTA%20Collaboration%20Work%20Plan%202009_June292009_FINAL.pdf (accessed July 20, 2010).

Methods development for guidelines is also needed. In an international survey of 18 clinical guideline programs, Burgers et al. (2003) found little consistency in methods, although they did note that all respondents intended to develop their guidelines using rigorous methods. They also reported a trend toward increasing use of evidence-based methods, such as the use of electronic database searches and systematic reviews. Most guideline processes also incorporated consensus procedures of some sort. A recent literature review commissioned by the WHO to inform its guideline production process (Schünemann et al., 2006) found no experimental research or other studies comparing components of guideline methods advice. The reviewers did note, however, that many organizations that produce guidelines have a “guidelines for guidelines” document to guide their processes, and they found empirical evidence that organizations that publish their guidelines for guidelines produce more methodologically sound guidelines. The authors of the review were able to recommend a set of 19 principles for use by the WHO in guideline development.

Given the current state of methods development for evidence syntheses, it is clear that at least some of the funding for comparative effectiveness studies in the United States should be directed to promoting further advances in methodology. While this funding could take the form of increased support for existing organizations within the United States, there would be clear advantages to aligning these efforts with international groups that are performing similar work.

Avoiding Duplication of Effort

Currently many different groups perform evidence synthesis of various sorts, and do so in a relatively uncoordinated way—leading to much needless duplication of effort. Greater organization of these efforts on an international scale would be helpful. One useful first step would be the formation of a registry of systematic reviews, analogous to the registries of clinical trials currently being set up. Prospective registration would allow any individual contemplating a systematic review to determine whether a relevant review is already available or in progress, and it would also simplify the process of searching for systematic reviews. A registry could also be effective in improving the quality of systematic reviews—particularly if it included a mechanism for ensuring that review protocols are always produced. Prospective registration and publication of protocols, as done by the Cochrane Collaboration and the Joanna Briggs Institute, have been suggested as a way to reduce the possibility of selective outcome reporting bias in systematic reviews (Schünemann et al., 2006) and to address the possibility of nonpublication bias.
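
At its core, the registry described here would be a searchable collection of review protocols supporting a duplicate check before work begins. The sketch below is a hypothetical illustration of that lookup; the record fields and the word-overlap matching rule are assumptions for the example, not a description of any existing registry.

```python
# Hypothetical sketch of checking a proposed review question against a
# prospective registry. The matching rule is deliberately simple; a real
# registry would use proper search and curated topic classifications.
registry = [
    {"question": "influenza vaccination for preventing exacerbations of COPD",
     "status": "protocol registered"},
    {"question": "spirometry screening of asymptomatic adults for COPD",
     "status": "review completed"},
]

def possible_overlaps(proposed, entries, min_shared_words=4):
    proposed_words = set(proposed.lower().split())
    return [e for e in entries
            if len(proposed_words & set(e["question"].lower().split())) >= min_shared_words]

print(possible_overlaps("vaccination for preventing exacerbations of COPD", registry))
# Flags the registered influenza vaccination protocol as a likely overlap.
```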

Conclusions

In summary, a number of different private and public entities in the United States currently conduct and disseminate evidence syntheses of various types to support clinically effective medical practice, and an expansion of these efforts would be welcome and beneficial. While there is clearly a need for complex evidence syntheses to address the highest-priority topics, the number of these produced is likely to be relatively small (as at present) and to leave many gaps. These gaps can and will continue to be filled by focused systematic reviews and other evidence syntheses produced in the United States and in other countries. Clinicians, policy makers, and the public will benefit from these efforts regardless of the degree of direct U.S. involvement. However, increased participation by the United States in international collaborative efforts such as those discussed in this paper would bring a number of benefits in addition to increasing the number of high-quality evidence syntheses produced. These include additional opportunities for workforce training in the United States, as well as participation in international efforts to develop the tools, methods, and standards for evidence synthesis.

REFERENCES

AHA (American Hospital Association). 2007a. Continued progress: Hospital use of information technology. Chicago, IL: AHA.

———. 2007b. Fast facts on U.S. hospitals. Chicago, IL: AHA.

AHRQ (Agency for Healthcare Research and Quality). 2008. Effective healthcare program. Research Reviews. http://effectivehealthcare.ahrq.gov/healthInfo.cfm?infotype=rr&ProcessID=60 (accessed July 20, 2010).

Buetow, K. 2008. Presentation to AHIC. http://archive.healthit.hhs.gov/portal/server.pt/gateway/PTARGS_0_869092_0_0_18/all_materials.pdf (accessed July 20, 2010).

Burgers, J. S., R. Grol, N. S. Klazinga, M. Makela, J. Zaat, and AGREE Collaboration. 2003. Towards evidence-based clinical practice: An international survey of 18 clinical guideline programs. International Journal of Quality Health Care 15(1):31-45.

Campbell Collaboration. n.d. http://www.campbellcollaboration.org/about_us/index.php (accessed July 20, 2010).

Catlin, A., C. Cowan, M. Hartman, S. Heffler, and the National Health Expenditure Accounts Team. 2008. National health spending in 2006: A year of change for prescription drugs. Health Affairs 27(1):14-29.

CBO (Congressional Budget Office). 2008. Evidence on the costs and benefits of health information technology. http://www.cbo.gov/ftpdocs/91xx/doc9168/05-20-HealthIT.pdf (accessed July 20, 2010).

Clarke, M., S. Hopewell, and I. Chalmers. 2007. Reports of clinical trials should begin and end with up-to-date systematic reviews of other relevant evidence: A status report. Journal of the Royal Society of Medicine 100(4):187-190.

Connecting for Health. 2008a. We need a 21st-century privacy approach allowing Americans to protect and share health information to improve quality. http://www.connectingforhealth.com/resources/20080822_policy_brief.pdf (accessed July 20, 2010).

———. 2008b. Americans overwhelmingly believe electronic personal health records could improve their health. http://www.connectingforhealth.org/resources/ResearchBrief-200806. pdf (accessed July 20, 2010).

——— n.d.a Connectivity in the 21st century. http://www.connectingforhealth.org/connectivity/ (accessed July 20, 2010).

——— n.d.b Connecting for health decision-making for population health “first principles.” http://www.connectingforhealth.com/resources/first_principles.pdf (accessed July 20, 2010).

———. n.d.c Connecting for health common framework for networked personal health information: Statement of endorsement. http://www.connectingforhealth.org/resources/CCEndorser.pdf (accessed July 20, 2010).

CRD (Centre for Reviews and Dissemination). 2009. Systematic reviews: CRD’s guidance for undertaking reviews in health care. http://www.york.ac.uk/inst/crd/SysRev/!SSL!/WebHelp/SysRev3.htm (accessed July 20, 2010).

DesRoches, C. M., E. G. Campbell, S. R. Rao, K. Donelan, T. G. Ferris, A. Jha, R. Kaushal, D. E. Levy, S. Rosenbaum, A. E. Shields, and D. Blumenthal. 2008. Electronic health records in ambulatory care—A national survey of physicians. New England Journal of Medicine 359(1):50-60.

Diagnostic Test Accuracy Working Group. 2009. Handbook for DTA reviews. http://srdta.cochrane.org/en/authors.html (accessed June 12, 2009).

Eisenberg, J. M. 2002. Globalize the evidence, localize the decision: Evidence-based medicine and international diversity. Health Affairs 21(3):166-168.

GIN (Guidelines International Network). 2010. Evidence Tables Working Group. http://www.g-i-n.net/activities/etwg/members-of-the-etwg (accessed July 15, 2010).

Girosi, F., R. Meili, and R. Scoville. 2005. Extrapolating evidence of health information technology savings and costs. Santa Monica, CA: RAND Corporation.

Guyatt, G. H., A. D. Oxman, G. E. Vist, R. Kunz, Y. Falck-Ytter, P. Alonso-Coello, and H. J. Schünemann. 2008. GRADE: An emerging consensus on rating quality of evidence and strength of recommendations. British Medical Journal 336(7650):924-926.

Higgins, J. P. T., and S. Green, eds. 2008. Cochrane handbook for systematic reviews of interventions. Version 5.0.1. http://www.cochrane-handbook.org (accessed June 15, 2009).

HIMSS (Health Information Management Systems Society) Analytics. 2008. Hospital IT expenses and budgets related to clinical sophistication. Market findings from HIMSS Analytics. Chicago, IL: HIMSS.

IOM (Institute of Medicine). 2007. The learning healthcare system: Workshop summary. Washington, DC: The National Academies Press.

———. 2008. Knowing what works in health care: A roadmap for the nation. Washington, DC: The National Academies Press.

———. 2011. Digital infrastructure for the learning health system: The foundation for continuous improvement in health and health care: Workshop summary. Washington, DC: The National Academies Press.

Leeflang, M. M., J. J. Deeks, C. Gatsonis, P. M. Bossuyt, and Cochrane Diagnostic Test Accuracy Working Group. 2008. Systematic reviews of diagnostic test accuracy. Annals of Internal Medicine 149(12):889-897.

Lin, K., B. Watkins, T. Johnson, J. A. Rodriguez, and M. B. Barton; U.S. Preventive Services Task Force. 2008. Screening for chronic obstructive pulmonary disease using spirometry: Summary of the evidence for the U.S. Preventive Services Task Force. Annals of Internal Medicine 148:535-543.

Mallett, S., and M. Clarke. 2003. How many Cochrane reviews are needed to cover existing evidence on the effects of health care interventions? American College of Physicians Journal Club 139(1):A11.

MedPAC (Medicare Payment Advisory Committee). 2008. Medicare payment policy report to the Congress, March 2008. Washington, DC: MedPAC.

Miller, R. H., K. D’Amato, N. Oliva, C. E. West, and J. W. Adelson. 2009a. California’s digital divide: Clinical information systems for the haves and have-nots. Health Affairs 28(2).

———. 2009b. Barriers to financing clinical information systems in California healthcare delivery system organizations: Report to the Governor’s Health Information Technology Financing Advisory Commission. Sacramento: California Council on Science and Technology.

Moher, D., D. J. Cook, S. Eastwood, I. Olkin, D. Rennie, and D. F. Stroup. 2000. Improving the quality of reports of meta-analyses of randomised controlled trials: The QUOROM statement. British Journal of Surgery 87(11):1448-1454.

Moher, D., J. Tetzlaff, A. C. Tricco, M. Sampson, and D. G. Altman. 2007. Epidemiology and reporting characteristics of systematic reviews. PLoS Medicine 4(3):e78.

Murthy, V. H., H. M. Krumholz, and C. P. Gross. 2004. Participation in cancer clinical trials: Race-, sex-, and age-based disparities. Journal of the American Medical Association 291(22):2720-2726.

Nannini, L. J., C. J. Cates, T. J. Lasserson, and P. Poole. 2007. Combined corticosteroid and long-acting beta-agonist in one inhaler versus long-acting beta-agonists for chronic obstructive pulmonary disease. Cochrane Database of Systematic Reviews 4:CD006829.

NCI (National Cancer Institute). 2004. Shared Pathology Informatics Network (SPIN). http://spin.nci.nih.gov (accessed August 5, 2010).

———. 2008a. NCI Cancer Bulletin. http://www.cancer.gov/ncicancerbulletin/NCI_Cancer_Bulletin_031808/page4/print (accessed December 15, 2008).

———. 2008b. About caBIG. https://cabig.nci.nih.gov/overview (accessed August 5, 2010).

Oliver, J., N. Siegfried, G. Kennedy, and T. Horvath. n.d. Putting Africa first: Supporting novice authors of HIV/AIDS reviews in Africa. http://www.imbi.uni-freiburg.de/OJS/cca/index.php/cca/article/viewArticle/1334 (accessed June 12, 2009).

Oxman, A., H. Schünemann, and A. Fretheim. 2006. Improving the use of research evidence in guideline development: 7. Deciding what evidence to include. Health Research Policy and Systems 4(1):19.

PatientsLikeMe. 2008a. http://www.patientslikeme.com/ (accessed December 15, 2008).

———. 2008b. The value of openness. http://blog.patientslikeme.com/ (accessed December 15, 2008).

Poole, P. J., E. Chacko, R. W. Wood-Baker, and C. J. Cates. 2006. Influenza vaccine for patients with chronic obstructive pulmonary disease. Cochrane Database of Systematic Reviews 1:CD002733.

Qaseem, A., V. Snow, P. Shekelle, K. F. Sherif, T. J. Wilt, S. Weinberger, and D. K. Owens. 2007. Diagnosis and management of stable chronic obstructive pulmonary disease: A clinical practice guideline from the American College of Physicians. Annals of Internal Medicine 147(9):633-638.

Royle, P., and R. Milne. 2003. Literature searching for randomized controlled trials used in Cochrane reviews: Rapid versus exhaustive searches. International Journal of Technology Assessment in Health Care 19(4):591-603.

Schünemann, H., A. Fretheim, and A. Oxman. 2006. Improving the use of research evidence in guideline development: 1. Guidelines for guidelines. Health Research Policy and Systems 4(1):13.

Schünemann, H. J., M. Woodhead, A. Anzueto, S. Buist, W. Macnee, K. F. Rabe, and J. Heffner. 2009. A vision statement on guideline development for respiratory disease: The example of COPD. Lancet 373(9665):774-779.

Sheldon, T., and I. Chalmers. 1994. The UK Cochrane Centre and the NHS Centre for reviews and dissemination: Respective roles within the information systems strategy of the NHS R&D programme, coordination and principles underlying collaboration. Health Economics 3(3):201-203.

Stroup, D. F., J. A. Berlin, S. C. Morton, I. Olkin, G. D. Williamson, D. Rennie, D. Moher, B. J. Becker, T. A. Sipe, and S. B. Thacker. 2000. Meta-analysis of observational studies in epidemiology: A proposal for reporting. Meta-analysis of Observational Studies in Epidemiology (MOOSE) group. Journal of the American Medical Association 283(15):2008-2012.

Whitlock, E. P., J. S. Lin, R. Chou, P. Shekelle, and K. A. Robinson. 2008. Using existing systematic reviews in complex systematic reviews. Annals of Internal Medicine 148(10):776-782.

Woolf, S. H. 2008. The meaning of translational research and why it matters. Journal of the American Medical Association 299(2):211-213.

Suggested Citation:"3 The Information Networks Required." Institute of Medicine. 2011. Learning What Works: Infrastructure Required for Comparative Effectiveness Research: Workshop Summary. Washington, DC: The National Academies Press. doi: 10.17226/12214.
×
Page 153
Suggested Citation:"3 The Information Networks Required." Institute of Medicine. 2011. Learning What Works: Infrastructure Required for Comparative Effectiveness Research: Workshop Summary. Washington, DC: The National Academies Press. doi: 10.17226/12214.
×
Page 154
Suggested Citation:"3 The Information Networks Required." Institute of Medicine. 2011. Learning What Works: Infrastructure Required for Comparative Effectiveness Research: Workshop Summary. Washington, DC: The National Academies Press. doi: 10.17226/12214.
×
Page 155
Suggested Citation:"3 The Information Networks Required." Institute of Medicine. 2011. Learning What Works: Infrastructure Required for Comparative Effectiveness Research: Workshop Summary. Washington, DC: The National Academies Press. doi: 10.17226/12214.
×
Page 156
Suggested Citation:"3 The Information Networks Required." Institute of Medicine. 2011. Learning What Works: Infrastructure Required for Comparative Effectiveness Research: Workshop Summary. Washington, DC: The National Academies Press. doi: 10.17226/12214.
×
Page 157
Suggested Citation:"3 The Information Networks Required." Institute of Medicine. 2011. Learning What Works: Infrastructure Required for Comparative Effectiveness Research: Workshop Summary. Washington, DC: The National Academies Press. doi: 10.17226/12214.
×
Page 158
Suggested Citation:"3 The Information Networks Required." Institute of Medicine. 2011. Learning What Works: Infrastructure Required for Comparative Effectiveness Research: Workshop Summary. Washington, DC: The National Academies Press. doi: 10.17226/12214.
×
Page 159
Suggested Citation:"3 The Information Networks Required." Institute of Medicine. 2011. Learning What Works: Infrastructure Required for Comparative Effectiveness Research: Workshop Summary. Washington, DC: The National Academies Press. doi: 10.17226/12214.
×
Page 160
Suggested Citation:"3 The Information Networks Required." Institute of Medicine. 2011. Learning What Works: Infrastructure Required for Comparative Effectiveness Research: Workshop Summary. Washington, DC: The National Academies Press. doi: 10.17226/12214.
×
Page 161
Suggested Citation:"3 The Information Networks Required." Institute of Medicine. 2011. Learning What Works: Infrastructure Required for Comparative Effectiveness Research: Workshop Summary. Washington, DC: The National Academies Press. doi: 10.17226/12214.
×
Page 162
Suggested Citation:"3 The Information Networks Required." Institute of Medicine. 2011. Learning What Works: Infrastructure Required for Comparative Effectiveness Research: Workshop Summary. Washington, DC: The National Academies Press. doi: 10.17226/12214.
×
Page 163
Suggested Citation:"3 The Information Networks Required." Institute of Medicine. 2011. Learning What Works: Infrastructure Required for Comparative Effectiveness Research: Workshop Summary. Washington, DC: The National Academies Press. doi: 10.17226/12214.
×
Page 164
Suggested Citation:"3 The Information Networks Required." Institute of Medicine. 2011. Learning What Works: Infrastructure Required for Comparative Effectiveness Research: Workshop Summary. Washington, DC: The National Academies Press. doi: 10.17226/12214.
×
Page 165
Suggested Citation:"3 The Information Networks Required." Institute of Medicine. 2011. Learning What Works: Infrastructure Required for Comparative Effectiveness Research: Workshop Summary. Washington, DC: The National Academies Press. doi: 10.17226/12214.
×
Page 166
Suggested Citation:"3 The Information Networks Required." Institute of Medicine. 2011. Learning What Works: Infrastructure Required for Comparative Effectiveness Research: Workshop Summary. Washington, DC: The National Academies Press. doi: 10.17226/12214.
×
Page 167
Suggested Citation:"3 The Information Networks Required." Institute of Medicine. 2011. Learning What Works: Infrastructure Required for Comparative Effectiveness Research: Workshop Summary. Washington, DC: The National Academies Press. doi: 10.17226/12214.
×
Page 168
Suggested Citation:"3 The Information Networks Required." Institute of Medicine. 2011. Learning What Works: Infrastructure Required for Comparative Effectiveness Research: Workshop Summary. Washington, DC: The National Academies Press. doi: 10.17226/12214.
×
Page 169
Suggested Citation:"3 The Information Networks Required." Institute of Medicine. 2011. Learning What Works: Infrastructure Required for Comparative Effectiveness Research: Workshop Summary. Washington, DC: The National Academies Press. doi: 10.17226/12214.
×
Page 170
Suggested Citation:"3 The Information Networks Required." Institute of Medicine. 2011. Learning What Works: Infrastructure Required for Comparative Effectiveness Research: Workshop Summary. Washington, DC: The National Academies Press. doi: 10.17226/12214.
×
Page 171
Suggested Citation:"3 The Information Networks Required." Institute of Medicine. 2011. Learning What Works: Infrastructure Required for Comparative Effectiveness Research: Workshop Summary. Washington, DC: The National Academies Press. doi: 10.17226/12214.
×
Page 172
Suggested Citation:"3 The Information Networks Required." Institute of Medicine. 2011. Learning What Works: Infrastructure Required for Comparative Effectiveness Research: Workshop Summary. Washington, DC: The National Academies Press. doi: 10.17226/12214.
×
Page 173
Suggested Citation:"3 The Information Networks Required." Institute of Medicine. 2011. Learning What Works: Infrastructure Required for Comparative Effectiveness Research: Workshop Summary. Washington, DC: The National Academies Press. doi: 10.17226/12214.
×
Page 174
Suggested Citation:"3 The Information Networks Required." Institute of Medicine. 2011. Learning What Works: Infrastructure Required for Comparative Effectiveness Research: Workshop Summary. Washington, DC: The National Academies Press. doi: 10.17226/12214.
×
Page 175
Suggested Citation:"3 The Information Networks Required." Institute of Medicine. 2011. Learning What Works: Infrastructure Required for Comparative Effectiveness Research: Workshop Summary. Washington, DC: The National Academies Press. doi: 10.17226/12214.
×
Page 176
Suggested Citation:"3 The Information Networks Required." Institute of Medicine. 2011. Learning What Works: Infrastructure Required for Comparative Effectiveness Research: Workshop Summary. Washington, DC: The National Academies Press. doi: 10.17226/12214.
×
Page 177
Suggested Citation:"3 The Information Networks Required." Institute of Medicine. 2011. Learning What Works: Infrastructure Required for Comparative Effectiveness Research: Workshop Summary. Washington, DC: The National Academies Press. doi: 10.17226/12214.
×
Page 178
Suggested Citation:"3 The Information Networks Required." Institute of Medicine. 2011. Learning What Works: Infrastructure Required for Comparative Effectiveness Research: Workshop Summary. Washington, DC: The National Academies Press. doi: 10.17226/12214.
×
Page 179
Suggested Citation:"3 The Information Networks Required." Institute of Medicine. 2011. Learning What Works: Infrastructure Required for Comparative Effectiveness Research: Workshop Summary. Washington, DC: The National Academies Press. doi: 10.17226/12214.
×
Page 180
Suggested Citation:"3 The Information Networks Required." Institute of Medicine. 2011. Learning What Works: Infrastructure Required for Comparative Effectiveness Research: Workshop Summary. Washington, DC: The National Academies Press. doi: 10.17226/12214.
×
Page 181
Suggested Citation:"3 The Information Networks Required." Institute of Medicine. 2011. Learning What Works: Infrastructure Required for Comparative Effectiveness Research: Workshop Summary. Washington, DC: The National Academies Press. doi: 10.17226/12214.
×
Page 182
Suggested Citation:"3 The Information Networks Required." Institute of Medicine. 2011. Learning What Works: Infrastructure Required for Comparative Effectiveness Research: Workshop Summary. Washington, DC: The National Academies Press. doi: 10.17226/12214.
×
Page 183
Suggested Citation:"3 The Information Networks Required." Institute of Medicine. 2011. Learning What Works: Infrastructure Required for Comparative Effectiveness Research: Workshop Summary. Washington, DC: The National Academies Press. doi: 10.17226/12214.
×
Page 184
Suggested Citation:"3 The Information Networks Required." Institute of Medicine. 2011. Learning What Works: Infrastructure Required for Comparative Effectiveness Research: Workshop Summary. Washington, DC: The National Academies Press. doi: 10.17226/12214.
×
Page 185
Suggested Citation:"3 The Information Networks Required." Institute of Medicine. 2011. Learning What Works: Infrastructure Required for Comparative Effectiveness Research: Workshop Summary. Washington, DC: The National Academies Press. doi: 10.17226/12214.
×
Page 186
Suggested Citation:"3 The Information Networks Required." Institute of Medicine. 2011. Learning What Works: Infrastructure Required for Comparative Effectiveness Research: Workshop Summary. Washington, DC: The National Academies Press. doi: 10.17226/12214.
×
Page 187
Suggested Citation:"3 The Information Networks Required." Institute of Medicine. 2011. Learning What Works: Infrastructure Required for Comparative Effectiveness Research: Workshop Summary. Washington, DC: The National Academies Press. doi: 10.17226/12214.
×
Page 188
Suggested Citation:"3 The Information Networks Required." Institute of Medicine. 2011. Learning What Works: Infrastructure Required for Comparative Effectiveness Research: Workshop Summary. Washington, DC: The National Academies Press. doi: 10.17226/12214.
×
Page 189
Suggested Citation:"3 The Information Networks Required." Institute of Medicine. 2011. Learning What Works: Infrastructure Required for Comparative Effectiveness Research: Workshop Summary. Washington, DC: The National Academies Press. doi: 10.17226/12214.
×
Page 190
Next: 4 The Talent Required »
Learning What Works: Infrastructure Required for Comparative Effectiveness Research: Workshop Summary Get This Book
×
Buy Paperback | $75.00 Buy Ebook | $59.99
MyNAP members save 10% online.
Login or Register to save!
Download Free PDF

It is essential for patients and clinicians to have the resources needed to make informed, collaborative care decisions. Despite this need, only a small fraction of health-related expenditures in the United States has been devoted to comparative effectiveness research (CER). To improve the effectiveness and value of the care delivered, the nation needs to build its capacity for ongoing study and monitoring of the relative effectiveness of clinical interventions and care processes through expanded trials and studies, systematic reviews, innovative research strategies, and clinical registries, as well as to improve its ability to apply what is learned from such study through the translation and provision of information and decision support.

As part of its Learning Health System series of workshops, the Institute of Medicine's (IOM's) Roundtable on Value & Science-Driven Health Care hosted a workshop to discuss capacity priorities for building the evidence base necessary for care that is more effective and delivers higher value for patients. Learning What Works summarizes the proceedings of the seventh workshop in the Learning Health System series, which focused on the infrastructure needs (methods, coordination capacities, data resources and linkages, and workforce) for developing an expanded and efficient national capacity for CER. Learning What Works also assesses the current and needed capacity to expand and improve this work, and identifies priority next steps.

Learning What Works is a valuable resource for health care professionals, as well as health care policy makers.
