
4


The Talent Required

INTRODUCTION

Most of the activities integral to comparative effectiveness research (CER) have been conducted on a small scale over the past several decades; yet meeting an increased demand for CER and efficiently translating and applying CER findings will require more than simply expanding existing programs and infrastructure. In addition to incorporating new structures, systems, and elements of health information technology (HIT) into current practice, innovative approaches will be needed to drive improvements in both research and practice. Work will be increasingly interdisciplinary—requiring coordination and cooperation across professions and healthcare sectors. One of the key themes of the workshop discussion was the need for increased funding and support for training a workforce to meet the unique needs of developing and applying comparative effectiveness information.

Papers in this chapter were presented in draft form at the workshop to begin to characterize the workforce needs for the emerging discipline of CER.1

_______________

1 Comments of workshop reactor panel participants guided the development of the manuscript by Hersh and colleagues presented in this chapter. Sector perspective panelists included Jean Paul Gagnon (sanofi-aventis), Bruce H. Hamory (Geisinger Health System), Steve E. Phurrough (Centers for Medicare & Medicaid Services), and Robert J. Temple (Food and Drug Administration). Panelists commenting on training and education needs included Eric B. Bass (Johns Hopkins University), Timothy S. Carey (University of North Carolina at Chapel Hill), Don E. Detmer (American Medical Informatics Association), David H. Hickam (Eisenberg Center), and Richard N. Shiffman (Yale University).


William R. Hersh and colleagues explore the heterogeneous set of activities that contribute to the field of CER and define key workforce components and related training requirements. CER will draw its workforce from a variety of backgrounds—clinical medicine, clinical epidemiology, biomedical informatics, biostatistics, and health policy—and settings, including academic units, university centers, contract research organizations, government, and industry. A key challenge will be developing programs to foster interdisciplinary and cross-sector approaches.

To provide an example of how different workforce elements might best be organized and engaged in a system focused on developing and applying clinical effectiveness information, Sean R. Tunis and colleagues present an overview of a program for assessing health interventions in Ontario, Canada. A direct link between decision makers and CER entities facilitates research timeliness and a clear focus on the information needs of decision makers. Ontario’s experience provides insights into how the United States might best expand CER capacity, offers a model for developing an integrated workforce that addresses important organizational and funding issues, and suggests some possible efficiencies to be gained through international cooperation.

COMPARATIVE EFFECTIVENESS WORKFORCE—FRAMEWORK AND ASSESSMENT

William R. Hersh, M.D., Oregon Health and Science University; Timothy S. Carey, M.D., M.P.H., University of North Carolina; Thomas Ricketts, Ph.D., University of North Carolina; Mark Helfand, M.D., M.P.H., Oregon Health and Science University; Nicole Floyd, M.P.H., Oregon Health and Science University; Richard N. Shiffman, M.D., M.C.I.S., Yale University; David H. Hickam, M.D., M.P.H., Oregon Health and Science University2

Overview

There have been increasing calls for a better understanding of “what works” in health care (IOM, 2008), driven by a system that allows for learning and improvement based on such an understanding (IOM, 2007).

_______________

2 We thank the following individuals who provided comments, critiques, and additions to early versions of this report: Mark Doescher, M.D., M.P.H., University of Washington; Erin Holve, Ph.D., AcademyHealth; Marian McDonagh, Pharm.D., Oregon Health & Science University; Lloyd Michener, M.D., Duke University; Cynthia Morris, Ph.D., Oregon Health & Science University; LeighAnne Olsen, Ph.D., Institute of Medicine; Robert Reynolds, Sc.D., Pfizer Corp.; Robert Schuff, M.S., Oregon Health & Science University; Carol Simon, Ph.D., The Lewin Group; Brian Strom, M.D., M.P.H., University of Pennsylvania; Jonathan Weiner, Dr.P.H., Johns Hopkins University.


One of the means for assessing what works is CER. The AcademyHealth Methods Council defines CER as “research studies that compare one or more diagnostic or treatment options to evaluate effectiveness, safety, or outcomes” (AcademyHealth, 2008). The goals of this report are to define the many components of CER, to explore the training requirements for a CER workforce, and to provide a framework for developing a strategy for future workforce development.

The objective of CER is to provide a sustainable, replicable approach to identifying effective clinical services (IOM, 2008). However, although the term CER is widely used, there is no consensus on how best to achieve this objective, and there is little understanding of the challenges involved in meeting it. There is, for example, wide disagreement about the importance of its different components. The Institute of Medicine (IOM) committee on “knowing what works in health care” emphasizes the central role of comparative effectiveness reviews as a critical linkage between evidence-based medicine (EBM) and practice guidelines, coverage decision making, clinical practice, and health policy (IOM, 2008), whereas Tunis views CER knowledge as deriving from practical clinical trials that compare interventions head to head in real clinical settings (Tunis, 2007). The IOM Roundtable on Value & Science-Driven Health Care expands the notion of CER to include other forms of learning about health care (IOM, 2007), such as the growing amount of data derived from secondary sources, including electronic health record (EHR) systems, which feeds other analyses, such as health services research (HSR). This knowledge in turn drives the development and implementation of clinical practice guidelines, informs benefits coverage decisions, and allows the general dissemination of knowledge to practitioners, policy makers, and patients. The ideal learning health system will feed knowledge from these activities back to inform continued CER.

While some organizations take an optimistic view of the benefits that CER can bring to improving the quality and cost-effectiveness of health care (Swirsky and Cook, 2008), others sound a more cautionary note. CER will not occur without political and economic ramifications. For example, the Congressional Budget Office notes that CER might lower the cost of health care, but only if it is accompanied by changes in the incentives that lead providers and patients to use new, more expensive technologies even when those technologies have not been proven better than less expensive ones (Ellis et al., 2007). A report from the Biotechnology Industry Organization raises concerns that population-based studies may obscure benefits to individual patients or groups and that, even in the absence of statistically significant differences among interventions, some individuals may benefit more from some treatments than others (Buckley, 2007).


Finally, many argue that CER could turn out to be ineffective unless it is funded and conducted independently of the federal executive branch by a dedicated new entity (Emanuel et al., 2007; Kirschner et al., 2008; Wilensky, 2006).

In the United States, a clear leader in CER has been the Agency for Healthcare Research and Quality (AHRQ). The AHRQ research portfolio includes evidence-based practice centers (EPCs) (Helfand et al., 2005), which perform comparative effectiveness reviews—that is, syntheses of existing research on the effectiveness, comparative effectiveness, and comparative harms of different healthcare interventions (Slutsky et al., 2010). The work of the EPCs feeds AHRQ’s Effective Health Care Program,3 which also supports original CER through the Developing Evidence to Inform Decisions about Effectiveness (DEcIDE) network and dissemination through the John M. Eisenberg Clinical Decisions and Communications Science Center (Eisenberg Center). AHRQ has also made a substantial investment in funding HIT projects to improve the quality and safety of healthcare delivery. The agency also funds health services research as well as pre- and postdoctoral training and career development (K awards) in all of these areas.

Another potential venue for increased CER is the effort by the National Institutes of Health (NIH) to promote clinical and translational research (Zerhouni, 2007). While many think of clinical and translational research as “bench to bedside” (i.e., moving tests and treatments from the lab into the clinical setting), the NIH and others have taken a broader view. With the traditional bench-to-bedside translational research labeled as “T1,” other types of translation are defined as well, such as “T2” (assessing the effectiveness of care shown to be efficacious in controlled settings, or bedside to population) and “T3” (delivering care with quality and accountability) (Woolf, 2008). NIH has sponsored many trials that qualify as CER, and although this type of research is not a primary focus for the agency, the training needed to conduct CER overlaps that needed for T2 and T3 translation. Thus the Clinical and Translational Science Awards (CTSA) initiative greatly expands the clinical research training needed to conduct CER.4 As CER absorbs researchers and staff, however, it may also compete with other types of research programs in T1 and some T2 areas. Over the past 3 years, the NIH has awarded funding to 38 CTSA centers, with a goal of an eventual steady state of 60 centers. These centers aim to speed the translation of research from the laboratory to clinical implementation and to the community.

_______________

3 For more information, see http://effectivehealthcare.ahrq.gov (accessed September 8, 2010).

4 Since this paper was originally authored, the American Recovery and Reinvestment Act of 2009 provided $1.1 billion in funds for activities related to CER—including $400 million to the Office of the Secretary of the Department of Health and Human Services, $300 million to AHRQ, and $400 million to the NIH.


The work of CER, which examines the effectiveness of treatments in real-world settings, including watching for harms to patients with multiple comorbidities, is highly relevant to the CTSA initiative.

One challenge for CER is that it currently exists as a heterogeneous field rather than a specific discipline. While this heterogeneity is probably appropriate to the status of CER as an emerging field of study and effort, it also makes planning for its workforce needs challenging. Investigators and staff in CER come from many backgrounds, including clinical medicine, clinical epidemiology, biomedical informatics, biostatistics, and health policy. They work in a number of settings, including academic units, university centers, contract research organizations, government, and industry. It is not known how well the current workforce could absorb a marked increase in demand for CER activities. Finally, there is no specific entity that funds CER, despite calls for one to be established (Wilensky, 2006).

Nonetheless, a variety of stakeholders must have access to the best comparative information about medical tests and treatments (Drummond et al., 2008). Physicians need to be able to assess the benefits and harms of various clinical decisions for their patients, who are themselves becoming increasingly involved in decision making. Likewise, policy makers must weigh the evidence for and against coverage of increasingly expensive technologies, especially when marginal costs vastly exceed marginal benefits.

Therefore this report was approached with the assumption that CER should be encouraged as part of the larger learning health system. The authors of this report, leaders with expertise in the major known areas of CER, were recruited to define the scope of CER, answer a set of questions concerning the workforce, and work together to develop a framework and a plan for future work. The first task was to achieve consensus among ourselves on defining the components of CER. The next task was to develop a framework for enumerating the workforce and to propose an agenda for defining its required size, skill set, and educational requirements. A draft of this report was presented at the workshop, held July 30–31, 2008. A reactor panel provided initial feedback, and subsequently more experts were contacted, all of whom are listed in the footnotes to this chapter. This led to finalization of the framework and agenda for further research and policy making related to the CER workforce.

Framework for Comparative Effectiveness Research Workforce Characterization

The scope of CER was defined by developing a figure that depicts the subareas of CER, organized around the flow of information and knowledge.


Next, a preliminary model was developed for how workforce needs might be quantified. The knowledge and challenges in each area were elaborated, followed by a discussion of the issues that will arise with efforts to expand the scope and capacity of CER.

As illustrated in Figure 4-1, information and knowledge originate from clinical trials and other clinical research studies, particularly studies using registries, EHR data, and practice network data, as well as pharmacoepidemiologic studies. This information is synthesized in comparative effectiveness reviews and technology assessments, sometimes including meta-analyses, decision analyses, or economic analyses, which inform the development of evidence-based clinical guidelines and decisions about coverage. HSR evaluates the optimal delivery of care and the societal health and economic effects of the corresponding changes in the health system. Finally, the information and knowledge are disseminated to both patients and professionals. Each of these components cycles back to its predecessors, and the continuously learning health system maintains a constant interaction among them.

It was also recognized that there are many areas of overlap among the components. For example, experts in biomedical informatics can work synergistically with clinical epidemiologists to determine data requirements and information needs for CER studies. Likewise, clinical guideline developers and implementers can collaborate with health services researchers in technology assessment.

Characterization of Specific Components of the Workforce

The next task was to develop a framework for enumerating the workforce and to make some estimates of its necessary size. Each author was assigned one of the major components of Figure 4-1 and asked to address the workforce needs in that particular area, taking into account the following questions:

  1. What are the issues and problems for the workforce at present?
  2. What skill set is needed to address current issues and problems?
  3. Where are these skills currently developed or obtained?
  4. What will be the projected needs as CER scales up in healthcare settings? Do we need more people? Do we need to further develop current capacity? What are the training needs?
  5. What are the recommendations for assessing and measuring the needs for the current and future workforce?

FIGURE 4-1 Key activity domains for comparative effectiveness research. Workforce development will be critical to support the many primary functions within each of these domains as well as to foster the cross-domain interactions and activities identified (e.g., methods development, identifying information needs).

Clinical Epidemiology


A core concept underlying CER is that there is a continuum that begins with research evidence, then moves to systematic review of the overall body of evidence, and then to interpretation of the strength of the overall evidence, which can be used to develop credible clinical practice guidelines (IOM, 2008). While they overlap with those of other disciplines, the skills required to conduct CER are not widely taught. This section focuses on the four types of research involved in CER analyses as well as the personnel needed to conduct those analyses: (1) practical clinical trials and conventional clinical research, (2) systematic evidence reviews and technology assessment, (3) pharmacoepidemiologic research, and (4) clinical epidemiology methods research.

Practical Clinical Trials and Conventional Clinical Research

A wide variety of studies are useful in CER (Chou et al., 2010). Most would agree, however, that increasing the amount of CER will require expanding the capability for conducting practical, head-to-head “effectiveness” trials. Such trials are distinct from the so-called efficacy or explanatory clinical trials performed in the regulatory approval process. Explanatory trials, which focus on comparison with placebo treatments in highly selected subjects, are a necessary step in evaluating new therapies, but they are usually not an adequate guide for clinical practice. It can be difficult to determine from such trials—and from the systematic reviews that aggregate them—what the “best” treatments are. In contrast, effectiveness trials, such as practical clinical trials, compare treatments head to head in settings representative of real-world clinical practice. The characteristics that distinguish effectiveness from explanatory (efficacy) studies are listed in Box 4-1 (Gartlehner et al., 2006).


BOX 4-1
Characteristics Distinguishing Effectiveness from Explanatory Studies

  1. Populations in primary care or general population
  2. Less stringent eligibility criteria
  3. Health outcomes
  4. Long study duration; clinically relevant treatment modalities
  5. Assessment of adverse events
  6. Adequate sample size

SOURCE: Gartlehner et al. (2006).


Tunis and colleagues note a number of disincentives to performing head-to-head comparisons of treatments, such as the disease-oriented nature of the NIH and the commercial motivations of pharmaceutical and other companies (Tunis et al., 2003). Indeed, few such trials have been performed. In a recent survey, Luce and colleagues were able to identify fewer than 20 such trials in the literature (Luce et al., 2008). A frequently stated goal is for the number of effectiveness trials performed each year to grow to 50. As discussed below, accomplishing this goal will require methodological advances in designing and conducting studies as well as training programs devoted to this new type of clinical trial research.

Because so few effectiveness trials have been performed, training in how to design and conduct them is not widely available. While there is overlap, the expertise and the team composition required for practical clinical trials differ from what is required for smaller efficacy trials. For example, practical clinical trials will need to use streamlined, more efficient procedures for recruitment and monitoring than large efficacy trials use (Califf, 2006). They should take advantage, for instance, of Web-based tools for trial management and the potential for using EHR systems to identify, recruit, and allocate subjects to treatment arms within and across health systems (Bastian, 2005; Langston et al., 2005; Reboussin and Espeland, 2005). They also need to develop methods for involving consumers and, for trials conducted in practice networks, office-based clinicians in the design and conduct of trials. Finally, some practical trials require specialized statistical skills (Berry, 2006).
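To make the EHR-based identification and allocation idea concrete, here is a minimal, hypothetical Python sketch that screens EHR-derived records against deliberately broad eligibility criteria and randomizes eligible patients to arms. The record fields, criteria, and arm names are invented for illustration; a real implementation would work against an actual EHR data model and a proper randomization service.

```python
# Hypothetical sketch: EHR-driven screening and allocation for a practical
# trial. All field names, criteria, and arm labels are invented.
import random

patients = [
    {"id": 1, "age": 67, "diagnosis": "type 2 diabetes", "a1c": 8.4},
    {"id": 2, "age": 45, "diagnosis": "type 2 diabetes", "a1c": 6.9},
    {"id": 3, "age": 71, "diagnosis": "hypertension",    "a1c": 5.6},
]

def eligible(patient):
    # Deliberately broad criteria, in the spirit of a practical trial.
    return patient["diagnosis"] == "type 2 diabetes" and patient["a1c"] >= 7.0

rng = random.Random(42)  # fixed seed so the example allocation is reproducible
for patient in filter(eligible, patients):
    arm = rng.choice(["treatment A", "treatment B"])
    print(f"patient {patient['id']} -> {arm}")
```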

Comparative Effectiveness Reviews and Technology Assessments

Comparative effectiveness reviews are a cornerstone of evidence-based decision making (Helfand, 2005). These reviews follow the explicit principles of systematic reviews, but they are more comprehensive and multidisciplinary, requiring a wider range of expertise. As noted in the EPC Guide to Conducting Comparative Effectiveness Reviews, comparative effectiveness reviews “expand the scope of a typical systematic review, which focuses on the effectiveness of a single intervention, by comparing the relative benefits and harms among a range of available treatments or interventions for a given condition. In doing so, [comparative effectiveness reviews] more closely parallel the decisions facing clinicians, patients, and policy makers, who must choose among a variety of alternatives in making diagnostic, treatment, and healthcare delivery decisions” (Methods Reference Guide, 2008). While some technology assessments are similar in scope to a comparative effectiveness review, most are smaller, more focused reviews that require a narrower range of expertise.



Within the emerging, somewhat poorly defined field of CER, conducting comparative effectiveness reviews and technology assessments is the most developed component. In contrast with other components of CER, guiding principles and explicit guidance for the conduct of comparative effectiveness reviews are available and widely used. Examples include guidance tools from the UK National Institute for Health and Clinical Excellence (NICE)5 and the recently released EPC Guide (Methods Reference Guide, 2008).

The underlying disciplines for conducting CER are clinical epidemiology and clinical medicine. Individual comparative effectiveness reviews are usually conducted by project teams led by a project principal investigator under the oversight of a center director. The center director must have exceptional, in-depth disciplinary knowledge and skills in the underlying core disciplines of clinical epidemiology, clinical medicine, and medical decision making. The director should also have applied experience in addition to theoretical knowledge of these areas. For example, it is essential that the director have experience working with guideline panels, coverage committees, health plans, consumer groups, and other bodies that use evidence in decision making. Without such leadership, comparative effectiveness reviews may miss the mark, failing to address the information needs of the target audiences.

It is also important that the director, or other senior investigators, have experience conducting clinical research studies, not just appraising them. Qualifications for center directors generally include an M.D. degree with additional training leading to a master’s degree plus a record of academic productivity representing outstanding contributions in a field such as clinical research design, literature synthesis, statistics, pharmacoepidemiology, or medical decision making. The most important competencies of the project leader are an understanding of clinical research study designs and clinical decision making. Collectively, the project leader and other investigators and staff must have expertise in various areas, such as interviewing experts (including patients) to identify important questions for the review to address, protocol development, project management, literature retrieval and searching, formal methods to assess the quality and applicability of studies, critical appraisal of studies, quantitative synthesis, and medical writing.

This workforce can be characterized based on the experience of the AHRQ EPCs. Through the Effective Health Care Program, the EPCs have completed 15 comparative effectiveness reviews over a period of approximately 3 years. The average cost of an AHRQ comparative effectiveness review is $250,000 to $350,000, depending on its complexity. In these centers, investigators usually have a Ph.D. in epidemiology, pharmacoepidemiology, or biostatistics, or an M.D. with research fellowship training and a master’s degree in a pertinent field.

_______________

5 See http://www.nice.org.uk/ (accessed September 8, 2010).


Ideally, all participants should have experience conducting systematic reviews and an understanding of methodological research in systematic reviews, clinical epidemiology, meta-analysis, or cost-effectiveness analysis. Most importantly, they should have the ability to work with healthcare decision makers who need information to make more informed decisions; they should be able to formulate problems carefully, often working with technical experts (including patients and clinicians) to develop an analytic framework and key questions addressing the uncertainties that underlie controversy or variation in practice; they should have a broad view of eligible evidence, one that recognizes that the kinds of evidence included in a review depend on the kinds of questions asked and on what kinds of evidence are available to answer them; and they should understand that while systematic reviews do not in themselves dictate decisions, they can play a valuable role in helping decision makers clarify what is known and what is unknown about the issues surrounding important decisions and, in that way, affect both policy and clinical practice (Helfand, 2005).

Also required for systematic reviews are research librarians skilled in finding evidence using electronic bibliographic databases, citation-tracking resources, regulatory agency data repositories, practice guidelines, unpublished scientific research, Web sites and proprietary databases, bibliographic reviews, expert referrals, and publications of meeting proceedings, as well as by hand-searching key journals. Statisticians are needed who can provide advice and critique on the statistical methods used in published and unpublished clinical studies; conduct statistical analyses, including meta-analysis and other standard analysis and computation; and prepare statistical reports, including figures and tables. EPCs also require editors who can improve the readability and standardization of evidence reports. In addition, EPCs require research support staff. Research associates must have the ability to critically assess the effectiveness and safety of medical interventions; experience with systematic reviews of the medical literature; knowledge of the fundamentals of epidemiology, study design, and biostatistics; facility in conceptualizing and structuring tasks; and experience with clinical research methods. Research assistants need skills in maintaining bibliographies; coordinating peer review contacts and documents; and assisting in the development of summary reports, figures, tables, and final reports using particular style guidelines. Table 4-1 shows the typical staffing for a CER evidence report funded by AHRQ for a 1-year period.

Although the number of systematic reviews that is necessary may be among the easier of the “how much” questions to ask, there is no clear answer.

_______________

6 See http://www.cochrane.org/ (accessed September 8, 2010).


TABLE 4-1 Required Staffing for a Comparative Effectiveness Research Evidence Report

| Role | Activity | Training | Full-Time Equivalent |
| --- | --- | --- | --- |
| Center director | Leadership | Clinical epidemiology, clinical medicine, decision making | 0.05–0.3 |
| Principal investigator | Leadership | Clinical epidemiology and clinical medicine | 0.4 |
| Co-investigator | Domain expertise | Clinical | 0.2 |
| Co-investigators | Methods expertise | Clinical + fellowship, master’s, or Ph.D. | 0.4–0.6, depending on scope |
| Research associate | Critical appraisal | M.S./M.P.H./other master’s | 1.0 |
| Research assistant | Data management | B.S. or M.S. | 0.5 |
| Librarian | Literature searching | M.L.S. | 0.05 |
| Statistician | Statistical analysis | M.S. or Ph.D. | 0.1 |

The Cochrane Collaboration6 originally estimated a need for 20,000 reviews; to date, it has completed 3,539 reviews and developed 1,868 protocols for reviews that are proposed or under way. The AHRQ EPCs have produced 168 evidence reports and 16 technical reviews.7 The Drug Effectiveness Review Project8 produced 28 original reports and updated 45 reports in its first 3 years.

Of course, systematic reviews are not static documents and, as such, require updating when new evidence becomes available. As increasing numbers of reports are completed, the workforce needs will shift from producing reports to updating them. Shojania et al. have noted that systematic reviews published in the medical literature have a half-life of about 5.5 years, with about 23 percent requiring updating within 2 years of publication (Shojania et al., 2007). Moher et al. surveyed the literature on signals that updates are required and noted that few robust methods exist for detecting them. It is clear, however, that the growing number of systematic reviews being performed will require updating as new evidence from CER and related work becomes available (Moher et al., 2007).
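As an aside, the two figures quoted from Shojania et al. are mutually consistent under a simple exponential survival model. The short Python sketch below (ours, not from the paper) treats the 5.5-year figure as a half-life and computes the fraction of reviews expected to need updating within a given number of years; at 2 years it yields roughly 22 percent, close to the reported 23 percent.

```python
# Minimal sketch: exponential model of systematic-review "survival".
# With a half-life of H years, the fraction of reviews needing an update
# within t years is 1 - 0.5 ** (t / H).

HALF_LIFE_YEARS = 5.5  # median time until a review needs updating

def fraction_outdated(t_years, half_life=HALF_LIFE_YEARS):
    """Fraction of reviews expected to need updating within t_years."""
    return 1.0 - 0.5 ** (t_years / half_life)

for t in (2, 5.5, 10):
    # At t = 2 this prints ~22%, close to Shojania et al.'s 23 percent.
    print(f"within {t} years: {fraction_outdated(t):.0%} of reviews need updating")
```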


_______________

7 See http://www.ahrq.gov/clinic/epcix.htm (accessed September 8, 2010).

8 See http://www.ohsu.edu/drugeffectiveness/ (accessed September 8, 2010).


As the existence of the Cochrane Collaboration, an international effort, indicates, work under way in other countries may be useful in various areas of CER—and especially in systematic reviews, which are based on the scientific literature. For example, NICE produces evidence reports for health care, as do programs in Canada and Sweden.

Current training programs are probably adequate to absorb a moderate increase in demand for systematic reviews, but significant expansion of systematic reviews will require more capacity, which will inevitably lead to competition with other clinical research needs. The current training pathways are heterogeneous, and schools of public health and medicine could be much more explicit in developing tracks and certificate programs in systematic review and related areas. These could exist within degree programs in clinical effectiveness, epidemiology, informatics, and so forth. There is a substantial need, however, for biostatisticians and methodologists (who may be in epidemiology or other disciplines) to advance meta-analytic methods in systematic reviews.

Pharmacoepidemiology

An additional area where particular expertise will be needed is pharmacoepidemiology. Recent efforts by the Food and Drug Administration (FDA) to expand drug safety monitoring will require the employment of dozens of pharmacoepidemiologists. The Food and Drug Administration Amendments Act of 20079 calls for expanding the Prescription Drug User Fee Act program to devote more effort to drug safety, including in areas such as pharmacoepidemiology (Kirschner et al., 2008). It has been estimated that this could require an additional 80 to 100 pharmacoepidemiologists (Mullin, 2007), creating competition for the same people who could perform CER work.

An even more challenging problem is that the number of Pharm.D.’s and Ph.D.’s in North America specifically trained in pharmacoepidemiology is small and inadequate to meet growing needs. Meeting the new demand will take time and a severalfold increase in graduates, which will require expanding existing programs and establishing new ones. No one knows how easily individuals trained in other subsets of epidemiology (infectious disease, cardiovascular, environmental, and so on) can be retrained as pharmacoepidemiologists. Device safety and device-focused CER will be even more challenging, given their specialized nature and the paucity of high-quality randomized controlled trials (RCTs). Not all pharmacoepidemiology programs examine devices; it will be necessary to expand the field at the same time that training is being expanded.

_______________

9 Food and Drug Administration Amendments Act of 2007. 2007. HR 3580, 110th Cong.


Clinical Epidemiology Methods Research

Clinical epidemiology integrates epidemiologic methods and knowledge of clinical practice and decision making in order to develop clinical research methodology and to appraise clinical research (Fletcher and Fletcher, 2005; Haynes et al., 2005). Its purpose is to develop and apply methods to observe clinical events that will lead to valid conclusions. The availability of senior clinical epidemiologists is limited. This limitation is important because it will affect the capacity to train clinical researchers, conduct practical clinical trials and comparative effectiveness reviews, and develop new methods for clinical research.

To sum up the areas considered so far, analysis of the relevant data in clinical epidemiology, clinical research, pharmacoepidemiology, and EBM shows that the workforce required is likely to be substantial, not available solely from those currently trained, and dependent on the volume of systematic reviews, clinical trials, and other CER-related work that policy makers and others believe must be funded. Furthermore, for all categories of workers, and especially physicians, CER will find itself in competition both for various types of clinical researchers and for clinical practitioners, for whom there is already a looming shortage (Dall et al., 2006), especially in primary care (Goodman, 2008), which is the area where the need is greatest and from which CER physician researchers are likely to be drawn.

Biomedical Informatics

Another discipline with many contributions to make in CER is biomedical informatics (BMI). This discipline is focused on the acquisition, storage, and use of information in health care and biomedical research, usually assisted by information technology (IT) (Hersh, 2002). The use of BMI for CER is one of a number of “re-uses” or secondary uses of clinical data derived from the EHR and other sources of patient information (Safran et al., 2007). An example of how this has been done is provided in the learning health system workshop summary (Weissberg, 2007). Other potential areas for reuse of EHR data include public health surveillance, health information exchange, clinical and translational research, and personal health records.

The reuse of clinical data currently accounts for a negligible portion of the effort that healthcare delivery organizations devote to clinical IT implementation. Most of the effort goes to deploying systems and is focused on their optimal use for direct clinical care. Furthermore, many individuals who work in the collection or storage of data potentially useful for CER also have other, and sometimes more prominent, roles in the workforce.


However, new training and skills will be required as additional reuse of clinical data is undertaken.

It must be noted that many of the data in EHR systems are not of research quality. Clinical documentation is often not a high priority for clinicians. Forms and other types of clinical data capture can be cumbersome and time consuming for busy clinicians to use, and clinicians often do not appreciate the importance of entering high-quality data as part of routine clinical care. BMI workers must be well attuned to the needs of CER and related disciplines if they are to meet the informatics needs of CER.

Of course, implementing EHRs and reusing their data are not the only areas of BMI that are of importance to CER. Biomedical informaticians have skills that are needed in a variety of other areas, including the following:

  1. information needs assessment;
  2. data mining, text mining, and other forms of knowledge discovery (e.g., tools that help streamline the production of systematic reviews; a toy sketch follows this list) (Cohen et al., 2006); and
  3. ontology development and knowledge management (e.g., projects like The Biomedical Research Integrated Domain Group (BRIDG) Model and other efforts to improve BMI in clinical research) (Fridsma et al., 2008).
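As a concrete illustration of the second item above, the following toy sketch ranks unscreened abstracts by predicted relevance so that reviewers see the likeliest inclusions first. It is a sketch under our own assumptions (TF-IDF features and logistic regression via scikit-learn, with invented abstracts and labels) and shows one common approach to this triage problem, not the specific method of Cohen and colleagues.

```python
# Toy sketch: ranking candidate abstracts for systematic-review screening.
# The corpus and labels are invented; a real system would train on the
# review team's actual screening decisions.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Abstracts already screened by a human reviewer (1 = include, 0 = exclude).
labeled = [
    ("randomized trial comparing drug A with drug B for hypertension", 1),
    ("head-to-head effectiveness trial of two diabetes treatments", 1),
    ("case report of a rare dermatologic reaction", 0),
    ("in vitro study of receptor binding kinetics", 0),
]
texts, labels = zip(*labeled)

vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(texts)
model = LogisticRegression().fit(X, labels)

# Unscreened abstracts, ranked by predicted probability of inclusion.
candidates = [
    "pragmatic trial comparing beta blockers and diuretics",
    "molecular dynamics simulation of protein folding",
]
scores = model.predict_proba(vectorizer.transform(candidates))[:, 1]
for text, score in sorted(zip(candidates, scores), key=lambda pair: -pair[1]):
    print(f"{score:.2f}  {text}")
```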

Before getting into the details of the informatics workforce for CER, it is worth taking a look at the HIT workforce generally. Most research assessing the HIT workforce has looked only at specific settings or professional groups. In developed countries, three categories of professionals generally make up the HIT workforce:

  1. IT professionals—usually with a technical background, such as computer science or management information systems,
  2. health information management professionals—the allied health profession historically focused on medical records, and
  3. biomedical informatics professionals—working at the intersection of IT and health care, usually with a formal background in one or both.

Probably the most comprehensive assessment of the HIT workforce was carried out in England (Eardley, 2006). This analysis estimated that the HIT workforce comprised 25,000 full-time equivalents (FTEs) out of 1.3 million workers in the National Health Service, or about 1 IT staff member per 52 non-IT workers. Studies done in the United States have generally focused on one group in the workforce, such as IT or health information management professionals.


Gartner Research assessed IT staff in integrated delivery systems of varying size (Gabler, 2003). Among the 85 organizations studied, there was a consistent finding of about 1 IT staff member per 56 non-IT employees, similar to the ratio noted above for England.

More recently, Hersh and Wright used the Healthcare Information and Management Systems Society (HIMSS) Analytics Database10 to analyze hospital IT staffing (Hersh and Wright, 2008). This database contains self-reported data from about 5,000 U.S. hospitals, including elements such as the number of beds, total staff FTEs, total IT FTEs (broken down by major IT job categories), applications, and the vendors used for those applications. A recent addition to the HIMSS Analytics Database is the Electronic Medical Record (EMR) Adoption Model, which uses eight stages to rate hospitals on how far they have gone toward creating a paperless record environment (EHR Adoption Model, 2007). “Advanced” HIT is generally assumed to begin at stage 4, which includes computerized physician order entry (CPOE) and other forms of clinical decision support that have been shown to be associated with improvements in the quality and safety of health care (Chaudhry et al., 2006).

Hersh and Wright found the overall IT staffing ratio to be 0.142 IT FTE per hospital bed. Extrapolating to all hospital beds in the United States, this suggests a total current hospital IT workforce of 108,390 FTEs. They also found that average IT staffing ratios varied with the EMR Adoption Model score. Average staffing ratios generally increased with adoption score, although hospitals at stage 4 had a higher average staffing ratio than hospitals at stages 5 or 6. If all hospitals in the United States were operating at the staffing ratio of stage 6 hospitals (0.196 IT FTE per bed), a total of 149,174 IT FTEs would be needed to provide coverage—an increase of 40,784 FTEs over the current hospital IT workforce.
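For readers who want to retrace the extrapolation, the arithmetic is sketched below (our reconstruction, not code from the paper). The national bed count is back-calculated from the published figures, so the stage 6 total comes out near, but not exactly at, the paper's 149,174; the published ratios are rounded.

```python
# Reconstruction of the Hersh and Wright extrapolation arithmetic.
CURRENT_RATIO = 0.142  # IT FTEs per hospital bed, all hospitals
STAGE6_RATIO = 0.196   # IT FTEs per bed at EMR Adoption Model stage 6
CURRENT_FTE = 108_390  # total hospital IT workforce cited above

beds = CURRENT_FTE / CURRENT_RATIO  # ~763,000 beds implied by the figures
stage6_fte = STAGE6_RATIO * beds    # workforce if all hospitals staffed at stage 6 ratios

print(f"implied U.S. hospital beds: {beds:,.0f}")
print(f"IT FTEs at stage 6 ratios:  {stage6_fte:,.0f}")                # ~149,600
print(f"additional FTEs needed:     {stage6_fte - CURRENT_FTE:,.0f}")  # ~41,200
```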

No studies have quantified the numbers of BMI professionals, although some studies have qualitatively assessed certain types, such as chief medical information officers (Leviss et al., 2006; Shaffer and Lovelock, 2007). The value of BMI professionals is also hinted at in the context of studies showing flawed implementations of HIT leading to adverse clinical outcomes (Han et al., 2005), which may have been preventable with application of known best practices from informatics (Sittig et al., 2006), and other analyses showing that most of the benefits from HIT have been limited to a small number of institutions with highly advanced informatics programs (Chaudhry et al., 2006). Others have documented the importance of “special people” in successful HIT implementations (Ash et al., 2003).


_______________

10 This database is derived from the Dorenfest IDHS+ Database, see http://www.himssanalytics.com (accessed September 22, 2010).


With this general framework, it is possible to discuss the needs of the informatics workforce for CER. One place to start is with the institutions funded by the CTSA program, many of which have developed research data warehouses that enable the reuse of clinical data. Workforce needs include people with IT skills in deploying EHR systems, relational databases, and networked applications, as well as people with a more clinically focused orientation who will actually carry out CER activities. The skill set for CER varies depending on the job. Table 4-2 lists the job titles, job responsibilities, and degrees and skills required for various HIT positions. There is unfortunately very little standardization in these jobs, and minimal overlap in their skill sets. The jobs can be broadly divided into IT positions (more technical, requiring less clinical expertise) and BMI positions (less technical, requiring more clinical expertise).

Where will these BMI skills be developed or obtained? Although some technical and clinical skills are obtained through formal education, much skill development in BMI currently takes place on the job. Moreover, given the rapidly changing nature of IT and BMI, many skills must be learned on the job because some applications did not exist during an individual’s education or training. A statement heard repeatedly from employers of IT and BMI personnel is that “soft skills” are essential; these include the ability to work in groups and to communicate effectively orally and in writing. BMI personnel in particular are often viewed as functioning in a “bridge” capacity between IT and clinical personnel.

What are the projected needs as CER is scaled up in healthcare settings? The research by Hersh and Wright cited above indicates that the need for IT personnel increases with the sophistication of EHR adoption, perhaps leveling off once CPOE and clinical decision support have been implemented. The estimates by Hersh and Wright do not include BMI personnel, because the database they used contains no data on such personnel; for the same reason, the estimates also exclude personnel who specifically perform CER activities. CER informatics work will require both IT and BMI personnel. One common assertion is that there should be one physician and one nurse trained in BMI in each of the 5,000+ hospitals in the United States (Safran and Detmer, 2005). This has led to the 10×10 (“ten by ten”) program of the American Medical Informatics Association, which aims to provide a detailed introduction to BMI to 10,000 individuals by the year 2010 (Hersh and Williamson, 2007).

Of course, there are other needs for BMI professionals and researchers as well. The areas described above, such as information needs assessment, data and text mining, and ontology and knowledge management, will require even more personnel. Indeed, the BMI field is rapidly evolving, with growing attention paid to the need for professional development and recognition (Hersh, 2006, 2008).


TABLE 4-2 Job Titles, Responsibilities, and Training Required for Health Information Technology Professionals


| Job Title | Job Responsibilities | Degrees and Skills Required |
| --- | --- | --- |
| Information Technology (IT) | | |
| Chief information officer | Oversees all IT operations of organizations | IT, computer science (CS), or management information systems (MIS) |
| Director, clinical research informatics | Oversees clinical research applications, including comparative effectiveness research (CER) | Biomedical informatics (BMI) |
| Data warehouse manager | Oversees development of research data warehouse | IT, CS, or MIS |
| Web designer | Designs Web front end for data access systems | IT, CS, or MIS |
| Web engineer | Deploys Web back end for data access systems | IT, CS, or MIS |
| Research applications programmer | Develops CER and other applications | IT, CS, or MIS |
| Database administrator | Administers research data warehouse | IT, CS, or MIS |
| Project manager | Manages CER and other projects | IT, CS, or MIS |
| Biomedical Informatics | | |
| Chief medical information officer | Oversees clinical IT applications, including research data warehouse | BMI |
| Physician leads | Provide leadership in implementation and use of electronic health records | BMI, formally or informally |
| Medical informatics researcher | Oversees data mining activities for CER | BMI |
| Medical informatics researcher | Oversees information needs assessment for CER | BMI |
| Research analyst | Works with medical informatics researchers to collect data and carry out analyses | Variety |




There is a need for more research to better characterize the optimal IT, health information management, and BMI staffing for general operational EHR and related systems as well as for CER activities specifically. Such research must measure not only health IT practices as they are now but also what they may become as the implementation agenda advances. It must sample a wide variety of health organizations to determine, quantitatively (e.g., the number of people required and their skills and education) and qualitatively (e.g., needs not captured by surveys), what those organizations do now, plan to do in the future, and should be doing to achieve an optimal learning health system.

Clinical Guidelines Development and Implementation

Practice guidelines represent an effector arm of the comparative effectiveness process (IOM, 1990). Once scientific studies are performed and their outcomes are systematically reviewed, multidisciplinary guideline development panels are convened to transform the summarized knowledge into recommendations about appropriate care. Those recommendations are then disseminated and presented to many different types of teams for implementation. Because there is some confusion about the use of these terms, guideline authoring will be defined as the translation of scientific evidence and expert consensus into policy statements. Dissemination refers broadly to the publication and spread of those policy statements. Guideline implementation refers to the operationalization of policies in clinical settings with the goals of improving specific processes and outcomes of care and of addressing specific barriers and challenges to uptake.

Ideally, guidelines are produced by multidisciplinary teams that together provide a complete skill set. Unfortunately, these teams are often convened for a single purpose, and skills and knowledge accumulated by team members are not reused in subsequent guideline development efforts.

Guideline development requires topic (domain) expertise that varies from one topic to the next. To create evidence-based guidelines, knowledge must be distilled from the scientific literature and combined with expert judgment. Authors typically work from evidence tables, meta-analyses, and systematic reviews to summarize the facts that are known about a topic. Such evidence summaries may be sought externally or produced by experts within the team itself. Often, however, there are “holes” in the evidence base that must be addressed, either by eliciting expert opinion and experience or by developing an agenda for further research.



Even within a single guideline topic there are often multiple clinical perspectives that should be represented, such as primary care and specialty care, medical and surgical approaches, and the insights of paraprofessionals. Guideline authoring teams also require two types of methodologists: those who can help topic experts understand the evidence and its quality, and those who understand the guideline development process, including how evidence quality, benefits, and risks should be weighed in order to create statements about the strength of a recommendation. Furthermore, the perspectives of patients who suffer from the condition of interest are often invaluable in formulating recommendations that accommodate the values of the group with the greatest stake. The process of formulating policy from scientific evidence requires yet another skill set. Finally, skills in team building, mediation, project management, and leadership are essential to ensure that a well-designed product emerges from the process in a reasonable time.

Where are these skills currently developed or obtained? Expertise in clinical care comes most often from clinical training and experience. Expertise in judging evidence may come from coursework in epidemiology and study design. For clinicians, expertise in policy development usually derives from experience, but it may also be obtained through formal health policy and public policy studies. Skill in the implementation of guideline recommendations is developed in different ways depending on the specific intervention. Expertise in education, evidence-based decision making, marketing, psychological conditioning, informatics, social and organizational behavior, regulation, financial analysis, and healthcare administration may all be useful in various situations, depending on the implementation strategies selected. Also, as noted above, the experience gained in authoring or implementing a specific guideline is often wasted when not reused.

Because guideline authors tend to focus on policy creation, they pay little attention to how those policies will be implemented. In many situations, authors deliberately introduce vagueness and underspecification because they are unsure how to address such things as gaps in evidence, lack of consensus, and potential legal implications. These limitations in clarity must be identified and resolved before the guideline recommendations can be implemented. The American Academy of Pediatrics is piloting a program called the Partnership for Policy Implementation in which a pediatrician-informatician is made a part of the guideline development team to help ensure that the guideline product can be implemented effectively.

What are the projected needs in this area? Currently, AHRQ’s National Guidelines Clearinghouse contains more than 2,000 “evidence-based” guidelines. Based on observations of how soon guidelines become outdated (Shekelle et al., 2001), these guidelines should be reviewed and reaffirmed, revised, or retired every 5 years. Even allowing for no growth, 400 guideline review teams must therefore reassemble each year.


If the advice of the IOM is followed and a central agency is developed to help standardize guideline development efforts, this could decrease the number of teams creating guidelines and would probably—through “certification”—result in improved reuse of guideline development skills.

The guideline authoring process needs more individuals who are skilled at evidence searching, extraction, and filtering, as well as at policy development. The EPCs may be capable of meeting the current needs of requesting organizations, but additional staff will be necessary for performing updates and horizon scans and for filling holes in the evidence base. It will also be valuable to have additional staff within the national professional organizations who can coordinate and lead guideline development initiatives.

The lack of a central organization complicates the estimation of workforce needs. There is little opportunity for guideline developers and implementers to interact—there are no national organizations or national meetings attended by both groups. This contributes to the poor communication described above. The Guidelines International Network provides such a venue, but its membership is mostly European. AHRQ and the National Guidelines Clearinghouse might consider convening such an activity. The number of workers required in this area will depend on the number of guidelines deemed necessary and on an assessment of what organizations require to implement the guidelines in their local settings.

Health Services Research

HSR is currently a robust and growing field. It draws from a number of disciplines. Health services researchers regularly participate in CER, and organizations and departments using the label “HSR” successfully compete for grants and contracts in this area. Informal conversations with training program directors indicate that graduates today have multiple job opportunities.

There are many programs claiming to train health services researchers. Whether these programs evolved from a conscious assessment of what the health system required in terms of research or from other beginnings is not especially important; what is important is that the field is established, recognized in formal policy, supported by institutions and professions, and capable of guiding its own destiny. The future of HSR is open to speculation: whether it will evolve into a specialized profession with a coherent and formally bounded sphere of influence or remain a more informally defined "point of view" is an open question. The text in this section is derived from a monograph authored by Ricketts (2007).

As CER gains more attention from scientists, practitioners, and funders, health services researchers will likely adapt their skills and content expertise toward issues of CER. There will, however, be some particular challenges for health services researchers as they become involved in CER studies. CER requires detailed knowledge of randomized trial design, whereas many HSR studies rely on observational designs. CER will also require knowledge of the clinical conditions under study and the practice contexts in which the treatments are applied. Nonclinicians will need either to acquire this knowledge or to develop close collaborations with clinicians who have comparable training in methods.

As a field rather than a specific discipline, HSR has skill sets that cut across multiple domains, and in the past there were no specifically defined core competencies within HSR. Over the past several years, however, AHRQ has partnered with AcademyHealth to define doctoral-level "core competencies" for HSR through a series of white papers and meetings (Forrest et al., 2005, 2009). Table 4-3 shows the current list of competencies.

Some of these competencies extend beyond CER; for example, most CER does not involve primary data collection. Even so, there is substantial overlap between the core competencies in Table 4-3 and the skills needed to conduct CER, including knowledge of study designs, the ability to develop conceptual models, responsible conduct of research, secondary data methods, implementation of protocols, clear scientific communication, and collaboration with stakeholders.

HSR is recognized in universities and research institutes as a pathway for the development of advanced inquiry. Academics are recognized by the vocational cognomen "health services researcher" as often as by the more academic discipline titles of "economist" or "sociologist." There is now an extensive infrastructure in universities, research institutes, and centers for training health services researchers. AcademyHealth, in its inventory of training programs, lists 127 graduate programs in HSR in the United States and Canada. The complexity of the field is reflected in the variety and scope of programs that identify themselves as preparing health services researchers. To complicate matters more, a practicing health services researcher may not actually have been formally trained in a program called "HSR."

Some of these programs will likely adapt relatively easily to the need to incorporate skill sets important to CER. As noted elsewhere in this paper, meeting future demands for literature synthesis and meta-analysis will probably not be difficult. Other components of CER, such as pharmacoepidemiology and the assessment of treatment harm through the merging of disparate secondary data, may require investment in additional doctoral-level training positions to meet rising demand. One risk of not meeting increased demand is that less well-trained individuals will be used to conduct CER.

AHRQ and AcademyHealth have recently completed a general assessment of the training issues and needs for HSR professionals (Ricketts, 2007). Those authors recognized that CER represented one aspect of HSR, but that the field is currently very labile.

TABLE 4-3 Competencies for Health Services Research

Core Competency | Educational Domains
Breadth of health services research (HSR) theoretical and conceptual knowledge | Health, financing of health care, organization of health care, health policy, access and use, quality of care, health informatics, literature review
In-depth disciplinary knowledge and skills | (Variable depending on the discipline or interdisciplinary area of specialization)
Application of HSR foundational knowledge to health policy problems | Health, financing of health care, organization of health care, health policy, access and use, quality of care, health informatics, literature review
Pose innovative HSR questions | Scientific method and theory, literature review, proposal development
Interventional and observational study designs | Study design, survey research, qualitative research
Primary data collection methods | Health informatics, survey research, qualitative research, data acquisition and quality control
Secondary data acquisition methods | Health informatics, HSR data sources, data acquisition and quality control
Conceptual models and operational measures | Scientific method and theory, measurement and variables
Implementation of research protocols | Health informatics, survey research, qualitative research, data acquisition and quality control
Responsible conduct of research | Research ethics
Multidisciplinary teamwork | Teamwork
Data analysis | Advanced HSR analytic methods, economic evaluation and decision sciences
Scientific communication | Proposal development, dissemination
Stakeholder collaboration and knowledge translation | Health policy, dissemination

SOURCE: Data derived from Forrest et al., 2005, 2009.

Three approaches to workforce planning for CER and HSR can be recommended:

  1. Researchers conducting CER as well as funders and policy makers planning new initiatives should regularly communicate with educators so that new needs for training can be incorporated in a timely fashion. While training programs should not alter curriculums with each new federal request for applications, they should be responsive to changes in the research and policy environment.
  2. AHRQ and AcademyHealth should continue to conduct periodic surveys (every 2 to 3 years) and key informant interviews to assess the state of the workforce for health services research and CER, communicating with researchers in industry, contract research, and academe regarding the quality and availability of personnel at multiple levels of training.
  3. AHRQ and AcademyHealth should also regularly examine the number and type of training programs in HSR and CER and communicate with funders regarding adequacy of supply and congruence of curriculums with the expressed needs of the research organizations conducting CER. Modifications to predoctoral, postdoctoral, and career development (K series) programs can be based on these evaluations.

Dissemination

The purpose of evidence translation and dissemination is to develop practical tools that improve decision making by end users. End users are broadly defined and include people who have medical problems (patients), their families and caregivers, clinicians, healthcare administrators, governmental policy makers, and employers. Evidence translation is the process of extracting key messages from evidence summaries (systematic reviews or technology assessments) and placing those messages into the context of the decisions made by end users. To be useful, translation needs to lead to the creation of products (such as summaries tailored for particular audiences) that can be accessed by those end users. Dissemination is then the step of making those products accessible to the end users. Dissemination can occur through various avenues, including distributing printed products, posting products on Web sites, and using other modes of electronic distribution (such as interactive decision aids, podcasts, and e-mail).

To help individuals use clinical evidence in their decisions, the summaries of that evidence must be unbiased. The evidence sources are often complex scientific documents that provide detailed and highly nuanced explanations of the body of evidence. The process of evidence translation therefore requires careful analysis and summarization to avoid errors or oversimplification. Evidence summaries are only useful if they are applicable to the actual decisions made by the people involved in health care. This activity requires a thorough knowledge of the methodologies used in clinical research and systematic reviews, as well as a clear understanding of the clinical context in which the evidence will be applied. Thus, the technical skills required to perform translation are similar to those required for the development of systematic reviews: individuals must be able to understand the methodologies in order to translate the reports without introducing bias. The process of developing key messages often involves simplification, and this requires careful deconstruction of the information in a systematic review.

After the process of evidence translation has determined the messages that will be useful for decision makers, the next step is to summarize this information in products that can serve as tools to aid decisions. Because such tools can take various forms, the skills are multidisciplinary and include the ability to provide clearly written documents and to design interactive content for electronic distribution. Developing effective decision tools can also benefit from the input of stakeholders and opinion leaders. This input helps to ensure that the evidence translation will meet the critical needs of decision makers. After the decision tools are developed in draft form, testing them with end users provides valuable insight into how they can be modified and improved. Both of these steps (obtaining formative input and performing testing with end users) require qualitative research skills. Finally, after the decision tools are developed and tested with end users, they are ready for public release and dissemination. Dissemination is a specialized activity that requires skills related to public relations, journalism, and communications.

The multidisciplinary team required to perform evidence translation and dissemination includes a variety of individuals who have different skills and who commonly have diverse educational backgrounds. While some individuals may play more than one role (e.g., a clinician with skills in clinical research methodologies), the required skills are so diverse that a multidisciplinary team is needed. Table 4-4 summarizes the individuals who compose the team.

What are the projected needs in this area as CER is scaled up? At present, relatively few groups in the United States are doing state-of-the-art evidence translation and dissemination. As programs to increase CER move forward, there will be a growing need for individuals to perform translational work. Some members of the multidisciplinary team (particularly clinicians and methodologists) will be drawn from the same pool as those performing the work of information synthesis; translation will therefore add to the demand on the infrastructure that trains such individuals. For the other team members, the necessary training will come from programs in informatics, qualitative research, and communication.

The need for additional workers in dissemination-related areas of CER is based on the amount of activity deemed required to most effectively distribute such knowledge. This in turn is a function of the output from the other activities shown in Figure 4-1 that feed into dissemination. Another factor in quantifying the amount of dissemination required is the different types of healthcare professionals (e.g., physician specialists, physician generalists, nonphysicians) and patients (e.g., those with varying levels of health and general literacy).

TABLE 4-4 Roles, Skill Sets, and Backgrounds of Personnel Involved in Comparative Effectiveness Research Dissemination

Role | Skill Set | Contribution | Educational Background
Clinician | Understanding of clinical issues | Defining key decisions to which the evidence will be applied | Medicine, nursing, pharmacy
Research methodologist | Understanding sources of bias | Defining key messages derived from systematic reviews or technology assessments | Clinical epidemiology
Writer | Synthesizing contextual information and key messages | Creating plain language explanations | Health communication
Qualitative researcher | Qualitative data collection and synthesis | Interviewing key informants and end users | Qualitative methods
Computer programmer | Creating Web-based and other interactive content | Developing electronic tools to aid decision making | Biomedical informatics and/or computer science
Dissemination specialist | Understanding audiences and avenues for dissemination | Developing effective dissemination strategies | Communication, public relations, journalism

Overlapping Areas

Figure 4-1 shows two areas of explicit overlap, and there are likely to be more. Each of these areas is likely to generate additional needs, such as people who work, perform research, and teach at these margins. The first of these areas of overlap is information needs assessment. Trialists and systematic reviewers must, for example, work with domain experts to determine the key aspects of their research questions to be studied. Likewise, guideline implementers and developers must be driven by an information needs assessment process.

The second area of overlap is methodology. As with most research, CER is driven by a diverse set of methods applied in many medical domains. Furthermore, the intersection of these areas may create the need for new methodology, such as the best methods for obtaining, where possible, research-quality data from an EHR. It will take additional workforce to conceptualize and develop this methodology, followed by practitioners to implement it and professors to teach it. Technology assessment is one specific area of methodology that will require all of this.

Summarizing and Quantifying

The above analysis of workforce components has identified the broad range of activities that make up CER. These will come not only from research in traditional clinical research and related areas but also through analysis of the growing amount of data in EHR systems, experiences with clinical guidelines development and implementation, aspects of HSR, and more widespread dissemination of knowledge. Current research and other work in these areas remains productive, but a substantial scaling up of CER will require better policy coordination, more funding, and, as assessed in this paper, understanding and planning for workforce needs. There are several challenges to achieving the vision and goals for CER related to workforce needs.

The first challenge in defining the CER workforce is to grapple with the larger question of the quantity of CER that is necessary for the learning health system. To quantify the needs, these questions must be answered:

  1. What quantity of comparative clinical trials and other clinical research will be required?
  2. What quantity of CER systematic reviews will need to be performed?
  3. What amount of pharmacoepidemiological and related analysis will be required, or even possible, especially given the small number of pharmacoepidemiologists?
  4. How many medical centers will be willing or able to use their EHR systems or local guideline implementation to provide data for CER?
  5. What quantity of clinical practice guidelines will need to be produced?
  6. What types and amounts of HSR will be necessary for CER?
  7. What types and quantities of dissemination will be required for CER? At how many levels will the content require reformatting?

These various aspects of CER do not exist in isolation. In this analysis, a number of areas have been identified that require an interaction of activities and skills across various areas of CER. Therefore it will not be enough simply to plan in discrete areas such as clinical epidemiology, BMI, or HSR. Furthermore, this analysis focused only on the actual work to be done and not on the leadership required to guide CER work. As in all fields, leadership will be necessary to develop and advocate for the vision of CER and the learning health system, to manage its deployment and training, and to interact with the leadership of related and separate disciplines.

Recognizing that there are different areas of CER and diverse skill needs within them, a total estimate of the workforce will require quantifying the need in each area and then summing those needs across all the areas. How much CER is needed, and how much of each component of CER, must be determined by policy makers and others, who must take into account the demand for each type of CER, the supply of the workforce, competition from other tasks these workers might do, and the funding available for CER.
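Schematically, and purely as one way of organizing such an estimate (the notation is ours, not the authors'), the total workforce requirement could be written in LaTeX form as

    W_{\mathrm{total}} = \sum_{a \in A} N_a, \qquad N_a = f_a(\mathrm{demand}_a, \mathrm{supply}_a, \mathrm{competition}_a, \mathrm{funding}_a),

where A is the set of CER activity areas (clinical epidemiology, BMI, guideline development and implementation, HSR, and dissemination) and each f_a is an area-specific estimating function that policy makers would have to specify.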

The original intention in this report was to provide a quantitative first approximation of the workforce needs for CER. However, as the authors developed the framework and explored the issues more deeply, it was apparent that there are too many unanswered questions about the scope, breadth, and quantity of CER that need to be clarified in order to achieve the larger goals for a learning health system. This view was validated by many of the experts listed in the acknowledgements, who advised against attempting to quantify needs as long as there is such an unclear picture of, and future for, CER.

There are a number of reasons why a quantitative assessment of the CER workforce is not possible. The main one is that the true scope of CER is not known. For example, even in the area of clinical epidemiology, which has probably the most clarity about needs of any of the areas that were assessed, there is no clear answer about how many systematic reviews, practical and other clinical trials, and pharmacoepidemiological analyses are required. While the number of personnel required for systematic reviews is relatively well understood, the requirements for the other categories of clinical trials and pharmacoepidemiological analyses are much less clear.

Beyond clinical epidemiology, the picture becomes even less certain. While BMI, the development and implementation of clinical guidelines, and dissemination could become major parts of CER, the amount of each that will need to be done, or that even falls under the rubric of CER, is not clear. Furthermore, in all of these areas, CER would be secondary to the larger tasks of maintaining IT systems for clinical care; using guidelines to improve the quality, safety, consistency, and cost effectiveness of operational clinical care; and disseminating all types of clinical knowledge. How much of this work would actually be CER is not known or easy to determine. Even in HSR, the amount of research to be done that could be classified as CER is not certain. Clearly, in an analysis conducted under the rubric of "EBM," there is little evidence on which to make sound judgments about specific workforce needs.

It is certain, however, that CER will require a diverse array of skilled workers to meet its agenda. Any effort to undertake CER in a major way, such as through the establishment of a centralized public agency or private institute, should have the quantification, efficient deployment, and required education of the workforce as an early research agenda item. Determining how to implement and scale up CER will be a major challenge if its size and scope are to be seriously increased.

This leads to a number of larger policy questions. For example, how will CER be financed? In the case of research studies, who will fund the work comprehensively, especially in light of a national research enterprise that focuses on disease-based and investigator-initiated research? Likewise, who will fund the development of clinical practice guidelines? For studies derived from the secondary use of clinical data, which medical centers and health systems will be required to participate, and how will they be funded? Will it become an expected part of healthcare delivery? Finally, how will the knowledge generated from CER be disseminated? What amount of dissemination will be required, and how will it be funded?

It is also worth noting that work in CER will face competition from other areas for researchers and their staffs, especially among physician researchers. As the baby boomer generation enters the Medicare age group, there is a growing need for physicians and other clinical practitioners. Likewise, there are demands for physicians to enter non-CER research areas, such as those encouraged by the CTSA program, which could be either a help or a hindrance to CER. The same is true for other areas of CER, such as the need for pharmacoepidemiologists arising from the growing amount of drug monitoring and safety surveillance called for in recent FDA-related legislation. As such, any policy or funding that increases CER will need to recognize the competition for workers and skills from related areas, which will drive up the salaries of researchers and their staffs.

There is also competition for workers even within CER work. This analysis has mostly focused on academic settings, but there are others who have an interest in performing CER and related work. This includes government agencies, nonacademic healthcare systems, and manufacturers of drugs, devices, and other medical tests and treatments. One possible silver lining in this competition is the potential to partner with international organizations engaged in similar work, such as NICE. While not all CER work transfers easily across borders, populations, and cultures, there is likely some amount that can.

Research and policy development will also need to be directed to the sites where leadership in CER will be required to pilot the leading edge of research, especially in methodologies, and to train the next generation. The best sites for establishing these centers of excellence will probably be those that house EPCs, CTSA centers, AHRQ HSR training programs, and informatics research and educational programs. As with all such initiatives, national consortiums should be established to share the vision, best practices, and policy for developing the pipeline of new researchers.

The determination of the scope and amount of workforce required for CER is a research agenda itself. Particularly for the workforce required, and for the components within it (such as clinical epidemiology, BMI, and HSR), research should be undertaken to identify not only the skills required now, but also how the workforce will be best organized in the future for maximum efficiency, best-quality output, and the anticipated expansion. This should be done through a variety of methods, including estimates of quantitative needs (e.g., amount of research, number of researchers and their staff, existing capacity, how much expansion is required) as well as qualitative understanding (e.g., people and organizational challenges, academic homes, career advancement).

Once this research agenda identifies the workforce and the skills it needs, it will also be necessary to determine the types of educational programs required to train those individuals, such as the competencies and curriculums of such programs. This will require policy on how to fund such education, especially for those with increasing burdens of educational debt already acquired for their basic education. This may be another area where international partnerships may be helpful.

CER promises an exciting approach to improving the quality of health care while reducing its cost through more efficient use of the most effective approaches to clinical care. A major part of achieving this system will be a coordinated and adequately funded approach. There are many challenges to reaching that goal, including the provision of a workforce that can bring the requisite knowledge and skills to CER problems and solutions. Much further research, policy, and funding will be required to achieve this vision.


TOWARD AN INTEGRATED ENTERPRISE—THE ONTARIO, CANADA, CASE

Danielle Whicher, Kalipso Chalkidou, Irfan Dhalla, Leslie Levin, and Sean R. Tunis
Center for Medical Technology Policy, Baltimore, Maryland, USA
National Institute for Health and Clinical Excellence, London, UK
Department of Medicine, University of Toronto, Toronto, Ontario, Canada
Medical Advisory Secretariat (MAS), Ontario Ministry of Health and Long-Term Care, Toronto, Ontario, Canada

Overview

In an effort to ensure that important emerging health technologies are not used indiscriminately but are available to patients for whom the risk–benefit ratio is favorable, the Ontario provincial government has recently expanded its capacity to conduct comparative effectiveness studies. The new system allows purchasers (primarily hospitals) to request that a health technology be reviewed by the Ontario Health Technology Advisory Committee (OHTAC), an arm's-length advisory committee to the Ministry of Health and Long-Term Care (MOHLTC). If, after completion of this assessment, which generally includes a systematic review produced by a governmental agency, OHTAC decides there is insufficient information to recommend a coverage decision, it may request a "conditionally funded field evaluation." These studies, led by government-funded independent research entities, are designed to produce the evidence necessary for policy makers to make coverage decisions. Funding this research requires approximately CA$8 million to CA$10 million in incremental spending per year, as well as the support of Ministry of Health staff and of hospital and university investigators with a wide variety of expertise, including epidemiologists, biostatisticians, physicians, health economists, health policy experts, and health services researchers.

The direct and explicit link between the decision makers and the CER entities facilitates research timeliness and helps ensure a clear focus on generating information that is carefully designed to satisfy the needs of decision makers. Because purchasers contact OHTAC prior to investing in medical technologies, this system encourages evidence-based technology diffusion.

Although the healthcare system in the United States differs greatly from Ontario's in size, complexity, and design, Ontario's experiences provide insights in terms of workforce issues, organization, and funding that are relevant to U.S. efforts to build comparative effectiveness capacity. In addition, U.S. efforts may benefit from various collaborative activities with Ontario and other international evidence-generating entities, such as clinical trials, patient registries, and standards of study design, which may help to globalize CER in the future.

Background

Healthcare delivery in Canada is primarily a provincial responsibility, but the federal government does provide substantial funding contingent on the provinces adhering to the conditions set forth in the Canada Health Act of 1984. According to the act, provinces must provide all “medically necessary” hospital and physician care free of charge (Iglehart, 2000; Lewis et al., 2001). Hospital services include diagnostic tests, inpatient medicines, and medical devices and equipment, as well as inpatient medical and nursing care. Outpatient physician care is also covered. In contrast to inpatient medication use, the public coverage of outpatient prescription medications varies by province, and most provinces provide universal coverage only for elderly and poor people. However, most non-elderly Canadians have private drug insurance through their employer. Provinces also differ in their coverage of home care, nursing homes, and other community-based care (Iglehart, 2000; Lewis et al., 2001).

Although the proportion of health care paid for by the federal government decreased through the 1990s, increased fiscal capacity has allowed the federal government to increase its provincial transfers over the last decade. Nevertheless, provincial governments still pay for the majority of healthcare costs, and the recent surge in the development of promising but often unproven medical technologies has placed added pressure on both public payers (i.e., provincial ministries of health) and providers (i.e., hospitals) throughout Canada (Levin et al., 2007). Hospital chief executive officers throughout Ontario have expressed frustration with the public pressure to adopt new technologies in the absence of objective information regarding their benefits and risks in comparison to those currently available. The Canada Health Act obliges hospitals to provide interventions that are “medically necessary,” but the evidence base necessary to make this determination is often incomplete, not unlike the situation in the United States and most other countries working to develop evidence-based policy decisions (Levin et al., 2007). Furthermore, because hospitals in Canada negotiate their budgets with provincial ministries of health or regional health authorities, they are limited in their ability to raise funds to pay for new technologies (Iglehart, 2000).

Ontario, like other Canadian provinces and the United States, faces the challenge of improving healthcare quality while simultaneously limiting increases in healthcare spending. In an effort to manage the diffusion of novel health technologies and to ensure that they are received by patients in whom the risk–benefit ratio is favorable (consistent with the manner in which the term medically necessary is interpreted in the Canadian context), the MOHLTC in Ontario has developed a unique network to carry out CER on nondrug healthcare interventions (Goeree and Levin, 2006; Levin et al., 2007). There is no similar network for the assessment of emerging drug technologies. For a brief explanation of comparative effectiveness entities for medical drug technologies, see Box 4-2.

This paper begins with a description of the various entities involved in CER in Ontario. Following a brief overview of the workforce and funding requirements, it discusses potential lessons for policy makers in the United States.

Developing a Comparative Effectiveness Capacity for Nondrug Medical Technologies

Establishing an Agenda and Making Policy Recommendations

As has happened in most jurisdictions, nondrug healthcare interventions have historically diffused rapidly into the Ontario healthcare system, even in the absence of definitive clinical evidence of benefit. One reason for this is that technologies can enter the healthcare system through a variety of "portals," including hospitals and other healthcare providers, community programs, and nursing homes (Goeree and Levin, 2006; Levin et al., 2007). To manage the diffusion of new health technologies and improve care, the Ontario MOHLTC established the Medical Advisory Secretariat (MAS) in 2001 and OHTAC in 2003. These two entities work in concert to determine which technologies should be used in Ontario, which should not, and which require further research (Figure 4-2). In making these determinations, the Ministry of Health considers many factors, including not only clinical effectiveness and safety but also cost effectiveness and budget impact.

Although any interested party can ask OHTAC to assess a new health technology, most requests are made by hospitals or the Ministry of Health. These requests are initially processed by MAS, a unit within the ministry that is staffed by an information specialist, 2 policy analysts, 10 clinical epidemiologists, and 3 administrative staff.11 During this initial processing, MAS completes a template that includes information on the potential clinical effect size, public or professional pressure to use a new technology, and a preliminary comparison with alternative healthcare interventions (Goeree and Levin, 2006; Levin et al., 2007).

The results of the initial analysis are presented to OHTAC, which is composed of at least 12 individuals (currently, there are 25 members on the committee).

_______________

11 Personal communication, L. Levin, June 13, 2008.


BOX 4-2
Comparative Effectiveness Entities for Drug Evaluation

At the national level, new drugs undergo a systematic review (via what is called the Common Drug Review process) by the Canadian Agency for Drugs and Technologies in Health. These systematic reviews are completed in 4 to 6 weeks and focus on publicly available clinical trials data as well as pharmacoeconomic evaluations submitted by pharmaceutical manufacturers who wish to have their drug listed on formularies. The evidence is then assessed by the Canadian Expert Drug Advisory Committee, an advisory committee of 12 members from a variety of fields, including clinical trial methodologists, experts in health technology assessment, drug policy, or health economics, and two public representatives, plus a chair (Tierney and Manns, 2008). Based on this committee’s assessment, the agency either recommends that a drug be covered and added to public formularies or not. The committee does not have the authority to request additional research if the current literature fails to address policy makers’ concerns.

In Ontario, the Committee to Evaluate Drugs, composed of 16 members, including 2 lay people and 14 physicians and pharmacists (Committee to Evaluate Drugs, 2007b), uses the recommendations from the Canadian Agency for Drugs and Technologies in Health as a basis for its own recommendation for the province’s publicly funded drug programs. A final coverage decision is made by the Executive Officer of the Ontario Public Drugs Program in the Ministry of Health. These decisions are not binding on private drug insurance plans, which provide employment-based pharmaceutical coverage for most working Ontarians, though the decisions are considered by private decision makers.

In April 2007 the Ministry of Health launched the Drug Innovation Fund, which provides CA$5 million annually to pay for independent research projects relating to health outcomes, to provide information supporting decision making, and to develop an independent research capacity in academic institutions throughout Ontario that can inform decision makers about the impact of drug access and use, the optimal use of drugs, and drug adherence (Committee to Evaluate Drugs, 2007b).

(For more information on specific studies paid for by the fund, visit http://www.health.gov.on.ca/english/providers/program/drugs/drug_innov_fund/pdf/funding_successful_proposals.pdf [accessed September 22, 2010]).



FIGURE 4-2 Schematic of explicit link between decision makers and comparative effectiveness entities.
NOTE: GRADE = Grading of Recommendations Assessment, Development and Evaluation Working Group; MAS = Medical Advisory Secretariat; MOHLTC = Ministry of Health and Long-Term Care; OHTAC = Ontario Health Technology Advisory Committee; PATH = Program for the Assessment of Technologies in Health; THETA = Toronto Health Economics and Technology Assessment Collaboration.
SOURCE: Whicher, D. M., K. Chalkidou, I. Dhalla, L. Levin, and S. Tunis. 2009. Comparative effectiveness research in Ontario, Canada: Producing relevant and timely information for health care decision making. The Milbank Quarterly 87(3): Figure 1, page 589. Reprinted with permission from John Wiley and Sons.

The committee includes representation from the Ontario Medical Association and the Ontario Hospital Association as well as from the community and long-term care sectors. Individual members have expertise in nursing, medicine, health economics, epidemiology, ethics, and technology assessment (Goeree and Levin, 2006; Ontario Health Technology Advisory Committee, n.d.). The members meet monthly to provide feedback to MAS and to provide policy recommendations to the deputy minister of health (Goeree and Levin, 2006). Based on the initial analysis produced by MAS, OHTAC can choose to request a systematic review, commonly referred to as a health technology assessment (HTA); reject the application; or request further information before making a final decision (Figure 4-2).

If OHTAC requests an HTA, then MAS, in collaboration with academic partners at the University of Toronto and McMaster University, produces the assessment within 16 weeks of the initial request. The review includes evidence relating to the technology's safety, clinical effectiveness, and efficacy (produced by MAS) as well as its cost effectiveness (produced through academic collaborations), and the quality of the evidence is rated in a manner consistent with Grading of Recommendations Assessment, Development and Evaluation (GRADE) Working Group guidelines.12 Evidence can be given a GRADE quality rating of high (future research is unlikely to have an impact on the estimate of the effect), moderate, low, or very low (any estimate of the effect is very uncertain). The GRADE rating depends on a number of factors, including the type of evidence available (e.g., randomized controlled trials vs. observational studies), the quantity of evidence (e.g., the number of studies), and the consistency of the evidence, as well as an assessment of any potential biases (Atkins et al., 2004). Using the GRADE framework provides consistency and transparency in MAS recommendations.
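As a deliberately simplified illustration of this rating logic, the sketch below uses only the factors named above (study type, quantity, consistency, and potential bias); the function and its thresholds are ours, and real GRADE assessments involve structured expert judgment across more factors.

    # Simplified, illustrative sketch of GRADE-style quality rating.
    LEVELS = ["very low", "low", "moderate", "high"]

    def grade_quality(randomized: bool, n_studies: int,
                      consistent: bool, serious_bias: bool) -> str:
        level = 3 if randomized else 1   # RCTs start "high"; observational studies start "low"
        if n_studies < 2:
            level -= 1                   # sparse evidence
        if not consistent:
            level -= 1                   # inconsistent results
        if serious_bias:
            level -= 1                   # potential bias
        return LEVELS[max(level, 0)]

    print(grade_quality(True, 5, True, False))   # "high"
    print(grade_quality(False, 1, True, True))   # "very low"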

Based on the HTA and estimates of cost effectiveness, OHTAC can either make a recommendation to the MOHLTC or, if evidence of the technology's clinical effectiveness is lacking, request that a "conditionally funded field evaluation" be performed (Figure 4-2). Following the formation of any draft recommendation, there is a period of public comment during which relevant stakeholder groups are targeted and the draft document is posted on the OHTAC website.
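The decision flow described above and pictured in Figure 4-2 can be summarized in sketch form; the function and state names below are ours, and the actual process involves committee deliberation at each step.

    # Minimal sketch of the OHTAC review flow described in the text.
    def ohtac_review(initial_analysis_warrants_hta: bool,
                     evidence_sufficient_after_hta: bool) -> str:
        if not initial_analysis_warrants_hta:
            return "reject application or request further information"
        # MAS returns a full HTA within 16 weeks of the initial request.
        if evidence_sufficient_after_hta:
            return "recommendation to MOHLTC (after public comment)"
        return "conditionally funded field evaluation"

    print(ohtac_review(True, False))  # "conditionally funded field evaluation"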

OHTAC has determined that a multifaceted approach to public engagement is preferable. For important high-demand health technologies whose evidence base is incomplete, it turns to programs that conduct field evaluations to generate that evidence.

There are several programs in Ontario, all of which are independent and largely government funded, with the capacity to carry out field evaluations of promising technologies. Because the requests for field evaluations come directly from decision makers, studies can be designed specifically to generate the evidence that is critical to making coverage or purchasing decisions. Field evaluations can take many forms, including (Levin et al., 2007)

  • RCTs,
  • observational studies (e.g., registries or cohort studies, with or without contemporaneous controls),
  • time-series studies,
  • chart reviews, and
  • multisite safety assessments.

_______________

12 See http://www.ohsu.edu/drugeffectiveness/ (accessed September 8, 2010).

OHTAC may also use information from polling studies or from the development of “Ontariorised” microeconomic policy models to inform policy decisions.

Appropriate study design is discussed by these independent entities in collaboration with experts knowledgeable about the health technology under investigation. While field evaluations are being conducted, the Ontario Health Insurance Plan covers physician costs for the medical technologies used for patients participating in the study as long as a technology is already insured (Goeree and Levin, 2006; Levin et al., 2007). If there is no fee code for the technology, alternative funding arrangements may be made. This arrangement gives patients access to emerging technology—and allows manufacturers to generate some revenue—before a long-term policy decision has been reached. Field evaluations are apportioned to one of the first two agencies described below based on existing capacity, and funding to carry out these evaluations is provided by the Ministry of Health. Each field evaluation also includes an assessment of the technology’s cost effectiveness.

Program for the Assessment of Technology in Health

The Program for the Assessment of Technology in Health (PATH), located at McMaster University and St. Joseph’s Healthcare Centre, is the longest-standing entity for government-funded field evaluations. The program has 20 staff, including 4 graduate students, research associates, a biostatistician, and university faculty (PATH Research Institute, 2008a). Four or five staff work on a given field evaluation with the help of various project consultants. In addition, PATH has been actively involved in developing master’s and doctoral-level degree programs at McMaster University in the field of HTA (PATH Research Institute, 2008b).

Over the past few years, PATH has completed several studies whose results have had a significant impact on decision making. One such study is described in Box 4-3.

BOX 4-3
Comparative Effectiveness Study Comparing Drug-Eluting Stents to Bare-Metal Stents for Treatment of Coronary Artery Disease

In 2002, the Medical Advisory Secretariat completed a gray-literature-based health technology assessment of the clinical effectiveness of drug-eluting stents (DESs) compared to bare-metal stents (BMSs). It concluded that randomized controlled trial (RCT) evidence was imminent and would likely show that a DES was more effective than a BMS, following which there would be a steep diffusion curve for DESs. However, when the initial RCT results on DESs were published later that year, there was uncertainty regarding their generalizability. The Ontario Health Technology Advisory Committee (OHTAC) therefore recommended that the Ministry of Health commission a field evaluation from the Program for the Assessment of Technology in Health (PATH). The study proposed by PATH was a prospective observational study that took advantage of both an existing provincewide registry set up by the Cardiac Care Network (CCN) of Ontario and the ability to link this registry to administrative databases housed at the Institute for Clinical Evaluative Sciences. Additional fields were added to a preexisting CCN database to facilitate a study comparing different stent designs. The objective of the study was to estimate the reduction in risk of revascularization within 2 years of treatment with a DES compared to a BMS. The study was also intended to estimate the cost effectiveness of DESs compared to BMSs. During the course of the study, hospitals were able to provide DESs free of charge to patients enrolled in the study (Bowen et al., 2007; Tu et al., 2007).

Interestingly, the study in part provided evidence for the use of DESs in some "off-label" indications and suggested that DESs may be no more effective than BMSs for many "on-label" indications. The results demonstrated an incremental benefit for DESs only in high-risk patients, defined as those who have two of three risk factors for restenosis (diabetes, small vessels, or long lesions) (Bowen et al., 2007; Tu et al., 2007). Based on these results, OHTAC recommended that DESs be used only in patients at high risk for restenosis. Data continue to be collected on patients who receive DESs, and initial estimates suggest that this controlled diffusion of DESs led to cost savings of about $20 million in 2007 and 2008 (Bowen et al., 2007). The conversion rate from BMSs to DESs in Ontario is currently estimated at 25 percent, compared to a conversion rate of 90 percent reported in the New York Times on October 21, 2006 (Feder, 2006).

Toronto Health Economics and Technology Assessment Collaboration

The Toronto Health Economics and Technology Assessment Collaboration (THETA) was established in July 2007 at the University of Toronto (Toronto Health Economics and Technology Assessment Collaborative, 2007a). The group now has 28 investigators from a variety of backgrounds, including health economists, decision analysts, biostatisticians, and health services researchers. In addition to designing and executing field evaluations, THETA is actively involved in developing classes on HTA and field evaluations at the University of Toronto (Toronto Health Economics and Technology Assessment Collaborative, 2007b). THETA is currently implementing an RCT of deep brain stimulation (DBS) for the treatment of resistant depression. Study participants (a maximum of 20 patients per year) will be randomized to receive DBS for a 12-week period or not; this randomization is then repeated once, yielding 4 treatment groups: patients who do not receive DBS, patients who receive DBS for the first 12 weeks only, patients who receive DBS for the second 12 weeks only, and patients who receive DBS for the entire 24 weeks. The main outcome measures include the effect of DBS on depressive symptoms, physical and mental health functioning, work and social adjustment, quality of life, and cognitive functioning. The study will be completed in 2011.
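To make the two-stage design concrete, here is a small sketch of the randomization scheme as described; the 50/50 assignment probabilities are our assumption, since the text does not state them.

    # Illustrative sketch of the THETA DBS trial's two-stage randomization.
    # 50/50 probabilities per 12-week period are assumed, not stated in the text.
    import random

    def assign_arm(rng: random.Random) -> str:
        first = rng.random() < 0.5    # DBS active during first 12 weeks?
        second = rng.random() < 0.5   # DBS active during second 12 weeks?
        if first and second:
            return "DBS for entire 24 weeks"
        if first:
            return "DBS for first 12 weeks only"
        if second:
            return "DBS for second 12 weeks only"
        return "no DBS"

    rng = random.Random(0)
    print([assign_arm(rng) for _ in range(4)])  # one possible allocation of 4 patients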

Institute for Clinical Evaluative Sciences

PATH and THETA are often able to increase the impact of their studies by collaborating with researchers at the Institute for Clinical Evaluative Sciences (ICES). ICES is an independent, nonprofit organization that receives core funding for its activities from the Ministry of Health; in addition to direct, project-specific funding from various provincial and national organizations, ICES faculty compete for peer-reviewed research grants (ICES, 2007). The organization has about 75 faculty members and nearly 200 staff. ICES faculty are able to link large data sets to monitor patterns of use for various drugs and medical technologies as well as patterns in quality of care (Center for Global eHealth Innovation, University Health Network, n.d.). Information from these large data sets has proven invaluable for conducting various field evaluations. For example, ICES researchers played a significant role in the PATH study comparing drug-eluting stents to bare-metal stents described in Box 4-3.

The skills of the ICES faculty plus the institution's extensive existing privacy controls make ICES the ideal repository for registry creation and subsequent data analysis. For example, ICES has created several registries to collect data related to positron emission tomography (PET) scanning. There are currently six cancer-related PET registry studies being conducted through this arrangement, as well as a registry for implantable cardiac defibrillators. In addition to the PET registry studies, five prospective clinical trials are being conducted by the Ontario Clinical Oncology Group for cancer indications. Three of these trials are ongoing, and two RCTs have recently completed patient accrual (Ministry of Health and Long Term Care, 2008).

University Health Network Usability Laboratories

The University Health Network Usability Laboratories have 15 employees, including human factors analysts and engineers, and are primarily concerned with assessing the safety of medical technologies, an important consideration for policy makers and purchasers (Center for Global eHealth Innovation, University Health Network, n.d.). The laboratories handle requests from OHTAC for information relating to a technology's ease of use, the qualifications necessary to manage it, and risks to hospital staff or patients (Levin et al., 2007). Topics currently under review by the usability laboratories include safety concerns regarding computed tomography radiation, magnetic resonance imaging, and smart infusion pumps.

Workforce Analysis for Comparative Effectiveness Network in Ontario

Personnel

The activities described above require staff from a variety of backgrounds, including health policy experts, health economists, clinical epidemiologists, biostatisticians, health services researchers, human factors analysts, and engineers, as well as physicians, nurses, hospital representatives, and information specialists. In addition, the success of this network is dependent on the willingness of university faculty and clinical experts to assist in the development of study designs and the collection of necessary data. Therefore, although there is a limited number of core staff, as described above, the system itself includes a far greater range of human resources working collaboratively to fill evidence gaps of importance to decision makers.

In addition, PATH and THETA are involved in developing workshops, classes, and degree programs at, respectively, McMaster University and the University of Toronto to meet future workforce needs. For example, McMaster University houses the Centre for Health Economics and Policy Analysis,13 which is funded by McMaster University and the Ontario Ministry of Health and offers classes in health economics and policy analysis to students from a variety of degree programs (Centre for Health Economics and Policy Analysis, n.d.). The University of Toronto offers degree programs in health technology assessment and management, HSR, and clinical epidemiology through the Department of Health Policy, Management, and Evaluation (Department of Health Policy, Management, and Evaluation, 2008).

_______________

13 Centre for Health Economics and Policy Analysis. Available at www.chepa.org/Whoweare/Centre/tabid/59/Default.aspx (accessed July 15, 2008).

Provincial Government Funding for Field Evaluations

Currently, the Ministry of Health spends CA$8 million to CA$10 million a year on field evaluations of high-demand, emerging medical technologies. This figure generally excludes technology costs, which are also paid by the Ministry of Health, as well as the cost of university and hospital-based researchers whose salaries are paid by their employers or by external granting agencies. Approximately CA$5 million of this funding is invested in the PET registries, leaving CA$3 million to CA$5 million for additional field evaluations. The higher cost of the PET registries is primarily due to the PET radioisotope being paid for from the OHTAC budget; for most other conditionally funded field evaluation projects, other government departments cover the clinical costs.

Policy Implications for the United States

Establish a Stable Funding Source to Support Comparative Effectiveness Research

Government funding for the comparative effectiveness programs established in Ontario is critical because product manufacturers often lack the incentives, and hospitals usually lack the resources, to support this research. Studies to address important unanswered questions identified by OHTAC are designed and implemented in a short time frame, primarily because a pool of resources is available to support this work. The time frame for funding decisions is also extremely short, which is essential when attempting to evaluate promising emerging technologies on a schedule that is meaningful for clinical and health policy decision making. To create a similar capacity for research aimed at addressing issues of importance to healthcare decision makers in the United States, it will be important to identify a continually available, renewable source of funding. Given the mix of public and private health insurers in the United States, it would be beneficial to adopt a system in which all health insurers are required to contribute funds to the programs. Furthermore, there will need to be a capacity for rapid decisions about the allocation of these funds to support prospective studies; standard grant review cycle times are unlikely to be adequate to support a productive comparative effectiveness enterprise in the United States or elsewhere.

Ensure That the Process Is Timely and That Evidence Generation Is Directed at Questions of Importance to Decision Makers

The process of generating evidence described in this paper is both timely and directed at the evidence needs of healthcare decision makers. Once OHTAC requests an HTA from MAS, a full systematic review is returned within 16 weeks, at which point OHTAC can decide to request a full field evaluation. Close and ongoing contact among the Ministry of Health, OHTAC, MAS, and the various programs that conduct field evaluations and economic analyses ensures that studies are responsive to the questions of importance to policy makers and potential purchasers. In Ontario, studies are designed collaboratively with input from government officials, hospital representatives, physicians, health economists, and health services researchers. Keeping decision makers involved in this process increases the likelihood that the data generated by a study will be relevant. In the United States, it will be necessary to establish efficient mechanisms for considering input from a broad range of experts and stakeholders in priority setting, protocol development, and study implementation. The methods and strategies for achieving this are not fully developed or well documented, and considerable work will be needed to build functioning mechanisms for obtaining broad input and reaching consensus around priorities and methods.

Design Programs That Are Independent from Government and Industry and Ensure That the Decision-Making Process Is Transparent

Although the government is the main source of funding for CER in Ontario, programs conducting the various field evaluations have remained independent. This independence from the Ministry of Health allows these programs to design and implement studies without unmanageable political influence and to more freely engage with consultants and experts. In addition, the fact that OHTAC is a board at “arm’s length” from the Ontario Ministry of Health keeps the recommendation process independent from the ministry, thereby separating it from the actual decision-making process.

Efforts have been made by the Ontario government and OHTAC to ensure that the entire process is open to the public. Any Ontario citizen is welcome to submit a request for an assessment of an emerging nondrug medical technology, stakeholder engagement and feedback are solicited via targeted approaches, and all decisions and the reasons for those decisions are made available via the Internet. Transparency in healthcare decision making is critical to establishing trust with the general public. Decision makers in Ontario continue to look for and adopt new methods to ensure that the public is engaged in the process. When developing a system in the United States, efforts should be made to ensure that citizens are not only aware of these efforts but also encouraged to engage in the process. Public engagement processes also need to be designed so that those with vested interests do not unduly influence decision making.

Create Partnerships Between Universities and Programs Responsible for Conducting Field Evaluations

The Ontario technology assessment network relies on partnerships between the programs conducting field evaluations and various universities, such as the University of Toronto (THETA) and McMaster University (PATH). These partnerships allow the programs to draw on the expertise of academics and physicians at the universities when designing and implementing studies. Furthermore, this connection has led to the development of classes and degree programs that will help to fill future workforce and expertise requirements. The maintenance of ongoing relationships between the Ontario Ministry of Health and academic programs that specialize in comparative effectiveness studies appears to be important for the efficiency and effectiveness of this work. This arrangement bears some similarity to the network of EPCs in the United States and to a number of similar academically based networks that develop focused expertise and relationships in order to conduct particular types of projects. It may be sensible to explore establishing a network of centers with expertise in conducting comparative effectiveness studies that maintain ongoing relationships with CMS, private payers, and a broad network of stakeholders with an interest in this subject.

Leverage Medicare’s Influence on Private Payers

It may be argued that one reason for the effectiveness of Ontario’s system is that decision making is relatively centralized compared to the situation in the United States. The payer (the MOHLTC) decides how new nondrug technologies are used in Ontario. In the United States, the existence of a large number of decision makers makes it more difficult to control the diffusion of emerging medical technologies because the technologies can enter the healthcare system through any number of private as well as public payers.

Still, although there is not one central decision maker in the United States, private payers are often influenced by Medicare's coverage decisions, though it is increasingly common for large private payers to make decisions that differ from Medicare's. The influence that Medicare wields over private coverage decisions could be leveraged to develop a comparative effectiveness network, especially if Medicare were to use the existing Medicare Evidence Development and Coverage Advisory Committee or to establish a new multistakeholder board to perform a function similar to OHTAC's. Another factor to consider is that the United States has a much larger HSR capacity than Ontario; this domestic network could be leveraged to review the evidence necessary for the production of coverage recommendations. Where uncertainty remained after a thorough review of all available evidence, Medicare could commission a "coverage with evidence development" (CED) study using government funding, a policy option already used in a number of cases (Tunis and Pearson, 2006). There has been increasing interest in private payer models of CED as well, and it would be particularly effective to have public and private payers supporting the same studies through this policy mechanism.

Methodology Implications for the United States

Draw on Existing Capacities to Support Comparative Effectiveness Research

Government funding for CER in Ontario is relatively small because MAS, PATH, and THETA are able to make use of existing capacities within the province, such as ICES and university researchers and clinicians, to help support their projects. Once these programs receive requests from OHTAC, they are able to launch studies fairly quickly and efficiently, which is critical given the rapid evolution of high-demand, emerging medical technologies.

Unlike in Ontario, where only a small number of clinical research programs are capable of performing the research needed by the Ministry of Health, in the United States there are many HSR organizations as well as an extensive network of universities and teaching hospitals that could help support a CER agenda. The mechanism used in Ontario of assigning individual projects to research programs may not be scalable to the United States, and a competitive procurement process may be more suitable.

Given the strong focus on EBM that currently exists, now is an ideal time to choose a high-demand medical technology and implement pragmatic studies to demonstrate how CER can be used to inform medical decisions. In addition, initial studies are needed to refine current methods and to inform discussions about the additional capacity required to build a comparative effectiveness network.

Invest in a Centralized Capacity to Set Up and Collect Information from Patient Registries

The Ontario network takes advantage of the existence of a separate, larger program (ICES) responsible for creating registries and cross-linking databases. Although these databases serve to address a range of policy questions other than coverage decisions, the databases and various ICES analyses are used to support many of the field evaluations designed by PATH and THETA. In addition, the ICES databases allow PATH and THETA to implement studies more quickly and at a lower cost than would be possible if these databases did not exist.

In the United States there are a number of payers, including Medicare, United Healthcare, and Blue Cross Blue Shield, that routinely collect patient information through administrative databases and registries. To make this information useful to researchers and decision makers, it would be beneficial to develop greater coordination in the collection and analysis of administrative and registry data.

Use a Combination of Research Approaches to Inform Decision Makers

The technology assessment system in Ontario relies on a number of different study designs to assess emerging technologies and address critical evidence gaps. Decision makers in Ontario rely on information from a number of sources, including systematic reviews, cost-effectiveness modeling, and (if necessary) field evaluations. In addition, when field evaluations are deemed necessary, they are designed to be responsive to the questions of policy makers and care providers and are focused on the costs and effects of the medical technology in real-world practice.

Adopting a similar approach in the United States would help to ensure that studies are directed at the decision-making process and would likely reduce the number of studies concluding that more evidence is needed before a decision can be reached.

“Globalizing” Comparative Effectiveness

Many of the evidence gaps relating to emerging technologies in Ontario have also been identified as important evidence gaps in the United States and abroad. This overlap suggests an opportunity to facilitate linkages and collaboration on activities of mutual benefit. There are lessons to be learned not only from the Ontario experience but also from the experiences of other countries. For example, a government-funded, centralized HTA program in the United Kingdom commissions studies on topics where the evidence base is limited; this program could serve as a useful model for a commissioned-research CED program housed within Medicare.

With respect to individual studies, international partnerships may be helpful, particularly for rare diseases, where the number of patients eligible for a study in any single country is small. However, international studies also have disadvantages: they may take longer to initiate; the collection, assessment, and integration of data may be complicated; and the data may not be generalizable. Furthermore, for an international collaboration to be successful, there must be agreement about appropriate study design and outcome measures.

Conclusion

There is currently great interest internationally in both comparative effectiveness and coverage with evidence development. The Ontario experience demonstrates that a significant amount of research can be accomplished for a relatively small amount of money if researchers, clinicians, and decision makers work together and make use of existing infrastructure. In the United States and throughout the world, there is high demand for information on the comparative effectiveness of emerging medical technologies, not only from payers and hospitals but also from individual clinicians and patients. Improving the capacity to make evidence-based medical decisions requires immediate action, because the pace of medical technology innovation continues to increase and, as it does, so does the list of questions that must be answered to inform decision makers.

REFERENCES

Ash, J., P. Stavri, R. Dykstra, and L. Fournier. 2003. Implementing computerized physician order entry: The importance of special people. International Journal of Medical Informatics 69:235-250.

Atkins, D., D. Best, P. A. Briss, M. Eccles, Y. Falck-Ytter, S. Flottorp, G. H. Guyatt, R. T. Harbour, M. C. Haugh, D. Henry, S. Hill, R. Jaeschke, G. Leng, A. Liberati, N. Magrini, J. Mason, P. Middleton, J. Mrukowicz, D. O’Connell, A. D. Oxman, B. Phillips, H. J. Schunemann, T. T. Edejer, H. Varonen, G. E. Vist, J. W. Williams, Jr., and S. Zaza. 2004. Grading quality of evidence and strength of recommendations. British Medical Journal 328(7454):1490.

Bastian, H. 2005. Consumer and researcher collaboration in trials: Filling the gaps. Clinical Trials 2:3-4.

Berry, D. 2006. Bayesian clinical trials. Nature Reviews Drug Discovery 5:27-36.

Bowen, J. M., R. Hopkins, M. Chiu, G. Blackhouse, C. Lazzam, D. Ko, J. Tu, E. Cohen, K. Campbell, Y. He, A. Willan, J.-E. Tarride, and R. Goeree. 2007. Clinical and cost-effectiveness analysis of drug eluting stents compared to bare metal stents for percutaneous coronary interventions in Ontario: Final report (Report no. Hta002-0705-02). Hamilton, ON: Program for the Assessment of Technology in Health, St. Joseph’s Healthcare Hamilton/McMaster University.

Buckley, T. 2007. The complexities of comparative effectiveness. Washington, DC: Biotechnology Industry Organization.

Califf, R. 2006. Clinical trials bureaucracy: Unintended consequences of well-intentioned policy. Clinical Trials 3:496-502.

Center for Global eHealth Innovation, University Health Network. n.d. Healthcare human factors group. www.ehealthinnovation.org/?q=hhf (accessed July 3, 2008).

Centre for Health Economics and Policy Analysis. n.d. http://www.chepa.org/Whoweare/Centre/tabid/59/Default.aspx (accessed July 15, 2008).

Chaudhry, B., J. Wang, S. Wu, M. Maglione, W. Mojica, E. Roth, S. Morton, and P. Shekelle. 2006. Systematic review: Impact of health information technology on quality, efficiency, and costs of medical care. Annals of Internal Medicine 144:742-752.

Chou, R., N. Aronson, D. Atkins, A. S. Ismaila, P. Santaguida, D. H. Smith, E. Whitlock, T. J. Wilt, and D. Moher. 2010. AHRQ Series Paper 4: Assessing harms when comparing medical interventions: AHRQ and the Effective Health-Care Program. Journal of Clinical Epidemiology 63:502-512.

Cohen, A., W. Hersh, K. Peterson, and P. Yen. 2006. Reducing workload in systematic review preparation using automated citation classification. Journal of the American Medical Informatics Association 13:206-219.

Committee to Evaluate Drugs. 2007a. Terms of reference and administrative guidelines. Ontario: Ministry of Health and Long-Term Care.

———. 2007b. Drug innovation fund to advance research into value of medicines. Approved funding for research proposals from 2007/08 review cycle. Ontario: Ministry of Health and Long-Term Care.

Dall, T., A. Grover, C. Roehrig, M. Bannister, S. Eisenstein, C. Fulper, and J. Cultice. 2006. Physician supply and demand: Projections to 2020. Washington, DC: Health Resources and Services Administration.

Department of Health Policy, Management, and Evaluation. 2008. Course descriptions. http://www.hpme.utoronto.ca/about/gradprograms/msc-htam/courses.htm (accessed July 15, 2008).

Drummond, M., J. Schwartz, B. Jönsson, B. Luce, P. Neumann, U. Siebert, and S. Sullivan. 2008. Key principles for the improved conduct of health technology assessments for resource allocation decisions. International Journal of Technology Assessment in Health Care 24:244-258.

Eardley, T. 2006. NHS informatics workforce survey. London, UK: The Association for Informatics Professionals in Health and Social Care.

EHR (Electronic Health Record) Adoption Model. 2008. The EHR adoption model. Chicago, IL: Healthcare Information Management and Systems Society.

Ellis, P., C. Baker, and M. Hanger. 2007. Research on the comparative effectiveness of medical treatments: Issues and options for an expanded federal role. Washington, DC: Congressional Budget Office.

Emanuel, E., V. Fuchs, and A. Garber. 2007. Essential elements of a technology and outcomes assessment initiative. Journal of the American Medical Association 298:1323-1325.

Feder, B. 2006. Doctors rethink widespread use of heart stents. New York Times. October 21. http://www.nytimes.com/2006/10/21/business/21stent.html (accessed July 2, 2008).

Fletcher, R., and S. Fletcher. 2005. Clinical epidemiology: The essentials, 4th ed. Baltimore, MD: Lippincott Williams & Wilkins.

Forrest, C., A. Millman, J. Hines, and E. Holve. 2005. Health services research competencies: Final report. Baltimore, MD: Johns Hopkins Bloomberg School of Public Health.

Forrest, C. B., D. P. Martin, E. Holve, and A. Millman. 2009. Health services research doctoral core competencies. BMC Health Services Research 9(1):107.

Fridsma, D., J. Evans, S. Hastak, and C. Mead. 2008. The BRIDG project: A technical report. Journal of the American Medical Informatics Association 15:130-137.

Gabler, J. 2003. 2003 integrated delivery system IT budget and staffing study results. Stamford, CT: Gartner Corp.

Gartlehner, G., R. Hansen, D. Nissman, K. Lohr, and T. Carey. 2006. A simple and valid tool distinguished efficacy from effectiveness studies. Journal of Clinical Epidemiology 59:1040-1048.

Goeree, R., and L. Levin. 2006. Building bridges between academic research and policy formulation: The PRUFE framework—An integral part of Ontario’s evidence-based HTPA process. Pharmacoeconomics 24(11):1143-1156.

Goodman, D. 2008. Improving accountability for the public investment in health profession education: It’s time to try health workforce planning. Journal of the American Medical Association 300:1205-1207.

Han, Y., J. Carcillo, S. Venkataraman, R. Clark, R. Watson, T. Nguyen, H. Bayir, and R. Orr. 2005. Unexpected increased mortality after implementation of a commercially sold computerized physician order entry system. Pediatrics 116:1506-1512.

Haynes, R., D. Sackett, G. Guyatt, and P. Tugwell. 2005. Clinical epidemiology: How to do clinical practice research, 3rd ed. Baltimore, MD: Lippincott Williams & Wilkins.

Helfand, M. 2005. Using evidence reports: Progress and challenges in evidence-based decision making. Health Affairs 24:123-127.

Helfand, M., S. Morton, E. Guallar, and C. Mulrow. 2005. Challenges of summarizing better information for better health: The evidence-based practice center experience. Annals of Internal Medicine 142(12, Pt. 2).

Hersh, W. 2002. Medical informatics: Improving health care through information. Journal of the American Medical Association 288:1955-1958.

———. 2006. Who are the informaticians? What we know and should know. Journal of the American Medical Informatics Association 13:166-170.

———. 2008. Health and biomedical informatics: Opportunities and challenges for a twenty-first century profession and its education. In IMIA yearbook of medical informatics 2008, edited by A. Geissbuhler and C. Kulikowski. Stuttgart, Germany: Schattauer. Pp. 138-145.

Hersh, W., and J. Williamson. 2007. Educating 10,000 informaticians by 2010: The AMIA 10 × 10 program. International Journal of Medical Informatics 76:377-382.

Hersh, W., and A. Wright. 2008. What workforce is needed to implement the health information technology agenda? An analysis from the HIMSS Analytics database. Paper read at AMIA Annual Symposium Proceedings, Washington, DC.

ICES (Institute for Clinical Evaluative Sciences). 2007. Knowledge igniting change: 2007 annual report. http://www.ices.on.ca/file/Annual_Report_2007.pdf (accessed July 3, 2008).

Iglehart, J. K. 2000. Revisiting the Canadian health care system. New England Journal of Medicine 342(26):2007-2012.

IOM (Institute of Medicine). 1990. Clinical practice guidelines: Directions for a new program. Washington, DC: National Academy Press.

———. 2007. The learning healthcare system: Workshop summary. Washington, DC: The National Academies Press.

———. 2008. Knowing what works in health care: A roadmap for the nation. Washington, DC: The National Academies Press.

Kirschner, N., S. G. Pauker, and J. W. Stubbs. 2008. Information on cost-effectiveness: An essential product of a national comparative effectiveness program. Annals of Internal Medicine 148:956-961.

Langston, A., M. McCallum, M. Campbell, C. Robertson, and S. Ralston. 2005. An integrated approach to consumer representation and involvement in a multicentre randomized controlled trial. Clinical Trials 2:80-87.

Levin, L., R. Goeree, N. Sikich, B. Jorgensen, M. C. Brouwers, T. Easty, and C. Zahn. 2007. Establishing a comprehensive continuum from an evidentiary base to policy development for health technologies: The Ontario experience. International Journal of Technology Assessment in Health Care 23(3):299-309.

Leviss, J., R. Kremsdorf, and M. Mohaideen. 2006. The CMIO: A new leader for health systems. Journal of the American Medical Informatics Association 13:573-578.

Lewis, S., C. Donaldson, C. Mitton, and G. Currie. 2001. The future of health care in Canada. British Medical Journal 323(7318):926-929.

Luce, B., L. Paramore, B. Parasuraman, B. Liljas, and G. deLissovoy. 2008. Can managed care organizations partner with manufacturers for comparative effectiveness research? American Journal of Managed Care 14:149-156.

Methods Reference Guide. 2008. Methods reference guide for effectiveness and comparative effectiveness reviews. Rockville, MD: Agency for Healthcare Research and Quality.

Ministry of Health and Long-Term Care. 2008. Bulletin: Accessing positron emission tomography (PET) studies. http://www.health.gov.on.ca/english/providers/program/ohip/bulletins/4000/bul4477.pdf (accessed July 3, 2008).

Moher, D., A. Tsertsvadze, A. Tricco, M. Eccles, J. Grimshaw, M. Sampson, and N. Barrowman. 2007. A systematic review identified few methods and strategies describing when and how to update systematic reviews. Journal of Clinical Epidemiology 60:1095-1104.

Mullin, T. 2007 (April 17). Statement of Theresa Mullin, Ph.D., Assistant Commissioner for Planning, Food and Drug Administration. Congressional Record, D507-D508. Washington, DC: Subcommittee on Health, Committee on Energy and Commerce, U.S. House of Representatives.

Ontario Health Technology Advisory Committee. n.d. OHTAC membership. http://www.health.gov.on.ca/english/providers/program/ohtac/committee.html (accessed July 10, 2008).

PATH (Programs for Assessment of Technology in Health) Research Institute. 2008a. Meet our Team. http://www.path-hta.ca/team.htm (accessed July 2, 2008).

———. 2008b. HTA educational learning program. http://www.path-hta.ca/help.htm (accessed July 1, 2008).

Reboussin, D., and M. Espeland. 2005. The science of Web-based clinical trial management. Clinical Trials 2:1-2.

Ricketts, T. 2007. Developing the health services research workforce. Washington, DC: AcademyHealth.

Safran, C., and D. Detmer. 2005. Computerized physician order entry systems and medication errors. Journal of the American Medical Association 294:179.

Safran, C., M. Bloomrosen, W. Hammond, S. Labkoff, S. Markel-Fox, P. Tang, and D. Detmer. 2007. Toward a national framework for the secondary use of health data: An American Medical Informatics Association white paper. Journal of the American Medical Informatics Association 14:1-9.

Shaffer, V., and J. Lovelock. 2007. Results of the 2006 Gartner-AMDIS survey of CMIOs: Bridging healthcare’s transforming waters. Stamford, CT: Gartner.

Shekelle, P., E. Ortiz, S. Rhodes, S. Morton, M. Eccles, J. Grimshaw, and S. Woolf. 2001. Validity of the Agency for Healthcare Research and Quality clinical practice guidelines: How quickly do guidelines become outdated? Journal of the American Medical Association 286:1461-1467.

Shojania, K., M. Sampson, M. Ansari, J. Ji, S. Doucette, and D. Moher. 2007. How quickly do systematic reviews go out of date? A survival analysis. Annals of Internal Medicine 147:224-233.

Sittig, D., J. Ash, J. Zhang, J. Osheroff, and M. Shabot. 2006. Lessons from “unexpected increased mortality after implementation of a commercially sold computerized physician order entry system.” Pediatrics 118:797-801.

Slutsky, J., D. Atkins, S. Chang, and B. A. Collins Sharp. 2010. AHRQ Series Paper 1: Comparing medical interventions: AHRQ and the Effective Health-Care Program. Journal of Clinical Epidemiology 63(5):481-483.

Swirsky, L., and L. Cook. 2008. Comparative effectiveness: Better value for the money? Washington, DC: Alliance for Health Reform.

Tierney, M., and B. Manns. 2008. Optimizing the use of prescription drugs in Canada through the common drug review. Canadian Medical Association Journal 178(4):432-435.

Toronto Health Economics and Technology Assessment Collaborative. 2007a. Research. http://theta.utoronto.ca/research (accessed July 1, 2008).

———. 2007b. Education. http://theta.utoronto.ca/static/education (accessed July 1, 2008).

Tu, J. V., J. Bowen, M. Chiu, D. T. Ko, P. C. Austin, Y. He, R. Hopkins, J. E. Tarride, G. Blackhouse, C. Lazzam, E. A. Cohen, and R. Goeree. 2007. Effectiveness and safety of drug-eluting stents in Ontario. New England Journal of Medicine 357(14):1393-1402.

Tunis, S. 2007. Comparative effectiveness: Basic terms and concepts. San Francisco, CA: Center for Medical Technology Policy.

Tunis, S. R., and S. D. Pearson. 2006. Coverage options for promising technologies: Medicare’s “coverage with evidence development.” Health Affairs 25(5):1218-1230.

Tunis, S. R., D. Stryer, and C. Clancy. 2003. Practical clinical trials—Increasing the value of clinical research for decision making in clinical and health policy. Journal of the American Medical Association 290:1624-1632.

Weissberg, J. 2007. Use of large system databases. In The Learning Healthcare System: Workshop Summary, edited by L. Olsen, D. Aisner, and J. McGinnis. Washington, DC: The National Academies Press. Pp. 46-50.

Wilensky, G. 2006. Developing a center for comparative effectiveness information. Health Affairs 25:w572-w585.

Woolf, S. 2008. The meaning of translational research and why it matters. Journal of the American Medical Association 299:211-213.

Zerhouni, E. 2007. Translational research: Moving discovery to practice. Clinical Pharmacology and Therapeutics 81:126-128.
