4


The Talent Required

INTRODUCTION

Most of the activities integral to comparative effectiveness research (CER) have been conducted on a small scale over the past several decades; yet, meeting an increased demand for CER and the efficient translation and application of CER findings requires more than simply expanding existing programs and infrastructure. In addition to incorporating the new structures, systems, and elements of health information technologies (HITs) into current practice, innovative new approaches will be needed to drive improvements in both research and practice. Work will be increasingly interdisciplinary—requiring coordination and cooperation across professions and healthcare sectors. One of the key themes of workshop discussion was the need for increased funding and support for training a workforce to meet the unique needs of developing and applying comparative effectiveness information.

Papers in this chapter were presented in draft form at the workshop to begin to characterize the workforce needs for the emerging discipline of CER.1 William R. Hersh and colleagues explore the heterogeneous set of activities that contribute to the field of CER and define key workforce components and related training requirements. CER will draw its workforce from a variety of backgrounds—clinical medicine, clinical epidemiology, biomedical informatics, biostatistics, and health policy—and settings, including academic units, university centers, contract research organizations, government, and industry. A key challenge will be developing programs to foster interdisciplinary and cross-sector approaches.

To provide an example of how different workforce elements might be best organized and engaged in a system focused on developing and applying clinical effectiveness information, Sean R. Tunis and colleagues present an overview of a program for health interventions assessment in Ontario, Canada. A direct link between decision makers and CER entities facilitates research timeliness and a clear focus on the information needs of decision makers. Ontario’s experience provides insights on how the United States might best expand CER capacity, offers a model for developing an integrated workforce that addresses important organizational and funding issues, and suggests some possible efficiencies to be gained through international cooperation.

_______________

1 Comments of workshop reactor panel participants guided the development of the manuscript by Hersh and colleagues presented in this chapter. Sector perspective panelists included Jean Paul Gagnon (sanofi-aventis), Bruce H. Hamory (Geisinger Health System), Steve E. Phurrough (Centers for Medicare & Medicaid Services), and Robert J. Temple (Food and Drug Administration). Panelists commenting on training and education needs included Eric B. Bass (Johns Hopkins University), Timothy S. Carey (University of North Carolina at Chapel Hill), Don E. Detmer (American Medical Informatics Association), David H. Hickam (Eisenberg Center), and Richard N. Shiffman (Yale University).



COMPARATIVE EFFECTIVENESS WORKFORCE—FRAMEWORK AND ASSESSMENT

William R. Hersh, M.D., Oregon Health and Science University; Timothy S. Carey, M.D., M.P.H., University of North Carolina; Thomas Ricketts, Ph.D., University of North Carolina; Mark Helfand, M.D., M.P.H., Oregon Health and Science University; Nicole Floyd, M.P.H., Oregon Health and Science University; Richard N. Shiffman, M.D., M.C.I.S., Yale University; David H. Hickam, M.D., M.P.H., Oregon Health and Science University2

_______________

2 We thank the following individuals who provided comments, critiques, and additions to early versions of this report: Mark Doescher, M.D., M.P.H., University of Washington; Erin Holve, Ph.D., AcademyHealth; Marian McDonagh, Pharm.D., Oregon Health & Science University; Lloyd Michener, M.D., Duke University; Cynthia Morris, Ph.D., Oregon Health & Science University; LeighAnne Olsen, Ph.D., Institute of Medicine; Robert Reynolds, Sc.D., Pfizer Corp.; Robert Schuff, M.S., Oregon Health & Science University; Carol Simon, Ph.D., The Lewin Group; Brian Strom, M.D., M.P.H., University of Pennsylvania; Jonathan Weiner, Dr.P.H., Johns Hopkins University.

Overview

There have been increasing calls for a better understanding of “what works” in health care (IOM, 2008), driven by a system that allows for learning and improvement based on such an understanding (IOM, 2007).

One of the means for assessing what works is CER. The AcademyHealth Methods Council defines CER as “research studies that compare one or more diagnostic or treatment options to evaluate effectiveness, safety, or outcomes” (EHR Adoption Model, 2008). The goals of this report are to define the many components of CER, to explore the necessary training requirements for a CER workforce, and to provide a framework for developing a strategy for future workforce development.

The objective of CER is to provide a sustainable, replicable approach to identifying effective clinical services (IOM, 2008). However, although the term CER is widely used, there is no consensus on how best to achieve this objective, and there is little understanding of the challenges involved in meeting it. There is, for example, wide disagreement about the importance of its different components. The Institute of Medicine (IOM) committee on “knowing what works in health care” emphasizes the central role of comparative effectiveness reviews as a critical linkage between evidence-based medicine (EBM) and practice guidelines, coverage decision making, clinical practice, and health policy (IOM, 2008), whereas Tunis views the knowledge of CER as deriving from practical clinical trials that compare interventions head to head in real clinical settings (Tunis, 2007). The IOM Roundtable on Value & Science-Driven Health Care expands the notion of CER to include other forms of learning about health care (IOM, 2007), such as the growing amount of data derived from secondary sources, including electronic health record (EHR) systems, which feeds other analyses, such as health services research (HSR). This knowledge in turn drives the development and implementation of clinical practice guidelines, benefits coverage decisions, and allows the general dissemination of knowledge to practitioners, policy makers, and patients. The ideal learning health system will feed back knowledge from these activities to inform continued CER.

While some organizations take an optimistic view of the benefits that CER can bring to improving the quality and cost-effectiveness of health care (Swirsky and Cook, 2008), others sound a more cautionary note. CER will not occur without political and economic ramifications. For example, the Congressional Budget Office notes that CER might lower the cost of health care, but only if it is accompanied by changes in the incentives for providers and patients to use new, more expensive technologies even when they are not proven to be better than less expensive ones (Ellis et al., 2007). A report from the Biotechnology Industry Organization raises concerns that population-based studies may obscure benefits to individual patients or groups and that even in the absence of statistically significant differences among interventions, some individuals may benefit more from some treatments than others (Buckley, 2007). Finally, many argue that CER could turn out to be ineffective unless it is funded and conducted independently of the federal executive branch by a dedicated new entity (Emanuel et al., 2007; Kirschner et al., 2008; Wilensky, 2006).

In the United States, a clear leader in CER has been the Agency for Healthcare Research and Quality (AHRQ). The AHRQ research portfolio includes evidence-based practice centers (EPCs) (Helfand et al., 2005), which perform comparative effectiveness reviews—that is, syntheses of existing research on the effectiveness, comparative effectiveness, and comparative harms of different healthcare interventions (Slutsky et al., 2010). The work of the EPCs feeds AHRQ’s Effective Health Care Program,3 which also supports original CER through the Developing Evidence to Inform Decisions about Effectiveness network and via dissemination through the John M. Eisenberg Clinical Decisions and Communications Science Center (Eisenberg Center). AHRQ has also made a substantial investment in funding HIT projects to improve the quality and safety of healthcare delivery. The agency also funds health services research as well as pre- and postdoctoral training and career development (K awards) in all of these areas.

Another potential venue for increased CER is the effort by the National Institutes of Health (NIH) to promote clinical and translational research (Zerhouni, 2007). While many think of clinical and translational research as “bench to bedside” (i.e., moving tests and treatments from the lab into the clinical setting), the NIH and others have taken a broader view. With the traditional bench-to-bedside translational research labeled as “T1,” other types of translation are defined as well, such as “T2” (assessing the effectiveness of care shown to be efficacious in controlled settings, or bedside to population) and “T3” (delivering care with quality and accountability) (Woolf, 2008). NIH has sponsored many trials that qualify as CER, and although this type of research is not a primary focus for the agency, the training needed to conduct CER overlaps that of T2 and T3 translation. Thus the Clinical and Translational Science Awards (CTSA) initiative greatly expands the clinical research training needed to conduct CER.4 As CER absorbs researchers and staff, however, it may also compete with other types of research programs in T1 and some T2 areas. Over the past 3 years, the NIH has awarded funding to 38 CTSA centers, with a goal for an eventual steady state of 60 centers. These centers aim to speed the translation of research from the laboratory to clinical implementation and to the community. The work of CER, examining the effectiveness of treatments in real-world settings, including watching for harms to patients with multiple comorbidities, is highly relevant to the CTSA initiative.

_______________

3 For more information, see http://effectivehealthcare.ahrq.gov (accessed September 8, 2010).

4 Since this paper was originally authored, the 2009 American Recovery and Reinvestment Act provided $1.1 billion of funds for activities related to CER—including $400 million to the Office of the Secretary of the Department of Health and Human Services, $300 million to AHRQ, and $400 million to the NIH.

One challenge for CER is that it currently exists as a heterogeneous field rather than a specific discipline. While this heterogeneity is probably appropriate to the status of CER as an emerging field of study and effort, it also makes planning for its workforce needs challenging. Investigators and staff in CER come from many backgrounds, including clinical medicine, clinical epidemiology, biomedical informatics, biostatistics, and health policy. They work in a number of settings, including academic units, university centers, contract research organizations, government, and industry. It is not known how well the capacity of the current workforce would absorb any sort of marked increase in demand for CER activities. Finally, there is no specific entity that funds CER, despite calls for one (Wilensky, 2006).

Nonetheless, a variety of stakeholders must have access to the best comparative information about medical tests and treatments (Drummond et al., 2008). Physicians need to be able to assess the benefits and harms of various clinical decisions for their patients, who themselves are becoming increasingly involved in decision making. Likewise, policy makers must weigh the evidence for, and against coverage of, increasingly expensive technologies, especially when marginal costs vastly exceed marginal benefits. Therefore this report was approached with the assumption that CER should be encouraged as part of the larger learning health system.

The authors of this report, leaders with expertise in major known areas of CER, were recruited to define the scope of CER, answer a set of questions concerning the workforce, and work together to develop a framework and a plan for future work. The first task was to achieve consensus among ourselves for defining the components of CER. The next task was to develop a framework for enumerating the workforce and to propose an agenda for defining its required size, skill set, and educational requirements. A draft of this report was presented at the workshop described in this proceedings on July 30–31, 2008. A reactor panel provided some initial feedback, and subsequently more experts were contacted, all of whom are listed in footnotes 1 and 2. This led to finalization of the framework and agenda for further research and policy making related to the CER workforce.

Framework for Comparative Effectiveness Research Workforce Characterization

The scope of CER was defined by developing a figure that depicts the subareas of CER and that is organized around the flow of information and knowledge. Next a preliminary model was developed for how workforce needs might be quantified.

The knowledge and challenges in each area were elaborated, followed by a discussion of the issues that will arise with efforts to expand the scope and capacity of CER.

As illustrated in Figure 4-1, information and knowledge originate from clinical trials and other clinical research studies, particularly studies using registries, EHRs, practice network data, and pharmacoepidemiologic studies. This information is synthesized in comparative effectiveness reviews and technology assessments, sometimes including meta-analyses, decision analyses, or economic analyses, which inform the development of evidence-based clinical guidelines and decisions about coverage. HSR evaluates the optimal delivery and the societal health and economic effects of the corresponding changes in the health system. Finally, the information and knowledge are disseminated to both patients and professionals. Each of these components cycles back to its predecessors, and the continuously learning health system maintains a constant interaction among them.

It was also recognized that there are many areas of overlap among the components. For example, experts in biomedical informatics can work synergistically with clinical epidemiologists to determine data requirements and information needs for CER studies. Likewise, clinical guideline developers and implementers can collaborate with health services researchers in technology assessment.

Characterization of Specific Components of the Workforce

The next task was to develop a framework for enumerating the workforce and to make some estimates of its necessary size. Each author was assigned one of the major components of Figure 4-1 and asked to address the workforce needs in that particular area, taking into account the following questions:

1. What are the issues and problems for the workforce at present?
2. What skill set is needed to address current issues and problems?
3. Where are these skills currently developed or obtained?
4. What will be the projected needs as CER scales up in healthcare settings? Do we need more people? Do we need to further develop current capacity? What are the training needs?
5. What are the recommendations for assessing and measuring the needs for the current and future workforce?

Clinical Epidemiology

A core concept underlying CER is that there is a continuum that begins with research evidence, then moves to systematic review of the overall body of evidence, and then to the interpretation of the strength of the overall evidence that can be used for developing credible clinical practice guidelines (IOM, 2008).

FIGURE 4-1 Key activity domains for comparative effectiveness research (clinical epidemiology/pharmacoepidemiology/evidence-based medicine; biomedical informatics; health services research; guideline development and implementation; communications/dissemination). Workforce development will be critical to support the many primary functions within each of these domains as well as to foster the cross-domain interactions and activities identified (e.g., methods development, identifying information needs).

While they overlap with other disciplines, the skills required to conduct CER are not widely taught. This section focuses on the four types of research involved in CER analyses as well as the personnel needed to conduct those analyses: (1) practical clinical trials and conventional clinical research, (2) systematic evidence reviews and technology assessment, (3) pharmacoepidemiologic research, and (4) clinical epidemiology methods research.

Practical Clinical Trials and Conventional Clinical Research

A wide variety of studies are useful in CER (Chou et al., 2010). Most would agree, however, that increasing the amount of CER will require expanding the capability for conducting practical, head-to-head “effectiveness” trials. Such trials are distinct from the so-called efficacy or explanatory clinical trials performed in the regulatory approval process. Explanatory trials, which focus on comparison with placebo treatments in highly selected subjects, are a necessary step in evaluating new therapies, but they are usually not an adequate guide for clinical practice. It can be difficult to determine from such trials—and from the systematic reviews that aggregate them—what the “best” treatments are. In contrast, effectiveness trials, such as practical clinical trials, compare treatments in a head-to-head manner in settings that can be applied to real-world clinical practice. The characteristics that distinguish effectiveness from explanatory (efficacy) studies are listed in Box 4-1 (Gartlehner et al., 2006).

BOX 4-1
Characteristics Distinguishing Effectiveness from Explanatory Studies

1. Populations in primary care or general population
2. Less stringent eligibility criteria
3. Health outcomes
4. Long study duration; clinically relevant treatment modalities
5. Assessment of adverse events
6. Adequate sample size

SOURCE: Gartlehner et al. (2006).

Tunis and colleagues note a number of disincentives to performing head-to-head comparisons of treatments, such as the disease-oriented nature of the NIH and the commercial motivations of pharmaceutical and other companies (Tunis et al., 2003). Indeed, few such trials have been performed. In a recent survey, Luce and colleagues were able to identify fewer than 20 such trials in the literature (Luce et al., 2008). A frequently stated goal for comparative effectiveness is to grow the number of effectiveness trials performed each year to 50. As discussed below, accomplishing this goal will require methodological advances in designing and conducting studies as well as training programs devoted to this new type of clinical trial research.

Because so few effectiveness trials have been performed, training in how to design and conduct them is not widely available. While there is overlap, the expertise and the team composition required for practical clinical trials differ from what is required for smaller efficacy trials. For example, practical clinical trials will need to use streamlined, more efficient procedures for recruitment and monitoring than large efficacy trials use (Califf, 2006). They should take advantage, for instance, of Web-based tools for trial management and the potential for using EHR systems to identify, recruit, and allocate subjects to treatment arms within and across health systems (Bastian, 2005; Langston et al., 2005; Reboussin and Espeland, 2005). They also need to develop methods for involving consumers and, for trials conducted in practice networks, office-based clinicians in the design and conduct of trials. Finally, some practical trials require specialized statistical skills (Berry, 2006).
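To make the EHR-based recruitment idea above concrete, the sketch below screens a hypothetical EHR extract against deliberately broad eligibility criteria and randomizes eligible patients between two arms. This is a minimal illustration only: the record fields, diagnosis code, laboratory threshold, and arm names are invented for this example and are not drawn from any trial or system described in this chapter.

import random

# Hypothetical EHR extract; in practice these records would come from a
# data warehouse or EHR query service rather than a hard-coded list.
ehr_records = [
    {"patient_id": "P001", "age": 67, "diagnosis_codes": {"E11.9"}, "egfr": 55},
    {"patient_id": "P002", "age": 44, "diagnosis_codes": {"I10"},   "egfr": 90},
    {"patient_id": "P003", "age": 71, "diagnosis_codes": {"E11.9"}, "egfr": 32},
    {"patient_id": "P004", "age": 58, "diagnosis_codes": {"E11.9"}, "egfr": 74},
]

def is_eligible(record):
    """Deliberately broad criteria, in the spirit of a practical (effectiveness) trial."""
    has_condition = "E11.9" in record["diagnosis_codes"]   # illustrative type 2 diabetes code
    adequate_renal_function = record["egfr"] >= 45          # single safety-based exclusion
    return has_condition and adequate_renal_function and record["age"] >= 18

def allocate(records, arms=("drug_A", "drug_B"), seed=42):
    """Randomly allocate eligible patients to treatment arms (simple randomization)."""
    rng = random.Random(seed)
    return {r["patient_id"]: rng.choice(arms) for r in records if is_eligible(r)}

if __name__ == "__main__":
    for patient_id, arm in allocate(ehr_records).items():
        print(patient_id, "->", arm)

In a real practical trial the eligibility logic would be clinically reviewed, allocation would typically be stratified or blocked, and assignments would be recorded in the trial management system rather than printed.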

Comparative Effectiveness Reviews and Technology Assessments

Comparative effectiveness reviews are a cornerstone of evidence-based decision making (Helfand, 2005). These reviews follow the explicit principles of systematic reviews, but they are more comprehensive and multidisciplinary, requiring a wider range of expertise. As noted in the EPC Guide to Conducting Comparative Effectiveness Reviews, comparative effectiveness reviews “expand the scope of a typical systematic review, which focuses on the effectiveness of a single intervention, by comparing the relative benefits and harms among a range of available treatments or interventions for a given condition. In doing so, [comparative effectiveness reviews] more closely parallel the decisions facing clinicians, patients, and policy makers, who must choose among a variety of alternatives in making diagnostic, treatment, and healthcare delivery decisions” (Methods Reference Guide, 2008). While some technology assessments are similar in scope to a comparative effectiveness review, most are smaller, more focused reviews that require a narrower range of expertise.

Within the emerging, somewhat poorly defined field of CER, conducting comparative effectiveness reviews and technology assessments is the most developed component. In contrast with other components of CER, guiding principles and explicit guidance for the conduct of comparative effectiveness reviews are available and are widely used. Examples include guidance tools from the UK National Institute for Health and Clinical Excellence (NICE)5 and the recently released EPC Guide (Methods Reference Guide, 2008).

The underlying disciplines for conducting CER are clinical epidemiology and clinical medicine. Individual comparative effectiveness reviews are usually conducted by project teams led by a project principal investigator under the oversight of a center director. The center director must have exceptional, in-depth disciplinary knowledge and skills in the underlying core disciplines of clinical epidemiology, clinical medicine, and medical decision making. The director should also have applied experience in addition to theoretical knowledge of these areas. For example, it is essential that the director have experience working with guideline panels, coverage committees, health plans, consumer groups, and other bodies that use evidence in decision making. Without such leadership, comparative effectiveness reviews may miss the mark, failing to address the information needs of the target audiences. It is also important that the director, or other senior investigators, have experience conducting clinical research studies and not just appraising them.

Qualifications for center directors generally include an M.D. degree with additional training leading to a master’s degree plus a record of academic productivity representing outstanding contributions in a field such as clinical research design, literature synthesis, statistics, pharmacoepidemiology, or medical decision making. The most important competencies of the project leader are an understanding of clinical research study designs and clinical decision making. Collectively, the project leader and other investigators and staff must have expertise in various areas, such as interviewing experts (including patients) to identify important questions for the review to address, protocol development, project management, literature retrieval and searching, formal methods to assess the quality and applicability of studies, critical appraisal of studies, quantitative synthesis, and medical writing.

This workforce can be characterized based on the experience of the AHRQ EPCs. Through the Effective Health Care Program, the EPCs have completed 15 CERs over a period of approximately 3 years. The average cost of an AHRQ CER is $250,000 to $350,000, depending on its complexity. In these centers, investigators usually have a Ph.D. in epidemiology, pharmacoepidemiology, or biostatistics, or an M.D. with research fellowship training and a master’s degree in a pertinent field.

_______________

5 See http://www.nice.org.uk/ (accessed September 8, 2010).

Ideally, all participants should have experience conducting systematic reviews and an understanding of methodological research in the area of systematic reviews, clinical epidemiology, meta-analysis, or cost-effectiveness analysis. Most importantly, they should have the ability to work with healthcare decision makers who need information to make more informed decisions; they should be able to formulate problems carefully, often working with technical experts (including patients and clinicians) to develop an analytic framework and key questions addressing uncertainties that underlie controversy or variation in practice; they should have a broad view of eligible evidence, one that recognizes that the kinds of evidence included in a review depend on the kinds of questions asked and on what kinds of evidence are available to answer them; and they should understand that while systematic reviews do not in themselves dictate decisions, they can play a valuable role in helping decision makers clarify what is known as well as unknown about the issues surrounding important decisions and, in that way, affect both policy and clinical practice (Helfand, 2005).

Also required for systematic reviews are research librarians who have skills in finding evidence for systematic reviews using electronic bibliographic databases, citation-tracking resources, regulatory agency data repositories, practice guidelines, unpublished scientific research, Web sites and proprietary databases, bibliographic reviews, expert referrals, and publications of meeting proceedings, as well as hand-searching of key journals. Statisticians are needed who have skills in providing advice and critique on the statistical methods used in published and unpublished clinical studies; in conducting statistical analyses, including meta-analysis and other standard analysis and computation; and in preparing statistical reports, including figures and tables. EPCs also require editors who can improve the readability and standardization of evidence reports. In addition, EPCs require research support staff. Research associates must have the ability to critically assess the effectiveness and safety of medical interventions; experience with systematic reviews of the medical literature; knowledge of the fundamentals of epidemiology, study design, and biostatistics; facility in conceptualizing and structuring tasks; and experience with clinical research methods. Research assistants need skills in maintaining bibliographies; coordinating peer review contacts and documents; and assisting in the development of summary reports, figures, tables, and final reports using particular style guidelines. Table 4-1 shows the typical staffing for a CER evidence report funded by AHRQ for a 1-year period.

Although the number of systematic reviews that is necessary may be among the easier of the “how much” questions to ask, there is no clear answer. The Cochrane Collaboration6 originally estimated a need

_______________

6 See http://www.cochrane.org/ (accessed September 8, 2010).
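As a minimal illustration of the quantitative synthesis skills described above for EPC statisticians, the sketch below computes an inverse-variance (fixed-effect) pooled estimate from study-level effect sizes. The study names, log relative risks, and standard errors are invented for this example; real comparative effectiveness reviews would typically use dedicated meta-analysis software and would also consider random-effects models and heterogeneity.

import math

# Invented example data: log relative risks and standard errors from three
# hypothetical trials comparing two active treatments head to head.
studies = [
    {"name": "Trial A", "log_rr": -0.22, "se": 0.10},
    {"name": "Trial B", "log_rr": -0.05, "se": 0.08},
    {"name": "Trial C", "log_rr": -0.30, "se": 0.15},
]

def fixed_effect_pool(studies):
    """Inverse-variance pooled estimate with a 95% confidence interval (on the log scale)."""
    weights = [1.0 / s["se"] ** 2 for s in studies]
    pooled = sum(w * s["log_rr"] for w, s in zip(weights, studies)) / sum(weights)
    pooled_se = math.sqrt(1.0 / sum(weights))
    return pooled, (pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se)

if __name__ == "__main__":
    pooled, (lo, hi) = fixed_effect_pool(studies)
    # Exponentiate to report on the relative-risk scale.
    print("Pooled RR: %.2f (95%% CI %.2f to %.2f)" % (math.exp(pooled), math.exp(lo), math.exp(hi)))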

recently completed patient accrual (Ministry of Health and Long Term Care, 2008).

University Health Network Usability Laboratories

The University Health Network Usability Laboratories have 15 employees, including human factors analysts and engineers, and are primarily concerned with assessing the safety of medical technologies, which is an important consideration for policy makers and purchasers (Center for Global eHealth Innovation University Health Network, n.d.). The laboratories handle requests from OHTAC for information relating to the ease of use of the technology, qualifications necessary to manage the technology, or risks to hospital staff or patients (Levin et al., 2007). Several topics currently under review by the usability laboratories include safety concerns regarding computed tomography radiation, magnetic resonance imaging, and smart infusion pumps.

Workforce Analysis for Comparative Effectiveness Network in Ontario

Personnel

The activities described above require staff from a variety of backgrounds, including health policy experts, health economists, clinical epidemiologists, biostatisticians, health services researchers, human factors analysts, and engineers, as well as physicians, nurses, hospital representatives, and information specialists. In addition, the success of this network is dependent on the willingness of university faculty and clinical experts to assist in the development of study designs and the collection of necessary data. Therefore, although there is a limited number of core staff, as described above, the system itself includes a far greater range of human resources working collaboratively to fill evidence gaps of importance to decision makers.

In addition, PATH and THETA are involved in developing workshops, classes, and degree programs at, respectively, McMaster University and the University of Toronto to meet future workforce needs. For example, McMaster University has the Centre for Health Economics and Policy Analysis,13 which is funded by McMaster University and the Ontario Ministry of Health. The centre offers classes in health economics and policy analysis to students from a variety of degree programs (Centre for Health Economics and Policy Analysis, n.d.).

_______________

13 Centre for Health Economics and Policy Analysis. Available at www.chepa.org/Whoweare/Centre/tabid/59/Default.aspx (accessed July 15, 2008).

The University of Toronto offers degree programs in health technology assessment and management, HSR, and clinical epidemiology through the Department of Health Policy, Management, and Evaluation (Department of Health Policy, Management, and Evaluation, 2008).

Provincial Government Funding for Field Evaluations

Currently, the Ministry of Health spends CA$8 million to CA$10 million a year on field evaluations for high-demand, emerging medical technologies. Technology costs are generally excluded from this figure, but they are also paid for by the Ministry of Health. This figure also excludes the cost of university and hospital-based researchers whose salaries are paid for by their employers or by external granting agencies. Approximately CA$5 million of this funding is invested in the PET registries, leaving CA$3 million to CA$5 million for additional field evaluations. The higher cost of the PET registries is primarily due to the costs of the PET radioisotope being paid for from the OHTAC budget. For most conditionally funded field evaluation projects, other government departments cover the clinical costs.

Policy Implications for the United States

Establish a Stable Funding Source to Support Comparative Effectiveness Research

Government funding for the comparative effectiveness programs established in Ontario is critical, because product manufacturers often lack the incentives and hospitals usually lack the resources to support this research. Studies to address important unanswered questions identified by OHTAC are designed and implemented in a short time frame, primarily because a pool of resources is available to support this work. It is also worth noting that the time frame for funding decisions is extremely short, which is essential when attempting to evaluate promising emerging technologies on a time frame that is meaningful for clinical and health policy decision making.

To create a similar capacity for conducting research aimed at addressing issues of importance to healthcare decision makers in the United States, it is important to identify a continually available, renewable source of funding. Since there is a mix of public and private health insurers in the United States, it would be beneficial to adopt a system where all health insurers were required to contribute funds to the programs. Furthermore, there will need to be a capacity for rapid decisions about allocation of these funds to support prospective studies. Standard grant review cycle times are unlikely to be adequate to support a productive comparative effectiveness enterprise in the United States or elsewhere.

Ensure That the Process Is Timely and Directed and That Evidence Generation Is Directed at Questions of Importance to Decision Makers

The process of generating evidence described in this paper is both timely and directed at the evidence needs of healthcare decision makers. Once OHTAC requests an HTA from MAS, a full systematic review is returned within 16 weeks, at which time OHTAC can decide to request a full field evaluation. This close and ongoing contact between the Ministry of Health, OHTAC, MAS, and the various programs that conduct field evaluations and economic analyses ensures that studies are responsive to the questions of importance to policy makers and potential purchasers. In Ontario, studies are designed collaboratively with input from government officials, hospital representatives, physicians, health economists, and health services researchers. Keeping decision makers involved in this process increases the likelihood that the data generated by the study will be relevant. In the United States, it will be necessary to establish efficient mechanisms for considering input from a broad range of experts and stakeholders in priority setting, protocol development, and study implementation. The methods and strategies for achieving this are not fully developed or well documented, and considerable work will be necessary in order to achieve functioning mechanisms to obtain broad input and to achieve consensus around priorities and methods.

Design Programs That Are Independent from Government and Industry and Ensure That the Decision Making Process Is Transparent

Although the government is the main source of funding for CER in Ontario, programs conducting the various field evaluations have remained independent. This independence from the Ministry of Health allows these programs to design and implement studies without unmanageable political influence and to more freely engage with consultants and experts. In addition, the fact that OHTAC is a board at “arm’s length” from the Ontario Ministry of Health keeps the recommendation process independent from the ministry, thereby separating it from the actual decision-making process.

Efforts have been made by the Ontario government and OHTAC to ensure that the entire process is open to the public. Any Ontario citizen is welcome to submit a request for an assessment of an emerging nondrug medical technology, stakeholder engagement and feedback are solicited via targeted approaches, and all decisions and the reasons for those decisions are made available via the Internet. Transparency in healthcare decision making is critical to establishing trust from the general public. Decision makers in Ontario continue to look for and adopt new methods to ensure that the public is engaged in the process.

When developing a system in the United States, efforts should be made to ensure that citizens are not only aware of these efforts but also encouraged to engage in the process. Public engagement processes also need to be designed so that those with vested interests do not unduly influence decision making.

Create Partnerships Between Universities and Programs Responsible for Conducting Field Evaluations

The Ontario technology assessment network relies on partnerships between programs conducting field evaluations and various universities, such as the University of Toronto (THETA) and McMaster University (PATH). This partnership allows these programs to draw on the expertise of academics and physicians working at these universities when designing and implementing various studies. Furthermore, this connection has led to the development of classes and degree programs that will help to fill future workforce and expertise requirements. The maintenance of ongoing relationships between the Ontario Ministry of Health and academic programs that specialize in comparative effectiveness studies appears to be important for the efficiency and effectiveness of this work. This bears some similarity to the network of EPCs in the United States and a number of similar academically based networks that develop focused expertise and relationships in order to conduct particular types of projects. It may be sensible to explore the establishment of a network of centers with expertise in conducting comparative effectiveness studies that maintain ongoing relationships with CMS, private payers, and a broad network of stakeholders with an interest in this subject.

Leverage Medicare’s Influence on Private Payers

It may be argued that one reason for the effectiveness of Ontario’s system is that decision making is relatively centralized compared to the situation in the United States. The payer (the MOHLTC) decides how new nondrug technologies are used in Ontario. In the United States, the existence of a large number of decision makers makes it more difficult to control the diffusion of emerging medical technologies because the technologies can enter the healthcare system through any number of private as well as public payers.

Still, although there is not one central decision maker in the United States, private payers are often influenced by Medicare’s coverage decisions, though it is increasingly common that large private payers make decisions that differ from those of Medicare. The influence that Medicare wields on private coverage decisions could be leveraged to develop a comparative effectiveness network, especially if Medicare were to use the existing Medicare Evidence Development and Coverage Advisory Committee or to establish a new multistakeholder board to perform a function similar to OHTAC.

Another factor to consider is that the United States has a much larger HSR capacity than Ontario; this domestic network could be leveraged to review the evidence necessary for the production of coverage recommendations. Where uncertainty remained after a thorough review of all available evidence, Medicare could commission a “coverage with evidence development” (CED) study using government funding, a policy option already used in a number of cases (Tunis and Pearson, 2006). There has been increasing interest in private payer models of CED as well, and it would be particularly effective to have public and private payers supporting the same studies using this policy mechanism.

Methodology Implications for the United States

Draw on Existing Capacities to Support Comparative Effectiveness Research

Government funding for CER in Ontario is relatively small because MAS, PATH, and THETA are able to make use of existing capacities within the province, such as ICES and university researchers and clinicians, to help support their projects. Once these programs receive requests from OHTAC, they are able to launch studies fairly quickly and efficiently, which is critical given the rapid evolution of high-demand, emerging medical technologies. Unlike in Ontario, where only a small number of clinical research programs are capable of performing the research needed by the Ministry of Health, in the United States there are many HSR organizations as well as an extensive network of universities and teaching hospitals that could help support a CER agenda. The mechanism used in Ontario of assigning individual projects to research programs may not be scalable to the United States, and a competitive procurement process may be more suitable.

With the strong focus on EBM that currently exists, now is an ideal time to choose a high-demand medical technology and implement pragmatic studies in order to demonstrate how CER can be used to inform medical decisions. In addition, initial studies are necessary to refine current methods and inform discussions about the additional capacity necessary to build a comparative effectiveness network.

Invest in a Centralized Capacity to Set Up and Collect Information from Patient Registries

The Ontario network takes advantage of the existence of a separate, larger program (ICES) responsible for creating registries and cross-linking databases.

Although these databases serve to address a range of policy questions other than coverage decisions, the databases and various ICES analyses are used to support many of the field evaluations designed by PATH and THETA. In addition, the ICES databases allow PATH and THETA to implement studies more quickly and at a lower cost than would otherwise be possible if these databases did not exist.

In the United States there are a number of payers, including Medicare, United Healthcare, and Blue Cross Blue Shield, that routinely collect patient information through administrative databases and registries. To make this information useful to researchers and decision makers, it would be beneficial to develop greater coordination in the work of collecting and analyzing administrative and registry data.
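As a minimal sketch of the kind of registry and administrative data linkage described above, the following example joins a hypothetical device registry to hypothetical hospitalization claims on a shared patient identifier and counts a target event by device. The data, field names, and event definition are invented for illustration and do not describe ICES or any specific payer database.

from collections import defaultdict

# Invented registry extract: one row per device implant procedure.
registry = [
    {"patient_id": "A100", "device": "stent_X", "implant_date": "2007-03-01"},
    {"patient_id": "A200", "device": "stent_Y", "implant_date": "2007-04-15"},
    {"patient_id": "A300", "device": "stent_X", "implant_date": "2007-05-20"},
]

# Invented administrative claims: hospitalizations with a diagnosis label.
claims = [
    {"patient_id": "A100", "admit_date": "2007-06-10", "diagnosis": "restenosis"},
    {"patient_id": "A300", "admit_date": "2007-07-02", "diagnosis": "fracture"},
    {"patient_id": "A300", "admit_date": "2007-09-14", "diagnosis": "restenosis"},
]

def count_events_by_device(registry, claims, event_diagnosis="restenosis"):
    """Link claims to registry records on patient_id and count a target event per device."""
    claims_by_patient = defaultdict(list)
    for claim in claims:
        claims_by_patient[claim["patient_id"]].append(claim)

    counts = defaultdict(int)
    for record in registry:
        for claim in claims_by_patient.get(record["patient_id"], []):
            # A real analysis would also require the claim to follow the implant date
            # within a defined follow-up window; omitted here for brevity.
            if claim["diagnosis"] == event_diagnosis:
                counts[record["device"]] += 1
    return dict(counts)

if __name__ == "__main__":
    print(count_events_by_device(registry, claims))

In practice this kind of linkage also requires privacy safeguards, agreed-upon identifiers, and validation of the event definitions against clinical records.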

Use a Combination of Research Approaches to Inform Decision Makers

The technology assessment system in Ontario relies on a number of different study designs to assess emerging technologies and address critical evidence gaps. Decision makers in Ontario rely on information from a number of sources, including systematic reviews, cost-effectiveness modeling, and (if necessary) field evaluations. In addition, when field evaluations are deemed necessary, they are designed to be responsive to the questions of policy makers and care providers and are focused on the costs and effects of the medical technology in real-world practice. Adopting a similar approach in the United States would help to ensure that studies are directed at the decision-making process and will likely reduce the number of studies concluding that more evidence is needed before a decision can be reached.

“Globalizing” Comparative Effectiveness

Many of the evidence gaps relating to emerging technologies in Ontario have also been identified as important evidence gaps in the United States and abroad. This overlap suggests that there is an opportunity to facilitate linkages and collaboration for activities of mutual benefit. There are lessons to be learned not only from the Ontario experience but also from those in other countries. For example, a government-funded, centralized HTA program in the United Kingdom commissions studies on topics where the evidence base is limited. This program could serve as a useful model for a commissioned-research CED program housed within Medicare.

With respect to individual studies, international partnerships may be helpful, particularly for rare diseases where the number of patients eligible for a study in any single country is small. However, international studies also have disadvantages: they may take longer to initiate; the collection, assessment, and integration of data may be complicated; and the data may not be generalizable. Furthermore, in order for an international collaboration to be successful, there must be agreement about appropriate study design and outcome measures.

Conclusion

There is currently great interest internationally in both comparative effectiveness and coverage with evidence development. The Ontario experience demonstrates that a significant amount of research can be achieved for a relatively small amount of money if researchers, clinicians, and decision makers work together and make use of existing infrastructure. In the United States and throughout the world, there is high demand for information on the comparative effectiveness of emerging medical technologies, not only for payers and hospitals but also for individual clinicians and patients. Beginning to improve the capacity to make evidence-based medical decisions requires immediate action because the pace of medical technology innovation continues to increase, and, as it does, so does the list of questions that need to be answered in order to inform decision makers.

REFERENCES

Ash, J., P. Stavri, R. Dykstra, and L. Fournier. 2003. Implementing computerized physician order entry: The importance of special people. International Journal of Medical Informatics 69:235-250.

Atkins, D., D. Best, P. A. Briss, M. Eccles, Y. Falck-Ytter, S. Flottorp, G. H. Guyatt, R. T. Harbour, M. C. Haugh, D. Henry, S. Hill, R. Jaeschke, G. Leng, A. Liberati, N. Magrini, J. Mason, P. Middleton, J. Mrukowicz, D. O’Connell, A. D. Oxman, B. Phillips, H. J. Schunemann, T. T. Edejer, H. Varonen, G. E. Vist, J. W. Williams, Jr., and S. Zaza. 2004. Grading quality of evidence and strength of recommendations. British Medical Journal 328(7454):1490.

Bastian, H. 2005. Consumer and researcher collaboration in trials: Filling the gaps. Clinical Trials 2:3-4.

Berry, D. 2006. Bayesian clinical trials. Nature Reviews Drug Discovery 5:27-36.

Bowen, J. M., R. Hopkins, M. Chiu, G. Blackhouse, C. Lazzam, D. Ko, J. Tu, E. Cohen, K. Campbell, Y. He, A. Willan, J.-E. Tarride, and R. Goeree. 2007. Clinical and cost-effectiveness analysis of drug eluting stents compared to bare metal stents for percutaneous coronary interventions in Ontario: Final report (Report no. Hta002-0705-02). Hamilton, ON: Program for the Assessment of Technology in Health, St. Joseph’s Healthcare Hamilton/McMaster University.

Buckley, T. 2007. The complexities of comparative effectiveness. Washington, DC: Biotechnology Industry Organization.

Califf, R. 2006. Clinical trials bureaucracy: Unintended consequences of well-intentioned policy. Clinical Trials 3:496-502.

Center for Global eHealth Innovation University Health Network. n.d. Healthcare human factors group. www.ehealthinnovation.org/?q=hhf (accessed July 3, 2008).

Centre for Health Economics and Policy Analysis. n.d. http://www.chepa.org/Whoweare/Centre/tabid/59/Default.aspx (accessed July 15, 2008).

Chaudhry, B., J. Wang, S. Wu, M. Maglione, W. Mojica, E. Roth, S. Morton, and P. Shekelle. 2006. Systematic review: Impact of health information technology on quality, efficiency, and costs of medical care. Annals of Internal Medicine 144:742-752.

Chou, R., N. Aronson, D. Atkins, A. S. Ismaila, P. Santaguida, D. H. Smith, E. Whitlock, T. J. Wilt, and D. Moher. 2010. AHRQ Series Paper 4: Assessing harms when comparing medical interventions: AHRQ and the Effective Health-Care Program. Journal of Clinical Epidemiology 63:502-512.

Cohen, A., W. Hersh, K. Peterson, and P. Yen. 2006. Reducing workload in systematic review preparation using automated citation classification. Journal of the American Medical Informatics Association 13:206-219.

Committee to Evaluate Drugs. 2007a. Terms of reference and administrative guidelines. Ontario: Ministry of Health and Long-Term Care.

———. 2007b. Drug innovation fund to advance research into value of medicines. Approved funding for research proposals from 200/0 review cycle. Ontario: Ministry of Health and Long-Term Care.

Dall, T., A. Grover, C. Roehrig, M. Bannister, S. Eisenstein, C. Fulper, and J. Cultice. 2006. Physician supply and demand: Projections to 2020. Washington, DC: Health Resources and Services Administration.

Department of Health Policy, Management, and Evaluation. 2008. Course descriptions. http://www.hpme.utoronto.ca/about/gradprograms/msc-htam/courses.htm (accessed July 15, 2008).

Drummond, M., J. Schwartz, B. Jönsson, B. Luce, P. Neumann, U. Siebert, and S. Sullivan. 2008. Key principles for the improved conduct of health technology assessments for resource allocation decisions. International Journal of Technology Assessment in Health Care 24:244-258.

Eardley, T. 2006. NHS Informatics Workforce Survey. London, UK: The Association for Informatics Professionals in Health and Social Care.

EHR (Electronic Health Record) Adoption Model. 2008. The EHR adoption model. Chicago, IL: Healthcare Information and Management Systems Society.

Ellis, P., C. Baker, and M. Hanger. 2007. Research on the comparative effectiveness of medical treatments: Issues and options for an expanded federal role. Washington, DC: Congressional Budget Office.

Emanuel, E., V. Fuchs, and A. Garber. 2007. Essential elements of a technology and outcomes assessment initiative. Journal of the American Medical Association 298:1323-1325.

Feder, B. 2006. Doctors rethink widespread use of heart stents. New York Times. October 21. http://www.nytimes.com/2006/10/21/business/21stent.html (accessed July 2, 2008).

Fletcher, R., and S. Fletcher. 2005. Clinical epidemiology: The essentials, 4th ed. Baltimore, MD: Lippincott Williams & Wilkins.

Forrest, C., A. Millman, J. Hines, and E. Holve. 2005. Health services research competencies: Final report. Baltimore, MD: Johns Hopkins Bloomberg School of Public Health.

Forrest, C. B., D. P. Martin, E. Holve, and A. Millman. 2009. Health services research doctoral core competencies. BMC Health Services Research 9(1):107.

Fridsma, D., J. Evans, S. Hastak, and C. Mead. 2008. The BRIDG project: A technical report. Journal of the American Medical Informatics Association 15:130-137.

Gabler, J. 2003. 200 integrated delivery system IT budget and staffing study results. Stamford, CT: Gartner Corp.

Gartlehner, G., R. Hansen, D. Nissman, K. Lohr, and T. Carey. 2006. A simple and valid tool distinguished efficacy from effectiveness studies. Journal of Clinical Epidemiology 59:1040-1048.

Goeree, R., and L. Levin. 2006. Building bridges between academic research and policy formulation: The PRUFE framework—An integral part of Ontario’s evidence-based HTPA process. Pharmacoeconomics 24(11):1143-1156.

Goodman, D. 2008. Improving accountability for the public investment in health profession education: It’s time to try health workforce planning. Journal of the American Medical Association 300:1205-1207.

Han, Y., J. Carcillo, S. Venkataraman, R. Clark, R. Watson, T. Nguyen, H. Bayir, and R. Orr. 2005. Unexpected increased mortality after implementation of a commercially sold computerized physician order entry system. Pediatrics 116:1506-1512.

Haynes, R., D. Sackett, G. Guyatt, and P. Tugwell. 2005. Clinical epidemiology: How to do clinical practice research, 3rd ed. Baltimore, MD: Lippincott Williams & Wilkins.

Helfand, M. 2005. Using evidence reports: Progress and challenges in evidence-based decision making. Health Affairs 24:123-127.

Helfand, M., S. Morton, E. Guallar, and C. Mulrow. 2005. Challenges of summarizing better information for better health: The evidence-based practice center experience. Annals of Internal Medicine 142(12, Pt. 2).

Hersh, W. 2002. Medical informatics: Improving health care through information. Journal of the American Medical Association 288:1955-1958.

———. 2006. Who are the informaticians? What we know and should know. Journal of the American Medical Informatics Association 13:166-170.

———. 2008. Health and biomedical informatics: Opportunities and challenges for a twenty-first century profession and its education. In IMIA yearbook of medical informatics 200, edited by A. Geissbuhler and C. Kulikowski. Stuttgart, Germany: Schattauer. Pp. 138-145.

Hersh, W., and J. Williamson. 2007. Educating 10,000 informaticians by 2010: The AMIA 10 × 10 program. International Journal of Medical Informatics 76:377-382.

Hersh, W., and A. Wright. 2008. What workforce is needed to implement the health information technology agenda? An analysis from the HIMSS Analytics database. Paper read at AMIA Annual Symposium Proceedings, Washington, DC.

ICES (Institute for Clinical Evaluative Sciences). 2007. Knowledge igniting change: 200 annual report. http://www.ices.on.ca/file/Annual_Report_2007.pdf (accessed July 3, 2008).

Iglehart, J. K. 2000. Revisiting the Canadian health care system. New England Journal of Medicine 342(26):2007-2012.

IOM (Institute of Medicine). 1990. Clinical practice guidelines: Directions for a new program. Washington, DC: National Academy Press.

———. 2007. The learning healthcare system: Workshop summary. Washington, DC: The National Academies Press.

———. 2008. Knowing what works in health care: A roadmap for the nation. Washington, DC: The National Academies Press.

Kirschner, N., S. G. Pauker, and J. W. Stubbs. 2008. Information on cost-effectiveness: An essential product of a national comparative effectiveness program. Annals of Internal Medicine 148:956-961.

Langston, A., M. McCallum, M. Campbell, C. Robertson, and S. Ralston. 2005. An integrated approach to consumer representation and involvement in a multicentre randomized controlled trial. Clinical Trials 2:80-87.

Levin, L., R. Goeree, N. Sikich, B. Jorgensen, M. C. Brouwers, T. Easty, and C. Zahn. 2007. Establishing a comprehensive continuum from an evidentiary base to policy development for health technologies: The Ontario experience. International Journal of Technology Assessment in Health Care 23(3):299-309.

Leviss, J., R. Kremsdorf, and M. Mohaideen. 2006. The CMIO: A new leader for health systems. Journal of the American Medical Informatics Association 13:573-578.

Lewis, S., C. Donaldson, C. Mitton, and G. Currie. 2001. The future of health care in Canada. British Medical Journal 323(7318):926-929.

Luce, B., L. Paramore, B. Parasuraman, B. Liljas, and G. deLissovoy. 2008. Can managed care organizations partner with manufacturers for comparative effectiveness research? American Journal of Managed Care 14:149-156.

Methods Reference Guide. 2008. Methods reference guide for effectiveness and comparative effectiveness reviews. Rockville, MD: Agency for Healthcare Research and Quality.

Ministry of Health and Long Term Care. 2008. Bulletin: Accessing positron emission tomography (PET) studies. http://www.health.gov.on.ca/english/providers/program/ohip/bulletins/4000/bul4477.pdf (accessed July 3, 2008).

Moher, D., A. Tsertsvadze, A. Tricco, M. Eccles, J. Grimshaw, M. Sampson, and N. Barrowman. 2007. A systematic review identified few methods and strategies describing when and how to update systematic reviews. Journal of Clinical Epidemiology 60:1095-1104.

Mullin, T. 2007 (April 17). Statement of Theresa Mullin, Ph.D., Assistant Commissioner for Planning, Food and Drug Administration. Congressional Record, D507-D508. Washington, DC: Subcommittee on Health, Committee on Energy and Commerce, U.S. House of Representatives.

Ontario Health Technology Advisory Committee. n.d. OHTAC membership. http://www.health.gov.on.ca/english/providers/program/ohtac/committee.html (accessed July 10, 2008).

PATH (Programs for Assessment of Technology in Health) Research Institute. 2008a. Meet our team. http://www.path-hta.ca/team.htm (accessed July 2, 2008).

———. 2008b. HTA educational learning program. http://www.path-hta.ca/help.htm (accessed July 1, 2008).

Reboussin, D., and M. Espeland. 2005. The science of Web-based clinical trial management. Clinical Trials 2:1-2.

Ricketts, T. 2007. Developing the health services research workforce. Washington, DC: AcademyHealth.

Safran, C., and D. Detmer. 2005. Computerized physician order entry systems and medication errors. Journal of the American Medical Association 294:179.

Safran, C., M. Bloomrosen, W. Hammond, S. Labkoff, S. Markel-Fox, P. Tang, and D. Detmer. 2007. Toward a national framework for the secondary use of health data: An American Medical Informatics Association white paper. Journal of the American Medical Informatics Association 14:1-9.

Shaffer, V., and J. Lovelock. 2007. Results of the 200 Gartner-AMDIS survey of CMIOs: Bridging healthcare’s transforming waters. Stamford, CT: Gartner.

Shekelle, P., E. Ortiz, S. Rhodes, S. Morton, M. Eccles, J. Grimshaw, and S. Woolf. 2001. Validity of the Agency for Healthcare Research and Quality clinical practice guidelines: How quickly do guidelines become outdated? Journal of the American Medical Association 286:1461-1467.

Shojania, K., M. Sampson, M. Ansari, J. Ji, S. Doucette, and D. Moher. 2007. How quickly do systematic reviews go out of date? A survival analysis. Annals of Internal Medicine 147:224-233.

Sittig, D., J. Ash, J. Zhang, J. Osheroff, and M. Shabot. 2006. Lessons from “unexpected increased mortality after implementation of a commercially sold computerized physician order entry system.” Pediatrics 118:797-801.

Slutsky, J., D. Atkins, S. Chang, and B. A. Collins Sharp. 2010. AHRQ Series Paper 1: Comparing medical interventions: AHRQ and the Effective Health-Care Program. Journal of Clinical Epidemiology 63(5):481-483.

Swirsky, L., and L. Cook. 2008. Comparative effectiveness: Better value for the money? Washington, DC: Alliance for Health Reform.

Tierney, M., and B. Manns. 2008. Optimizing the use of prescription drugs in Canada through the common drug review. Canadian Medical Association Journal 178(4):432-435.

Toronto Health Economics and Technology Assessment Collaborative. 2007a. Research. http://theta.utoronto.ca/research (accessed July 1, 2008).

———. 2007b. Education. http://theta.utoronto.ca/static/education (accessed July 1, 2008).

Tu, J. V., J. Bowen, M. Chiu, D. T. Ko, P. C. Austin, Y. He, R. Hopkins, J. E. Tarride, G. Blackhouse, C. Lazzam, E. A. Cohen, and R. Goeree. 2007. Effectiveness and safety of drug-eluting stents in Ontario. New England Journal of Medicine 357(14):1393-1402.

Tunis, S. 2007. Comparative effectiveness: Basic terms and concepts. San Francisco, CA: Center for Medical Technology Policy.

Tunis, S. R., and S. D. Pearson. 2006. Coverage options for promising technologies: Medicare’s “coverage with evidence development.” Health Affairs 25(5):1218-1230.

Tunis, S. R., D. Stryer, and C. Clancy. 2003. Practical clinical trials—Increasing the value of clinical research for decision making in clinical and health policy. Journal of the American Medical Association 290:1624-1632.

Weissberg, J. 2007. Use of large system databases. In The learning healthcare system: Workshop summary, edited by L. Olsen, D. Aisner, and J. McGinnis. Washington, DC: The National Academies Press. Pp. 46-50.

Wilensky, G. 2006. Developing a center for comparative effectiveness information. Health Affairs 25:w572-w585.

Woolf, S. 2008. The meaning of translational research and why it matters. Journal of the American Medical Association 299:211-213.

Zerhouni, E. 2007. Translational research: Moving discovery to practice. Clinical Pharmacology and Therapeutics 81:126-128.