9
Clinical Investigators and Evaluators

Coordinator


Richard Platt, Harvard Medical School and Harvard Pilgrim Health Care


Other Contributors


Carolyn Clancy, Agency for Healthcare Research and Quality; Elizabeth DuPre, AEI-Brookings Joint Center for Regulatory Studies; David Helms, AcademyHealth; Rae-Ellen Kavey, National Heart, Lung, and Blood Institute; Cato Laurencin, University of Virginia; Mark McClellan, Brookings Institution; Patricia Pittman, AcademyHealth; Jean Slutsky, Agency for Healthcare Research and Quality; Don Steinwachs, Johns Hopkins University

SECTOR OVERVIEW

The discussion in this chapter reflects the perspectives of clinical investigators and evaluators in determining whether, how well, for whom, and at what cost prevention and treatment strategies work and on methods for ensuring their use. Its major focus is on evidence generation, which must occur in clinical and community settings rather than under tightly controlled experimental conditions. The authors of this chapter note that appropriately targeted clinical research has driven rapid changes in prevention and treatment practices; examples include the management of diabetes and the use of postmenopausal hormone replacement therapy. They also note that the topics addressed here form a continuum with population healthcare practices, especially primary prevention, that address many of the same clinical conditions. Many of the same considerations apply to those activities, and a complete plan to create a learning healthcare system should be developed in concert with the population healthcare stakeholders.

Evidence generation and evaluation in real-life situations span health services research and clinical research, including effectiveness, efficacy, and implementation research. The term “effectiveness research” refers to the








examination of the benefit of an intervention when it is used under ordinary circumstances, including evaluations with broader patient populations and in broad healthcare delivery settings, and the term “comparative effectiveness research” refers to the evaluation of the relative risks and benefits of competing therapies (Learning What Works Best, 2007). Both of these terms stand in contrast to “efficacy studies,” which evaluate the impact of a therapy under optimal conditions. The term “implementation research” refers to the assessment of methods used to promote the application of knowledge in routine practice and, hence, to improve the quality of care. It looks specifically at the determinants and outcomes of different processes and strategies by using theories and models derived from clinical research, program evaluation, and behavioral, organizational, and management research. These types of inquiry span the domains of health services research and clinical research.

“Health services research” is often used as an umbrella term to refer to the multidisciplinary field that studies how social factors, financing systems, organizational structures and processes, healthcare technologies, and personal behaviors affect access to care, the cost and quality of health care, and, ultimately, health and well-being. Its domains include individuals, families, organizations, institutions, communities, and populations (Lohr and Steinwachs, 2002). In 2007 an estimated 13,000 individuals were engaged in health services research; these individuals came from many different disciplines, including epidemiology, biostatistics, physiology, decision theory, sociology, psychology, cognitive science, communications, and economics (Institute of Medicine, 1994; Moore and McGinnis, 2007).
One current interdisciplinary focus is on bringing applied research closer to clinical practice, the so-called second translational block of bedside-to-practice research. Such research aims to improve the scientific basis for clinical practice as well as accelerate the identification and adoption of best practices, and it will be an increasingly important dimension of health services research design and analysis (Ricketts, 2007).

The term “clinical research” refers to the study of the safety and effectiveness of a particular intervention or set of interventions for patient outcomes. Just as the patient outcomes assessed may be broad, ranging from disease end points to levels of satisfaction, the interventions may also range from a diagnostic test or specific treatment to the organization of interventions or prevention strategies. As a result, the clinical investigators (e.g., physicians, nurses, dentists, pharmacists) who make up a substantial proportion of health services researchers may self-identify as clinical investigators rather than health services researchers.

The impact of clinical research depends on the effective dissemination and adoption of the findings of that research. Currently, dissemination often

depends on publication in peer-reviewed journals and the incorporation of these published findings into clinical practice guidelines and other clinical decision-making aids. Many different organizations and disciplines publish and develop guidelines, and the approaches that the various guideline developers use vary considerably. Groups such as the Grading of Recommendations Assessment, Development and Evaluation Working Group and Appraisal of Guidelines Research and Evaluation have formed to develop standards for the syntheses of clinical evidence and the development of clinical practice guidelines (Learning What Works Best, 2007). Information about clinical practice guidelines can be found at the National Guideline Clearinghouse (http://www.guidelines.gov) and the Guidelines International Network (http://www.g-i-n.net).

Infrastructure and Support

Most researchers and research are funded on a project-by-project basis. Public-sector support comes largely from the U.S. Department of Health and Human Services, which includes the National Institutes of Health (NIH), the Agency for Healthcare Research and Quality (AHRQ), the Centers for Disease Control and Prevention (CDC), the Centers for Medicare and Medicaid Services (CMS), and the Food and Drug Administration (FDA), and from the Veterans Health Administration (VHA). AHRQ’s Effective Health Care Program includes its Evidence-Based Practice Centers, which synthesize existing information; the DEcIDE (Developing Evidence to Inform Decisions on Effectiveness) centers, which conduct research to fill knowledge gaps; and the Eisenberg Center, which communicates findings. AHRQ also supports the Centers for Education and Research on Therapeutics. Additionally, AHRQ supports practice-based research networks to foster research that provides generalizable findings, as well as the Accelerating Change and Transformation in Organizations and Networks program.
NIH’s Clinical and Translational Science Awards (CTSA) Consortium includes as one of its goals the conduct of research in practice settings and the dissemination of research findings to clinical practice (Thornton and Brown, 2007), although the magnitude of its support for these CTSA activities has not yet been determined. NIH’s Division for Application of Research Discovery and its Roadmap project include programs that develop translational and clinical research. Additionally, several individual NIH institutes support robust programs in health services research.

CDC’s Division of Healthcare Quality Promotion leads a variety of research programs, including ones that target care in hospitals; its Immunization Safety Office is the home of the Vaccine Safety Datalink, which has developed novel methods for the routine use of the healthcare data that it collects to assess vaccine safety. CDC, which is the nation’s principal health

statistics agency, also maintains several national data resources, including vital statistics, data from health examinations, and data from health interview surveys.

Other public agencies also conduct health services research. CMS sponsors research and demonstration programs to align payment with quality. FDA supports postmarketing programs to assess the safety and, to a lesser extent, the benefits of therapeutic agents. VHA supports an array of clinical research and technology assessment programs, including its Quality Enhancement Research Initiative, and it actively uses the information derived from its electronic medical records to inform both health policy and clinical practice.

In the private sector, academic organizations, healthcare product developers, insurers, healthcare delivery organizations, and professional societies also sponsor research. Several groups perform technology assessments; examples include BlueCross BlueShield Association’s Technology Evaluation Center, the ECRI Institute, Hayes, Inc., the Institute for Clinical and Economic Review, and The Cochrane Collaboration. The HMO Research Network is a consortium of 15 health plan-based public-domain research groups that work cooperatively on effectiveness and other research (Learning What Works Best, 2007).

Funding Levels and Trends

It is difficult to ascertain the total national expenditure on clinical effectiveness research, but the annual appropriations to the federal agencies noted above that are specifically identified for health services research total about $1.5 billion (Coalition for Health Services Research, 2006). Data are not currently available on the direct expenditures on clinical effectiveness research that private organizations make.
In a review of health services research projects that began between 2000 and 2005, Thornton and Brown found that 34 percent were funded by foundations, 19 percent by AHRQ, and the remainder by NIH and other federal agencies (Thornton and Brown, 2007). Funding by foundations and NIH increased steadily over this period, with NIH becoming the lead federal funder, whereas the number of projects funded by AHRQ and other federal agencies decreased. These trends are independent of those for health services research that is identified as clinical research, data for which are not readily available.

Whatever the specific annual total, the national investment in clinical effectiveness research (health services research plus relevant clinical research) is less than half a percent of all healthcare expenditures (Kupersmith et al., 2005; Moses et al., 2005; Sung et al., 2003). The amount for comparative effectiveness research is even smaller.

ACTIVITY CATEGORIES

The work of clinical investigators and health services researchers may include evaluations of specific healthcare interventions, evaluations of interventions that improve individual and population health, cost-benefit analyses, decision analysis and modeling, and organizational studies conducted to reduce a healthcare organization’s liability risk or to determine whether a healthcare organization meets accreditation standards. These studies may be quantitative or qualitative and may employ a variety of experimental designs, surveys, focus groups, and record reviews. The principal activities of the clinical investigation and evaluation sector relevant to the development and application of evidence fall into these broad research and evaluation categories:

• clinical trial design, implementation, and coordination;
• registry design, management, and coordination;
• database development and use, including hypothesis testing and data mining;
• evidence synthesis;
• development of standards of evidence;
• development of methods to stimulate the adoption of evidence-based practice;
• evaluation of the application of evidence in clinical practice;
• methodology development; and
• modeling and simulation studies.

Current Methodological Approaches

Some of the methodological approaches are illustrated in Figure 9-1. Study designs are categorized as experimental or nonexperimental. Conventional controlled experiments, including randomized clinical trials, are generally considered to generate the most reliable results and may be particularly well suited to the evaluation of new approaches to treatment or prevention; but they are often costly and slow, and their findings lack generalizability to broad populations, subpopulations (including elderly individuals and children), and the practice environment.
Practical clinical trials are controlled trials that are designed to reflect the real world rather than ideal practice, and cluster randomized trials—which randomize practice groups or other groups larger than individuals—are being explored as opportunities to improve both generalizability and efficiency. Studies with quasi-experimental designs (natural experiments) evaluate different levels of exposure to a treatment or prevention strategy, for instance, different levels of exposure resulting from differences in coverage or other factors thought

to be unrelated to a clinical outcome. Nonexperimental studies evaluate the routine delivery of care. These latter methods typically attempt to identify and compensate for confounding, which occurs because variation in treatment choice is usually related to severity of illness or other factors that influence the outcome apart from the treatment.

[FIGURE 9-1 Basic study models. Experimental designs: controlled experiments, either randomized (of individuals or clusters) or not randomized (purposeful assignment), and quasi-experimental/natural experiments and interrupted time series (change not investigator initiated). Nonexperimental designs: multivariate statistical techniques and predictive models (correlation, odds ratios, and regression techniques/models that account for dual causation, endogeneity, and interactions). SOURCE: Study Models in Health Services Research. Working document. Methods Council Meeting. AcademyHealth. June 8, 2008.]

The choice of research method depends on the specific issue or question under consideration, ethical concerns, resource availability, the acceptability of different forms of investigation for decision makers, and other factors. When a tightly controlled, randomized study is feasible, economical, and timely and can yield results that are generalizable to most of the population of interest, the consensus is that this approach is preferred (DeVoto and Kramer, 2006). However, many questions of central importance cannot be addressed in this manner. The inability of conventional randomized clinical trials to address many questions is due, in part, to the inherent limits of their external validity (e.g., related to factors such as restricted recruitment) as well as to the heterogeneity of treatment effects that results from different baseline risks or the heterogeneity in the response that individual patients exhibit (Kravitz et al., 2004).
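The compensation for confounding described above can be made concrete with a small worked example. The counts below are invented for illustration: within each severity stratum the treatment is unrelated to the outcome (odds ratio 1.0), yet the pooled comparison makes the treatment look harmful because sicker patients are treated more often. A stratified Mantel-Haenszel summary, one standard adjustment technique, recovers the stratum-level answer; this is a sketch of the general idea, not a method prescribed by the chapter.

```python
# Illustrative sketch (hypothetical counts): confounding by indication in a
# nonexperimental study, adjusted with a Mantel-Haenszel stratified odds ratio.

def odds_ratio(a, b, c, d):
    """Odds ratio for a 2x2 table: a, b = treated with/without the outcome;
    c, d = untreated with/without the outcome."""
    return (a * d) / (b * c)

def mantel_haenszel_or(strata):
    """Summary odds ratio across strata, each a 2x2 table (a, b, c, d)."""
    num = sum(a * d / (a + b + c + d) for a, b, c, d in strata)
    den = sum(b * c / (a + b + c + d) for a, b, c, d in strata)
    return num / den

# Hypothetical data: sicker patients are both more likely to be treated and
# more likely to have the outcome, so severity confounds the comparison.
low_severity = (10, 90, 40, 360)    # within-stratum OR = 1 (treatment neutral)
high_severity = (160, 240, 40, 60)  # within-stratum OR = 1 as well

# The crude (pooled) table ignores severity and suggests the treatment is harmful.
crude = tuple(x + y for x, y in zip(low_severity, high_severity))
print(round(odds_ratio(*crude), 2))                       # 2.7 (spurious harm)
print(mantel_haenszel_or([low_severity, high_severity]))  # 1.0 (adjusted)
```

The same logic underlies the regression techniques named in Figure 9-1: severity is measured and conditioned on, rather than balanced by randomization.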
Often, randomized controlled trials also fail to capture the longitudinal data that are important for understanding the true impacts of different interventions over time.

The United States has devoted little funding or effort to the development or implementation of practical or clustered randomized trials; nor has the

country yet assessed their potential to generate reliable, real-world evidence quickly and inexpensively. Different study designs answer very different questions, and the broad range of questions requiring attention requires an array of study designs and methodologies. Because knowing that an intervention works under ideal circumstances (efficacy) is necessary but not sufficient for evaluating what is appropriate for patients in real-world practice settings, some contend that answering these questions requires an update of the traditional evidence hierarchy and its emphasis on the randomized trial (Atkins, 2007). A learning healthcare system will need randomized controlled trials, especially pragmatic or practical trials that are broadly applicable, as well as other methods.

Challenges

Five major challenges confront the development of the knowledge needed to support a learning healthcare system.

First, the limited support for research and development in this arena is an overriding constraint. Underinvestment is evidenced by the fact that the United States devotes less than one-tenth of a percent of its total healthcare expenditures to understanding how well health care works and how to improve it, an amount that is small compared with the amounts invested to understand other major segments of the economy. Underinvestment is also evidenced by the fact that more than 90 percent of the federal investment in healthcare-related research is applied to the development of new therapies rather than to understanding how well various strategies work in practice or how to ensure that the right preventive or therapeutic regimen is offered to the individuals who need it. We do not believe that too much is being invested in the development of new treatments, and we specifically do not suggest that resources be redirected from the discovery of new therapies.
Second, it is difficult to use many of the existing data, even when they exist in electronic form, because of the fragmentation among the organizations that control the data, variations in the ways in which different organizations interpret the Health Insurance Portability and Accountability Act (HIPAA), the varying interpretations by institutional review boards (IRBs) of the regulations governing the use of these data for research, and the proprietary concerns of data holders.

Third, there are important limitations to the existing data. This is the case both for data collected for administrative purposes and for the clinical information in electronic medical records. Examples of these problems include misclassification, which is sometimes inherent in the different coding systems used and sometimes caused by errors and biases in the application of those systems, and missing data, which may

include medical history data or which may result from the lack of collection or recording of information during routine medical care. Lack of generalizability of the populations served is another serious problem, particularly for those cared for by tertiary care facilities, which tend to treat sicker, more complicated patients with different intervention patterns.

Fourth, there are substantial barriers to determining what treatments and strategies do and do not work in many clinical settings. This is true both for randomized clinical trials and for other types of research. These barriers include a sense that research is a specialized activity that should involve a limited number of individuals in a few locations, restrictive policies, and logistical and financial obstacles.

Fifth, and finally, there is not yet a full understanding of the strengths and weaknesses of the different research methods, the ways in which they can be strengthened, and the situations in which they are best applied. It is clear, however, that the findings of many randomized trials that are considered the “gold standard” lack generalizability because they are performed with highly nonrepresentative, referral-filtered populations.

LEADERSHIP COMMITMENTS AND INITIATIVES

The research and evaluation sector wishes to underscore the importance of establishing evidence generation, that is, learning what works and what does not work, as a normal part of health care. Such an emphasis is consistent with long-held medical values, as articulated, for example, in the Oath of Maimonides: “Grant me the strength, time, and opportunity always to correct what I have acquired, always to extend its domain; for knowledge is immense and the spirit of man can extend indefinitely to enrich itself daily with new requirements” (The Oath of Maimonides, 1793).
To accomplish this, the research and evaluation sector has identified the needed advances described in the following sections.

Invest in Applied Research and Development

Individuals and society will benefit from increased investments in applied research to develop new evidence about treatment effectiveness and to make better use of existing knowledge. Support should increasingly focus on linking researchers to the decision makers and organizations (purchasers, payers, delivery systems, healthcare institutions, clinicians, patients, and the public) interested in participating in these activities. Examples of activities in need of increased support include assessments of primary prevention strategies and of the comparative effectiveness of treatments in clinical use, as well as the testing of ways to eliminate disparities in health care. The investment in research and development required is large in absolute terms but

small in relation to total healthcare expenditures. An annual investment of 1 percent of medical spending (the equivalent of a few weeks of medical cost inflation) would yield an amount comparable to the current NIH budget for 1 year. The sector specifically recommends that this research and development investment be made in addition to current biomedical research spending. To advance this issue, a deliberative process should be undertaken to (1) develop a framework for allocating and using a sustained multi-billion-dollar public and private investment in healthcare research and development and (2) identify funding options. The national investment should include specific provisions to redesign and expand the training of investigators in ways that reward the skills and creativity needed to implement the necessary research portfolio.

Reengineer Healthcare Delivery to Facilitate Structured Learning About Best Practices

Enhancing the efficiency and value of health care requires the ongoing development of comparative data on the benefits, risks, and costs of treatment alternatives. Much of the information required cannot be obtained from conventional randomized clinical trials. In some cases, this is because such trials require more time and resources than are available. More importantly, such trials do not address the effect of a treatment in typical populations under the conditions of its actual use. Conventional clinical trials also provide little information about the safety of new drugs, biologics, and devices. Specific methods for addressing these needs are discussed below.

Use the Information Collected During the Routine Delivery of Health Care to Assess Outcomes

The use of data for the systematic assessment of outcomes of care should be construed as routine.
The goals for the use of these data would be to (1) inform better decision making about the effectiveness of the prevention strategies and treatments currently in use, (2) understand how different strategies and treatments work in diverse populations, and (3) make efficient use of resources. This use of existing data should be contrasted with the conventional notion of “research” as an extraordinary activity that poses risk beyond that entailed by regular care. It will be important to improve the ability to use different kinds of healthcare data, including claims data, data from electronic medical records, data from registries, vital statistics data, and self-reported information. For many purposes, it will be necessary to use information about very large populations. It will therefore be essential to develop governance and oversight procedures that encourage the holders of confidential and proprietary data to allow their use for

approved purposes. Accomplishing this will require the participation of a broad array of stakeholders. Consideration should also be given to whether it is necessary to apply to these secondary uses of data the same rules and oversight mechanisms used to protect human subjects in conventional experimental research. Because research and development shares many characteristics with healthcare operations, consideration should be given to whether the rules governing the use of data for operations can apply in some circumstances.

The value of the systematic assessment of outcomes might be linked more directly to the growing public interest in the disclosure of healthcare costs and outcomes. To the extent that public reporting becomes more established, it will be worthwhile to ensure that the methods of assessing outcomes and adjusting for case mix are sufficiently scientifically valid to allow an understanding of comparative effectiveness.

Specific actions that will facilitate the broader use of healthcare data concern the interpretation of HIPAA regulations, the ways in which IRBs oversee observational research, the priorities of purchasers, and the roles that payers play. Suggestions include the following:

Expand the range of HIPAA-compliant assessments of outcomes. Determine whether HIPAA allows the use of medical care information to characterize treatments and outcomes. Specifically, can assessments of benefits, risks, and costs be defined as healthcare operations within the context of HIPAA? This interpretation of HIPAA could be particularly suited to assessments that can be performed within covered entities for local use and reported in summary fashion for pooled analysis. An important first step will be to clarify the ways in which outcomes assessments can be performed in compliance with HIPAA regulations.
Facilitate approval of research restricted to review of medical records. Studies of benefit and risk typically require fully representative participation, which is impossible when individual informed consent is required. There is a need for better standardization of practices across IRBs and for a more efficient review process when multiple IRBs have oversight. Improving efficiency will require preserving the understanding of the local context and the protection of special populations, particularly disadvantaged and vulnerable individuals. Clarification of the Common Rule provision for the waiver of informed consent for record review studies is also needed. Although the Common Rule allows waivers of consent in this situation, they are not uniformly granted, and many holders of clinical information unilaterally require individual authorization for the release of information, even when both the controlling IRB and the HIPAA privacy board waive the consent requirement. Additional steps needed include (1) standardization of IRB applications and reporting forms to expedite submissions to multiple IRBs; (2) the creation of regional or national IRB consortia to streamline inter-IRB communication and coordinate the review of proposals presented to multiple IRBs; and (3) the development of national training standards for IRB staff and reviewers, in the interest of creating a more uniform interpretation of standards.

Authorize public and private payers to create evidence about benefits and risks. Establishing the assessment of the benefits and risks of specific preventive and therapeutic regimens and strategies as a normal activity of the healthcare delivery system will blur the distinction between practice, quality improvement, and research. It will require greater interaction among regulators, payers, providers, and investigators. It may also require revision of the regulations and contract provisions that govern CMS and private payers. Some payers, including CMS, are constrained in their ability to make an assessment of benefits and risks a condition of payment. CMS’s recent efforts to link coverage to evidence development, participation in clinical trials, or inclusion in a registry have been a step in the right direction but are too limited for many needs. Additionally, many private payers are limited by contracts with their purchasers in the ways in which they can guide care. For private payers (e.g., health plans), discussions among purchasers, payers, and regulators are needed to increase the ability to learn about the comparative benefits, safety, and costs of regimens.
Both public and private payers and funders of research need to engage policy makers at the national and local levels on the importance of creating a regulatory and financing environment that supports robust research on comparative effectiveness and on the benefits and harms of different healthcare interventions. This engagement must occur in a manner that is transparent and deliberative, and the reasoning behind decisions should be apparent. It should include a broad range of stakeholders, specifically including patients and the general public.

Consider advance coverage approaches. In some situations, it may be worthwhile to provide advance coverage of new therapies for a subset of individuals as a temporary measure to inform decisions about whether a therapy should be adopted as a standard covered item. Advance coverage means that a purchaser or, possibly, a payer pays for a new therapy or prevention strategy for some individuals before it covers the same therapy for the population as a whole. In every case, this selective coverage would be limited to therapies approved by FDA. Because coverage is extended to a limited number of individuals and only the purchaser or payer is allowed to decide whether the treatment should be covered, this

OCR for page 217
22 LEADERSHIP COMMITMENTS TO IMPROVE VALUE IN HEALTH CARE practice does not deprive individuals of treatments to which their insurance coverage entitles them. The period of coverage for only some individuals would typically be limited to the minimum period needed to acquire the needed information, after which it would be available to all individuals or would not be covered. Advance coverage could be used in two ways: (1) for participants in conventional clinical trials for the assessment of efficacy (CMS has used this approach in some situations as part of its Coverage with Evidence Development Policy [Tunis and Pearson, 2006]) and (2) for groups (for example, practices, health plans, or geographic areas) to assess the population-level effectiveness of a new therapy or prevention strategy. Advance coverage for selected groups will allow direct assessment of the population-level effectiveness of a new therapy or prevention strategy, because it would be possible to compare outcomes among the people who were eligible for the new treatment with those among comparable people who were not eligible. This kind of information is rarely available now and will be extremely valuable in providing an understanding of the overall ben- efit and cost of a new therapy. Nevertheless, the use of accelerated coverage for some members of society will require the development of a consensus that this is fair and ethical. To explore the stakeholder perspectives, a broadly representative stake- holder group should explore whether and under what circumstances it will be useful and acceptable to use advance accelerated coverage for the pur- pose of understanding the benefits and risks of therapies and thus informing decisions about whether to make the therapy available for the entire covered population. 
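The eligibility-based comparison just described can be sketched in a few lines of code. This is a hypothetical simulation, not a method endorsed by the chapter; the event rates, uptake fraction, and sample sizes below are invented purely for illustration.

```python
import random

random.seed(0)

# Hypothetical sketch: under advance coverage, one group of comparable
# individuals is eligible for a new therapy and one is not. Comparing
# outcome rates between the groups estimates population-level
# effectiveness, because eligibility (not individual choice) drives
# exposure. All rates below are assumed values for illustration.

BASELINE_EVENT_RATE = 0.20   # event rate without the new therapy (assumed)
THERAPY_EVENT_RATE = 0.15    # event rate with the new therapy (assumed)
UPTAKE = 0.60                # fraction of eligible people who use it (assumed)

def simulate(n, eligible):
    """Count adverse events in a group of n comparable individuals."""
    events = 0
    for _ in range(n):
        treated = eligible and random.random() < UPTAKE
        rate = THERAPY_EVENT_RATE if treated else BASELINE_EVENT_RATE
        events += random.random() < rate
    return events

n = 50_000
eligible_events = simulate(n, eligible=True)
comparison_events = simulate(n, eligible=False)

# Eligibility-based risk difference: diluted by incomplete uptake, which is
# exactly the population-level effect a payer or purchaser needs to know.
risk_diff = eligible_events / n - comparison_events / n
print(f"risk difference (eligible - comparison): {risk_diff:+.3f}")
```

Note that the estimate is smaller than the per-patient benefit would suggest: incomplete uptake shrinks it toward zero, which is the point of a population-level evaluation.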
The Center for Medical Technology Policy (see http://www.cmtpnet.org) is one organization that convenes multistakeholder groups to develop and implement advance coverage as one of several strategies for evidence generation. Advance coverage is an especially attractive method for evaluating disease prevention and health promotion activities, which often benefit from active collaboration among the healthcare delivery system, purchasers, payers, community organizations, and public health agencies. For example, there would be value in evaluating the effectiveness of the widespread use of an arthritis self-management program that has been shown to decrease pain and the need for physician visits (Theis et al., 2007).

Expand the Use of Both Conventional and Pragmatic Randomized Clinical Trials Comparing Approved Treatments

A principal use of both conventional and pragmatic randomized clinical studies will be to evaluate approved therapies for which information is needed about both efficacy and effectiveness compared with other modalities for similar indications. Such studies can also be used to evaluate therapies for which information about efficacy and effectiveness in special populations, such as children, elderly individuals, and members of specific ethnic groups, is needed. For example, note the success of pediatric oncology in making participation in clinical trials normal behavior for clinicians and patients, and contrast that behavior with the lack of a similar practice of clinical inquiry among other medical specialties. A goal, then, is to make randomized clinical trials commonplace and to transform both patients' and providers' views about the desirability of participating in them. Ideally, both patients and providers would inquire about the availability of clinical trials before initiating treatment. To obtain clinically useful results, the inclusion criteria should be broad, and the trial should replicate the conditions of the actual use of the treatment to the greatest extent possible. In addition, data collection requirements should be minimized. These are the attributes of practical or pragmatic trials (Tunis et al., 2003). Considerable work will be necessary to refine these methods. These changes will require a strong partnership with clinical care sites that commit to institutional participation in applied clinical research as their standard operating procedure. Academic medical centers can be major venues for research addressing inpatient care, and large ambulatory-care practices will be the logical sites for research addressing outpatient care. This institutional participation need not occur throughout an institution; for instance, selected intensive care units (ICUs) or surgical subspecialty sites within a hospital might choose to participate in multicenter research collaborations that routinely test agreed-upon interventions.
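To illustrate why the broad inclusion criteria of pragmatic trials matter, the sketch below simulates a hypothetical therapy whose benefit is smaller in complex patients — exactly the patients a conventional efficacy trial would exclude. All risk figures and the patient mix are assumed values for illustration, not data from the chapter.

```python
import random

random.seed(1)

# Hypothetical sketch: the therapy's absolute risk reduction (ARR) is
# assumed smaller in patients with comorbidity, who are excluded from a
# narrow efficacy trial but make up half of real-world patients.
ARR_SIMPLE = 0.08    # risk reduction in uncomplicated patients (assumed)
ARR_COMPLEX = 0.02   # risk reduction in complex patients (assumed)
BASE_RISK = 0.25     # untreated event risk (assumed)
P_COMPLEX = 0.5      # share of real-world patients who are "complex"

def trial(n, include_complex):
    """Randomize n patients 1:1 and return the estimated risk reduction."""
    treated_events = control_events = 0
    for i in range(n):
        complex_pt = include_complex and random.random() < P_COMPLEX
        arr = ARR_COMPLEX if complex_pt else ARR_SIMPLE
        if i % 2 == 0:  # treatment arm
            treated_events += random.random() < BASE_RISK - arr
        else:           # control arm
            control_events += random.random() < BASE_RISK
    half = n // 2
    return control_events / half - treated_events / half

efficacy = trial(100_000, include_complex=False)      # narrow, explanatory
effectiveness = trial(100_000, include_complex=True)  # broad, pragmatic
print(f"narrow-trial estimate: {efficacy:.3f}")
print(f"pragmatic estimate:    {effectiveness:.3f}")
```

The narrow trial overstates the benefit the average real-world patient can expect; only the broad-entry design estimates the effect in the population that will actually receive the treatment.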
Some of these interventions will be large simple clinical trials that randomize individual patients. Other trials might be evaluations of unit- or practice-level changes in practice. An example of the latter might be an ICU's participation in a randomized study of different unit-wide protocols for ventilator care. In this example, the entire unit would adopt a specific protocol as its standard operating practice for the duration of the study. Such protocols would, of course, need to meet all applicable IRB requirements for cluster-randomized studies. The likelihood of success will be enhanced by the broader adoption of protocols that minimize data collection requirements. However, no matter how simple the protocols are, it will be necessary to provide a new infrastructure to support organizations' participation in these new research collaboratives. Most importantly, success will require a change in the culture and the expectations of clinical care delivery so that at least some communities of providers, healthcare institutions, and their patients expect to participate in ongoing systematic evaluations of commonly used clinical practices and therapies. To accelerate this transition, clinicians, healthcare delivery sites, and clinical investigators must work together to design a more robust clinical
trials program that takes advantage of the existing clinical care infrastructure. This work can build on but does not need to be limited to the work of the Practice-Based Research Networks, the various related AHRQ initiatives, the U.S. Department of Veterans Affairs' Cooperative Studies Program, the NIH Roadmap project and CTSA Consortium, and other research networks.

Improve Data Sources, Access, and Utility

It will be important to address the nonrepresentativeness of the populations for whom data from clinical studies are available. Nonrepresentativeness is sometimes immediately evident, for instance, a lack of children and adolescents in institutions that care only for adults. At other times it is not so clear, for instance, with regard to representation of individuals who are part of minority, vulnerable, and disadvantaged populations. Examples of opportunities for progress in this area include the development of (1) improved methods for understanding which populations are represented in the healthcare datasets used for research; (2) an improved ability to collect and link different kinds of healthcare data, including claims data, pharmacy dispensing information, electronic medical records, laboratory test results, vital statistics registries, cancer registries, and self-reported information, including data in personally controlled health records; (3) an improved capability for collecting patient-reported outcomes of treatments, perhaps by taking advantage of the anticipated diffusion of personal medical records and methods developed in the NIH-funded Patient-Reported Outcomes Measurement Information System (PROMIS) initiative (NIH PROMIS Initiative, 2007); (4) an improved ability to collect and link nonmedical data, such as census data, motor vehicle department data, and consumer information; and (5) an improved capacity for biobanking (the collection and storage of tissue samples and genetic data). Both tissue and genetic data will be important, but genetic information is essential to taking full advantage of the potential for fully personalized medicine.

Addressing the infrastructure, governance, and policy issues at play will be critical. Priority issues include (1) the need to support the development of database architectures and governance procedures that address these data needs (both architecture and governance procedures will need to respect the privacy needs and the proprietary interests of the data holders) and (2) the need to develop regulations that balance privacy and proprietary concerns without restricting the generation of essential knowledge.

Invest in Improving Research Methods

Innovation is needed to improve the quality of research and accelerate the translation of knowledge into practice. New methods, as well as interdisciplinary agreement in areas of dispute around existing methods, are needed. Specific needs include better methods of prioritizing and assessing gaps in the evidence; determination of the best uses of observational data and of randomized trials that are both simpler and yield more generalizable results; and methods for the translation of research into practice.

Use the Full Range of Methodologies and Research Tools

The use of methods and tools other than conventional randomized clinical trials should be expanded to develop evidence (AHRQ, 2007; Institute of Medicine, 2007). The proceedings of an AHRQ workshop, Comparative Effectiveness and Safety: Emerging Methods, provide an overview of some of the opportunities (AHRQ, 2007). It should also be acknowledged that the current evidence hierarchy is inadequate to address certain essential healthcare questions. Areas of particular importance that cannot be addressed by randomized controlled trials of individuals include the assessment of safety in the postmarketing environment and the population-level effects of coverage decisions. Therefore, the level of evidence needs to be matched to the situation. This may require the development or refinement of a taxonomy that classifies evidence for its utility in supporting both clinical and health policy decision making (Teutsch et al., 2005). Clinicians, healthcare delivery sites, and clinical investigators must be engaged in the development of improved methods for observational research. Specific research methods other than conventional randomized trials include

• Pure observational studies that use data obtained during the routine delivery of care. Analytical methods for these studies include time series analysis, logistic regression analysis, propensity score analysis, analysis with marginal structural models, doubly robust estimator analysis, and instrumental variable analysis.
Research will be needed to assess the power of these and other methods to identify and reduce bias and confounding.
• Quasi-experimental designs (natural experiments). These use similar data as above but exploit differences in utilization between segments of the population, for instance because of differences in coverage, abrupt secular changes in practice, or other factors unrelated to outcome.
• Registries. These can contribute essential information that is not collected during routine care. They will be most useful when they are combined with data obtained during the routine delivery of care.
• Practical or pragmatic simple trials. To the greatest extent possible, these should occur under conditions of representative clinical
practice and should minimize cost; such trials require broad inclusion criteria, minimal exclusion criteria, and a minimal number of outcomes assessments.
• Cluster randomization. This includes selective advance access (with coverage) to new therapies for segments of a population, the selective delayed imposition of new coverage policies, and the provision of encouragement or incentives to some segments of the community to alter their therapeutic decisions.
• Mathematical modeling.

Improve Methods to Prioritize Research on Gaps in Evidence for All Segments of the Population

In addition to understanding the situations in which evidence is most needed, better methods are needed to understand the benefits and risks of a therapy among individuals who are not typically included in research studies, including individuals who are members of vulnerable populations and groups with complex clinical and social needs. It is not necessary to fill all research gaps to support knowledgeable decision making. Setting realistic and rational priorities for research in areas with knowledge gaps is essential for the equitable use of research investments.

Research on Methods for Translation of Research into Practice

Many dissemination strategies result in little or no change in physician behavior or health outcomes. Studies of more complex and more costly interventions, such as audit and feedback, message prompts, and educational outreach visits, suggest potential changes in physician behavior and health outcomes; but interpretation of the results is often complicated by a high risk of bias, before-and-after assessments of outcome measures, a lack of head-to-head assessments of different methods, small sample sizes, unadjusted variations in the intensity of the intervention, and an absence of process evaluations.
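Several of the designs discussed above randomize clusters (ICUs, practices, communities) rather than individuals, and the small-sample problems just noted are aggravated because clustering inflates the required sample size. A minimal sketch of the standard design-effect calculation follows; the intracluster correlation (ICC), cluster size, and target sample size are assumed values chosen only for illustration.

```python
import math

# Hypothetical sketch of the design-effect adjustment used when sizing
# cluster-randomized evaluations. Standard formula:
#   DEFF = 1 + (m - 1) * ICC
# where m is the average cluster size and ICC is the intracluster
# correlation. The numbers below are assumptions, not figures from
# the chapter.

def design_effect(cluster_size: int, icc: float) -> float:
    """Variance inflation from randomizing clusters instead of individuals."""
    return 1 + (cluster_size - 1) * icc

def clusters_needed(n_individual: int, cluster_size: int, icc: float) -> int:
    """Clusters per arm, given the per-arm n an individually
    randomized trial would need."""
    inflated_n = n_individual * design_effect(cluster_size, icc)
    return math.ceil(inflated_n / cluster_size)

# Suppose an individually randomized trial would need 400 patients per arm,
# each ICU contributes about 50 patients, and the ICC is 0.05 (all assumed).
deff = design_effect(50, 0.05)  # 1 + 49 * 0.05 = 3.45
print(f"design effect: {deff:.2f}")
print(f"ICUs per arm: {clusters_needed(400, 50, 0.05)}")
```

Even a modest ICC more than triples the required sample size here, which is one reason unit-level evaluations need the dedicated infrastructure and broad institutional participation the chapter calls for.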
Potential areas for research include
• the development and evaluation of innovative approaches to changing physician behavior on the basis of adult learning principles, including consideration of financial benefit for compliance;
• the design of rigorous trials to evaluate changes in professional practice;
• the development and evaluation of innovative approaches to changing consumer-patient behavior on the basis of adult learning principles, including the use of evidence, decision support, and adherence-enhancing tools; and
• coordination of the release of major new guidelines with the simultaneous initiation of research to evaluate predefined practice outcomes; for this, consider the use of methods for the collection, evaluation, and use of data that are not published through the peer-review process as part of the evidence base.

Specific follow-up activities that might catalyze the needed action include (1) convening a broad-based task force composed of multiple stakeholders, including patients as well as experts in evidence-based medicine and behavior change, to design research initiatives to increase the rate of adoption of recommended practices, possibly including differential reimbursement for compliance with guidelines, and (2) convening a conference of guideline developers to develop recommendations for clinical trials that assess the implementation of guidelines and that are combined with the release of those guidelines, similar to the Guidelines International Network annual research meeting, held in Toronto, Ontario, Canada, in 2007, at which guideline implementation was the overarching theme.

As recommendations, policies, and procedures are developed to broaden the participation of many stakeholders in developing evidence and evaluating practice, it will be important to minimize the administrative burdens of these activities on the participating organizations and individuals.

NEXT STEPS

The clinical investigators and evaluators sector puts the most emphasis on the need to establish assessment of the benefits and risks of specific preventive and therapeutic regimens and strategies as a normal part of health care. To accomplish this, cross-sector collaboration should focus on the priority action items identified below.
Invest in Applied Research and Development

The following actions are needed for investment in applied research and development:

• Establish a process to (1) develop a framework for using a sustained multi-billion-dollar public and private investment in healthcare research and development and (2) identify funding options.
• Ensure the development of programs of investigator training that foster the levels, skills, and creativity needed to implement the necessary research portfolio.

• Introduce into all healthcare professional educational curricula training in the philosophy and skills necessary to imbue the ethic that each caregiver is part of the evidence development process.

Make Better Use of Information Developed During the Routine Delivery of Health Care to Assess Outcomes

The following actions are needed to make better use of information developed during the routine delivery of health care to assess outcomes:

• Support the development of database architectures and governance procedures that address these data needs. Both architecture and governance procedures will need to respect privacy needs and the proprietary interests of the data holders.
• Develop regulations to protect privacy and proprietary concerns.
• Clarify ways in which outcomes assessment can be performed efficiently but still adhere to HIPAA regulations.
• Clarify the understanding of the Common Rule provision for the waiver of informed consent for record review studies.
• Standardize IRB applications and reporting forms to expedite submissions to multiple IRBs.
• Create regional or national IRB consortia to streamline inter-IRB communication and coordination of the review of proposals presented to multiple IRBs.
• Develop national standards for accessible training for IRB staff and reviewers, in the interest of creating more uniform interpretation of standards.

Authorize Public and Private Payers to Create Evidence About Benefits and Risks

The following actions are needed to authorize public and private payers to create evidence about benefits and risks:

• Both public and private payers and funders of research need to engage policy makers at the national and local levels about the importance of creating a regulatory and financing environment that supports robust research on comparative effectiveness and the benefits and harms of different healthcare interventions.
• Stakeholders should explore the appropriate circumstances for the use of accelerated coverage.

Expand the Use of Different Types of Clinical Trial Randomization Comparing Approved Treatments

The following actions are needed to expand the use of different types of clinical trial randomization comparing approved treatments, including practical and pragmatic trials, cluster-randomized trials, and other novel approaches to effecting statistical randomization in large databases:

• Engage clinicians, healthcare delivery sites, and clinical investigators so that they may articulate the needs for a more robust clinical trials program that takes advantage of the existing clinical care infrastructure.
• Engage all stakeholders so that they may address the appropriateness of the more widespread use of such trials and the situations in which they can be integrated into both prevention and treatment.

Invest in Improving Research Methods

The following actions are needed for greater investment in improving research methods:

• Engage clinicians, healthcare delivery sites, and clinical investigators in the development of improved methods for observational research.
• Convene a broad-based task force composed of multiple stakeholders, including patients, the public at large, and experts in evidence-based medicine and behavior change, to design research initiatives to increase the rate of adoption of evidence-based medicine, possibly including differential reimbursement for compliance with guidelines.
• Convene a conference of guideline developers to develop recommendations for trials to assess guideline implementation combined with the release of guidelines.

REFERENCES

AHRQ (Agency for Healthcare Research and Quality). 2007. Comparative effectiveness and safety: Emerging methods. Special issue dedicated to Harry Guess. Medical Care 45(10 Suppl 2):S1-S172.
Atkins, D. 2007. Creating and synthesizing evidence with decision makers in mind: Integrating evidence from clinical trials and other study designs. Medical Care 45(10 Suppl 2):S16-S22.
Coalition for Health Services Research. 2007. Federal funding for health services research. http://www.chsr.org/AHfundingreport1206.pdf (accessed May 12, 2008).

DeVoto, E., and B. Kramer. 2006. An evidence based approach to oncology. In Oncology: An evidence-based approach, edited by A. E. Chang, D. F. Hayes, H. I. Pass, R. M. Stone, P. A. Ganz, T. J. Kinsella, J. H. Schiller, and V. J. Strecher. New York: Springer. Pp. 3-13.
Institute of Medicine. 1994. Health services research: Opportunities for an expanding field of inquiry—an interim statement, edited by S. Thaul, K. N. Lohr, and R. E. Tranquada. Washington, DC: National Academy Press.
———. 2007. The learning healthcare system. Washington, DC: The National Academies Press.
Kravitz, R. L., N. Duan, and J. Braslow. 2004. Evidence-based medicine, heterogeneity of treatment effects, and the trouble with averages. Milbank Quarterly 82(4):661-687.
Kupersmith, J., N. Sung, M. Genel, H. Slavkin, R. Califf, R. Bonow, L. Sherwood, N. Reame, V. Catanese, C. Baase, J. Feussner, A. Dobs, H. Tilson, and E. A. Reece. 2005. Creating a new structure for research on health care effectiveness. Journal of Investigative Medicine 53(2):67-72.
Learning What Works Best. 2007. The nation's need for evidence on comparative effectiveness in health care. http://www.iom.edu/Object.File/Master/43/390/Comparative%20Effectiveness%20White%20Paper%20(F).pdf (accessed May 12, 2008).
Lohr, K. N., and D. M. Steinwachs. 2002. Health services research: An evolving definition of the field. Health Services Research 37(1):7-9.
Moore, J., and S. McGinnis. 2007. The health services researcher workforce: Current stock. Paper presented at AcademyHealth's Health Services Researcher of 2020 Summit, Washington, DC.
Moses, H., III, E. R. Dorsey, D. H. Matheson, and S. O. Thier. 2005. Financial anatomy of biomedical research. JAMA 294(11):1333-1342.
NIH PROMIS Initiative. 2007. Functional components: Network structure. http://www.nihpromis.org/Web%20Pages/Network%20Structure.aspx (accessed May 12, 2008).
The Oath of Maimonides. 1793. http://www.library.dal.ca/kellogg/Bioethics/codes/maimonides.htm (accessed May 12, 2008).
Ricketts, T. 2007. Developing the health services research workforce. Paper presented at AcademyHealth's Health Services Researcher of 2020 Summit, Washington, DC.
Sung, N. S., W. F. Crowley, Jr., M. Genel, P. Salber, L. Sandy, L. M. Sherwood, S. B. Johnson, V. Catanese, H. Tilson, K. Getz, E. L. Larson, D. Scheinberg, E. A. Reece, H. Slavkin, A. Dobs, J. Grebb, R. A. Martinez, A. Korn, and D. Rimoin. 2003. Central challenges facing the national clinical research enterprise. JAMA 289(10):1278-1287.
Teutsch, S. M., M. L. Berger, and M. C. Weinstein. 2005. Comparative effectiveness: Asking the right questions, choosing the right method. Health Affairs 24(1):128-132.
Theis, K. A., C. G. Helmick, and J. M. Hootman. 2007. Arthritis burden and impact are greater among U.S. women than men: Intervention opportunities. Journal of Women's Health (Larchmt) 16(4):441-453.
Thornton, C., and J. D. Brown. 2007. The demand for health services researchers in 2020. Paper presented at AcademyHealth's Health Services Researcher of 2020 Summit, Washington, DC.
Tunis, S. R., and S. D. Pearson. 2006. Coverage options for promising technologies: Medicare's "coverage with evidence development." Health Affairs 25(5):1218-1230.
Tunis, S. R., D. B. Stryer, and C. M. Clancy. 2003. Practical clinical trials: Increasing the value of clinical research for decision making in clinical and health policy. JAMA 290:1624-1632.