3
Setting Priorities for Evidence Assessment

Abstract: This chapter provides the committee’s findings and recommendations on setting priorities for evidence assessment (systematic review) and describes key challenges in establishing a priority setting process for a national clinical assessment program (“the Program”). The committee recommends that the Program appoint an independent, standing Priority Setting Advisory Committee (PSAC) to develop and implement the process. PSAC members should be selected to ensure a balance of expertise and interests, with minimal bias due to conflicts of interest. Although there is little solid basis to recommend the use of one priority setting process over another, the committee recommends that the process adhere to basic principles of consistency, efficiency, objectivity, responsiveness, and transparency. Thus, the PSAC should establish a process that is open, predictable, and explicitly defined, with fully documented standards and procedures. The procedures should be simple and efficient to preserve the available resources for evidence assessment itself. Two considerations should be paramount in identifying the highest priority topics: (1) how well the topic reflects the clinical questions of patients and clinicians and (2) the potential for the topics to have a strong impact on clinical and other outcomes that matter the most to patients.

If the nation is to resolve the current deficiencies in how it uses scientific evidence to identify the most effective clinical services, there must be a process for identifying the most important topics in order to preserve resources for evidence assessment itself. Most health technology assessment programs have an organized process for determining which topics merit comprehensive study. At present, however, no one agency or organization in



Copyright © National Academy of Sciences. All rights reserved.





the United States evaluates from a broad, national perspective the effectiveness of new as well as established health interventions for all populations, children as well as elderly people, women as well as men, and ethnic and racial minorities. As noted in Chapter 1, this report focuses on developing an organizational framework for a national clinical effectiveness assessment program, referred to throughout as “the Program.”

Early in its deliberations, the committee agreed that the Program should commission systematic reviews on the effectiveness of health services and that the topics of the reviews should be informed by the recommendations of an independent Priority Setting Advisory Committee (PSAC). The objective of this chapter is threefold: (1) to review the basic elements of a priority setting process, (2) to present the committee’s recommendations for establishing a priority setting infrastructure, and (3) to highlight key programmatic challenges in establishing a priority setting process for the Program.

Recommendation: The Program should appoint a standing Priority Setting Advisory Committee to identify high-priority topics for systematic reviews of clinical effectiveness.

BACKGROUND

This section provides background on the basic elements of a priority setting process: identifying potential topics, selecting the priority criteria, reducing the initial list of nominated topics to a smaller set of topics to be pursued, and choosing the final priority topics. Some approaches also incorporate quantitative methods that involve the collection of data that can be used to weigh priorities, the assignment of scores for each criterion to each topic, and the calculation of priority scores for each topic to produce a ranked priority list. A committee or advisory group that reviews and chooses the topics that will be funded typically conducts the process.
It may use a formal method, such as the Delphi technique, to systematically develop the high-priority list. The Delphi technique has been adapted and modified in various ways to facilitate group decision making (OTA, 1994). It typically involves the distribution of a questionnaire to an expert group. Each participant independently answers the questionnaire. The responses are summarized and reported back to the group. The process may be anonymous or open, and several iterations may be necessary before a final decision is reached.
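The iterative score-summarize-refeed loop just described can be sketched in a few lines. The 1-to-9 scoring scale, the five-member panel, and the spread-based stopping rule below are illustrative assumptions, not features prescribed by the Delphi literature cited above:

```python
import statistics

def delphi_round(scores):
    """Summarize one round of independent expert scores (here, a 1-9
    priority scale) into the feedback reported back to the panel."""
    return {"median": statistics.median(scores),
            "spread": max(scores) - min(scores)}

def run_delphi(rounds, consensus_spread=2):
    """Iterate over successive rounds of scores for one topic until the
    spread of opinion falls within an (assumed) consensus threshold.
    Returns the round number at which the process stopped and the
    final round's summary."""
    for round_number, scores in enumerate(rounds, start=1):
        summary = delphi_round(scores)
        if summary["spread"] <= consensus_spread:
            return round_number, summary  # consensus reached
    return len(rounds), summary  # no consensus; last summary stands

# Three successive rounds of scores from a hypothetical five-member panel;
# opinions converge after the second feedback cycle.
rounds = [[3, 9, 5, 7, 2], [4, 7, 5, 6, 3], [5, 6, 5, 6, 5]]
round_reached, summary = run_delphi(rounds)
```

In this sketch the panel's median is the group signal and the max-min spread is the disagreement measure; real Delphi exercises often use interquartile ranges and qualitative comments alongside the scores.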

What Is the Best Approach?

No single priority setting method is obviously superior to others (Goodman, 2004; Noorani et al., 2007; Oxman et al., 2006; Sassi, 2003). The committee could not find any systematic assessments of the comparative strengths and weaknesses of different approaches to priority setting, including whether complex quantitative and resource-intensive methods are more effective than less rigorous approaches.

Apparently, few, if any, organizations use a quantitative approach to selecting priority topics, although numerous methods have been developed. Phelps and Parente (1990), for example, developed a formula for calculating a priority index for health technology assessment. The Institute of Medicine (IOM) Committee on Priorities for Assessment and Reassessment of Health Care Technologies proposed a method that could be used to aggregate various dimensions into a single priority score, including a technique that quantifies the potential gains that can be achieved by assessing health interventions (IOM, 1992).1

Various Contexts for Setting Priorities

Organizations have different objectives and target audiences for evidence assessment. The annual number of selected topics that are reviewed is quite small (Table 3-1). In 2006, for example, the number of systematic reviews produced by federal agencies ranged from only 3 by the National Institutes of Health (NIH) Consensus Development Program to 22 by the Agency for Healthcare Research and Quality (AHRQ) Evidence-based Practice Centers (EPCs). There are no aggregate national data on the volume of topics that are assessed each year.

The range of potential topics that may be considered may include the universe of prevention, such as screening tests or immunizations; diagnosis, such as laboratory tests or imaging techniques; drugs and other therapeutic interventions, such as surgery, chemotherapy, or radiation; and end-of-life care and palliation.
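Where a quantitative approach of this kind is used, the core computation is a weighted aggregation of per-criterion ratings into a single priority index. The sketch below is illustrative only: the criteria, weights, rating scale, and example topics are hypothetical and do not reproduce the published Phelps and Parente (1990) or IOM (1992) models.

```python
# Hypothetical criterion weights summing to 1.0; the published models
# use different dimensions and weighting schemes.
WEIGHTS = {"disease_burden": 0.3, "cost": 0.2,
           "potential_impact": 0.3, "variation_in_care": 0.2}

def priority_score(ratings):
    """Aggregate per-criterion ratings (assumed 0-10 scale) into a
    single weighted priority score."""
    return sum(WEIGHTS[criterion] * ratings[criterion] for criterion in WEIGHTS)

def rank_topics(topic_ratings):
    """Return topic names sorted from highest to lowest priority score,
    producing the ranked priority list described above."""
    return sorted(topic_ratings,
                  key=lambda topic: priority_score(topic_ratings[topic]),
                  reverse=True)

# Two hypothetical nominated topics with illustrative ratings.
topics = {
    "diabetes screening": {"disease_burden": 9, "cost": 8,
                           "potential_impact": 7, "variation_in_care": 6},
    "new imaging device": {"disease_burden": 4, "cost": 9,
                           "potential_impact": 5, "variation_in_care": 8},
}
ranked = rank_topics(topics)
```

A design point such a scheme must confront, and which the chapter notes is rarely made explicit, is where the weights come from: they encode value judgments about the relative importance of burden, cost, impact, and variation.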
However, the specific audience for the assessment is likely to have more narrow interests, such as new and emerging technologies or a specific subpopulation group. For example, the Blue Cross and Blue Shield Association (BCBSA) Technology Evaluation Center (TEC) focuses on the specific needs of member plans. The Medicare Evidence Development and Coverage Advisory Committee (MedCAC), which advises

1 See the following IOM reports for past recommendations related to priority setting: Setting Priorities for Health Technologies Assessment: A Model Process (IOM, 1992), Setting Priorities for Clinical Practice Guidelines (IOM, 1995), National Priorities for the Assessment of Clinical Conditions and Medical Technologies: Report of a Pilot Study (IOM, 1990), and Priority Areas for National Action: Transforming Health Care Quality (IOM, 2003).

the Centers for Medicare & Medicaid Services (CMS) on the services used by the Medicare population, sponsors evidence reviews (conducted by an AHRQ EPC) only when Medicare is considering a national coverage decision on a controversial issue.

TABLE 3-1 Context for Setting Priorities for Evidence Assessment, 2006
Entries give organization: target audience (number of full systematic reviews in 2006).

AHRQ
  Effective Health Care Program: CMS, providers, policy makers, consumers (4)
  EPC program: CMS, USPSTF, NIH, and other federal agencies; providers; medical professional societies (22)
  USPSTF: primary care clinicians, health systems, payers, and purchasers (6)
Other federal programs
  CMS: Medicare intermediaries, beneficiaries, and providers (9)
  DERP: state Medicaid programs (3)
  NIH Consensus Development Program: health professionals and the public (3)
Private technology assessors
  BCBSA TEC: medical directors of BCBSA member plans, providers, and scientific staff (14)
  ECRI Institute: private clients, including decision makers in hospitals, health systems, health plans, and departments and ministries of health (20)
  Hayes, Inc.: private clients, including decision makers in hospitals, health systems, health plans, and government agencies (86)

NOTE: DERP = Drug Effectiveness Review Project; USPSTF = U.S. Preventive Services Task Force.
SOURCES: AHRQ (2007c,d,e); BCBSA TEC (2007); NIH Consensus Development Program (2007).

In general, payers initiate assessments when they must make benefits and coverage decisions about new technologies or new applications of existing technologies. In this context, the decision usually involves a categorical

determination (e.g., are insulin pumps covered?) or a more narrow assessment to identify the subpopulations for which a service should be covered (e.g., who among the population is likely to benefit from an artificial disc for degenerative disc disease?). If the topic in question is not within the boundaries of covered benefits, payers are unlikely to assess it. Thus, for example, an insurance company is not likely to assess the efficacy of a vaccine if it does not cover preventive services.

The agenda of AHRQ, the lead federal health agency charged with conducting systematic reviews of clinical effectiveness, is circumscribed by statute. The Effective Health Care Program, for example, may only sponsor studies related to 1 of 10 priority conditions established by the secretary of the U.S. Department of Health and Human Services (Table 3-2). The U.S. Preventive Services Task Force (USPSTF) focuses on clinical preventive services provided in primary care settings. Many medical professional societies assess evidence to develop clinical guidelines for the management of specific conditions. Manufacturers assess evidence to demonstrate safety and efficacy and to persuade payers and other constituencies of their value. Private research firms generally focus on responding to marketplace demands.

The Cochrane Collaboration supports the broadest range of evidence reviews worldwide; its volunteer researchers participate in 51 discipline-specific (e.g., musculoskeletal) review groups that set their own agendas in accord with the important questions within their disciplines. The Cochrane Collaboration’s Steering Group is considering new approaches to how the review groups set priorities for their research and has funded research projects whose results will guide them in this effort (Cochrane Collaboration, 2007).
Methods Used to Identify Potential Topics

Some organizations, including AHRQ and the USPSTF, actively solicit nominations from stakeholders and the public (Table 3-2). Other organizations have internal processes for gathering suggestions from staff or outside advisors. The response to the AHRQ open call for topics is of interest, although it is not necessarily indicative of the potential response to a broader call for topics from a well-funded agency.

Table 3-3 shows the number of EPC topics nominated and funded, the topic areas, and the types of organizations that nominated a topic for the EPC program during 2005 and 2006. The total number of nominations was small. From 2005 to 2006, AHRQ received 76 topic nominations: 36 were related to treatment effectiveness; 13 were related to diagnostic interventions; and the rest concerned quality improvement and patient safety, prevention, organization and finance, and other topics. Ultimately, 51 percent of the topics were funded.

TABLE 3-2 Methods Used to Identify Topics for Systematic Reviews in Selected Organizations

AHRQ
  Methods: Solicits topics annually through the Federal Register and accepts nominations on an ongoing basis
  Who can nominate: Open to the public; AHRQ conducts systematic reviews for CMS, the USPSTF, and the NIH Consensus Development Conference program
  Eligible topics: Effectiveness of prevention, diagnosis, treatment, and management of common clinical and behavioral conditions; organization and financing; and research methods; topics addressed by the Effective Health Care Program must relate to 1 of 10 priority conditions established by the secretary of HHS(a)

BCBSA TEC
  Methods: Solicits topics from within BCBSA and from its advisers
  Who can nominate: TEC staff, medical directors of member plans, Medical Advisory Panel (external advisers), Medical Policy Panel, and pharmacy managers
  Eligible topics: Effectiveness of surgical procedures, devices and implants, diagnostic imaging, laboratory tests, and targeted and specialty pharmaceuticals

Cochrane Collaboration
  Methods: Vary among 51 review groups
  Who can nominate: Open to the public; reviews are author initiated or the topic is nominated and authors sought
  Eligible topics: Broad range of clinical services and population-based health interventions

DERP(b)
  Methods: Program participants nominate topics
  Who can nominate: State Medicaid programs and other participating organizations
  Eligible topics: Comparative effectiveness of drugs within classes of drugs

MedCAC and CMS(b)
  Methods: Internal decision
  Who can nominate: MedCAC staff
  Eligible topics: Devices, drugs, and procedures that are within the scope of Medicare coverage and subject to a national coverage decision

NICE
  Methods: Internal decision by the department of health in England and Wales; NICE uses the National Horizon Scanning Centre to identify new and emerging technologies
  Who can nominate: Individuals and groups
  Eligible topics: Effectiveness of services that are being considered for coverage by the National Health Service, including drugs, devices, diagnostics, surgical procedures, and population-based health promotion

TABLE 3-2 Continued

NIH OMAR(b)
  Methods: NIH institutes and centers and OMAR select topics on the basis of four criteria
  Who can nominate: NIH institutes and centers, the U.S. Congress, other government health agencies, and the public
  Eligible topics: Medical safety and efficacy; economic, sociological, legal, and ethical issues

USPSTF(b)
  Methods: Solicits topics biennially through the Federal Register and appeals to stakeholders
  Who can nominate: Open to the public
  Eligible topics: Clinical preventive services, including screening, counseling, and preventive medications for asymptomatic individuals

NOTE: DERP = Drug Effectiveness Review Project; HHS = U.S. Department of Health and Human Services; NICE = National Institute for Health and Clinical Excellence; NIH OMAR = National Institutes of Health Office of Medical Applications of Research.
(a) The priority conditions are arthritis and nontraumatic joint disorders, cancer, chronic obstructive pulmonary disease and asthma, dementia, depression and other mood disorders, diabetes mellitus, ischemic heart disease, peptic ulcer disease and dyspepsia, pneumonia, and stroke and hypertension.
(b) The reviews are conducted by an AHRQ EPC.
SOURCES: AHRQ (2006, 2007b); Aronson (2007); Coates (2007); Cochrane Collaboration (2007); Guirguis-Blake et al. (2007); NIH Consensus Development Program (2005).

Box 3-1 lists the organizations that submitted EPC topic nominations from 2005 to 2006. The largest source of nominations was federal agencies, followed by medical professional societies (to support clinical guideline development).
Box 3-2 provides the topics of EPC studies released during the same period; they include the diagnosis and treatment of cancer and blood disorders, heart and vascular diseases, mental health conditions, and neurological disorders; routine obstetric care; bioterrorism preparedness; the use of dietary supplements for various clinical conditions; information technology; research methodologies; and approaches to improving the quality and the safety of care.

Horizon Scanning

Many organizations, especially health plans and the private technology assessment firms that serve them, make special efforts to identify new or emerging technologies before they are widely adopted in practice. These activities, commonly referred to as “horizon scanning,” typically involve the active monitoring of medical journals; trade press publications; national

health news sources; CMS and U.S. Food and Drug Administration notices; announcements of proposed and new current procedural terminology codes; and abstracts, posters, and presentations from scientific meetings of major specialty societies for topics. There is no evidence or apparent consensus on the elements of an effective horizon-scanning system (Murphy et al., 2007).

Nevertheless, past experience has shown, sometimes with tragic consequences, the risks of failing to assess new and emerging health technologies before they are widely adopted. Although it is not clear that early effectiveness assessment would deter the rapid adoption of unproven interventions, assessments of the early evidence could underscore the risks of early adoption. A compelling example of what can go horribly wrong when a high-risk, untested procedure is promoted is high-dose chemotherapy with autologous bone marrow transplantation (HDC/ABMT) for breast cancer. Rettig and colleagues (2007) showed in an in-depth history of HDC/ABMT that no central entity required that the controversial new procedure be adequately evaluated before its use became widespread.

TABLE 3-3 AHRQ EPC Study Nominations by Source and Topic Area, 2005-2006

                                            2005   2006   Total   Percent
All nominations                               40     36      76     100.0
Funded nominations                            24     15      39      51.3
Source (n = 47)
  Federal agencies                            15      5      20      42.6
  Health plans                                 2      1       3       6.4
  Medical professional societies              10      6      16      34.0
  Other                                        4      4       8      17.0
  Total                                       31     16      47     100.0
Topic area (n = 76)
  Prevention                                   3      4       7       9.2
  Diagnosis                                    5      8      13      17.1
  Treatment                                   17     19      36      47.4
  Rehabilitation                               0      1       1       1.3
  Organization and finance                     2      4       6       7.9
  Quality improvement and patient safety       8      0       8      10.5
  Other                                        5      0       5       6.6

NOTE: Excludes studies requested by CMS and USPSTF.
SOURCE: Personal communication, J. Slutsky, Agency for Healthcare Research and Quality, May 10, 2007.
At the time that HDC/ABMT began to be used, its potential risks and benefits were not known. With this void as the backdrop, the procedure was evaluated not by parties with the appropriate clinical or research expertise, but by the

courts, legislatures, and the media. Many women died of treatment-related causes before it was clear that HDC/ABMT was ineffective and harmful. Box 3-3 provides a number of examples of widely adopted health interventions found to be ineffective or harmful.

BOX 3-1 Sources of Topic Nominations, Evidence-based Practice Centers, 2005-2006

America’s Health Insurance Plans
American Academy of Audiology
American Academy of Family Physicians
American Academy of Orthopedic Surgeons
American Academy of Pediatrics
American Association of Clinical Chemistry
American College of Cardiology
American College of Chest Physicians
American College of Obstetricians and Gynecologists
American College of Physicians
American Dental Association
American Dietetic Association
American Organization of Nurse Executives
American Society of Clinical Oncology
Centers for Disease Control and Prevention
Centers for Medicare & Medicaid Services
Council of Linkages Between Academia and Public Health Practice (Public Health Foundation)
Employer Health Care Alliance Cooperative
Fogarty International Center for Advanced Study in the Health Sciences (NIH)
Health Resources and Services Administration
National Center of Complementary and Alternative Medicine (NIH)
National Rural Health Association
Office of Dietary Supplements (NIH)
Office of Management Analysis and Review (NIH)
Office of Research on Women’s Health (NIH)
Saliba Burns Institute
Society of Vascular Surgery
Spinal Cord Consortium
Transatlantic Inter-Society Consensus
Union County Health Committee
U.S. Breastfeeding Committee
U.S. Preventive Services Task Force

SOURCE: Personal communication, J. Slutsky, Agency for Healthcare Research and Quality, May 10, 2007.

BOX 3-2 Topics of Evidence-based Practice Center Reports, 2005 to Present

Bioterrorism
  Pediatric Anthrax, Bioterrorism Preparedness
Cancer and blood disorders
  Adnexal Mass
  Cancer Care Quality Measures, Colorectal Cancer
  Cancer Care Quality Measures, Symptoms and End-of-Life
  Cancer Clinical Trials, Recruitment of Underrepresented Populations
  Hereditary Nonpolyposis Colorectal Cancer
  Ovarian Cancer, Genomic Tests for Detection and Management
  Small Cell Lung Cancer, Management
Complementary and alternative care
  Meditation Practices for Health
Dietary supplements
  B Vitamins and Berries and Age-Related Neurodegenerative Disorders
  Multivitamin/Mineral Supplements, Chronic Disease Prevention
  Omega-3 Fatty Acids Series: Effects on Cancer, Child and Maternal Health, Cognitive Functions, Eye Health, Mental Health, Organ Transplantation
  Soy, Effects on Health Outcomes
Ear, nose, and throat conditions
  Sinusitis, Acute Bacterial—Update
Heart and vascular diseases
  Abdominal Aortic Aneurysm, Endovascular and Open Surgical Repairs
  Heart Failure Diagnosis and Prognosis, Testing for BNP and NT-proBNP
  Left Ventricular Systolic Dysfunction, Cardiac Resynchronization Therapy and ICDs
  Post-Myocardial Infarction Depression
Information technology
  Health Information Technology, Costs and Benefits
  Telemedicine for the Medicare Population—Update
Lung conditions
  Asthma, Work-Related
  Chronic Obstructive Pulmonary Disease, Spirometry
Mental health conditions and substance abuse
  Adults with Non-Psychotic Depression Treated with SSRIs, CYP450 Testing
  Eating Disorders, Management
  Tobacco Use: Prevention, Cessation, and Control

The emphasis on horizon scanning appears to have led to a considerable duplication of effort among health plans and private technology assessment firms in the United States.
In response to a request by the com- mittee, UnitedHealthcare provided a sample list of the screening, diagnos- tic, therapeutic, and disease management services and devices that it had

assessed in 2006.

BOX 3-2 Continued

Metabolic, nutritional, and endocrine conditions
  Impaired Glucose Tolerance and Fasting Glucose, Diagnosis, Prognosis, and Therapy
Methodology
  Empirical Evaluation, Association Between Methodological Shortcomings and Estimates of Adverse Events
  Health Benefit Design, Consumer-Oriented Strategies for Improving
  Statement of Work for Technical Analysis, Methodology
  Systematic Reviews, Criteria for Distinguishing Effectiveness from Efficacy Trials
Nerve and brain conditions
  Age-Related Neurodegenerative Disorders, B Vitamins and Berries
  Insomnia, Manifestations and Management
  Stroke, Evaluation and Treatment
Obstetric and gynecologic conditions
  Breastfeeding, Maternal and Infant Health Outcomes
  Cesarean Delivery on Maternal Request
  Episiotomy Use in Obstetrical Care
  Menopause-Related Symptoms, Management
  Ovarian Cancer, Genomic Tests for Detection and Management
  Perinatal Depression: Prevalence and Screening
  Uterine Fibroids—Update
Pediatric conditions
  Breastfeeding, Maternal and Infant Health Outcomes
  Toilet Training, Effectiveness of Different Methods
Quality improvement and patient safety
  Children with Special Health Care Needs, Care Coordination Strategies
  Closing the Quality Gap—Vol. 3: Hypertension Care; Vol. 4: Antibiotic Prescribing Behavior; Vol. 5: Asthma Care; Vol. 6: Healthcare-Associated Infections; Vol. 7: Care Coordination
  Continuing Medical Education, Effectiveness
  Nurse Staffing and Quality of Patient Care
  Periodic Health Evaluation, Value
Skin conditions
  Heparin, Uses to Treat Burn Injury

NOTE: BNP = B-type natriuretic peptide; CYP450 = cytochrome P450; ICD = implantable cardioverter defibrillator; NT-proBNP = N-terminal proBNP; SSRI = selective serotonin reuptake inhibitor.
SOURCE: AHRQ (2007e).
The committee then asked three additional health plans (Aetna, Kaiser Permanente, and WellPoint) and TEC, the ECRI Institute, and Hayes, Inc., if they had also conducted reviews of the 20 services that UnitedHealthcare had reviewed (Table 3-4). With only a few exceptions, each health plan and private firm had assessed the same 20 services that

TABLE 3-4 Continued

Devices (each assessed by most or all of UnitedHealthcare, Kaiser Permanente, Aetna, WellPoint, Hayes, Inc., BCBSA TEC, and the ECRI Institute):
  Artificial total disc replacement for lumbar and cervical spine
  Cochlear implants
  Total artificial heart
  Total hip resurfacing arthroplasty

NOTE: Not all reviews are comprehensive assessments. AHRQ EPCs have reviewed 5 of the 20 topics listed (ambulatory blood pressure monitoring, CT angiography, proteomic testing for ovarian cancer, spinal fusion for low back pain, and uterine fibroids). The Kaiser Permanente entries represent all Kaiser regions.

TABLE 3-5 Priority Setting Criteria That Selected Organizations Use

Criteria considered: cost; disease burden; potential impact; public interest or controversy; new evidence; sufficient evidence; and variation in care.
Number of these seven criteria used: AHRQ(a), 6; BCBSA TEC(b), 3; CADTH, 5; MedCAC and CMS, 3; NICE, 4; NIH OMAR, 6; DERP(c), 1; USPSTF, 6.

NOTE: CADTH = Canadian Agency for Drugs and Technologies in Health; DERP = Drug Effectiveness Review Project; NICE = National Institute for Health and Clinical Excellence; NIH OMAR = National Institutes of Health Office of Medical Applications of Research.
(a) Also if relevant to federal health programs; specific plans to disseminate or otherwise use findings.
(b) Must be of interest to member plans.
(c) Also if multiple drugs are in the class, for off-label use, and for recent additions to drug class.
SOURCES: AHRQ (2007a,c,d); BCBSA (2007); CADTH (2005); CMS (2006); DERP (2007); Harris et al. (2001).

Methods Used to Identify High-Priority Topics

Selection Criteria

Many organizations report using the same general criteria to gauge the potential impact that an evidence assessment might have on clinical care and patient outcomes (Table 3-5) (Aronson, 2007; CADTH, 2005; Harris et al., 2001). These include the burden of disease (rates of disability, morbidity, or mortality), public controversy, cost (as related to the condition, as related to the procedure, or in the aggregate), potential impact, new evidence that might change previously held conclusions (new clinical trial results), the adequacy of the existing evidence, and unexplained variation in the use of services (Table 3-6). How these factors play into final priorities is not apparent.

One recent analysis found little congruence between the topics addressed by cost-effectiveness analyses, conducted from 1976 through 2001, and those conditions that caused the highest burden of disease or that were the top health concerns identified in the U.S.
Surgeon General’s report Healthy People 2010 (HHS, 2000; Neumann et al., 2005). The effectiveness

reviews focused primarily on pharmaceuticals (40 percent) and surgical procedures (16 percent) and overrepresented cerebrovascular disease, diabetes, breast cancer, and HIV/AIDS, whereas they underrepresented depression and bipolar disorder, injuries, and substance abuse disorders. Similarly, a survey of European horizon-scanning agencies found little evidence that the organizations had operationalized all of their selection criteria (Douw and Vondeling, 2006).

TABLE 3-6 Definitions of Commonly Used Priority Setting Criteria

Disease burden: Extent of disability, morbidity, or mortality imposed by a condition, including effects on patients, families, communities, and society overall
Controversy: Controversy or uncertainty around the topic and supporting data
Cost: Economic cost associated with the condition, procedure, treatment, or technology related to the number of people needing care, unit cost of care, or indirect costs
New evidence: New evidence with the potential to change conclusions from prior assessments
Potential impact: Potential to improve health outcomes (morbidity, mortality) and quality of life; improve decision making for patient or provider
Public or provider interest: Consumers, patients, clinicians, payers, and others want an assessment to inform decision making
Sufficient evidence: The available research literature provides adequate evidence to support an assessment
Variation in care: Potential to reduce unexplained variations in prevention, diagnosis, or treatment; the current use is outside the parameters of clinical evidence

FINDINGS

There is little solid basis at present for judging whether one method of selecting priorities is better than another. The Cochrane Collaboration and the USPSTF are currently reconsidering their approaches and may have insights to offer in the future (Cochrane Collaboration, 2007; Guirguis-Blake et al., 2007).
Although AHRQ has handled a relatively small volume of nominations, it has considerable experience managing topic nominations for its effectiveness programs. The Program should learn from this experience.

New and emerging technologies are clearly high priorities for health

plans. However, the Program should focus its priorities not only on what lies ahead, but also on where there is meaningful potential to identify both new and established effective services. Several specific variables may be useful indicators of potential impacts, including burden of disease, cost, unexplained variations in use, and measures of disparities in health outcomes based on race and ethnicity.

The PSAC must consider how best to approach the setting of priorities for reviewing new and emerging technologies. There appear to be substantial efficiencies to be gained by reducing duplicative reviews of new technologies. Decision makers, especially in health plans and health systems, often need to decide quickly about whether to cover new and emerging technologies. Patients and providers want information on new health services as soon as they become available, often because manufacturers are pressing them to adopt a product or because patients have read direct-to-consumer advertising and want answers from their physicians. Yet, almost by definition, sufficient objective information about new and emerging technologies is seldom available. The PSAC should consider whether new and emerging technologies require the use of a different priority setting process, including the use of separate criteria, than other topics with more substantive evidence. There would be trade-offs in the resource and opportunity costs associated with two different processes.

There are few, if any, empirical data to suggest the optimal frequency for setting priorities or updating previous assessments. The Cochrane Collaboration recommends that systematic reviews be updated every two years and that review groups send reminders and results of new literature searches to prompt the authors (Higgins and Green, 2006).
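The update triggers discussed in this chapter, a fixed update cadence and a large relative change in a primary effect estimate, can be sketched as a simple screening check. The function name, the 365.25-day year, and the default thresholds below are illustrative assumptions rather than any organization's actual update rule:

```python
from datetime import date

def needs_update(last_review, today, old_effect, new_effect,
                 max_age_years=2, relative_change_threshold=0.5):
    """Flag a systematic review for updating if it is older than the
    assumed two-year cadence, or if a primary effect estimate has
    shifted by more than the relative-change threshold (50 percent
    by default)."""
    age_years = (today - last_review).days / 365.25
    if age_years > max_age_years:
        return True  # stale on age alone
    if old_effect and abs(new_effect - old_effect) / abs(old_effect) > relative_change_threshold:
        return True  # effect estimate has moved substantially
    return False

# A review completed in May 2004, checked in May 2007 with a relative
# risk that moved from 0.80 to 0.85: flagged on age alone.
flag = needs_update(date(2004, 5, 1), date(2007, 5, 1), 0.80, 0.85)
```

In practice such a check would be one input among several; the qualitative signals the chapter lists, such as new evidence of harm, do not reduce to a single numeric threshold.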
New knowledge, such as new evidence from recently conducted clinical trials, may trigger the need to reassess a previously considered topic, especially if it suggests the need for modifications to current clinical decision making. The PSAC should identify the quantitative and qualitative indicators that best signal the need for an update. Quantitative signals include, for example, significant changes (greater than 50 percent) in the magnitude of effect for any primary or mortality outcome from the original systematic review (Shojania et al., 2007). Possible qualitative signals include new studies reporting substantial differences in effectiveness, new information about harm, or caveats about previously reported findings.

RECOMMENDATIONS

As noted earlier, the committee recommends that the Program appoint a PSAC to develop and implement a priority setting process that will identify those high-priority topics that merit systematic evidence assessment. This section draws from the research examined in this chapter and, based

on the consensus of the committee, presents further recommendations for developing the Program’s priority setting process. It also highlights key programmatic issues the PSAC must address, including PSAC membership, cultivating objectivity, scope, identifying potential topics, identifying priority topics, meeting frequency, and updating priorities and processes.

Recommendation: The Program should appoint a standing Priority Setting Advisory Committee (PSAC) to identify high-priority topics for systematic reviews of clinical effectiveness.

• The priority setting process should be open, transparent, efficient, and timely.
• Priorities should reflect the potential for evidence-based practice to improve health outcomes across the life span, reduce the burden of disease and health disparities, and eliminate undesirable variations.
• Priorities should also consider economic factors, such as the costs of treatment and the economic burden of disease.
• The membership of the PSAC should include a broad mix of expertise and interests and be chosen to minimize committee bias due to conflicts of interest.

Guiding Principles

During the course of this study, the committee established a set of eight guiding principles for building the Program: accountability, consistency, efficiency, feasibility, objectivity, responsiveness, scientific rigor, and transparency. The principles are described in depth in Chapter 6. Five of the eight principles have particular salience for the Program’s priority setting process and are described in Table 3-7.

Key Program Challenges

PSAC Membership

The PSAC would be an active body with ongoing responsibility for reviewing topic nominations, horizon scanning, and advising the Program on topics that merit priority systematic review. Members should be willing to make significant time commitments. There is limited research evidence to suggest the optimal composition or size of the PSAC.
The committee believes that the PSAC should be large enough to include all of the important stakeholders but not so large that it becomes unwieldy. The membership should mirror the Program’s target audience, especially patients and consumers, clinicians, payers, and guideline developers, as well as individuals

with the appropriate expertise in the relevant content areas and technical methods. Maintaining expertise in all content areas will be impossible. The PSAC should consider using the CMS MedCAC approach. CMS sometimes recruits outside experts knowledgeable about a particular subject matter or methodology to serve as nonvoting panelists who provide additional technical input to MedCAC deliberations (CMS MCAC Operations and Methodology Subcommittee, 2006). The PSAC would require support staff to assist in the efficient review of topic nominations. Staff expertise in library sciences and research databases will be especially important.

TABLE 3-7 Principles for Setting Evidence Assessment Priorities

Consistency—methods are standardized and predictable: The Program reliably uses standard processes and criteria.

Efficiency—avoids waste and unnecessary duplication: The process is simple.

Objectivity—evidence based and without bias; conflict of interest is minimized: The process is developed by a broadly representative group selected to ensure a balanced membership and minimal bias due to conflicts of interest.

Responsiveness—addresses the information needs of decision makers: The process cultivates input from key decision makers, particularly patients, clinicians, and guideline developers, and ensures up-to-date information. Evaluation of the process is a routine function.

Transparency—methods are explicitly defined, consistently applied, and publicly available: The process remains open, predictable, and explicitly defined, with fully documented standards and procedures.

Cultivating Objectivity2

Objectivity implies balanced participation, oversight by a governance body, and standards that minimize conflicts of interest and other biases.
The PSAC should not be dominated by special interests that could benefit materially or by intellectual biases that might favor one professional specialty over another (e.g., surgery versus medicine or ophthalmology versus optometry).

The use of transparent, well-documented, and standard procedures also contributes to perceptions of objectivity. Stakeholders are not likely to trust an unpredictable, opaque process. All deliberations should be open to encourage public participation and public confidence and to ensure the inclusion of a wide variety of perspectives. The PSAC should post key documents on its website, including meeting announcements and decisions concerning priorities, and should allow time for public comment on documents that support the priority setting process.

2 See Chapter 5, Developing Trusted Clinical Practice Guidelines, for further discussion of the factors involved in developing balance in an advisory group.

Scope

The PSAC should consider a broad range of topics, including, for example, new, emerging, and well-established health services across the full spectrum of health care (e.g., preventive interventions, diagnostic tests, treatments, rehabilitative therapies, and end-of-life care and palliation); community-based interventions, such as immunization initiatives or programs to encourage smoking cessation; and research methods and data sources for the analysis of comparative effectiveness.

Identifying Potential Topics

There should be an open and inclusive topic nomination process that cultivates input from the key end users, such as the developers of guidelines and quality measures, patients, clinicians, and payers. Although the nomination process should not be overly burdensome to potential nominators, its methods, schedules, and information requirements should be standardized and predictable from year to year.

Topic nominations may not necessarily translate readily into answerable research questions. The AHRQ Effective Health Care Program requires nominators to provide standardized information in a template (Appendix C) that helps to clarify the focus of the suggested topic and to draw out the salient questions underlying the topic nomination. The PSAC should consider this approach.

Identifying Priority Topics

The PSAC should develop the selection criteria, with Program staff providing necessary research support.
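Selection criteria such as burden of disease, cost of treatment, unexplained variation, and health disparities are sometimes operationalized as a simple weighted rubric so that deliberations start from a common, documented baseline. The sketch below is purely illustrative: the criterion names, weights, and 0-5 rating scale are hypothetical assumptions, not a method prescribed by the report, and a rubric like this would supplement rather than replace committee judgment.

```python
# Hypothetical criteria and weights; values shown are illustrative only.
WEIGHTS = {
    "burden_of_disease": 0.35,
    "cost": 0.25,
    "unexplained_variation": 0.20,
    "outcome_disparities": 0.20,
}

def priority_score(ratings: dict[str, int]) -> float:
    """Weighted sum of 0-5 criterion ratings; higher suggests higher priority."""
    return round(sum(WEIGHTS[c] * ratings[c] for c in WEIGHTS), 2)

# Two hypothetical topic nominations rated by committee staff.
topics = {
    "low back pain management": {"burden_of_disease": 5, "cost": 4,
                                 "unexplained_variation": 5, "outcome_disparities": 3},
    "new imaging modality": {"burden_of_disease": 2, "cost": 5,
                             "unexplained_variation": 2, "outcome_disparities": 1},
}
ranked = sorted(topics, key=lambda t: priority_score(topics[t]), reverse=True)
```

Such scores give a transparent, reproducible ordering, but the scores themselves are only as objective as the underlying ratings, which is one reason the chapter cautions that a strict quantitative ranking may not be feasible.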
The committee believes that two considerations should be paramount in developing the selection criteria: (1) how well the topic reflects the clinical questions of patients and clinicians and (2) the potential for a large impact on clinical and other outcomes that matter the most to patients. It will be important to include criteria that indicate potential impacts, such as the burden of disease; economic factors, such as the costs of treatment and the economic burden of disease; unexplained variations; and measures of disparities in health outcomes based on race and ethnicity.

A strict, quantitative priority ranking may not be feasible given the range and complexity of potential topics regarding the management of specific health problems (e.g., back pain), specific patient populations (e.g., women under age 70 with advanced breast cancer who have undergone breast-conserving surgery), care settings (e.g., a specialized rehabilitation unit or a physician’s office), the class of pharmacologic or nonpharmacologic treatment, the type of provider (e.g., a neurologist or a psychiatrist), and multiple patient outcomes (e.g., pain, return to work, and mortality).

Meeting Frequency

The PSAC should meet frequently enough so that its members may keep abreast of research discoveries, emerging technologies, and unexpected events that might affect the priorities that the PSAC establishes. There will be a continuing stream of new interventions and an ongoing imperative to determine whether each new intervention is better than, comparable to, or worse than standard treatments. The priority setting process should be responsive to decision makers in a timely manner. It should also be routinely evaluated to ensure that it is fulfilling its purpose effectively and efficiently.

Updating Priorities and Processes

Research is iterative. New evidence can lead to new conclusions. The PSAC should develop a mechanism for revisiting past nominations, whether they have been rejected or accepted. On first consideration, the evidence for many topics will be insufficient to draw a conclusion on effectiveness.

REFERENCES

AHCPR (Agency for Health Care Policy and Research). 1990. Extracranial-intracranial bypass to reduce the risk of ischemic stroke: Health technology assessment report no. 6. Rockville, MD: AHCPR.

———. 1993. Intermittent positive pressure breathing: Old technologies rarely die. Rockville, MD: AHCPR.
AHRQ (Agency for Healthcare Research and Quality). 2006. Solicitation for nominations for new primary and secondary health topics to be considered for review by the United States Preventive Services Task Force. Federal Register 71(15):3849-3850.

———. 2007a. Effective health care home. http://effectivehealthcare.ahrq.gov/ (accessed August 7, 2007).

———. 2007b. Nominations of topics for Evidence-based Practice Centers. Federal Register 72(51):12618-12619.

———. 2007c. Reports: Research reviews. http://effectivehealthcare.ahrq.gov/reports/reviews.cfm (accessed September 4, 2007).

———. 2007d. Technology assessments. http://www.ahrq.gov/clinic/techix.htm (accessed September 4, 2007).

———. 2007e. Topic index: A-Z Evidence-based Practice Centers (EPCs). http://www.ahrq.gov/clinic/epcquick.htm (accessed September 4, 2007).

Aronson, N. 2007. Approaches to priority setting: Identifying topics and selection (Submitted responses to the IOM HECS committee meeting, January 2007). Washington, DC.

BCBSA (Blue Cross and Blue Shield Association). 2007. Blue Cross and Blue Shield Association’s Technology Evaluation Center. http://www.bcbs.com/tec/index.html (accessed January 18, 2007).

BCBSA TEC (Blue Cross and Blue Shield Association Technology Evaluation Center). 2007. 2006 TEC Assessments (Volume 21). http://www.bcbs.com/betterknowledge/tec/vols/#21 (accessed September 4, 2007).

BMJ. 2004a. Antihistamines and/or oral decongestants to treat otitis media with effusion. Clinical Evidence. December. London, UK: BMJ Publishing Group.

———. 2004b. Fenfluramine plus phentermine to treat obesity. Clinical Evidence. December. London, UK: BMJ Publishing Group.

———. 2004c. Subcutaneous interferon alfa-2a to treat age-related macular degeneration. Clinical Evidence 88(12). December. London, UK: BMJ Publishing Group.

CADTH (Canadian Agency for Drugs and Technologies in Health). 2005. CADTH: HTA directorate process documentation. In CADTH: Topic identification, prioritization and refinement. Ontario, Canada: CADTH.

CMS (Centers for Medicare & Medicaid Services). 2006. Guidance for the public, industry, and CMS staff: Factors CMS considers in commissioning external technology assessments. http://www.cms.hhs.gov/mcd/ncpc_view_document.asp?id=7 (accessed January 18, 2007).

CMS MCAC Operations and Methodology Subcommittee (CMS Medicare Coverage Advisory Committee Operations and Methodology Subcommittee). 2006.
Process for evaluation of effectiveness and committee operations. http://www.cms.hhs.gov/FACA/Downloads/recommendations.pdf (accessed January 18, 2007).

Coates, V. 2007. Stakeholders Forum (Presentation to the IOM HECS Committee Meeting, January 2007). Washington, DC.

Cochrane Collaboration. 2007. Setting priorities. Cochrane News (40). http://www.cochrane.org/newslett/CochraneNews_Aug07lowres.pdf (accessed September 12, 2007).

Coplen, S. E., E. M. Antman, J. A. Berlin, P. Hewitt, and T. C. Chalmers. 1990. Efficacy and safety of quinidine therapy for maintenance of sinus rhythm after cardioversion: A meta-analysis of randomized control trials. Circulation 82:1106-1116.

DERP (Drug Effectiveness Review Project). 2007. Process. http://www.ohsu.edu/drugeffectiveness/process/index.htm (accessed September 4, 2007).

Douw, K., and H. Vondeling. 2006. Selection of new health technologies for assessment aimed at informing decision making: A survey among horizon scanning systems. International Journal of Technology Assessment in Health Care 22(2):177-183.

Enkin, M., M. Keirse, M. Renfrew, and J. Neilson. 1995. A guide to effective care in pregnancy and childbirth. 2nd ed. New York: Oxford University Press.

Feeny, D., G. Guyatt, and P. Tugwell, eds. 1986. Health care technology: Effectiveness, efficiency, and public policy. Montreal, Canada: Institute for Research on Public Policy.

Fletcher, S. W., and G. A. Colditz. 2002. Failure of estrogen plus progestin therapy for prevention. JAMA 288:366-368.

Goodman, C. 2004. HTA 101: Introduction to health technology assessment. Falls Church, VA: The Lewin Group.

Grimes, D. A. 1993. Technology follies: The uncritical acceptance of medical innovation. JAMA 269(23):3030-3033.

Guirguis-Blake, J., N. Calonge, T. Miller, A. Siu, S. Teutsch, and E. Whitlock. 2007. Current processes of the U.S. Preventive Services Task Force: Refining evidence-based recommendation development. Annals of Internal Medicine 147(2).

Harris, R. P., M. Helfand, S. H. Woolf, K. N. Lohr, C. D. Mulrow, S. M. Teutsch, and D. Atkins. 2001. Current methods of the U.S. Preventive Services Task Force: A review of the process. American Journal of Preventive Medicine 20(3 Suppl):21-35.

HHS (U.S. Department of Health and Human Services). 2000. Healthy people 2010: Understanding and improving health. 2nd ed. Washington, DC: U.S. Government Printing Office.

Higgins, J. T., and S. Green. 2006. Cochrane handbook for systematic reviews of interventions 4.2.6 [updated September 2006]. The Cochrane Library, Issue 4, 2006. Chichester, UK: John Wiley & Sons, Ltd.

IOM (Institute of Medicine). 1990. National priorities for the assessment of clinical conditions and medical technologies: Report of a pilot study. Edited by Lara, M. E., and C. Goodman. Washington, DC: National Academy Press.

———. 1992. Setting priorities for health technologies assessment: A model process. Edited by Donaldson, M. S., and H. C. Sox. Washington, DC: National Academy Press.

———. 1995. Setting priorities for clinical practice guidelines. Edited by Field, M. J. Washington, DC: National Academy Press.

———. 2003. Priority areas for national action: Transforming health care quality. Edited by Adams, K., and J. M. Corrigan. Washington, DC: The National Academies Press.

The Ischemic Optic Neuropathy Decompression Trial Research Group. 1995. Optic nerve decompression surgery for nonarteritic anterior ischemic optic neuropathy (NAION) is not effective and may be harmful. JAMA 273(8):625-632.

Mello, M. M., and T. A. Brennan. 2001. The controversy over high-dose chemotherapy with autologous bone marrow transplant for breast cancer. Health Affairs 20(5):101-117.

Murphy, K., C. Packer, A.
Stevens, and S. Simpson. 2007. Effective early warning systems for new and emerging health technologies: Developing an evaluation framework and an assessment of current systems. International Journal of Technology Assessment in Health Care 23(3):324-330.

NIH Consensus Development Program (National Institutes of Health Consensus Development Program). 2005. About the National Institutes of Health (NIH) Consensus Development Program (CDP). http://consensus.nih.gov/ABOUTCDP.htm#topic (accessed January 18, 2007).

———. 2007. Previous conference statements. http://consensus.nih.gov/PREVIOUSSTATEMENTS.htm (accessed September 4, 2007).

Neumann, P. J., A. B. Rosen, D. Greenberg, N. V. Olchanski, R. Pande, R. H. Chapman, P. W. Stone, S. Ondategui-Parra, J. Nadai, J. E. Siegel, and M. C. Weinstein. 2005. Can we better prioritize resources for cost-utility research? Medical Decision Making 25(4):429-436.

Noorani, H. Z., D. R. Husereau, R. Boudreau, and B. Skidmore. 2007. Priority setting for health technology assessments: A systematic review of current practical approaches. International Journal of Technology Assessment in Health Care 23(3):310-315.

OTA (U.S. Congress Office of Technology Assessment). 1994. Identifying health technologies that work: Searching for evidence. Washington, DC: Government Printing Office.

Oxman, A. D., H. J. Schünemann, and A. Fretheim. 2006. Improving the use of research evidence in guideline development: 2. Priority setting. Health Research Policy and Systems 4(14).

Passamani, E. 1991. Clinical trials: Are they ethical? New England Journal of Medicine 324(22):1589-1592.

Phelps, C., and S. Parente. 1990. Priority setting in medical technology and medical practice assessment. Medical Care 28:703-723.

Rettig, R., P. Jacobson, C. Farquhar, and W. Aubry. 2007. False hope: Bone marrow transplantation for breast cancer. New York: Oxford University Press.

Rossouw, J. E., G. L. Anderson, and R. L. Prentice. 2002. Risks and benefits of estrogen plus progestin in healthy postmenopausal women: Principal results from the Women’s Health Initiative randomized controlled trial. JAMA 288:321-333.

Sassi, F. 2003. Setting priorities for the evaluation of health interventions: When theory does not meet practice. Health Policy (Amsterdam, Netherlands) 63(2):141-154.

Shojania, K. G., M. Sampson, M. T. Ansari, J. Ji, S. Doucette, and D. Moher. 2007. How quickly do systematic reviews go out of date? A survival analysis. Annals of Internal Medicine 147:224-233.