Appendix B

Workshop Agendas and Questions to Panelists

AGENDA: WORKSHOP 1
Institute of Medicine Committee on Reviewing Evidence to Identify Highly Effective Clinical Services
November 7, 2006
National Academy of Sciences Building
2101 Constitution Avenue, NW, Lecture Room, Washington, DC

Workshop Objective: To review three case studies that reveal the challenges that decision makers face when trying to determine the clinical effectiveness of healthcare technologies.

8:30 Welcome and introductory remarks: Barbara McNeil, Chair, Institute of Medicine (IOM) Committee

8:45 Panel 1: PET Scan for Alzheimer's Disease. Moderator: Dick Justman (UnitedHealthcare)
  Marilyn Albert, Johns Hopkins University School of Medicine, Division of Cognitive Neuroscience
  David Matchar, Duke University Medical School, Center for Clinical Health Policy Research
KNOWING WHAT WORKS IN HEALTH CARE

  Sean Tunis, Center for Medical Technology Policy
  Susan Molchan, Alzheimer's Disease Neuroimaging Initiative, National Institute on Aging (NIH)
  Question & Answer/Open Discussion

10:15 Break

10:30 Panel 2: Avastin and Lucentis for Age-Related Macular Degeneration. Moderator: Diana Petitti (Kaiser Permanente, Southern California)
  Reginald Sanders, American Society of Retina Specialists
  Winifred Hayes, Hayes, Inc.
  Steve Phurrough, CMS Coverage and Analysis Group
  Dan Martin, Emory University School of Medicine
  Question & Answer/Open Discussion

12:15 Lunch

1:00 Panel 3: Screening and Treating Colorectal Cancer – The Fecal DNA Test and an Assay for Irinotecan Toxicity. Moderator: Steve Shak (Genomic Health, Inc.)
  Barry Berger, Exact Sciences Corporation
  Margaret Piper, Blue Cross and Blue Shield Association Technology Evaluation Center
  Richard Goldberg, University of North Carolina, Chapel Hill
  Atiqur Rahman, Food and Drug Administration, Center for Drug Evaluation and Research
  Question & Answer/Open Discussion

3:00 Break

3:15 Panel 4: Experts React. Moderator: Hal Sox, Vice Chair, IOM Committee
  Daniel Cain, Cain Brothers
  Peter Juhn, Health Policy and Evidence, Johnson & Johnson
  Cindy Mulrow, University of Texas and the American College of Physicians
  David Ransohoff, University of North Carolina Chapel Hill, School of Medicine
  Earl Steinberg, Resolution Health
5:00 Adjourn

Questions for the Panelists: Workshop 1
November 7, 2006

Panel 1: PET Scans for Alzheimer's
• What does this experience show about the feasibility of coverage with evidence development under Medicare?
• What roles did evidence assessment and political pressure have in this coverage decision? How is this experience instructive for future cases?
• What challenges were involved in ensuring that the evidence available on PET was applicable to everyday clinical practice?

Panel 2: Lucentis/Avastin
• Was the substantial uptake in Avastin use for wet AMD justifiable given the lack of evidence and the needs of the patient population?
• How do evidence reviewers and payers address the relative effectiveness of Lucentis and Avastin given the limited data?
• Given the state of the evidence base, what role should cost play in payer decisions?
• What does this case study say about the societal need for more clinical data and information and the mechanisms by which data development is financed?
• How will the head-to-head trial supported by NIH alter their role in terms of assessing cost effectiveness?

Panel 3: Genetic Tests
• Do you think that more of these types of tests will be developed [toxicity, and non-invasive screening]? How will experiences with these technologies affect the development of more similar tests?
• Are non-invasive screenings (genetic byproduct screening) and toxicity testing the "wave(s) of the future"? What types or levels of evidence are needed to recommend replacement of current therapies? Will comparative testing be done as newer technologies emerge?
• Are there specific challenges due to the nature of the populations qualified for testing? (Such as, are the populations so small as to affect the feasibility of large clinical trials?)
• As the evidence for these tests is emerging, how do gaps in evidence compare with more traditional technologies?
• What are the labeling issues for these technologies? What needs to be included and what would prompt a change?
• Are patient compliance and invasiveness considered when determining effectiveness?

Panel 4: Reactor
• What role did evidence play in these examples (as compared to other influences such as political pressure and provider experience)?
• Was the process of data collection and assessment able to keep pace with consumer and provider demand?
• In what ways did lack of data influence the process?
• What is the likelihood that the gaps in data will be filled (and in a timely manner)?
• How should current standards and methods in evidence assessment change in this era of personalized medicine, new biologic therapies, and advanced imaging techniques (if at all)?
• What needs to change to expedite the introduction of clinical services that are potentially highly effective (e.g., expediting the clinical trial process and obtaining different funding mechanisms for investigations)?
• How might information about new technologies be made more accessible for patients?

AGENDA: WORKSHOP 2
Institute of Medicine Committee on Reviewing Evidence to Identify Highly Effective Clinical Services
January 25, 2007
National Academy of Sciences Building
2101 Constitution Avenue, NW, Lecture Room, Washington, DC

8:30 Welcome and introductory remarks: Barbara McNeil, Chair, Institute of Medicine Committee

8:35 Panel 1: Using Systematic Reviews to Develop Clinical Recommendations. Moderator: Richard Marshall
  Carolyn Clancy, Agency for Healthcare Research and Quality
  Mary Barton, U.S. Preventive Services Task Force
  Steven Findlay, Consumers Union
  Ray Gibbons, American Heart Association
  Question & Answer/Open Discussion

9:55 Panel 2: Using Systematic Reviews to Develop Quality Measures and Practice Standards. Moderator: Lisa Simpson
  Janet Corrigan, National Quality Forum
  Greg Pawlson, National Committee for Quality Assurance
  Dennis O'Leary, Joint Commission on Accreditation of Healthcare Organizations
  Cary Sennett, AMA-convened Physician Consortium for Performance Improvement
  Question & Answer/Open Discussion

11:15 Panel 3: Approaches to Priority Setting: Identifying Topics and Selection Criteria. Moderator: Dana Goldman
  Richard Justman, UnitedHealthcare
  Kay Dickersin, Cochrane USA
  Jean Slutsky, AHRQ Effective Health Care Program
  Naomi Aronson, BCBSA Technology Evaluation Center
  Question & Answer/Open Discussion

12:30 Lunch

1:00 Panel 4: Stakeholders Forum. Moderator: Robert Galvin
  Kathy Buto, Johnson & Johnson
  Art Small, Genentech
  Vivian Coates, ECRI
  Jim Weinstein, Dartmouth-Hitchcock Medical Center
  Question & Answer/Open Discussion
Institute of Medicine Workshop
Reviewing Evidence to Identify Highly Effective Clinical Services
January 25, 2007

Panel 1: Using Systematic Reviews to Develop Clinical Recommendations
Moderator: Richard Marshall (Harvard Vanguard Medical Associates)
Panelists: Carolyn Clancy (AHRQ), Mary Barton (USPSTF), Steven Findlay (Consumers Union), and Ray Gibbons (American Heart Association)

The objective of this panel discussion is to learn about the experiences of well-regarded organizations that use systematic reviews or other syntheses of bodies of evidence to develop clinical recommendations. AHRQ serves multiple roles: generator of evidence (e.g., DEcIDE, CERTs), synthesizer of evidence (e.g., Evidence-based Practice Center Program), and developer of clinical recommendations (e.g., USPSTF). The USPSTF assesses and synthesizes the evidence on preventive services and issues clinical recommendations based on these bodies of evidence. Consumers Union's Best Buy Drugs Program relies on evidence-based analyses of the safety and effectiveness of prescription drugs to help consumers choose the drug best suited to their medical needs. The American Heart Association (in collaboration with the American College of Cardiology) synthesizes bodies of evidence on selected topics and draws from other reviews to develop clinical practice guidelines for cardiovascular care.

Questions for the Panelists

1. Who is the principal audience for your clinical recommendations? Have you assessed their use of the recommendations? Approximately how often does your audience follow your clinical recommendations:
   • In full, _____ percent
   • In part, _____ percent
2. How do you identify and prioritize areas for which clinical recommendations are necessary?
3. How do you identify sources of evidence on clinical effectiveness? Which criteria (if any) do you use to judge quality of evidence?
4. How would you characterize the available evidence on the clinical
questions you address? Does it sufficiently cover services and populations of interest? What are the critical gaps (if any)?
5. Do you incorporate observational and other nonrandomized data in your evidence syntheses? If yes, what are the parameters for their use?
6. How do you respond to pressing demands for clinical recommendations when the body of evidence is insufficient or when the available evidence is relevant to only a subset of patients? Approximately how often do you make recommendations in the absence of sufficient data in these cases?
7. What resources does your organization dedicate to developing evidence-based clinical recommendations (e.g., staff time, special committee responsibility, conferences)?

Institute of Medicine Workshop
Reviewing Evidence to Identify Highly Effective Clinical Services
January 25, 2007

Panel 2: Using Systematic Reviews to Develop Quality Measures and Practice Standards
Moderator: Lisa Simpson (Cincinnati Children's Hospital Medical Center)
Panelists: Janet Corrigan (NQF), Greg Pawlson (NCQA), Dennis O'Leary (JCAHO), Cary Sennett (AMA-convened Physician Consortium)

The objective of this panel discussion is to learn about the experiences of leading organizations that are at the forefront of U.S. efforts to develop, implement, and improve evidence-based clinical practice standards. The mission of the NQF is to promote quality improvement in healthcare by endorsing national performance measures. NCQA's HEDIS measures are used to assess health plan performance on various dimensions of care. JCAHO's ORYX initiative incorporates outcome performance measurement into the accreditation process for health care organizations. The AMA-convened Physician Consortium for Performance Improvement® (Consortium) has developed more than 100 performance measures for practicing physicians.
The Consortium includes more than 100 medical specialty and state medical societies, the Council of Medical Specialty Societies, American Board of Medical Specialties and its member-boards, experts in methodology and
data collection, the Agency for Healthcare Research and Quality, and the Centers for Medicare & Medicaid Services.

Questions for the Panelists

1. How do you identify areas for which quality measures or practice standards are necessary?
2. Who is the principal audience for your evidence-based measures/standards? What factors promote your credibility and establish you as a trusted source of information for these groups?
3. How do you respond to pressing demands for measures/standards when the relevant body of evidence is insufficient? Approximately how often do you make recommendations in the absence of sufficient data in these cases?
4. Do you have a mechanism for influencing research to produce needed evidence? What are your thoughts about the emerging practice of coverage with evidence development?
5. How would you characterize the available evidence on the clinical questions you address? Does it sufficiently cover services and populations of interest? What are the critical gaps (if any)?
6. Do you consider observational and other nonrandomized data on clinical effectiveness? If yes, what are the parameters for their use?
7. What resources does your organization use to identify, assess, and incorporate evidence syntheses in your clinical quality measures/standards (e.g., staff time, special committee responsibility, conferences)?

Institute of Medicine Workshop
Reviewing Evidence to Identify Highly Effective Clinical Services
January 25, 2007

Panel 3: Approaches to Priority Setting: Identifying Topics and Selection Criteria
Moderator: Dana Goldman
Panelists: Richard Justman (UnitedHealthcare), Kay Dickersin
(Cochrane USA), Jean Slutsky (AHRQ Effective Health Care Program), Naomi Aronson (BCBSA TEC)

The objective of this panel discussion is to learn how these leading organizations prioritize their efforts to conduct systematic reviews on clinical effectiveness. UnitedHealthcare is one of the nation's largest health plans with an estimated 22 million covered lives. The Cochrane Collaboration is an international, not-for-profit organization that produces and disseminates systematic reviews of health care interventions. AHRQ's Effective Health Care Program is the leading federal agency charged with systematically reviewing and synthesizing evidence on clinical effectiveness. BCBSA TEC, an AHRQ-designated Evidence-based Practice Center, is a highly respected source of evidence-based assessments of the clinical effectiveness of medical procedures, devices, and drugs.

Questions for the Panelists

1. How do you identify topics? Please provide a detailed outline of your approach, using, for example, last year's topics.
2. How would you characterize the yield from your efforts to identify topics? Does it capture a broad spectrum of services? What about surgical procedures? Existing services? Behavioral health? Disease management? Children's health?
3. What are your criteria for selecting topics and how are the criteria implemented (e.g., through a formal process and quantitative method)?
4. Do you have a mechanism for picking up missed but important topics? How often (in retrospect) has your horizon scanning failed to identify a key service? Consider the past two years in answering this question.
5. How do you rank priorities? Please be specific as to the criteria used. How many of the priority topics are addressed each year?
6. What factors, if any, can override already determined priorities?
7. Has this approach worked well given your objectives? What are the strengths and weaknesses of the process?
8. What resources are involved in this activity (e.g., staff time, special committee responsibility, conferences)?
Institute of Medicine Workshop
Reviewing Evidence to Identify Highly Effective Clinical Services
January 25, 2007

Panel 4: Stakeholders Forum
Moderator: Robert Galvin (General Electric)
Panelists: Kathy Buto (J&J), Vivian Coates (ECRI), Art Small (Genentech), James Weinstein (Dept. of Orthopedics, Dartmouth-Hitchcock Medical Center)

The objective of this panel discussion is to learn key stakeholders' views on how highly effective clinical services are identified. Johnson & Johnson is one of the world's largest manufacturers of medical devices, drugs, and equipment. ECRI, an AHRQ-designated Evidence-based Practice Center, is a highly respected source of evidence-based assessments of the clinical effectiveness of medical procedures, devices, and drugs. Genentech is one of the world's leading biotech companies. Dartmouth-Hitchcock's department of orthopedics is the primary site of a 5-year, $14 million trial comparing surgical to nonsurgical treatments for certain back problems.

Questions for the Panelists

This IOM Committee has been charged with recommending an approach to identifying highly effective clinical services across the spectrum of care, from prevention, diagnosis, treatment, and rehabilitation, to end-of-life care and palliation. In light of this charge and from the perspective of your organization, please answer the following:

1. How do you think that priorities should be set for services that need evidence development or synthesis?
2. What is your organization's current role in the development, use, and analysis of evidence on the clinical effectiveness of health care services (including drugs, devices, procedures, and other methods used to promote health or rehabilitation)?
3. Several groups and individuals, perhaps most recently Gail Wilensky in a Health Affairs piece,¹ have proposed the establishment of a sizable entity to effect a quantum leap in the national capacity to assess the comparative effectiveness of health care services. How might such a venture provide benefits, what would be the key concerns, and what would be the implications for your organization?
4. The U.S. Preventive Services Task Force evaluates evidence and develops recommendations for clinical preventive services.² How would your organization respond to the formation of a similar task force that provided the same function for clinical interventions, e.g., diagnostic testing, treatment, etc.?

¹ Wilensky, G. 2006. Health Affairs Web Exclusive w572-w585. Bethesda, MD: Project Hope.
² Harris, R. P., M. Helfand, S. H. Woolf, K. N. Lohr, C. D. Mulrow, S. M. Teutsch, and D. Atkins. 2001. Current methods of the U.S. Preventive Services Task Force: A review of the process. American Journal of Preventive Medicine 20(3 Suppl):21-35.