OBSERVATIONAL STUDIES IN A LEARNING HEALTH SYSTEM
An Institute of Medicine Workshop
Sponsored by the Patient-Centered Outcomes Research Institute
A Learning Health System Activity
IOM Roundtable on Value & Science-Driven Health Care
April 25–26, 2013
National Academy of Sciences
2101 Constitution Avenue, NW
Washington, DC
Meeting Objectives
1. Explore the role of observational studies (OSs) in the generation of evidence to guide clinical and health policy decisions, with a focus on individual patient care, in a learning health system;
2. Consider concepts of OS design and analysis, emerging statistical methods, use of OSs to supplement evidence from experimental methods, identifying treatment heterogeneity, and providing effectiveness estimates tailored for individual patients;
3. Engage colleagues from disciplines typically underrepresented in discussions of clinical evidence; and
4. Identify strategies for accelerating progress in the appropriate use of OSs for evidence generation.
Day 1: Thursday, April 25
8:00 am
Coffee and light breakfast available
8:30 am
Welcome, introductions, and overview
Session format:
  Welcome, framing of the meeting, and agenda overview
  Welcome from the Institute of Medicine (IOM): Michael McGinnis, IOM
  Opening remarks and meeting overview: Joe Selby, Patient-Centered Outcomes Research Institute; Ralph Horwitz, GlaxoSmithKline
9:00 am
Workshop stage setting
Session format:
  Workshop overview and stage setting
  Q&A and open discussion
Session questions:
- How do observational studies contribute to building valid evidence to support effective decision making by patients and clinicians? When are their findings useful?
- What are the major challenges (study design, methodological, data collection/management/analysis, cultural, etc.) facing the field in the use of observational study data for decision making? Please include consideration of the following issues: bias, methodological standards, publishing requirements.
- What can workshop participants expect from the following sessions?
9:45 am
Engaging the issue of bias
Moderator: Michael Lauer, National Heart, Lung, and Blood Institute
Session format:
  Introduction to the issue
  Presentations:
    Instrumental variables and their sensitivity to unobserved biases
    An empirical approach to measuring and calibrating for error in observational analyses
  Respondents and panel discussion: John Wong, Tufts University; Joel Greenhouse, Carnegie Mellon University
  Q&A and open discussion
Session questions:
- What are the major bias-related concerns with the use of observational study methods? What are the sources of bias? How many of these concerns relate to methods, and how many relate to the quality and availability of suitable data?
- What barriers have these concerns created for the use of the results of observational studies to drive decision making?
- What are the most promising approaches to reducing bias through the use of statistical methods?
- What are the circumstances under which administrative (claims) data can be used to assess treatment benefits? What data are needed from electronic health records to strengthen the value of administrative data?
- What methods are best used to adjust for the changes in treatment and clinical conditions among patients followed longitudinally?
- What are the implications of these promising approaches for the use of observational study methods moving forward?
11:30 am
Lunch
Participants will be asked to identify, along with the individuals at their table, what they think the most critical questions are for patient-centered outcomes research in the topics covered by the workshop. These questions will then be circulated to the moderators of the subsequent sessions.
12:30 pm
Generalizing randomized controlled trial (RCT) results to broader populations
Session format:
  Introduction to the issue
  Presentations:
    Generalizing the right question
    Using observational studies to determine RCT generalizability
  Respondents and panel discussion: William Weintraub, Christiana Medical Center; Constantine Frangakis, Johns Hopkins University
  Q&A and open discussion
Session questions:
- What are the most cogent methodological and clinical considerations in the use of observational study methods to test the external validity of findings from RCTs? How do data collection, management, and analysis approaches affect generalizability?
- What are the generalizability questions of greatest interest? Or, where does the greatest doubt arise (age, concomitant illness, concomitant treatment)? What examples represent well-established differences?
- What statistical methods are needed to generalize RCT results? Are the standards for causal inference from OSs different when prior RCTs have been performed? How does statistical methodology vary in this case?
- What are the implications when treatment results for patients not included in the RCT differ from the overall results reported in the original RCT?
- What makes an observed difference in outcomes credible? Finding the effect shown in the RCT in the narrower population? Replication in more than one environment? The confidence interval of the result? The size of the effect in the RCT?
- Can subset analyses in the RCT, even if they are underpowered, be used to support or rebut the OS finding?
2:15 pm
Break
2:30 pm
Detecting treatment effect heterogeneity
Session format:
  Introduction to the issue: David Kent, Tufts University
  Presentations:
    Comparative effectiveness of coronary artery bypass grafting and percutaneous coronary intervention: Mark Hlatky, Stanford University
    Identification of effect heterogeneity using instrumental variables
  Respondents and panel discussion: Mary Charlson, Cornell University; Mark Cullen, Stanford University
  Q&A and open discussion
Session questions:
- What is the potential for OSs in assessing treatment response heterogeneity and individual patient decision making? What clinical and other data can be collected routinely to realize this potential?
- How can longitudinal information on changes in treatment categories and clinical condition be used to assess variations in treatment responses and inform individual patient decision making? What are the statistical methods for time-varying changes in treatment (including cotherapies) and clinical condition?
- What are the best methods to form distinctive patient subgroups in which to examine heterogeneity of the treatment response? What data elements are necessary to define these distinctive patient subgroups?
- What are the best methods to assess heterogeneity in multidimensional outcomes?
- How could further implementation of best practices in data collection, management, and analysis affect the assessment of treatment response heterogeneity?
- What is needed for information about treatment response heterogeneity to be validated and used in practice?
4:15 pm
Summary and preview of next day
4:45 pm
Reception
5:45 pm
Adjourn
Day 2: Friday, April 26
8:00 am
Coffee and light breakfast available
8:30 am
Welcome, brief agenda overview, and summary of previous day
9:00 am
Predicting individual responses
Session format:
  Introduction to the issue: Burton Singer, University of Florida
  Presentations:
    Data-driven prediction models
    Individual prediction
  Respondents and panel discussion: Peter Bach, Sloan Kettering; Mitchell Gail, National Cancer Institute
  Q&A and open discussion
Session questions:
- How can patient-level observational data be used to create predictive models of the treatment response in individual patients? What statistical methodologies are needed?
- How can predictive analytic methods be used to study the interactions of treatment with multiple patient characteristics?
- How should the clinical history (longitudinal information) for a given patient be used in the creation of rules to predict the response of that patient to one or more candidate treatment regimens?
- What are effective methodologies for producing prediction rules to guide the management of an individual patient on the basis of their comparability to the results of RCTs, OSs, and archived patient records?
- How can we blend predictive models, which predict the impact of treatment choices, with causal models, which compare predictions under different treatments?
10:45 am
Break
11:00 am
Conclusions and strategies going forward
Panel: Cynthia D. Mulrow, University of Texas; Jean R. Slutsky, Agency for Healthcare Research and Quality; Steven N. Goodman, Stanford University
Session questions:
- What are the major themes and conclusions from the workshop's presentations and discussions? How can these themes be translated into actionable strategies with designated stakeholders?
- What are the critical next steps in advancing analytic methods?
- What are the critical next steps in developing databases that will generate evidence to guide clinical decision making?
- What are the critical next steps in disseminating information on new methods to increase their appropriate use?
12:15 pm
Summary and next steps
  Comments from the Chairs: Joe Selby, Patient-Centered Outcomes Research Institute; Ralph Horwitz, GlaxoSmithKline
  Comments and thanks from the IOM: Michael McGinnis, IOM
12:45 pm
Adjourn
Planning Committee
Co-Chairs
Ralph I. Horwitz, GlaxoSmithKline
Joe V. Selby, Patient-Centered Outcomes Research Institute
Members
Anirban Basu, University of Washington
Troyen A. Brennan, CVS/Caremark
Steven N. Goodman, Stanford University
Louis B. Jacques, Centers for Medicare & Medicaid Services
Jerome P. Kassirer, Tufts University School of Medicine
Michael S. Lauer, National Heart, Lung, and Blood Institute
David Madigan, Columbia University
Sharon-Lise T. Normand, Harvard University
Richard Platt, Harvard Pilgrim Health Care Institute
Burton H. Singer, University of Florida
Jean R. Slutsky, Agency for Healthcare Research and Quality
Robert Temple, U.S. Food and Drug Administration
Staff Officer
Claudia Grossmann
cgrossmann@nas.edu
202.334.3867