Appendix B

Workshop Agenda on Weight of Evidence

March 27-29, 2013
National Academy of Sciences
2101 Constitution Ave., N.W.
Washington, DC 20418

WEDNESDAY, MARCH 27, 2013

8:00 Welcome to Workshop
  Jonathan Samet
  Chair, Committee to Review the IRIS Process
  Professor and Flora L. Thornton Chair, Department of Preventive Medicine
  Keck School of Medicine, University of Southern California
   
ASSEMBLING THE EVIDENCE
   
  This session will address approaches to identifying evidence on agents being considered in IRIS assessments. It will cover methods for searching the literature and other databases. The session will also consider the complicating issues of publication bias, “the grey literature,” selective publication of model results, and access to primary data. A further major set of topics includes the use of systematic approaches for characterizing study quality, methods for qualitatively and quantitatively assessing heterogeneity across studies, and the use of quantitative synthesis (meta-analysis). An additional topic, potentially relevant to some assessments, is whether all assessments need a comprehensive review of the literature.
   
8:15 Introduction and Overview of Session
  Lisa Bero
  Member, Committee to Review the IRIS Process
  Professor, Department of Clinical Pharmacy
  University of California, San Francisco
   
8:25 Systematic Review of Animal Studies and Approaches for Characterizing Study Quality
  Malcolm MacLeod
  Professor of Neurology and Translational Neuroscience
  University of Edinburgh
   
8:40 Systematic Review of Human Studies and Approaches for Characterizing Study Quality
  Karen Robinson
  Associate Professor
  Departments of Medicine, Epidemiology, and Health Policy and Management
  Johns Hopkins Medical Institutions


8:55 Development, Maintenance, and Use of an Air Pollution Data Base
  Richard Atkinson
  Senior Lecturer in Epidemiology
  St. George’s University of London

9:10 Panel Discussion with Speakers on Assembling the Evidence

Key Questions
  (1) Do IRIS assessments necessarily require full systematic reviews? (2) How might assessment of risk of bias differ between studies of chemicals and studies of other interventions, such as drugs? (3) What are the implications of heterogeneity of findings for risk relationships? (4) What approaches should be used for assembling different types of evidence, such as epidemiological and toxicological? (5) How can mechanistic information be systematically identified?

MECHANISM AND MODE OF ACTION

  There is a pressing need to improve efficiency in the risk-assessment process and incorporate high-throughput technology in evaluating the potential health effects of chemicals. Several efforts are underway at EPA to improve chemical risk assessment. For example, EPA’s high-throughput testing program (ToxCast) is designed to identify chemicals with the greatest potential risk to human health. EPA’s IRIS program is charged with evaluating and integrating these and other types of evidence regarding potential adverse effects of environmental contaminants on human health: mechanistic studies, animal bioassays, and human studies. This panel will discuss current and future use of data on mechanism and mode of action in weight-of-evidence considerations. Specific topics of interest are (a) evaluation of the strength of evidence related to mechanisms, (b) the use and interpretation of high-throughput toxicity screening data, and (c) application of genomic dose-response data to chemical risk assessment. Consideration of the application of mechanistic data to cancer and noncancer chemical risk assessment within IRIS assessments is of overarching interest.

10:30 Introduction and Overview of Session
  David Dorman
  Member, Committee to Review the IRIS Process
  Professor of Toxicology, College of Veterinary Medicine
  North Carolina State University

10:40 Use of High-Throughput and High-Data-Content Technologies in Chemical Risk Assessment
  Rusty Thomas
  Director, Institute for Chemical Safety Sciences
  The Hamner Institutes for Health Sciences

11:00 Panel Discussion of High-Throughput Data for Determining Mechanism or Mode of Action
  Panelists: David Schwartz, Chair of Medicine, Professor of Medicine and Immunology, University of Colorado; George Leikauf, Professor of Environmental and Occupational Health, Graduate School of Public Health, University of Pittsburgh; Rusty Thomas, Director, Institute for Chemical Safety Sciences, The Hamner Institutes for Health Sciences; Joe Rodricks, Principal, ENVIRON; and Thomas Hartung, Professor and Doerenkamp-Zbinden Chair for Evidence-based Toxicology and Director, Center for Alternatives to Animal Testing, Johns Hopkins Bloomberg School of Public Health

Key Questions
  Topic 1: How will findings from new high-throughput assays be used? Can data from high-throughput assays replace more traditional apical end points that are examined in animal toxicity studies? How can dose-dependent changes in mechanisms identified from high-throughput assays be incorporated into chemical risk assessments? How can pharmacokinetic and similar data inform the interpretation of high-throughput screening assays?
  Topic 2: How should mechanistic information be incorporated into IRIS assessments? How can the science be advanced to improve qualitative and quantitative application of mechanistic information? What are the evidence criteria for concluding that a mechanism is established as relevant to an agent and outcome?

INTEGRATION OF DATA

  EPA’s IRIS program is charged with evaluating and integrating multiple types of evidence regarding potential effects of environmental contaminants on human health: mechanistic studies, animal bioassays, and human studies. Assessments are often challenging due to sparse evidence, the use of relatively high doses in experimental bioassays, unclear toxicological mechanisms of action, and unmeasured co-exposures and other threats to validity in observational designs. This session will address qualitative and quantitative strategies for integrating evidence of different types in human health risk assessments.

1:00 Introduction and Overview of Session
  Scott Bartell
  Member, Committee to Review the IRIS Process
  Associate Professor, Program in Public Health
  University of California, Irvine

1:10 Qualitative and Quantitative Methods for Integrating Evidence
  Duncan Thomas
  Professor and Verna Richter Chair in Cancer Research, Keck School of Medicine
  University of Southern California

Panel Discussion on Integrating Various Data
  Panelists: Steve Goodman, Professor of Medicine and Epidemiology, Stanford University; Kristina Thayer, Director, Office of Health Assessment and Translation, National Toxicology Program; Duncan Thomas, Professor and Verna Richter Chair in Cancer Research, Keck School of Medicine, University of Southern California; Tracey Woodruff, Professor and Director, Program on Reproductive Health and the Environment, University of California, San Francisco; and Lauren Zeise, Deputy Director for Scientific Affairs, Office of Environmental Health Hazard Assessment, California EPA

Key Questions
  Topic 1: Hypothetical mechanisms or modes of action have been proposed for some toxicants, largely based on research in animal models. Consequently, it might be difficult to identify or exclude additional mechanisms for toxic effects in humans. Should mechanistic information be used in a qualitative manner, such as in Hill’s biological “plausibility” criterion? Can information from observational or clinical studies on intermediate end points related to mechanisms be helpful? How can mechanistic understanding best be reflected in dose-response model selection or parameter estimation?
  Topic 2: How should evidence of toxicity from high-dose animal studies be weighed against null findings from one or more epidemiologic studies at lower exposures? What level of epidemiologic evidence would be sufficient to dismiss a toxic effect in animals as irrelevant to humans? How can dose-response relationships be combined from different types of research, for example, animal bioassay and epidemiological?

  Topic 3: Should positive epidemiologic studies with weaker designs (for example, ecological studies or studies with unmeasured known confounders) or with positive but nonsignificant associations contribute to the weight of evidence, or should they be considered only as hypothesis generating?

CAUSALITY

  IRIS assessments evaluate hazard, specifically whether the chemical of concern is a cause of one or more adverse outcomes. The goal of this session is to consider the best methods available for systematically evaluating the evidence from individual studies with respect to whether, and to what degree, a chemical causes a particular health outcome, and for combining the evidence from individual studies into an overall judgment as to the likelihood of a causal relationship. Specific goals of the session include (1) considering the utility of the existing causal criteria outlined in the most recent IRIS documents; (2) comparing causal-assessment methods used by other national and international organizations, with the potential goals of elaborating new guidelines for assessing strength of evidence for causation and of achieving some harmonization across agencies; and (3) considering whether the Hill “criteria” are still useful as guides to synthesizing the overall evidence for causation, or whether alternative criteria or guidelines might be an improvement on approaches developed almost half a century ago.

3:00 Introduction and Overview of Session
  Richard Scheines
  Member, Committee to Review the IRIS Process
  Professor and Head, Department of Philosophy
  Carnegie Mellon University

3:10 The Role of Mechanism in Causal Assessments and the State of Bradford-Hill
  Steve Goodman
  Professor of Medicine and Epidemiology
  Stanford University

3:25 Application of Causal Methods to Assess Effects of Chemical Exposures in Practice
  Lauren Zeise
  Deputy Director for Scientific Affairs
  Office of Environmental Health Hazard Assessment
  California EPA

3:40 Comparing Weight-of-Evidence Frameworks for Causation
  Lorenz Rhomberg
  Principal
  Gradient

3:55 Panel Discussion with Speakers on Causal Methods

Key Questions
  Should the approach to causal inference within EPA guidelines be revised? Are the long-standing causal criteria still useful, given the range of evidence considered in IRIS assessments? How should causal judgments be made in practice? How can they be most useful for practitioners?

4:55 Opportunity for Public Comment

THURSDAY, MARCH 28, 2013

8:00 Welcome to Concluding Session of Workshop
  Jonathan Samet
  Chair, Committee to Review the IRIS Process
  Professor and Flora L. Thornton Chair, Department of Preventive Medicine
  Keck School of Medicine, University of Southern California

8:15 Putting the Pieces Together: A Case Study
  Tracey Woodruff
  Professor and Director
  Program on Reproductive Health and the Environment
  University of California, San Francisco

8:45 Workshop Discussion: From Start to Finish – Systematic Review and Evidence Integration
  Speakers, Panelists, and Committee Members

METHODS FOR CHARACTERIZING AND COMMUNICATING UNCERTAINTY

  One of the primary aims of systematic reviews is to characterize and communicate the state of evidence on a specific topic. Absence of evidence and uncertainties may be characterized using different approaches that range from implicit characterization (qualitative discussion, unexplained variance) to explicit and quantitative characterization. In most cases, communicating uncertainty qualitatively or quantitatively should be an intrinsic element of such efforts. Numerical, verbal, and graphical tools are all widely used to characterize and communicate uncertainty, but with varying success. In this session, methods for characterizing and communicating uncertainties in IRIS assessments will be considered.

9:15 Introduction and Overview of Session
  Ann Bostrom
  Member, Committee to Review the IRIS Process
  Professor, Daniel J. Evans School of Public Affairs
  University of Washington

9:25 Characterizing Uncertainty
  Jay Kadane
  Leonard J. Savage University Professor of Statistics, Emeritus
  Carnegie Mellon University

9:45 How the Public Interprets Uncertainty Communication: Some Lessons from the IPCC
  David Budescu
  Anne Anastasi Professor of Psychometrics and Quantitative Psychology
  Fordham University

10:00 Panel Discussion on Uncertainty
  Panelists: Tim Lash, Professor, Rollins School of Public Health, Emory University; Chris Frey, Distinguished University Professor, North Carolina State University; David Budescu, Anne Anastasi Professor of Psychometrics and Quantitative Psychology, Fordham University; Jay Kadane, Leonard J. Savage University Professor of Statistics, Emeritus, Carnegie Mellon University; and Thomas Wallsten, Professor, Department of Psychology, University of Maryland

Key Questions
  What approaches would enhance the consideration and presentation of uncertainty in IRIS assessments? What attributes of users and uses of IRIS should guide methods for characterizing uncertainties in IRIS assessments? What do we know about tools that are readily available for quantifying uncertainty in IRIS?

USE OF EXPERT JUDGMENT

  Expert judgment is used in systematic review processes and throughout IRIS assessments, as discussed in the earlier sessions of this workshop. Expert judgment is also used in risk analysis to fill gaps when data are unavailable. Although it is an inherent component of IRIS assessments, its use has not been explicitly acknowledged. In this session, the use of expert judgment in IRIS assessments will be considered, identifying those points in the review and assessment process where expert judgment is important. The session will consider processes for using expert judgment, as discussed in previous sessions of the workshop and in risk assessment more broadly, including elicitation and Delphi approaches.

11:00 Introduction and Overview of Session
  Ann Bostrom
  Member, Committee to Review the IRIS Process
  Professor, Daniel J. Evans School of Public Affairs
  University of Washington

11:15 Panel Discussion on Expert Judgment
  Panelists: Tim Lash, Professor, Rollins School of Public Health, Emory University; Chris Frey, Distinguished University Professor, North Carolina State University; David Budescu, Anne Anastasi Professor of Psychometrics and Quantitative Psychology, Fordham University; Jay Kadane, Leonard J. Savage University Professor of Statistics, Emeritus, Carnegie Mellon University; and Thomas Wallsten, Professor, Department of Psychology, University of Maryland

  [NOTE: All invited workshop participants are urged to participate in this particular discussion.]

  Suggested topics for the panel to address: (a) elicitation techniques, (b) understanding the specificity of expertise and the extent to which interdisciplinary expertise is required or possible, (c) opportunities (when and where) for the value of expert judgments in IRIS, and (d) limitations (including expert bias) on the value of expert judgments in IRIS.

Key Questions
  What are best practices for identifying appropriate expertise and eliciting expert judgments, what is the evidence for their effectiveness, and how could they inform the IRIS process? What types of biases in expert judgments might affect IRIS assessments, and how could these biases be mitigated?

12:15 Opportunity for Public Comment

12:30 Adjourn Workshop