In response to recommendations provided in the 2011 National Research Council (NRC) report Review of the Environmental Protection Agency’s Draft IRIS Assessment of Formaldehyde, the US Environmental Protection Agency (EPA) has begun to revise its process for developing toxicologic assessments under its Integrated Risk Information System (IRIS). Materials provided to the present committee (see Table 1-1) indicate that many NRC recommendations have been implemented, and others are yet to be implemented.
Some recommendations from the NRC formaldehyde report dealt with specific steps of the IRIS process, and others dealt more generally with the overall process. The present chapter focuses on the general recommendations from that report, assesses EPA’s response to them, and provides further suggestions and guidance for refining what has been implemented. It also discusses two general issues concerning the IRIS process: increasing efficiency and using expert judgment. The chapters that follow review and assess EPA’s response to the recommendations specific to various steps of the IRIS process, from framing the assessment through identifying, evaluating, and integrating the evidence to deriving toxicity values (see Figure 1-2).
The EPA report Status of Implementation of Recommendations (EPA 2013a) summarized the general recommendations of the NRC formaldehyde report (NRC 2011) (see Box 2-1). That report’s recommendations focused on improving the clarity of the assessments by rigorous editing, reducing text volume, and addressing redundancies and inconsistencies; describing assessment methods more fully; enhancing quality-control processes for assessments; standardizing review and evaluation approaches; and ensuring that the various chemical-assessment teams included appropriate expertise.
EPA has made multiple changes in the IRIS program to address the general recommendations in the NRC formaldehyde report (NRC 2011). Although not called for by the formaldehyde report, new leadership for the IRIS program is in place, including Kenneth Olden (former director of the National Institute of Environmental Health Sciences) as the director of EPA’s National Center for Environmental Assessment and Vincent Cogliano (former section head of the Monograph Section of the International Agency for Research on Cancer [IARC]) as the acting director of the IRIS program. EPA has recently adopted a new document structure for IRIS assessments, drafted a preamble to be included with all IRIS assessments, and instituted several changes to enhance quality-control processes within the IRIS program. Those changes are reviewed in the following sections.
• To enhance the clarity of the document, the draft IRIS assessment needs rigorous editing to reduce the volume of text substantially and address redundancies and inconsistencies. Long descriptions of particular studies, for example, should be replaced with informative evidence tables. When study details are appropriate, they could be provided in appendices.
• Chapter 1 needs to be expanded to describe more fully the methods of the assessment, including a description of search strategies used to identify studies with the exclusion and inclusion criteria clearly articulated and a better description of the outcomes of the searches…and clear descriptions of the weight-of-evidence approaches used for the various noncancer outcomes. The committee emphasizes that it is not recommending the addition of long descriptions of EPA guidelines to the introduction, but rather clear concise statements of criteria used to exclude, include, and advance studies for derivation of the RfCs [reference concentrations] and unit risk estimates.
• Elaborate an overall, documented, and quality-controlled process for IRIS assessments.
• Ensure standardization of review and evaluation approaches among contributors and teams of contributors; for example, include standard approaches for reviews of various types of studies to ensure uniformity.
• Assess disciplinary structure of teams needed to conduct the assessments.
Source: NRC 2011, pp. 152, 164.
New Document Structure
A major concern of the 2011 NRC formaldehyde report was that IRIS assessments had become too expansive, were poorly organized and edited, and thus were deficient in communicating clearly, cogently, and transparently how conclusions were reached in the assessments. Specifically, in the formaldehyde IRIS assessment, the basis of the conclusions on health outcomes was not clear and transparent, and the rationales for selecting studies for deriving quantitative toxicity values for cancer and noncancer outcomes were not well developed or consistent.
Consequently, the NRC committee that reviewed the formaldehyde assessment offered a number of general suggestions to enhance document clarity, including reducing document length, editing rigorously, eliminating redundancies and inconsistencies, and using evidence tables rather than long narrative descriptions of individual studies. In response, EPA developed and has implemented a new document structure that streamlines the assessments (EPA 2013a) and appropriately organizes them into two broad sections: hazard identification and dose-response analysis. EPA noted that the new organization aligns better with the traditional risk-assessment paradigm. The new document structure also includes an executive summary to highlight major conclusions. Recent IRIS assessments reflect the new document structure (EPA 2013b,c).
The committee agrees that the new document structure, which is reflected in the toxicologic review of benzo[a]pyrene (EPA 2013c), leads to better organized and streamlined assessments and reduces redundancies. It also notes that EPA has embraced the use of evidence tables and graphic displays of study findings to reduce text volume and enhance clarity and transparency. As a result, the descriptions of individual studies have been shortened. That approach brings the IRIS assessments much more in line with the state of practice for systematic reviews.
IRIS Assessment Preamble
Another important concern of the formaldehyde committee was that the assessment methods were not clearly articulated. At the time of that review, EPA typically provided a brief generic introductory chapter that simply referred to EPA guidelines. Accordingly, the formaldehyde committee recommended that EPA explain more fully the methods and approaches used. In response, EPA has developed a preamble to be included with all IRIS assessments that follows the model used by IARC and that “describes the application of existing EPA guidance and the methods and criteria used in developing the assessments” (EPA 2013a, p. 6). It discusses the general scope and elements of the IRIS program; the peer-review process for the IRIS assessments; the identification, selection, and evaluation of studies; integration of evidence; and derivation of toxicity values.
The present committee finds that the preamble is a useful statement, which presumably will be revised as methods and procedures are modified and updated. As appropriate, each IRIS assessment should note the version of the preamble included in the assessment. The broad description of principles and methods in the preamble does not replace, however, the need for a brief description in each IRIS assessment that indicates how the general principles described in the preamble have been applied in the case of that specific assessment. For example, specific health outcomes and populations that are considered might vary from chemical to chemical, and this variation might lead to notable methodologic differences between assessments. Much of the methodologic detail specific to individual assessments might be presented in the protocols for the systematic reviews conducted as part of the IRIS process; these could be provided as appendixes to each assessment (see Chapter 3 for further discussion).
Initiatives to Improve the Overall Process, Quality Control, and Documentation
To improve the overall process, quality control, and documentation, EPA has developed or adopted several new initiatives, including development of guidance for chemical-assessment teams, institution of chemical-assessment support teams (CASTs), development of information-management tools, initiation of scoping exercises, and expansion of stakeholder engagement (EPA 2013a). The following sections review and evaluate each initiative.
Guidance for Chemical-Assessment Teams
The chemical-assessment team for each IRIS assessment is a multidisciplinary team. The teams often include contractors who provide technical and analytic support, such as conducting literature searches or performing dose-response analyses. (The committee discusses the CASTs, which have a broader role, below.) The NRC formaldehyde committee emphasized that all team members should use a standard and consistent approach in the assessment-development process. In response, EPA is developing instructions for contractors and has provided an example of instructions for conducting dose-response modeling to the present committee (EPA 2013a, Appendix C). EPA is also developing a handbook to guide the development of IRIS assessments (EPA 2013a, Appendix F); however, the committee does not fully understand the relationships among the various documents and is concerned that the existence of several guidance documents might cause confusion. A better approach might be to develop a single handbook that can be used by members of the chemical-assessment team regardless of whether they are EPA staff or contractors. In any case, the purpose of and relationships among the various guidance documents—preamble, contractor instructions, and handbook—should be clearly stated.
Institution of Chemical-Assessment Support Teams
As discussed, IRIS assessments involve multifaceted and interdisciplinary chemical-assessment teams that have general expertise and expertise specific to the chemicals in question. In the past, the teams have been supported by discipline-specific work groups that helped to ensure methodologic consistency among IRIS assessments. In late 2011, EPA formally implemented an initiative to establish three CASTs, each composed of two senior scientists, a senior statistician, and a staff scientist to serve as a rapporteur (EPA 2013a). Every IRIS assessment is assigned to one CAST, and the CAST meets with the chemical-assessment team associated with the IRIS assessment to which it has been assigned. The three CASTs then meet weekly to discuss issues that have arisen in their chemical-specific meetings. The CAST initiative was developed to meet several objectives: to provide a forum for problem-solving; to ensure that appropriate scientific expertise is available to each team; to identify problems or issues early in the process, particularly ones that need program-wide discussion; to increase objectivity and consistency among assessments; to monitor the implementation of recommendations of the NRC formaldehyde report; and to assist in responding to peer review and public comments and in documenting and communicating decisions (EPA 2013a).
This initiative addresses the clear need for continuing and consistent expert oversight of individual assessments and the overall IRIS program. The committee endorses EPA’s efforts and suggests two advances that will strengthen the overall scientific expertise and leadership in IRIS assessments further. First, the draft IRIS assessments need to identify more clearly and explicitly the members of all teams involved in the work. The identity of CAST members is not included in the report template provided to the committee (EPA 2013a, Appendix A) or in recent draft assessment reports (see, for example, EPA 2013c), and, more important, the roles of team members in various reports are not identified at least at the level now required for authors and other contributors in most biomedical journals. Second, the draft and final reports are subject to considerable peer review within EPA, by federal agencies, and by external reviewers. However, the committee recommends that expert judgment from outside EPA and the government be involved to fill critical gaps in expertise; this could be accomplished with the new Chemical Assessment Advisory Committee (CAAC) of the EPA Science Advisory Board. For example, CAAC members could provide specific expertise for a chemical being assessed by periodically reviewing activities at critical stages of each IRIS assessment and interacting with the chemical-assessment team and the CAST assigned to the assessment. The need for increased expert judgment in the IRIS process is discussed further in the section “Using Expert Judgment in the IRIS Process.”
Development of Information-Management Tools
In its status report to the committee, EPA described several information-management tools that should promote quality control in the IRIS process (EPA 2013a, Appendix D). First, EPA has developed the Comment Tracker Database, which captures peer-review and public comments and EPA’s responses to them. It facilitates project management by allowing chemical managers to make clear assignments of comments to various team members and facilitates identification of comments that will require substantial time or resources to address. The database should also help to identify recurrent comments (or themes) and therefore prompt a more in-depth review of the comments.
Second, EPA is developing a Cross-Chemical Comparisons Database, which will allow EPA to search for common topics or issues that arise in different chemical assessments. The database should allow EPA staff to determine how scientific issues have been addressed in various chemical assessments, to identify issues that are raised repeatedly by reviewers and stakeholders, and to compare comments provided at various stages of the IRIS process and identify possible inconsistencies.
Third, EPA has invested in a substantial expansion of its Health and Environmental Research Online (HERO) database, which provides access to the scientific literature identified for IRIS chemicals, where available, and highlights the results of broad literature searches. For some chemicals, a “LitFlow” link provides visual highlights of the broad search, including reference counts from database searches and additional search strategies, with links to the references that were identified, considered, and excluded and to the primary sources of health-effects data.
The committee finds that the management tools that EPA has described should help with quality assurance and management of the IRIS process, but a systematic approach to using the accumulated information will need to be developed to strengthen the process further. If some management tools are not available to the public, EPA will face a challenge in maintaining full transparency.
Initiation of Scoping for IRIS Assessments
EPA (2013a) identifies the need for a scoping process to ensure that each assessment is as informative and useful as possible for the various groups that will use IRIS assessments. EPA’s description of the scoping process (EPA 2013a, Appendix E) is consistent with the risk-assessment guidance provided in the report Science and Decisions: Advancing Risk Assessment (NRC 2009); it describes scoping as seeking input from EPA program and regional offices to identify the information and the level of detail needed to inform their decisions. For example: Are some exposure routes or durations of concern? Are some specific life stages, exposure windows, or groups of particular concern? The desired “outcome of the scoping process is a statement that outlines the focus of the assessment, the nature of the hazard characterization needed, and a clear indication of issues that are beyond the scope of the IRIS assessment” (EPA 2013a, p. E-3). The committee notes that scoping is different from problem formulation, an early step in the systematic-review process that explicitly defines what is to be evaluated in the assessment and how it is to be evaluated. (See Chapter 3 for a discussion of problem formulation.)
Expansion of Stakeholder Engagement
EPA (2013a, p. 9) acknowledges the importance of stakeholder engagement, lists opportunities that it has provided for stakeholder input throughout the IRIS process, and indicates its plan for expanded stakeholder engagement. EPA’s initiatives are consistent with recommendations from past NRC reports, which have repeatedly called for robust stakeholder involvement in environmental decision-making. For example, NRC (2009, p. 13) stated that “greater stakeholder involvement is necessary to ensure that the process is transparent and that risk-based decision-making proceeds effectively, efficiently, and credibly. Stakeholder involvement needs to be an integral part of the risk-based decision-making framework, beginning with problem formulation and scoping.” The present committee agrees that early and continuing stakeholder involvement not only will increase the likelihood that EPA will address the concerns of diverse stakeholders but should strengthen the quality of IRIS assessments.
In considering initiatives to expand stakeholder involvement, EPA should recognize the various stakeholder groups, which are often identified as having opposing viewpoints regarding chemical risk assessment. Nongovernment organizations, such as environmental advocacy groups, often represent people who might be exposed to the substances that IRIS reviews. Those groups generally seek more conservative or protective health standards and call for the rapid completion of IRIS assessments. They have repeatedly expressed a concern that the public might be exposed to substances that threaten health and safety because assessments have not been completed in a timely manner (Sass and Rosenberg 2011; Denison 2012). In contrast, other organizations and individuals represent industrial and government entities that produce, use, and release chemicals, some of which are toxic. Those stakeholders typically express a concern that unjustifiably conservative toxicity values will prove costly and provide relatively little additional protection of public health, so they often argue for less protective standards or urge more study before IRIS assessments are completed. In reality, the array of stakeholders who are interested in the IRIS process is much broader. For example, risk assessors, policy setters, and other public-health officials need toxicity values on IRIS to be timely, up to date, protective of public health (including sensitive populations), informative about all relevant end points, and transparent about uncertainties that might result in underestimation or overestimation of the actual hazard. Furthermore, the scientific community—toxicologists, epidemiologists, and other groups of scientists—generally favors the incorporation of valid scientific research and information into the public-policy process, plays a key role in development of toxicity-testing methods (Ashby 2003), and is often at the forefront of identifying toxic hazards.
A well-designed process of stakeholder engagement in the development of each assessment should keep all stakeholders informed, provide suitable opportunities for diverse input, and promote the smooth and timely completion of the draft assessment. Stakeholder involvement today begins with the nomination of substances for review, but not all potential stakeholders are likely to be aware of this opportunity. The IRIS program lists planned reviews in the Federal Register (78 Fed. Reg. 48674) and on its Web site. The committee suggests that EPA publish an IRIS workplan at least once a year. Furthermore, there should be a clear and readily accessible process for parties outside EPA to suggest new chemical assessments and revisions of completed assessments on the basis of new evidence.
As noted earlier, EPA (2013a) indicated opportunities for expanded stakeholder input. In July 2013, it outlined a process that includes three public meetings (EPA 2013d,e). First, EPA promises to conduct a “public meeting focused on identifying the scientific information available for the chemical under assessment” after an internal planning and scoping meeting. In January 2013, EPA included public stakeholders and other federal agencies in planning meetings for the inorganic arsenic assessment. The committee supports EPA’s plan to release a draft planning and scoping summary before such meetings and a final summary afterward (V. Cogliano, EPA, personal commun., August 20, 2013). However, the final summary should be completed quickly—the committee notes that the summary of a November 2012 stakeholder meeting was not released until August 2013 (EPA 2013f).
The second public meeting is slated to occur during the draft-development process. EPA plans to “release the literature search and a search strategy, evidence tables, exposure-response figures, and, as appropriate, information on anticipated key scientific issues for the chemical” (EPA 2013d). EPA noted that “out of consideration for stakeholders with limited resources, they will be released at one time. This way, stakeholders do not have to spend time and money retrieving and extracting data from hundreds of papers…The public will be asked whether any important data were missed” (V. Cogliano, EPA, personal commun., August 20, 2013). EPA clearly is proceeding with these types of meetings: one was held in December 2013, at which preliminary materials developed for the draft assessments of ethyl tert-butyl ether, tert-butyl alcohol, and hexahydro-1,3,5-trinitro-1,3,5-triazine were discussed.1 The committee finds that EPA’s willingness to engage in early discussion with its stakeholders is a major step forward, and it urges EPA to maintain some flexibility in the process. For example, if public discussion leads to the discovery or selection of important new studies, EPA might wish to develop new evidence tables for discussion at the next meeting. That flexibility would be beneficial if important new data were brought forward. The committee, however, is not suggesting that EPA routinely add new steps to the assessment-development process.
Finally, after EPA completes each draft assessment and coordinates with other federal agencies and the executive branch, there will be—as there were before EPA announced process enhancements—a formal public comment period and a public meeting to receive public comments, both of which have been and will be announced in the Federal Register.
Even in the face of expanded transparency and enhanced stakeholder engagement, there is concern about the uneven participation of the first two principal stakeholder groups. Most public comments on draft IRIS assessments have come from industry or parties representing the interests of entities that produce, use, and release possibly toxic substances. Indeed, almost all the public input—written and oral—received by the present committee has come from trade organizations. Furthermore, from January 2011 to October 2013, over 100 distinct substantive comments were submitted to the IRIS program.2 Representatives of entities that produce, use, or release the studied substances submitted over 80 comments. In that period, only a few comments were submitted by public-interest organizations concerned with the environment. Comments submitted by concerned citizens or entities apparently representing them contained little or no specific scientific information that might influence the IRIS program’s findings.
Industry representation and input constitute an important element of stakeholder participation, and industry comments are often cogent and constructive. Some industry stakeholders also have the resources to initiate and quickly complete literature reviews and research that might be relevant to a particular assessment (for example, studies of formaldehyde dosimetry). However, other key stakeholders have fewer resources and are not generally organized and staffed to provide comments or detailed scientific input; thus, their important perspectives and voices might be less well represented to EPA. The committee therefore encourages EPA to continue its efforts to ensure that the full breadth of perspectives on the IRIS process and on specific IRIS assessments is made available to the agency.
One way to ensure broad stakeholder input would be to provide technical assistance to enable under-resourced stakeholders to develop and provide input to the IRIS program; this could be modeled after other EPA technical-assistance programs. For example, EPA’s Superfund program has a long history of providing technical assistance in the form of grants and more recently direct consultation to neighbors of sites on the National Priorities List (EPA 2012a). The grants generally improve the process of remedial decision-making by ensuring that the affected public understands both the characterization and the remediation of hazardous-waste contamination and by making it easier for such people to provide constructive input (EPA 2012b).
Assessment of Overall Quality
As discussed above, EPA (2013a) has addressed several issues in managing the overall quality of the IRIS process. One important step is the development of the draft handbook (EPA 2013a, Appendix F), which provides detailed guidance for various steps of the IRIS process. EPA is also drafting guidance for contractors (EPA 2013a, Appendix C). As noted above, the committee concludes that the best approach might be to provide a single detailed guidance document for all those involved in the development of IRIS assessments; multiple guidance documents could create confusion and inconsistencies. If multiple guidance documents are developed, EPA should explain the application of each document and clearly state the relationships among them.
It is important to ensure that work done by all contributors to the process meets quality standards. For example, the selection of studies for initial consideration and for further review (see Chapter 4) is critical, as is the evaluation of the risk of bias in individual studies (see Chapter 5). Although the draft handbook (EPA 2013a, Appendix F) provides guidance that is generally informative and useful, it fails to define specific procedures for estimating and evaluating the reliability and validity of processes that are central to the hazard-identification part of the process, such as identifying, selecting, and evaluating evidence. Accordingly, the processes need empirical investigation. For example, there needs to be an assessment of whether the process for identifying studies is replicable (that is, whether the protocol leads to the same body of studies when repeated, possibly by different teams) and of whether the process for study selection is valid (that is, whether the protocol, as applied in routine practice, leads to the same studies that would be selected by an expert team). Both reliability and validity should be assessed in empirical studies of reasonable size. Such data would provide information for evaluating the quality of the IRIS process and improving it through better selection or training of those who select and rate the relevance of the studies and who abstract data for the systematic review. Conducting studies that will lead to standards for rater and abstracter reliability and validity should in the long run be cost-effective by reducing error. Such methodologic studies would not need to be performed for each assessment once standards are established and tracking implemented to ensure that the standards are met. The committee notes that no explicit measures or systems are provided by EPA for assessing the reliability and validity of contractor work as compared with similar work by EPA staff or external “gold standards.”
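As a concrete illustration of the kind of empirical reliability check described above, agreement between two screeners who independently decide whether candidate studies should be included can be quantified with a standard statistic such as Cohen’s kappa, which corrects raw agreement for agreement expected by chance. The sketch below is illustrative only: the screening decisions are hypothetical, and kappa is one of several agreement measures an assessment program could adopt.

```python
# Minimal sketch: inter-rater reliability of study selection via Cohen's kappa.
# Hypothetical data: two screeners independently classify the same 12
# candidate studies as included (1) or excluded (0).

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters making binary include/exclude calls."""
    n = len(rater_a)
    # Observed proportion of screening decisions on which the raters agree
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Agreement expected by chance, from each rater's marginal inclusion rate
    p_a1 = sum(rater_a) / n
    p_b1 = sum(rater_b) / n
    expected = p_a1 * p_b1 + (1 - p_a1) * (1 - p_b1)
    return (observed - expected) / (1 - expected)

rater_a = [1, 1, 0, 1, 0, 0, 1, 1, 0, 1, 0, 0]
rater_b = [1, 0, 0, 1, 0, 1, 1, 1, 0, 1, 0, 0]
print(round(cohens_kappa(rater_a, rater_b), 2))  # → 0.67
```

A kappa near 1 indicates that the screening protocol yields nearly identical study sets regardless of who applies it; low values would flag the need for clearer criteria or additional rater training.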
The committee applauds EPA’s initiatives that are designed to enhance the quality of the IRIS process and highlights below several quality-control measures that could be implemented or developed further from EPA’s initial efforts.
• Develop explicit timelines for the various components of IRIS assessments. The committee recognizes that IRISTrack on the EPA Web site provides some information, but the information is often too general or incomplete (for example, “TBD” [to be determined] is listed as a completion date for many chemicals).
• Develop explicit guidelines for external researchers and laboratories that are providing useful data for the IRIS process, such as raw data to facilitate reanalysis by EPA. Additional guidance regarding laboratory protocols might be needed for some types of data, such as high-throughput data.
• Implement a periodic strategic planning process that allows the IRIS program to identify long-term needs and goals with a 3- to 10-year horizon.
• Provide IRIS staff with opportunities for continuing training to help ensure the application of current hazard-assessment and dose-response practices as those practices evolve.
The committee notes that quality-control processes used in other federal agencies might be adapted to improve quality management, and EPA might want to investigate similar federal programs.
Increasing Efficiency in the IRIS Process
EPA has a monumental task in safeguarding the public’s health; the IRIS program is an integral part of that effort. The agency must navigate within the constraints of its resources between the necessity of making scientifically valid and informative assessments of the health consequences of chemical exposures and the need to do so in a timely and cost-effective manner. Given those challenges, organizational, managerial, and scientific efficiency is critical, particularly in light of shrinking resources. Thus, promoting efficiency in the IRIS program is paramount.
The committee shares EPA’s view that establishing transparent, consistent processes that include opportunities for stakeholder input might reduce delays and promote efficiency in the IRIS process. Furthermore, as noted by EPA (2013d), implementing stopping rules that establish flexible cutoff points for the acceptance of new studies and data should improve productivity in the IRIS process. Participants in the IRIS stakeholder meeting in November 2012 suggested implementing stopping rules to reduce delays (EPA 2013f, p. 9). The committee emphasizes that any stopping rules for a particular assessment would need to be grounded in general principles regarding what constitutes pivotal evidence and a reasonable period of delay. EPA should add appropriate text to the preamble, such as “An assessment might be delayed while awaiting potentially pivotal evidence from further analysis or follow-up of a critical epidemiologic study or from a critical animal study.”
Several additional suggestions that might help the IRIS program promote efficiency in both the short and long terms are provided below.
• Enhance interactions with other agencies or organizations, within and outside government, to identify existing information and chemical evaluations that might be used, if the external methods are sound and appropriate, instead of recreating them. Avoiding duplication of effort is an important efficiency-promoting activity.
• Continue to expand efforts to develop computer systems, such as the HERO database, that facilitate storage and annotation of information relevant to IRIS’s mission. Whenever possible, interagency efforts should be considered to enhance efficiency further and reduce duplication of effort.
• Continue development of automated literature-search and screening procedures, sometimes referred to as text mining. Such approaches offer the possibility of recurrent, automated searching for relevant or related papers. Text-mining tools are available from the US National Library of Medicine (Lu 2011) and are also being developed by EPA.
• Promote within EPA a research program that studies the best way to use and incorporate data that are being generated from new in vitro, in silico, and high-throughput toxicity testing into the IRIS process.
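The automated screening suggested above can be illustrated with a minimal sketch that flags citations whose title or abstract matches enough keywords to merit full-text review. The keywords, records, and threshold here are hypothetical and do not reflect any actual EPA or HERO system.

```python
import re

# Hypothetical keywords that might mark a citation as relevant to an assessment.
KEYWORDS = {"formaldehyde", "inhalation", "carcinogenicity", "dose-response"}

def screen(records, min_hits=2):
    """Return (id, hit count) for records matching at least min_hits keywords."""
    kept = []
    for rec in records:
        text = (rec["title"] + " " + rec["abstract"]).lower()
        words = set(re.findall(r"[a-z-]+", text))  # crude tokenization
        hits = len(KEYWORDS & words)
        if hits >= min_hits:
            kept.append((rec["id"], hits))
    return kept

# Two made-up citation records; only the first should pass the screen.
records = [
    {"id": "pmid:1", "title": "Formaldehyde inhalation study",
     "abstract": "Dose-response in rats."},
    {"id": "pmid:2", "title": "Unrelated survey",
     "abstract": "Questionnaire methods."},
]
print(screen(records))  # → [('pmid:1', 3)]
```

A production tool would draw on indexed terms and machine-learned classifiers rather than raw keyword counts, but the recurring, rule-based filtering step is the same in spirit.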
The name Integrated Risk Information System might suggest to some a rather mechanical and automated system of assessing the health consequences of chemical exposure. The name might also imply that it is desirable to make any information-gathering and assessment process as free of human judgment, and hence potential human error, as possible. However, all steps of the IRIS process, especially the evidence integration and conclusions reached, are necessarily laden with human judgment, as are most scientific endeavors.
Expert judgment is often used to resolve problems in the face of uncertainty, but its application to toxicity assessments is often poorly described and considered a “black box.” As noted by Pronk et al. (2012), “the lack of explicit rules makes it difficult to determine the consistency of expert-based decisions over time” or among assessments. The committee sees two approaches that EPA could take to address concerns regarding the use of expert judgment in the IRIS process. First, EPA can further systematize expert-judgment procedures in the IRIS process and establish their validity and reliability through methodologic research. Second, EPA can ensure balance and expertise in such judgments through expert peer review. Both quality-control mechanisms are necessary in the IRIS process, and EPA appears to be already pursuing them to some extent.
Developing expertise in a specific domain is considered to take at least a decade of dedicated training and study (Ericsson et al. 1993). Experts understand the relationship of concepts within their specific domains and are able to take an organized approach to identifying the elements of a problem and what is known and not known regarding it (Larkin et al. 1980; Chi et al. 1988; Ericsson and Charness 1994). It follows that in tasks requiring expert judgment discussed later in this report—for example, designing search strategies (see Chapter 4), identifying confounders (see Chapter 5), integrating the evidence for hazard identification (see Chapter 6), and determining how to translate knowledge into prior distributions for analysis in a Bayesian model (see Chapters 6 and 7)—it is essential that appropriate domain-specific expertise be identified and included, with the recognition that experts in different fields will probably be required, depending on the task.
As noted in EPA (2011), expert judgment can be susceptible to cognitive biases, although less than lay judgment (Gilovich et al. 2002; Koehler et al. 2002), and how information is presented has the potential to alter judgments. For expert-judgment elicitations, one needs to formulate clear questions, to develop formal protocols, and to summarize and share relevant evidence with the experts (see, for example, EPA 2011, pp. 13-14). As with any such elicitation, the structure of expert judgment in group settings in the context of the IRIS program (as described in Chapter 6 of this report) deserves close attention.
Several steps of the IRIS process require competent professional expert judgment, and the committee concludes that there needs to be a stronger role throughout the IRIS process for expert judgment derived from broadly expert and representative panels, perhaps, as suggested above, as an adjunct to the new CAAC. Integrating the evidence and deriving toxicity values especially should be recognized as requiring a high level of expert judgment to make the conclusions reached and the values derived as valid, reliable, and reputable as possible. At the same time, there is a need for more systematic procedures for assessing the reliability and validity of the aspects of the IRIS process (such as literature searching, screening, and evaluation) that are most amenable to standardized procedures that can be cost-effectively implemented by competent professional staff. The history of subjectivity in science, the arts, and esthetics goes back a long way and still causes tension in scientific discourse (Shapin 2011; Klempe 2012). The only tentative solution is to describe as accurately as possible the methods by which scientific and policy decisions are made, by whom, and with what expertise.
Finding: The committee is impressed and encouraged by EPA’s progress, recognizing that the implementation of the recommendations in the NRC formaldehyde report is still in progress. If current trajectories are maintained and objectives still to be implemented are successfully brought to fruition, the IRIS process will become much more effective and efficient in achieving its basic goal of developing human-health assessments that can provide the scientific foundation for ensuring that risks posed to public health by chemicals are assessed and managed optimally.
Recommendation: EPA needs to complete the changes in the IRIS process that are in response to the recommendations in the NRC formaldehyde report and specifically complete documents, such as the draft handbook, that provide detailed guidance for developing IRIS assessments. When those changes and the detailed guidance, such as the draft handbook, have been completed, there should be an independent and comprehensive review that evaluates how well EPA has implemented all the new guidance. The present committee is completing its report while those revisions are still in progress.
Finding: Although it is clear that quality control (QC) of the IRIS assessment process is critical for the outcome of the program, the documents provided do not sufficiently discuss the QC processes or provide guidelines that adequately separate the technical methods from the activities of QC management and program oversight. For example, the role of the CASTs in the QC process is not specifically described.
Recommendation: EPA should provide a quality-management plan that includes clear methods for continuing assessments of the quality of the process. The roles of the various internal entities involved in the process, such as the CASTs, should be described. The assessments should be used to improve the overall process and the performance of EPA staff and contractors.
Recommendation: When extracting data for evidentiary tables, EPA should use at least two reviewers to assess each study independently for risk of bias. The reliability of the independent coding should be calculated; if there is good agreement, multiple reviewers might not be necessary.
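The inter-rater reliability mentioned in the preceding recommendation is commonly quantified with Cohen’s kappa, which corrects observed agreement for agreement expected by chance. The sketch below is illustrative only; the risk-of-bias ratings are made up, and the two-category labels are an assumption, not an EPA coding scheme.

```python
from collections import Counter

def cohens_kappa(ratings_a, ratings_b):
    """Cohen's kappa for two raters assigning categorical labels to the same items."""
    assert len(ratings_a) == len(ratings_b)
    n = len(ratings_a)
    # Observed proportion of items on which the raters agree.
    observed = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    # Chance agreement from each rater's marginal label frequencies.
    freq_a = Counter(ratings_a)
    freq_b = Counter(ratings_b)
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / n ** 2
    return (observed - expected) / (1 - expected)

# Hypothetical risk-of-bias ratings from two independent reviewers of 8 studies.
reviewer_1 = ["low", "low", "high", "low", "high", "low", "high", "low"]
reviewer_2 = ["low", "low", "high", "high", "high", "low", "high", "low"]
print(round(cohens_kappa(reviewer_1, reviewer_2), 3))  # → 0.75
```

A kappa near 1 would support relaxing the two-reviewer requirement as the recommendation suggests; values much below that would argue for retaining independent dual coding.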
Finding: The current scoping process for obtaining input from within the agency is clear, but opportunities for stakeholder input from outside EPA early in the process are less clear.
Recommendation: EPA should continue its efforts to develop clear and transparent processes that allow external stakeholder input early in the IRIS process. It should develop communication and outreach tools that are tailored to meet the needs of the various stakeholder groups. For example, EPA might enhance its engagement with the scientific community through interactions at professional-society meetings, advertised workshops, and seminars. In contrast, greater use of social media might help to improve communications with environmental advocacy groups and the public.
Finding: EPA has taken steps to expand opportunities for stakeholder input and discussion that are likely to improve assessment quality. However, not all stakeholders with an interest in the IRIS process have the resources to provide timely comments.
Recommendation: Similar to other EPA technical-assistance programs, EPA should consider ways to provide technical assistance to under-resourced stakeholders to help them to develop and provide input to the IRIS program.
Finding: Promoting efficiency in the IRIS program is paramount given the constraint of inevitably shrinking resources. Thus, the committee agrees with EPA that stopping rules are needed given that the process for some IRIS assessments has become too long as revisions are repeatedly made to the assessments to accommodate new evidence and review comments.
Recommendation: The stopping rules should be explicit and transparent, should describe when and why the window for evidence inclusion should be expanded, and should be sufficiently flexible to accommodate truly pivotal studies. Such rules could be included in the preamble.
Recommendation: Regarding promotion of efficiencies, EPA should continue to expand its efforts to develop computer systems that facilitate storage and annotation of information relevant to the IRIS mission and to develop automated literature-search and screening procedures, sometimes referred to as text-mining.
Finding: The draft handbook and other materials are useful but lack explicit guidance as to the methods and nature of the use of expert judgment throughout the full scope of the assessment-development process, from literature searching and screening through integrating evidence to analyzing the dose-response relationship and deriving final toxicity values.
Recommendation: More details need to be provided on the recognition and applications of expert judgment throughout the assessment-development process, especially in the later stages of the process. The points at which expert judgment is applied should be identified, those applying the judgment should be listed, and consideration should be given to harmonizing the use of expert judgment at various points in the process.
Ashby, J. 2003. The leading role and responsibility of the international scientific community in test development. Toxicol. Lett. (140-141):37-42.
Chi, M.T.H., R. Glaser, and M.J. Farr, eds. 1988. The Nature of Expertise. Hillsdale, NJ: Erlbaum.
Denison, R.A. 2012. EDF Comments for EPA IRIS Stakeholder Panel. Environmental Defense Fund, November 13, 2012 [online]. Available: http://www.epa.gov/iris/publicmeeting/stakeholders-kickoff/Denison_IRIS_Stakeholder_Comments.pdf [accessed Nov. 20, 2013].
EPA (U.S. Environmental Protection Agency). 2011. Expert Elicitation Task Force White Paper. Prepared for the Science and Technology Policy Council. August 2011 [online]. Available: http://www.epa.gov/stpc/pdfs/ee-white-paper-final.pdf [accessed Nov. 20, 2013].
EPA (U.S. Environmental Protection Agency). 2012a. Technical Assistance Grants [online]. Available: http://www.epa.gov/superfund/community/tag/ [accessed Nov. 20, 2013].
EPA (U.S. Environmental Protection Agency). 2012b. Superfund Technical Assistance: Giving Communities an Informed Voice [online]. Available: http://www.epa.gov/superfund/accomp/news/tag.htm [accessed Nov. 20, 2013].
EPA (U.S. Environmental Protection Agency). 2013a. Part 1. Status of Implementation of Recommendations. Materials Submitted to the National Research Council, by Integrated Risk Information System Program, U.S. Environmental Protection Agency, January 30, 2013 [online]. Available: http://www.epa.gov/iris/pdfs/IRIS%20Program%20Materials%20to%20NRC_Part%201.pdf [accessed Nov. 13, 2013].
EPA (U.S. Environmental Protection Agency). 2013b. Toxicological Review of Methanol (Noncancer) (CAS No. 67-56-1) in Support of Summary Information on the Integrated Risk Information System (IRIS). EPA/635/R-11/001Fa. U.S. Environmental Protection Agency, Washington, DC. September 2013 [online]. Available: http://www.epa.gov/iris/toxreviews/0305tr.pdf [accessed Nov. 13, 2013].
EPA (U.S. Environmental Protection Agency). 2013c. Toxicological Review of Benzo[a]pyrene (CAS No. 50-32-8) in Support of Summary Information on the Integrated Risk Information System (IRIS), Public Comment Draft. EPA/635/R13/138a. National Center for Environmental Assessment, Office of Research and Development, U.S. Environmental Protection Agency, Washington, DC. August 2013 [online]. Available: http://cfpub.epa.gov/ncea/iris_drafts/recordisplay.cfm?deid=66193 [accessed Nov. 13, 2013].
EPA (U.S. Environmental Protection Agency). 2013d. Enhancements to EPA’s Integrated Risk Information System Program [online]. Available: http://www.epa.gov/iris/pdfs/irisprocessfactsheet2013.pdf [accessed Aug. 13, 2013].
EPA (U.S. Environmental Protection Agency). 2013e. IRIS Process Flow Chart [online]. Available: http://www.epa.gov/iris/pdfs/IRIS_PROCESS_FLOW_CHART.PDF [accessed Nov. 20, 2013].
EPA (U.S. Environmental Protection Agency). 2013f. Integrated Risk Information System Program: Summary Report from November 2012 Public Stakeholder Meeting [online]. Available: http://www.epa.gov/iris/pdfs/Summary%20Report%20Nov2012%20IRIS%20Public%20Stakeholder%20Mtg.pdf [accessed Nov. 20, 2013].
Ericsson, K.A., and N. Charness. 1994. Expert performance: Its structure and acquisition. Am. Psychol. 49(8):725-747.
Ericsson, K.A., R.T. Krampe, and C. Tesch-Römer. 1993. The role of deliberate practice in the acquisition of expert performance. Psychol. Rev. 100(3):363-406.
Gilovich, T., D. Griffin, and D. Kahneman, eds. 2002. Heuristics and Biases: The Psychology of Intuitive Judgment. Cambridge: Cambridge University Press.
Klempe, S.H. 2012. Psychology-tensions between objectivity and subjectivity. Integr. Psychol. Behav. Sci. 46(3):373-379.
Koehler, D.J., L. Brenner, and D. Griffin. 2002. The calibration of expert judgment: Heuristics and biases beyond the laboratory. Pp. 686-715 in Heuristics and Biases: The Psychology of Intuitive Judgment, T. Gilovich, D. Griffin, and D. Kahneman, eds. Cambridge: Cambridge University Press.
Larkin, J.H., J. McDermott, D.P. Simon, and H.A. Simon. 1980. Models of competence in solving physics problems. Cognitive Sci. 4(4):317-345.
Lu, Z. 2011. PubMed and beyond: A survey of web tools for searching biomedical literature. Database 2011:baq036. doi: 10.1093/database/baq036.
NRC (National Research Council). 2009. Science and Decisions: Advancing Risk Assessment. Washington, DC: National Academies Press.
NRC (National Research Council). 2011. Review of the Environmental Protection Agency's Draft IRIS Assessment of Formaldehyde. Washington, DC: National Academies Press.
Pronk, A., P.A. Stewart, J.B. Coble, H.A. Katki, D.C. Wheeler, J.S. Colt, D. Baris, M. Schwenn, M.R. Karagas, A. Johnson, R. Waddell, C. Verrill, S. Cherala, D.T. Silverman, and M.C. Friesen. 2012. Comparison of two expert-based assessments of diesel exhaust exposure in a case–control study: programmable decision rules versus expert review of individual jobs. Occup. Environ. Med. 69(10):752-758.
Sass, J., and D. Rosenberg. 2011. The Delay Game: How the Chemical Industry Ducks Regulation of the Most Toxic Substances. Natural Resources Defense Council, September 2011 [online]. Available: http://www.nrdc.org/health/files/IrisDelayReport.pdf [accessed Nov. 20, 2013].
Shapin, S. 2011. The sciences of subjectivity. Soc. Stud. Sci. 42(2):170-184.