Over the course of a 1.5-day workshop, the US Environmental Protection Agency (EPA) made presentations to the committee on changes that are transforming the Integrated Risk Information System (IRIS) process. The committee used that information and recently released IRIS documents to judge the extent to which EPA has adequately addressed recommendations made in previous National Academies reports, primarily Review of EPA’s Integrated Risk Information System (IRIS) Process (NRC 2014).1 The committee’s overall comments are provided below; findings regarding individual recommendations are in Appendix E.
The 2014 report (Chapter 2 in NRC 2014) offered recommendations related to the IRIS process and evaluated EPA’s progress in implementing the suggestions made in Review of the Environmental Protection Agency’s Draft IRIS Assessment of Formaldehyde (NRC 2011).2 Above all, the 2014 report emphasized the need to sustain the evolution of the program’s procedures and to consider how EPA will do so in the context of continually advancing scientific methods. In the 4 years since the 2014 recommendations, the IRIS program clearly has maintained a trajectory of change that has accelerated under the new leadership of the EPA National Center for Environmental Assessment (NCEA) and the IRIS program. The committee was impressed by the scope of changes that have been or are being implemented and by the engagement of scientists throughout NCEA, EPA more broadly, other federal agencies, and academe to effect change. Such engagement is appropriate inasmuch as funding for the use of external contractors has diminished, and there is expertise in relevant fields throughout the agency. Supervisory and communication strategies are in place, and formal quality management has been implemented. The committee notes that EPA will need to ensure that quality management extends to activities that are conducted by people who are outside the IRIS program.
Changes in some of the critical elements of the overall IRIS process are still in progress. The 2014 committee was given an incomplete draft of the handbook, which is intended to provide guidance on the IRIS process, and recommended its completion; the present committee was not given a draft of the handbook. EPA indicated that the handbook is still in development and is “being updated to reflect Agency input, evolving IRIS practices as systematic-review approaches are tested through implementation, and public comment received on chemical-specific protocols” (Slide 22, Appendix C). Public release is anticipated in 2018. The handbook is expected to provide critical guidance for the development of IRIS assessments, and the present committee urges that high priority be given to its completion, peer review, and release. Reference to it will facilitate transparency on the approach for specific IRIS assessments. In the absence of a final version of the handbook, EPA is describing its approach for the reviews in its protocol documents, and this practice provides transparency into the assessment process while the handbook is being completed. The committee notes, however, that the handbook should not become a final, fixed set of guidelines but rather should be a document that evolves over time.
1 Referred to hereafter as the 2014 report. The committee that produced that report is referred to as the 2014 committee.
2 Referred to hereafter as the 2011 report.
The 2014 committee also commented on the need to incorporate input from various stakeholders—including industry, academe, and nongovernment organizations—at appropriate points in the process; this recommendation has been heeded by past and current program leaders. Three points in the process, including development of assessment plans and systematic-review protocols, have been identified at which public comments will be sought (Slide 24, Appendix C). Although the present committee was not shown the approach for acknowledging public comments and incorporating them into the process, the handbook should describe how this will be done. The committee was impressed by other NCEA program activities that engage stakeholders, including dissemination of tools that it has developed, such as the benchmark-dose (BMD) modeling software, and provision of training.
EPA also commented that it was moving away from a one-size-fits-all approach to what it termed a portfolio approach, as described in Box 2-1. The move toward a portfolio approach appears to add need-based and context-based flexibility to the IRIS program. EPA used chloroform as an example; it is developing a reference concentration for inhalation exposures and assessing whether the reference concentration adequately protects against carcinogenic effects. The decision to limit the assessment was based on consultation with EPA regulatory programs. Overall, the portfolio approach is expected to conserve agency resources, and it is consistent with the recommendations of the National Academies report, Science and Decisions: Advancing Risk Assessment (NRC 2009).
SYSTEMATIC REVIEW: PROBLEM FORMULATION, PROTOCOL DEVELOPMENT, AND EVIDENCE IDENTIFICATION AND EVALUATION
The 2014 report offered many recommendations related to systematic review, including problem formulation, protocol development, evidence identification, and evidence evaluation (Chapters 3–5, NRC 2014). The committee found that the IRIS program has made substantial progress in incorporating systematic-review methods into its process and assessments. Development and implementation of systematic-review methods have been facilitated by the recruitment of the current IRIS program director, who has extensive experience in the development of the methods and their application to chemical risk assessment. The IRIS program has also expanded internal training programs that are designed to improve staff understanding of the methods.
Furthermore, the IRIS program has developed a number of formal and informal collaborations with groups that are active in systematic review, including the National Toxicology Program Office of Health Assessment and Translation, the World Health Organization (WHO), the European Food Safety Authority, the International Collaboration for Automation of Systematic Reviews, and the Collaborative Approach to Meta-Analysis and Review of Animal Data from Experimental Studies (CAMARADES). Some of those collaborations help to position the IRIS program as a leader in advancing systematic-review methods, such as the development or modification of risk-of-bias tools for animal toxicity studies.
The committee was impressed by the efforts of IRIS program management to develop within the IRIS program the scientific expertise needed to conduct systematic reviews. Some notable changes have included the establishment of a systematic-review working group that should lead to increased efficiency and consistency among assessments. Other work groups that are focused, for example, on epidemiology, physiologically based pharmacokinetic (PBPK) models, and neurotoxicology have been created; these teams of appropriate subject-matter experts are expected to support the IRIS process further through improved rigor of scoping and problem formulation and through improvements in other steps of the systematic-review process.
The 2014 report offered numerous recommendations related to systematic-review processes that are accepted as standards of practice in the scientific community. The present committee found multiple examples of the IRIS program’s consideration and implementation of those recommendations, such as the development of systematic-review protocols, inclusion of an information specialist who is trained in systematic-review methods in the work groups, and the use of two-person teams for data extraction and risk-of-bias assessments. The IRIS program is also appropriately using a variety of software tools to assist with literature management (HERO), scoping (SWIFT), screening (Distiller), and data extraction (HAWC). The use of those and other software tools with input from appropriate subject-matter experts should improve efficiency, transparency, and rigor and directly address recommendations in the 2014 report. Many of the operational approaches used by the IRIS program are described in the assessment plans or the systematic-review protocols, and sufficient details are given to provide assurances that standardized systematic-review methods are being developed and applied by the IRIS program. The committee expects future systematic-review protocols to be streamlined and to become less generic when the handbook is completed.
The 2014 report also offered several recommendations about evaluating individual studies. Those recommendations encouraged EPA to use or develop tools for assessing risk of bias in different types of studies (human, animal, and mechanistic) and to add quality-assessment items relevant to particular systematic-review questions. EPA has implemented a process for evaluating risk of bias, and several documents that were provided to the committee (for example, EPA 2018b; Orme-Zavaleta 2018) demonstrate implementation of EPA’s risk-of-bias tools and how EPA has augmented them with additional question-specific elements to assess study validity. The IRIS program, however, should provide information on the choice and use of tools, including its rationale for the choice of particular risk-of-bias domains. Including that documentation in the IRIS handbook will improve transparency. The committee notes that evaluation of risk of bias, although important, is not the only way to evaluate study quality. Accordingly, the IRIS program should show how other important methodologic characteristics of a particular study will be evaluated, and EPA should continue to seek and evaluate additional tools that can help to assess study quality.
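Risk-of-bias evaluation of the kind described above typically rates each study across several bias domains and then rolls the domain ratings up into an overall judgment. The sketch below is purely illustrative: the domain names, the OHAT-style rating tiers, and the worst-rating-wins roll-up rule are assumptions for demonstration, not EPA's actual tool, which may weight key domains differently.

```python
# OHAT-style rating tiers, ordered from least to most concern (illustrative).
RATINGS = {"definitely low": 0, "probably low": 1, "probably high": 2, "definitely high": 3}

def overall_concern(domain_ratings: dict) -> str:
    """Crude roll-up of domain-level risk-of-bias ratings for one study.

    domain_ratings maps a bias-domain name to a rating string. The
    worst-rating-wins rule used here is a simplification; real tools
    often treat key domains (e.g., confounding) as decisive on their own.
    """
    worst = max(RATINGS[r] for r in domain_ratings.values())
    return next(name for name, tier in RATINGS.items() if tier == worst)

# Hypothetical study evaluated across three domains.
study = {
    "randomization": "probably low",
    "outcome assessment blinding": "probably high",
    "attrition": "definitely low",
}
print(overall_concern(study))  # prints "probably high"
```

A two-person team, as the IRIS program uses, would produce such ratings independently and reconcile disagreements before any roll-up.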
As part of revisions of the IRIS process, EPA is producing assessment plans and systematic-review protocols. The committee found overlap between those documents; for example, PECO statements are found in both types of documents.3 Indeed, the added value of a two-step process (assessment plan and protocol) was unclear to the committee. It was not immediately clear whether the assessment plan also serves as a “data call” for additional studies that are outside the scope specified by the systematic review but could inform the overall chemical-assessment process. Some additional clarification of terminology and clearer descriptions of how the documents will be used could help the public to understand how chemical assessments move through the IRIS process.
3 A PECO statement is a structured framework that defines a question by specifying population, exposure, comparator, and outcome to be considered in a systematic review.
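Because a PECO statement is a structured framework, it lends itself to a simple machine-readable representation. The following sketch is hypothetical: the field names, summary wording, and example question are invented for illustration and are not drawn from any IRIS assessment plan or protocol.

```python
from dataclasses import dataclass

@dataclass
class PECOStatement:
    """Structured review question: Population, Exposure, Comparator, Outcome."""
    population: str
    exposure: str
    comparator: str
    outcomes: list

    def summary(self) -> str:
        # Render the structured fields as a single review question.
        return (f"In {self.population}, does exposure to {self.exposure}, "
                f"compared with {self.comparator}, "
                f"affect {', '.join(self.outcomes)}?")

# Hypothetical example (not taken from an actual IRIS protocol).
peco = PECOStatement(
    population="humans and experimental animals",
    exposure="inhalation of chemical X at any dose",
    comparator="no or lower exposure",
    outcomes=["respiratory effects", "hepatic effects"],
)
print(peco.summary())
```

Encoding the statement this way makes the inclusion and exclusion criteria explicit and reusable at the screening step, which is one reason systematic-review software tools track PECO elements as discrete fields.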
The committee identified several ways in which the IRIS program could benefit from refinements. For example, the link between scoping and problem formulation outlined in the assessment plan and development of the PECO statement was not well described. Improving the description of how scoping and problem formulation are used to focus the goals of a systematic review will lead to greater specificity in descriptions of outcomes, inclusion and exclusion criteria, and other elements found in the systematic-review protocol and will further improve the transparency and scientific rigor of the process. The committee found that the IRIS program included the dates and results of its literature searching and screening (for example, as appendixes) in draft systematic-review protocols that are undergoing public comment. Completing the literature search as part of protocol development is inconsistent with current best practices for systematic review, and the IRIS program is encouraged to complete the public-comment process and finalize the protocol before initiating the systematic review. Doing so will improve transparency in the IRIS process.
The committee identified several recommendations in the 2014 report that reflect broad scientific efforts that extend beyond the IRIS program. For example, several recommendations were related to the evaluation and use of mechanistic data in a systematic review. EPA’s systematic-review process indicates that mechanistic data can be considered at various steps; for example, the draft protocol for the IRIS assessment of chloroform (EPA 2018b) describes how mechanistic data will be considered. Although appropriate tools, such as those to evaluate risk of bias in mechanistic studies, are in early stages of development in the broader scientific community, the IRIS program has developed approaches for the evaluation of PBPK models that will be used in assessments (Orme-Zavaleta 2018). The committee expects similar evaluation methods for other types of mechanistic evidence to emerge on a case-by-case basis and to include methods for determining at what stage and how mechanistic data could be used in an IRIS assessment. For example, mechanistic data were used by a National Academies committee to inform development of PECO statements for reproductive outcomes associated with o-phthalate compounds (NASEM 2017a). The committee notes that the use of mechanistic data by the IRIS program is consistent with other EPA programs, such as the Office of Pesticide Programs; for example, in the recent hazard identification conducted for benzo[a]pyrene (EPA 2017b), the IRIS program used mechanistic data extensively. Nonetheless, establishment of a framework for when and how mechanistic data would be identified, evaluated, and used remains challenging. The challenge is not unique to the IRIS program and has been identified for future work in the National Toxicology Program (NTP) handbook for conducting systematic reviews and evidence integration (NTP 2015a, pp. 73–74).
Finally, the committee considered best practices for systematic reviews in other medical disciplines. Current best practices recommended by the Institute of Medicine (IOM 2011) suggest that the IRIS teams involved in the systematic-review process should be independent of those involved in regulatory decision-making who use the products of the systematic-review teams. The committee notes that the current organizational structure of the IRIS program in the EPA Office of Research and Development is consistent with those best practices.
The 2011 report recommended standardizing an approach for synthesizing evidence within data streams (human, animal, and mechanistic) and integrating evidence across data streams (NRC 2011, p. 165).4 From 2011 to 2013, the IRIS program moved solidly in that direction, as evidenced by its draft handbook (EPA 2013) and its example applications of the approach in two draft IRIS reports—the Toxicological Review of Methanol (Noncancer) and the Toxicological Review of Benzo[a]pyrene (see NRC 2014, pp. 93–96). Although the 2014 committee recognized that substantial progress had occurred during 2011–2013, it made several additional recommendations to guide the IRIS program toward a more systematic process for evidence synthesis and integration that would maximize transparency, efficiency, and scientific validity.
4 IRIS uses the phrase evidence synthesis to refer to the task of combining evidence from a given evidence stream, such as human or animal, and the phrase evidence integration to refer to the task of combining evidence from different evidence streams.
The major recommendation in Chapter 6 of the 2014 report guided IRIS to choose between making its current guided expert process more transparent and adopting a more structured, GRADE-like,5 process along the lines of the NTP (NRC 2014, p. 105). The IRIS program has explicitly chosen the first option, using structured categories with criteria to guide expert judgment, and EPA has made substantial strides toward more systematic and transparent evidence synthesis (see Slides 65–84, Appendix C; posters D-4 and D-5, Appendix D). Specifically, the IRIS program has created processes and guidelines for synthesizing human evidence and animal evidence that support choosing one category for characterizing the strength of evidence (see Slides 82–84, Appendix C). The guidelines focus on human and animal evidence streams and use mechanistic evidence to inform evidence synthesis and to provide scientific guidance for evidence integration in the steps that follow. In using Bradford Hill criteria to move beyond association to causation and to build on the systematic evaluations of individual study quality conducted in the step before evidence synthesis,6 the IRIS program has created a process for evidence synthesis that is scientifically consistent with the state of the art and that effectively leverages approaches of other programs, such as NTP, that face similar challenges. Increased transparency is evident in the examples and the workshop presentations, but further transparency would be obtained with completion of a handbook that provides more details about processes, reasoning behind decisions, and approaches for presenting results. In the interim, while EPA is completing its handbook, it is releasing protocols for each assessment that include a description of how evidence within each data stream will be synthesized and how evidence from multiple data streams will be integrated. The draft protocol for the IRIS assessment of chloroform (EPA 2018b) was provided as an example. The committee supports EPA’s approach.
Integration of evidence across data streams was described by EPA in its presentations (see Slides 79–87, Appendix C; posters D-4 and D-5, Appendix D) and in the draft chloroform protocol (EPA 2018b, pp. 43–53). Again, the process and framework within which evidence integration takes place (Slides 82–84; Appendix C) are consistent with state-of-the-art approaches taken by other scientific institutions or agencies, such as NTP, that face similar challenges.
Some questions have been raised about the use of mechanistic data in evidence integration. When animal or human data are extensive, mechanistic data can be used to evaluate the evidence within or across the animal or human data streams rather than as a third stream of evidence to be analyzed separately and then combined with human and animal evidence. When extensive mechanistic data are available and human and animal data on apical end points are sparse, mechanistic data might be used reliably as a third data stream to identify hazards, as has been done for the dioxin-like polychlorinated biphenyls (IARC 2016). Mechanistic data are important in identifying potential adverse outcomes, including ones that are not evaluated in guideline-driven animal testing; in informing dose–response assessment; and in determining the relevance of animal data for human health risk estimation. For example, in the case of phthalates (poster D-7, Appendix D), mechanistic data were used to determine that not all effects on male reproductive development in rodents were relevant for humans, and the data provided a basis for selecting the studies that were most relevant as a starting point in establishing a reference dose. However, EPA acknowledged that understanding of mechanisms relevant to effects of phthalates on development is incomplete, and that uncertainty makes it difficult to estimate risk primarily on the basis of mechanistic information. Although organizing the body of evidence according to a mechanistic framework might at first seem desirable because of biologic relevance, mechanistic frameworks today could probably be completed for only a few chemicals. As noted in the 2014 report, solid conclusions about causality can be drawn without mechanistic information,7 for example, when there is strong and consistent evidence from animal or epidemiology studies.
5 GRADE stands for Grading of Recommendations Assessment, Development and Evaluation.
Another recommendation from Chapter 6 of the 2014 report concerns expanding EPA’s capacity to perform quantitative evidence integration for hazard identification, for example, by using meta-regression or Bayesian analysis. To avoid compromising efficiency and timeliness in producing assessments, the 2014 report recommended developing such analytic capacities in parallel with other work in the IRIS program. EPA has taken the recommendation seriously and has explored meta-analytic approaches to combining animal data within species to determine whether the evidence indicates a chemical hazard, for example, whether trimethylbenzene poses a neurotoxic hazard (poster D-2, Appendix D). The IRIS program also initiated work on a Bayesian approach to combining data from different animal species (poster D-10, Appendix D).8 The Bayesian work is promising, but application to IRIS assessments has not yet occurred. It is clear that the IRIS program has made progress here; the agency should continue with its efforts in this field.
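To make the meta-analytic idea concrete, the following is a minimal sketch of DerSimonian–Laird random-effects pooling, one standard textbook method for combining effect estimates across studies. It is offered only as an illustration of the class of methods discussed; it is not EPA's specific implementation, and the example numbers are invented.

```python
import math

def random_effects_pool(effects, variances):
    """DerSimonian–Laird random-effects pooling of per-study effect estimates.

    effects: per-study effect estimates (e.g., standardized dose-response slopes)
    variances: corresponding within-study variances
    Returns (pooled_effect, pooled_se, tau2), where tau2 is the estimated
    between-study variance.
    """
    w = [1.0 / v for v in variances]                      # inverse-variance weights
    fixed = sum(wi * ei for wi, ei in zip(w, effects)) / sum(w)
    q = sum(wi * (ei - fixed) ** 2 for wi, ei in zip(w, effects))   # Cochran's Q
    df = len(effects) - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)                         # moment estimate, floored at 0
    w_star = [1.0 / (v + tau2) for v in variances]        # weights with heterogeneity
    pooled = sum(wi * ei for wi, ei in zip(w_star, effects)) / sum(w_star)
    se = math.sqrt(1.0 / sum(w_star))
    return pooled, se, tau2

# Hypothetical effect estimates from three animal studies.
pooled, se, tau2 = random_effects_pool([0.20, 0.32, 0.41], [0.010, 0.015, 0.030])
print(f"pooled effect {pooled:.3f} (SE {se:.3f}), tau^2 {tau2:.4f}")
```

A fully Bayesian analog of the same idea places a prior on the between-study variance and on species-level effects, which is what makes cross-species combination (as in poster D-10) possible; the frequentist version above is simply the shortest self-contained illustration.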
Another recommendation from both the 2011 and the 2014 reports urged the use of more standardized, structured evidence tables to support the evidence-integration narrative9 and emphasized the utility of a somewhat standard template for the narrative. The 2017 Toxicological Review of Benzo[a]pyrene (EPA 2017b) provides an example of structured evidence tables that directly support the evidence-integration narrative, first for synthesis of individual data streams and then in an integrated summary form that connects evidential categorization with the supporting studies (Table 1-20, page 1-108). The final table lays out the evidence that the chemical is a human carcinogen by first introducing the human evidence on cancer from benzo[a]pyrene or precursors from complex mixtures and the human mechanistic studies and then discussing the findings of in vivo animal studies on tumors associated with multiple routes of exposure, adding the studies of precursor events, and finally presenting the evidence that precursor events are likely to occur in humans. The format is clear, well structured, and straightforward to follow. Although a well-reasoned discussion on noncancer effects is available in the same document, structured-narrative justifications of the evidence-integration process and the conclusion were not as well developed as those on cancer. In the workshop, EPA stated that standardized descriptors for noncancer effects are still needed and are being discussed within the agency.
EPA illustrated current thinking regarding the template for evidence integration in the workshop (Slide 85, Appendix C) and in the chloroform draft protocol (EPA 2018b). The template has many characteristics of the GRADE approach to evaluating evidence, with similar labels for conclusions about the strength of the evidence within and across data streams. The approach appears to conform with the state of the art and bears considerable similarity to the system used by NTP (NTP 2015a,b). Although the chloroform protocol provides some illustration of EPA’s approach, more detailed guidance and completed examples are needed to judge EPA’s application of the template for evidence integration.
In summary, the IRIS program has made substantial strides in meeting the recommendations of the 2011 and 2014 reports regarding synthesis and integration of evidence. The IRIS process that was presented to the committee is dramatically more systematic, transparent, and scientifically defensible than the one presented in the 2010 IRIS Toxicological Review of Formaldehyde (EPA 2010).
Recommendations regarding derivation of toxicity values were provided in Chapter 7 of the 2014 report. An important recommendation in that chapter was to “develop criteria for determining when evidence is sufficient to derive toxicity values.” In the workshop, EPA described the overall process and criteria that the agency intends to use to implement that recommendation and indicated that it would develop toxicity values when the evidence-integration conclusion is the “strongest” or a “moderately strong conclusion for a human health effect.” As noted, EPA clarified that the agency intends to systematize processes to maintain transparency in reaching the hazard conclusion (Slides 132–133, Appendix C), although standard descriptors for noncancer effects are being reviewed within the agency and are not yet final.
9 An evidence-integration narrative is a description of the available evidence and the argument for or against a particular hazard.
EPA’s approach is consistent with the 2014 recommendation that formal dose–response assessments should be restricted to outcomes on which evidence integration has led to the strongest or a moderately strong conclusion on the given health effect, such as known or likely to be carcinogenic to humans (Slide 131, Appendix C). EPA indicated that when there is less strong evidence on a human health effect, such as suggestive evidence of cancer, the decision to develop a toxicity value will be determined by the situation (for example, when there is a well-conducted study and a value would be useful for a decision). However, EPA did not present criteria to be used in such cases.
The one example in which criteria have been applied to support the derivation of toxicity values was the chloroprene reassessment (Orme-Zavaleta 2018). In that document, EPA focused its systematic review on publications since the 2010 assessment. EPA concluded that the new studies did not change the conclusions in the 2010 assessment and did not justify a reassessment of human health effects (that is, derivation of new toxicity values). Although commenting on the conclusions in that assessment is beyond the scope of the present committee’s task, the committee acknowledges that EPA’s reassessment described its criteria for evaluating risk of bias and study sensitivity (the ability of a study to detect a true effect) and that it presented criteria for evaluating PK/PBPK studies. Furthermore, EPA explained why each study considered in the final assessment did not change the conclusions reached in the 2010 IRIS assessment and did not justify a reassessment of human health effects. Thus, it is clear that EPA is making progress toward improving transparency in its use of systematic review and expert judgment to inform the derivation of toxicity values directly.
Another important recommendation in the 2014 report was that EPA “continue its shift toward the use of multiple studies rather than single studies for dose–response assessment” (NRC 2014). The present committee noted that progress has been made in the use of multiple studies for dose–response assessments, as exemplified in the recent assessments of ethylene oxide and benzo[a]pyrene (Slides 134–135, Appendix C); this work builds on efforts to compare candidate reference doses or concentrations in previous assessments, such as in the 2012 IRIS Toxicological Review of Tetrachloroethylene (EPA 2012). EPA is further developing new tools for visualizing comparisons to communicate the outcome of assessments more effectively, as was demonstrated in the workshop by using HAWC. EPA acknowledged, and the committee agrees, that the development of systematic assessments for many types of mechanistic studies that could contribute to the assessment remains challenging, not only to EPA but to the scientific community generally. However, the process that EPA previously developed to review PK/PBPK models and to describe how they could be used in dose–response and toxicity-value assessments (EPA 2006) is a good example of best practices. As other forms of mechanistic data become more readily available, partly driven by previous National Academies reports (NASEM 2017b; NRC 2007), the IRIS program should develop new approaches for using such studies to inform dose–response and toxicity-value assessments (Slides 142–147, Appendix C). Such guidance will improve transparency and encourage new science, whether it is used to support evidence of potential toxicity or, just as important, to provide perspectives on the potential exposure conditions that could reasonably be expected to cause toxicity.
The 2014 report also recommended that EPA use formal methods for combining multiple studies and further develop and expand its use of Bayesian and other formal quantitative methods for dose–response assessment and derivation of toxicity values (NRC 2014). EPA has begun to develop and apply tools for meta-regression analysis and Bayesian approaches and has demonstrated their application in case studies (Slides 135, 136, 139, and 140, Appendix C; posters D-2 and D-10, Appendix D). Implementation of the recommendation will continue and will require sustained resources and continued capacity-building to develop a process that is ultimately transparent, is replicable, and represents best practices for the future. And it will require close collaborations between domain experts in the biologic and mathematical or statistical disciplines within EPA and with other agencies and stakeholders to establish clear criteria and guidance, including articulation of underlying assumptions, strengths, and weaknesses of each approach. The committee notes that care must be taken when combining results within or between studies in developing dose–response relationships inasmuch as multiple mechanisms, each with its own potential dose–response relationship, might be involved. In such cases, clearly articulated expert judgment, criteria for expert selection,
and multidisciplinary collaborations need to be supported and used in the development and application of new mathematical approaches.
The 2014 report recommended that EPA develop IRIS-specific guidelines to frame analysis and communication of uncertainty (NRC 2014). EPA has made substantial progress in developing and adopting tools to address uncertainty analysis and communication (Slides 136–138, Appendix C; poster D-6, Appendix D). It demonstrated its work during the workshop and focused on model uncertainty (Slide 136, Appendix C) and the probabilistic distribution of toxicity values (Slides 137–138, Appendix C). It further indicated that the IRIS program intends to adopt the WHO/International Programme on Chemical Safety guidance (Slide 137, Appendix C) and to provide various calculations when reporting toxicity values, including ranges of risk-specific toxicity values (Slide 138, Appendix C) to demonstrate uncertainty. The committee recognizes that the steps taken are in the right direction for an evolving process and encourages EPA to continue to develop and test new tools in collaboration with other agencies and stakeholders. Equally important, the committee encourages EPA to continue its effort to frame uncertainty analysis and communications to address multiple sources of uncertainty surrounding toxicity values.
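The idea of reporting a probabilistic distribution of toxicity values, rather than a single number, can be sketched with a small Monte Carlo calculation in the spirit of the WHO/IPCS probabilistic guidance that EPA intends to adopt: a point of departure and the adjustment factors applied to it are each treated as distributions, and percentiles of the resulting ratio are reported. All distributional choices and parameter values below are invented for illustration and do not represent any actual IRIS assessment.

```python
import math
import random

random.seed(1)  # fixed seed so the sketch is reproducible

def probabilistic_rfd(n=100_000):
    """Monte Carlo sketch of a probabilistic reference dose (mg/kg-day).

    Divides a lognormally distributed point of departure (POD) by
    lognormally distributed interspecies and intraspecies adjustment
    factors. Every parameter here is hypothetical.
    """
    samples = []
    for _ in range(n):
        pod = math.exp(random.gauss(math.log(10.0), 0.3))           # POD
        interspecies = math.exp(random.gauss(math.log(3.0), 0.4))   # animal-to-human
        intraspecies = math.exp(random.gauss(math.log(3.0), 0.4))   # human variability
        samples.append(pod / (interspecies * intraspecies))
    samples.sort()
    return samples

s = probabilistic_rfd()
lower = s[int(0.05 * len(s))]   # 5th percentile: a health-protective choice
median = s[len(s) // 2]         # central estimate
print(f"median {median:.2f}, 5th percentile {lower:.2f} mg/kg-day")
```

Reporting the spread between such percentiles is one way to "provide various calculations when reporting toxicity values," as described in Slide 138, because it shows how much the value depends on where in the uncertainty distribution a decision-maker chooses to operate.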
Overall, the committee is encouraged by the steps that EPA has taken, which have accelerated during the last year under new leadership. During the workshop, the committee was impressed by the overall enthusiasm displayed by EPA staff and the substantive progress toward full implementation of systematic review and transparency in IRIS assessments. The committee fully appreciates that changing the process and implementing systematic-review procedures while producing final assessments is a huge challenge for any organization, especially in such a short period (12 months) and with a shrinking staff. Because new tools and approaches will ultimately be needed to implement past National Academies recommendations, especially for incorporating mechanistic information and for integrating evidence across studies, the committee is encouraged by IRIS program efforts to collaborate with other EPA staff, other government agencies, and academe to have the right mix of expertise to develop new approaches and best practices for conducting assessments.
DuMouchel, W.H., and J.E. Harris. 1983. Bayes methods for combining the results of cancer studies in humans and other species: Rejoinder. J. Am. Stat. Assoc. 78(382):313-315.
EPA (US Environmental Protection Agency). 2006. Approaches for the Application of Physiologically Based Pharmacokinetic (PBPK) Models and Supporting Data in Risk Assessment. EPA/600/R-05/043F. National Center for Environmental Assessment, Office of Research and Development, U.S. Environmental Protection Agency, Washington, DC. August 2006 [online]. Available: https://cfpub.epa.gov/ncea/risk/recordisplay.cfm?deid=157668 [accessed March 6, 2018].
EPA. 2010. IRIS Toxicological Review of Formaldehyde (Interagency Science Consultation Draft). EPA/635/R-10/002C. U.S. Environmental Protection Agency, Washington, DC [online]. Available: https://cfpub.epa.gov/si/si_public_record_report.cfm?direntryid=223603 [accessed February 27, 2018].
EPA. 2012. Toxicological Review of Tetrachloroethylene (Perchloroethylene) [CAS No. 127-18-4]. EPA/635/R-08/011F. U.S. Environmental Protection Agency, Washington, DC. February 2012 [online]. Available: https://cfpub.epa.gov/ncea/iris/iris_documents/documents/toxreviews/0106tr.pdf [accessed February 27, 2018].
EPA. 2013. Part I: Status of Implementation of Recommendations. Materials Submitted to the National Research Council. Integrated Risk Information System Program, U.S. Environmental Protection Agency. January 30, 2013 [online]. Available: https://www.epa.gov/sites/production/files/2014-06/documents/iris_program_materials_to_nrc_part_1.pdf [accessed February 23, 2018].
EPA. 2017a. IRIS Assessment Plan for Chloroform [CASRN 67-66-3]. EPA/635/R-17/330. Integrated Risk Information System, National Center for Environmental Assessment, Office of Research and Development, U.S. Environmental Protection Agency, Washington, DC.
EPA. 2017b. Toxicological Review of Benzo[a]pyrene. EPA/635/R-17/003Fa. Integrated Risk Information System, National Center for Environmental Assessment, Office of Research and Development, U.S. Environmental Protection Agency, Washington, DC [online]. Available: https://cfpub.epa.gov/ncea/iris/iris_documents/documents/toxreviews/0136tr.pdf [accessed February 14, 2018].
EPA. 2018a. A Message from the IRIS Program, January 2018. IRIS Recent Additions, January 25, 2018 [online]. Available: https://www.epa.gov/iris/iris-recent-additions [accessed February 14, 2018].
EPA. 2018b. Systematic Review Protocol for the IRIS Chloroform Assessment (Inhalation) [CASRN 67-66-3]. EPA/635/R-17/486. Integrated Risk Information System, National Center for Environmental Assessment, Office of Research and Development, U.S. Environmental Protection Agency, Washington, DC.
EPA. 2018c. IRIS Assessment Plan for Uranium (Oral Reference Dose) (Scoping and Problem Formulation Materials) [CASRN 7440-61-1]. EPA/635/R-17/787. Integrated Risk Information System, National Center for Environmental Assessment, Office of Research and Development.
IARC (International Agency for Research on Cancer). 2015. Polychlorinated Biphenyls and Polybrominated Biphenyls. IARC Monographs on the Evaluation of Carcinogenic Risks to Humans Vol. 107 [online]. Available: http://monographs.iarc.fr/ENG/Monographs/vol107/mono107.pdf [accessed March 22, 2018].
IOM (Institute of Medicine). 2011. Finding What Works in Health Care: Standards for Systematic Reviews. Washington, DC: The National Academies Press.
Jones, D.R., J. Peters, J.L. Rushton, A.J. Sutton, and K.R. Abrams. 2009. Interspecies extrapolation in environmental exposure standard setting: A Bayesian synthesis approach. Regul. Toxicol. Pharmacol. 53(3):217-225.
National Academies of Sciences, Engineering, and Medicine. 2017a. Application of Systematic Review Methods in an Overall Strategy for Evaluating Low-Dose Toxicity from Endocrine Active Chemicals. Washington, DC: The National Academies Press.
National Academies of Sciences, Engineering, and Medicine. 2017b. Using 21st Century Science to Improve Risk-Related Evaluations. Washington, DC: The National Academies Press.
NRC (National Research Council). 2007. Toxicity Testing in the 21st Century: A Vision and a Strategy. Washington, DC: The National Academies Press.
NRC. 2009. Science and Decisions: Advancing Risk Assessment. Washington, DC: The National Academies Press.
NRC. 2011. Review of the Environmental Protection Agency’s Draft IRIS Assessment of Formaldehyde. Washington, DC: The National Academies Press.
NRC. 2014. Review of EPA’s Integrated Risk Information System (IRIS) Process. Washington, DC: The National Academies Press.
NTP (National Toxicology Program). 2015a. Handbook for Conducting a Literature-Based Health Assessment Using the OHAT Approach for Systematic Review and Evidence Integration. Office of Health Assessment and Translation (OHAT), Division of the National Toxicology Program, National Institute of Environmental Health Sciences, U.S. Department of Health and Human Services. January 9, 2015 [online]. Available: https://ntp.niehs.nih.gov/ntp/ohat/pubs/handbookjan2015_508.pdf [accessed March 6, 2018].
NTP. 2015b. Handbook for Preparing Report on Carcinogens Monographs. Office of Health Assessment and Translation (OHAT), Division of the National Toxicology Program, National Institute of Environmental Health Sciences, U.S. Department of Health and Human Services. July 20, 2015 [online]. Available: https://ntp.niehs.nih.gov/ntp/roc/handbook/roc_handbook_508.pdf [accessed March 6, 2018].
Orme-Zavaleta, J. 2018. Response to the Request for Correction (RFC). Letter to Robert Holden, Liskow & Lewis, New Orleans, LA, from Jennifer Orme-Zavaleta, Principal Deputy Assistant Administrator for Science, Office of Research and Development, Washington, DC, January 25, 2018; Attachment 1. EPA Response to the Denka Performance Elastomers (DPE) Request for Correction of the Toxicological Review of Chloroprene (CAS No. 126-99-8) In Support of Summary Information on the Integrated Risk Information System (IRIS); Attachment 2. Systematic Review of Chloroprene [CASRN 126-99-8] Studies Published Since 2010 IRIS Assessment to Support Consideration of the Denka Request for Correction (RFC). January 2018 [online]. Available: https://www.epa.gov/sites/production/files/2018-01/documents/epa_repsonse_to_mr._holdren_jan_25_2018_complete.pdf [accessed February 9, 2018].