
Review of U.S. EPA's ORD Staff Handbook for Developing IRIS Assessments: 2020 Version (2022)

Suggested Citation:"2 Overview of Organization and Content of the Handbook." National Academies of Sciences, Engineering, and Medicine. 2022. Review of U.S. EPA's ORD Staff Handbook for Developing IRIS Assessments: 2020 Version. Washington, DC: The National Academies Press. doi: 10.17226/26289.

2

Overview of Organization and Content of the Handbook

This chapter evaluates the organization and overall content of the ORD Staff Handbook for Developing IRIS Assessments (the handbook) (EPA, 2020a) and whether key aspects of the assessment process are represented (Question 1 in Table 1-1). In doing so, the chapter specifically considers the Integrated Risk Information System (IRIS) assessment process illustrated in Figures O-1 (p. xviii) and O-2 and Table O-1 (p. xix) in the handbook.

Overall, the committee found that the handbook reflects the significant improvements the U.S. Environmental Protection Agency (EPA) has made in its IRIS process for developing draft assessments. For instance, the process includes sophisticated, state-of-the-art methods related to using systematic evidence maps for scoping and systematic review methods for hazard identification. Moreover, the IRIS program is clearly helping to advance the science of systematic review as applied to hazard identification. EPA staff are actively involved in the ongoing development of methods, such as study evaluation and handling of mechanistic data. The committee recognizes that implementation of many of the methods now used in the IRIS assessment process is challenging for EPA, as they have not been previously used by the agency and some methods are still evolving. The committee is impressed and encouraged by the progress the IRIS program has made to date. The IRIS process for developing assessments can serve as a model for other parts of EPA that are implementing systematic review methods.

ORGANIZATION OF THE HANDBOOK

The strengths and advances in methodology for the IRIS assessment process are unevenly and not always clearly conveyed in the handbook. The handbook alone does not adequately describe the overall flow of the process for developing an IRIS assessment. Thus, more could be done to operationalize the process so that the handbook meets its objectives of providing transparency about the process and fostering consistency in assessments developed by the IRIS program. The presentation (Thayer, 2021) and written materials that EPA provided to the committee were important resources that enabled the committee to understand some sections of the handbook. Inclusion of information and examples from Dr. Thayer’s presentation in the handbook would improve its clarity. For example, the presentation used screenshots from the Health Assessment Workspace Collaborative (HAWC) software used by EPA for IRIS assessments to illustrate how study evaluations and within- and across-evidence-stream judgments were carried through the IRIS process. The committee understands that HAWC is a flexible program that can be adapted to accommodate changes in methods. Thus, it is important that illustrations from HAWC match the guidance in the handbook.


The committee also acknowledges that the IRIS assessment plan (IAP) published for each IRIS assessment provides details on specific methods and approaches that are employed for specific IRIS assessments. Nevertheless, if each step in the IRIS assessment process were clearly outlined in the handbook, it would make the IRIS assessment process more transparent while simultaneously benefiting the handbook users who conduct IRIS assessments. The handbook chapters for specific steps in the IRIS assessment process lack a general description of the methods used for the step and a clear outcome expected at the completion of the step. For example, handbook Chapter 8, “Data Extraction,” describes the process for extracting data from studies that have been identified as meeting PECO (population, exposure, comparator, outcome) inclusion criteria and being “sufficiently informative,” as described in earlier handbook chapters (Chapters 4 and 6). Handbook Chapter 8 also addresses methods for standardizing the presentation of effect sizes and doses, and data display options. These processes are heavily dependent on the use of HAWC. The level of detail presented in Chapter 8 obscures how and where data extraction fits into the overall assessment process. If much of the detail about HAWC and data display were presented as supplementary material, the chapter could focus on the conceptual content related to standardizing effect sizes and doses.

The committee also found that the poor organization of parts of the handbook text detracts from its readability. For example, numerous “call-backs” and “call-forwards” across multiple chapters make the handbook difficult to navigate and the process less transparent. Moreover, differences in the level of detail, a degree of repetition, and occasional inconsistencies in information across some chapters also make the handbook difficult to follow.

Chapter 7 of the handbook, “Organizing the Hazard Review: Approach to Synthesis of Evidence,” appears to describe part of the planning process, in which outcomes are prioritized for synthesis, analysis plans are developed, and next steps are refined and prioritized. The outcome of that step and how it feeds into the revision of the IRIS assessment protocol are not clearly presented in the handbook.

Another example of a need for improved organization of the handbook is the overlap between Chapter 9, “Analysis and Synthesis of Human and Experimental Animal Data,” and Chapter 11, “Evidence Integration.” Both chapters list considerations for synthesizing a body of human or animal evidence within a group of outcomes (e.g., Table 9-1 (p. 9-3) and Table 11-2 (p. 11-10)), but the criteria are not the same in each chapter. Only Chapter 11 describes how the ratings for each of these considerations produce a strength-of-evidence judgment for a body of human or animal evidence, which is then advanced to the evidence integration step.

ILLUSTRATIONS OF THE IRIS ASSESSMENT PROCESS

Figures O-1 and O-2 and Table O-1 in the handbook portray the process for developing a draft IRIS assessment as strictly linear. However, as mentioned in the handbook text, the process is iterative. Searches for studies and evidence mapping can identify a wider range of studies than those eventually included in evidence synthesis and integration. Some studies that are considered supplemental at the evidence mapping stage, such as studies that provide information on mechanisms or toxicokinetics (TK),1 may enter the IRIS process at a later stage. The handbook does not provide a clear illustration of how the number of studies initially identified is narrowed or expanded throughout the assessment process. In addition, the handbook does not clearly illustrate when and where in the process the outcomes for the PECO questions for the systematic reviews for hazard identification are defined or refined. The systematic review protocol is shown as a single step in the IRIS assessment process, but multiple systematic review protocols could be part of a single IRIS assessment. EPA noted that the IAP is a research protocol for scoping and problem formulation, in which systematic evidence maps are increasingly used (Thayer, 2021). Producing evidence maps can enhance transparency regarding the available data, but the use of evidence maps is not well described in the handbook. In addition, the handbook does not adequately indicate how the units of analysis are eventually selected for hazard identification and toxicity value derivation.

EPA explained that the IRIS assessment protocol has initial and final versions that describe the systematic review and dose-response methodologies (Thayer, 2021). Both versions of the protocol include PECO, literature searching and screening, literature inventory to describe included studies, tagging of supplemental materials, study evaluation, data extraction, and evidence synthesis/integration. The final version of the IRIS assessment protocol may encompass revisions made in consideration of public comments received on the initial protocol. The final version of the protocol also provides more specific details on methods that could not be fully described in the initial version, such as analysis of mechanistic and TK data and non-standard dose-response methodology. The final version could also include the list of prioritized outcomes/endpoints, but as noted elsewhere in this report, the process for prioritizing the endpoints may not be clear.

OVERALL CONTENT OF THE HANDBOOK

The committee found that the handbook does not focus sufficiently on the major steps in the IRIS assessment process: planning the assessment, study evaluation, evidence synthesis, evidence integration, and selecting and deriving toxicity values. Streamlining the handbook to concentrate on describing the key concepts, definitions, and instructions required to complete those major components of the IRIS process would improve the transparency and usability of the handbook. A professional editor could assist with this task and advise on whether details on methods that are rapidly evolving, such as evaluation of human epidemiology studies of environmental exposures or mechanistic study evaluation, would be better placed in supplementary materials, where they could be updated regularly, rather than in the main text.

___________________

1 In this report toxicokinetics refers to the absorption, distribution, metabolism, and excretion processes also called ADME or pharmacokinetics. However, consistent with the handbook, models are described as pharmacokinetic (PK) or physiologically based pharmacokinetic (PBPK) models rather than toxicokinetic models.


The clarity of the handbook was also hindered by the lack of a glossary of key terms. Definitions for key terms such as “scoping,” “synthesis,” “integration,” and “sensitivity” are currently scattered across different chapters. In addition, the handbook sometimes uses terminology in an inconsistent and unconventional manner, with a tendency to use terms with pre-established definitions in the general area of evidence synthesis to designate various IRIS-specific processes and products. Examples include the use of the term “protocol” to describe the general plan for an IRIS assessment rather than a detailed plan for conducting a specific systematic review; the use of the terms “preliminary literature survey,” “systematic evidence map,” and “literature inventory” to refer to an evidence database, when all three terms have different established meanings; and the use of “scoping review” to refer to a complex process that includes engaging stakeholders in the identification of programmatic needs, rather than a survey of a body of evidence conducted to inform a research planning process.

Use of Systematic Review Methods

As recommended in previous National Academies reports, systematic review methods are being incorporated into the IRIS assessment process. These reports have defined systematic review as a “scientific investigation that focuses on a specific question and uses explicit, prespecified scientific methods to identify, select, assess, and summarize the findings of similar but separate studies” (IOM, 2011, p. 1). In practice, the IRIS program is applying systematic review methods appropriately to parts of the assessment process (e.g., evidence evaluation and synthesis for hazard identification). However, in some places, the handbook suggests that systematic review methods are being used at all points in the assessment process, which is not actually the case.

In particular, Figure O-1 suggests that systematic review methods are used for all steps from developing the literature identification strategy through selection of studies for dose-response assessment. However, not all of these steps in the IRIS process are amenable to systematic review methods. For example, although evidence mapping is a rigorous method used for scoping, it is not a systematic review method. As noted in the handbook, systematic review methods are not directly applicable for conducting dose-response analyses, although studies that form the basis of toxicity value derivations should have gone through a systematic review process during hazard identification. Indeed, all studies that may be used for toxicity values, whether human, animal, or mechanistic, are expected to go through a systematic review process before hazard identification determinations are made. On the other hand, some mechanistic and TK studies that enter the IRIS process provide supplementary information (e.g., on biological plausibility) and may or may not need to go through a systematic review process, depending on the needs of the assessment. The handbook would be more useful and transparent if it clearly delineated where systematic review methods are appropriate and how they are applied in the IRIS process.


Mechanistic and Toxicokinetic Data

Multiple places in the handbook describe the roles of mechanistic data, TK data, and results from pharmacokinetic (PK) models and physiologically based pharmacokinetic (PBPK) models. However, the descriptions are not entirely consistent. For instance, Table 2-2 of the handbook (p. 2-7) provides examples of TK studies and PK and PBPK studies, but this information is repeated with some variation in Section 4.2.1. The main discussion of mechanistic information is deferred until Chapter 10 of the handbook, leading to multiple instances of both backward and forward referencing, which makes the handbook confusing as to when and how mechanistic and TK data are incorporated into the assessment. The readability of the handbook would be improved by specifying, near the beginning of the handbook, the range of potential roles for mechanistic, TK, and other related data in the assessment (e.g., see Table 2-1 of this report), and then carrying that information consistently throughout the rest of the document.

TABLE 2-1 Common Uses of Mechanistic and Toxicokinetic Evidence in IRIS Assessments

Mechanistic Evidence

Hazard Evidence Synthesis:
  • Identity of active moiety of an agent
  • Identity of target tissue
  • Key characteristics
  • Interspecies differences (including human relevance)
  • Mode of Action/Adverse Outcome Pathway analysis
  • Endpoints that may be linked/related/grouped together
  • Potential for read-across for predicting certain endpoints, based upon well-studied similar chemicals
  • Susceptible populations and life stages

Dose-Response:
  • Low-dose extrapolation approach
  • Interspecies differences in sensitivity
  • Dose metrics for each endpoint
  • Susceptible populations and life stages

Toxicokinetic (ADME) Evidence^a

Hazard Evidence Synthesis:
  • Active moiety
  • Target tissue concentration
  • Route of exposure differences
  • Interspecies differences
  • Susceptible populations and life stages

Dose-Response:
  • PK or PBPK model(s) to predict dose metrics
  • Route of exposure differences/extrapolation
  • Interspecies extrapolation
  • Susceptible populations and life stages

^a Includes information relating to PK and PBPK models.


The handbook does not adequately describe the role of mechanistic data throughout the IRIS assessment process. For example, it is not always clear when mechanistic data would be treated as a separate data stream for hazard identification, when it would be used to increase or decrease confidence in animal or human studies based on biological plausibility, and when it would potentially serve both roles. Despite this shortcoming, it is laudable and significant that the IRIS program is striving to incorporate mechanistic data in a manner that is consistent with efforts to move toward decreased reliance on animal-based studies and increased use of New Approach Methods (NAMs), such as those that rely on in silico or in vitro methods. An increased use of mechanistic data in the IRIS assessment process could encourage its use more broadly and serve as a model for other agencies conducting hazard evaluations.

Susceptible Populations

Most risk assessments involving environmental, industrial, or occupational chemical exposures have the potential to involve susceptible life stages or other types of susceptibility (e.g., genetic polymorphisms; aging systems and pre-existing disease; and social, lifestyle, and demographic factors [as described in the handbook’s Table 9-2 on p. 9-6]). Identifying susceptible populations and determining whether there are sufficient data to evaluate dose-response relationships in such populations are challenges common to most, if not all, chemical assessments. Those challenges are particularly important for susceptible life stages since the period of susceptibility may be relatively brief and thus may not involve consideration of long-term exposure. This has implications for the type of age-dependent adjustment factors that may be needed for cancer assessments and the shorter-term exposure parameters that may be needed to align with the susceptibility window. The extent, nature, quality, outstanding issues, and uncertainties with respect to the evidence base for susceptible populations are important considerations.

The handbook recognizes the importance of assessing potentially susceptible populations by encouraging their consideration throughout the assessment process, and the handbook describes in multiple places how the evidence base for susceptible populations should be handled. However, the handbook treats this aspect of hazard identification as a special-case evaluation that may not be required unless animal, human, or mechanistic evidence points to a particular susceptibility or life stage. Discussions that define what constitutes evidence of susceptibility and describe the types of data that may inform such susceptibility are not presented in one place in the handbook, but those topics are alluded to in various places throughout the document. For example, the handbook describes refinements to the evaluation plan (handbook Chapter 5) to include “studies that address critical lifestage or exposure duration-specific knowledge of the development of the health outcome (e.g., for endpoints relating to organ development or cancer, respectively)” (EPA, 2020a, p. 5-2). In Chapter 9 of the handbook (Table 9-2, p. 9-6), several other factors are described that may contribute to susceptibility. While it is helpful to alert handbook users to such factors, the handbook does not provide a process for formally considering the susceptibility factors that apply to a given assessment, the data available to inform such a consideration, and the needed refinements to PECO statements and the assessment plan.

Linkage of Dose-Response with Hazard Identification

The handbook chapters on selection of studies for toxicity assessment (Chapter 12) and derivation of toxicity values (Chapter 13) are less well developed than the other chapters devoted to hazard identification. Handbook Chapter 12 on selection of studies is not clearly linked to the earlier handbook chapters. For example, it is unclear how the results of the systematic reviews that are conducted for hazard identification are used to select studies for the dose-response assessment. The transition from hazard identification to derivation of toxicity values needs to be improved, particularly with respect to selecting studies for dose-response analysis. By contrast, much of the material in Chapter 13 on deriving toxicity values is already described in other EPA documents. Thus, the description of the process for deriving toxicity values could be streamlined substantially to focus on the most common practices, while noting by reference where less common approaches based on mechanistic, TK, or novel quantitative methods may be obtained and applied.

The committee concluded that the transparency and usability of the handbook could be improved by streamlining all of its text to focus on the major steps in the IRIS process, eliminating repetition among the chapters, incorporating examples from IRIS assessments and software tools such as HAWC, and ensuring that terminology is used consistently across chapters.

GAPS IN THE HANDBOOK

The committee identified several key gaps in the content of the handbook: handling of publication bias and funding bias, planning for handbook updates, preparing for new or emerging types of studies that may be used in future IRIS assessments, and including quality assurance in the IRIS assessment process.

Publication Bias and Funding Bias

Publication bias is the publication or non-publication of research results based on the nature and direction of the results (Higgins and Thomas, 2019). Funding bias refers to an association between study funding sources and financial ties of investigators with research outcomes that are favorable for the sponsors (Holman et al., 2019). NRC (2014) recommended that publication bias and funding bias be addressed in IRIS assessments. Handbook Section 8.1.1 notes that the data collection tool, HAWC, allows for collection of information on study funding source and author conflicts of interest, but this collection does not appear to be mandatory. Author conflicts of interest are mentioned in Section 9.4.3 as a possible source of reporting or publication bias. Publication bias is mentioned in the handbook chapters on Evidence Synthesis (Chapter 9) and Evidence Integration (Chapter 11), particularly in relation to how it could impact the assessment of consistency across studies. Funding bias, however, is not mentioned at all in the handbook.

The NRC (2014) recommendation to address funding bias as part of the study evaluation process was based largely on evidence obtained from human studies because less was known about the extent of funding bias in animal research. Since the 2014 report was published, evidence for both publication and funding bias in human and animal literature has increased. Meta-research studies continue to identify publication bias (Rezende et al., 2018; van der Naald et al., 2020) and funding bias (Huss et al., 2007; Anglemyer et al., 2015; Bero et al., 2015; Chartres et al., 2016; Friedman and Friedman, 2016; Lundh et al., 2017; Wells, 2017) across a variety of fields. Other methodologies, such as case studies, qualitative analysis of interview data from researchers, or internal industry documents, have described specific studies that have been influenced by industry or not published (for example, see Michaels (2006)). There remains a need to conduct meta-research evaluations for some types of studies included in IRIS assessments. For example, a major source of data for chemical assessments conducted by IRIS is standardized animal toxicity studies, typically done by contract research organizations (CROs) under good laboratory practices. The same CROs may be funded by various sources, including industry and government, often simultaneously, and evidence for the influence of funding source in such cases is lacking. Funding bias is important to consider regardless of the funding source.

Methods to detect and evaluate impacts of publication bias or funding bias are available or are being developed. The handbook (Section 9.4.3) notes methods to detect publication bias and assess its impact on evidence synthesis. However, it does not provide guidance on whether or how to apply these methods to assess the impact of publication bias on IRIS assessments. Although most current study evaluation tools are not adequate for addressing bias related to conflicts of interest (Lundh et al., 2019), some tools are available for incorporating assessments of funding bias or author conflict of interest into study evaluation (SIGN, 2011; Woodruff et al., 2011; Moga et al., 2012; Downes et al., 2016). The Navigation Guide (Woodruff and Sutton, 2014), for example, includes author conflicts of interest and study funding source as components of study evaluation (Woodruff et al., 2011), although studies are not excluded based on a high risk of bias in any single domain. Cochrane is developing a Tool for Addressing Conflicts of Interest in Trials (TACIT)2 to evaluate the occurrence and impact of funding bias and author conflicts of interest on systematic reviews of trials. Funding and publication biases can also be investigated at the evidence synthesis and integration stages. For example, the impact of funding bias on effect estimates obtained by evidence synthesis can be evaluated by excluding studies by funding source in a sensitivity analysis or by conducting subgroup analyses by funding type. Incorporating assessments of publication bias or funding bias at the evidence evaluation or synthesis stage could influence the confidence in estimated effects.

___________________

2 The TACIT website is https://tacit.one/.
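The sensitivity analysis by funding source described above can be sketched in a few lines. The example below is illustrative only: the study names, effect estimates, and funding labels are hypothetical, not drawn from any IRIS assessment, and a fixed-effect inverse-variance model is used for simplicity. A shift in the pooled estimate after excluding industry-funded studies would flag possible funding bias.

```python
import math

# Hypothetical study-level effect estimates (log odds ratios), standard
# errors, and funding sources; all values are illustrative only.
studies = [
    {"name": "Study A", "logor": 0.40, "se": 0.15, "funding": "government"},
    {"name": "Study B", "logor": 0.35, "se": 0.20, "funding": "government"},
    {"name": "Study C", "logor": 0.05, "se": 0.18, "funding": "industry"},
    {"name": "Study D", "logor": 0.10, "se": 0.25, "funding": "industry"},
]

def pooled_logor(subset):
    """Fixed-effect inverse-variance pooled log odds ratio and its standard error."""
    weights = [1 / s["se"] ** 2 for s in subset]
    est = sum(w * s["logor"] for w, s in zip(weights, subset)) / sum(weights)
    se = math.sqrt(1 / sum(weights))
    return est, se

# Primary analysis pools all studies; the sensitivity analysis excludes
# industry-funded studies and repeats the pooling.
overall, se_all = pooled_logor(studies)
non_industry, se_ni = pooled_logor([s for s in studies if s["funding"] != "industry"])

print(f"All studies:        OR = {math.exp(overall):.2f} (SE, log scale: {se_all:.2f})")
print(f"Excluding industry: OR = {math.exp(non_industry):.2f} (SE, log scale: {se_ni:.2f})")
```

A subgroup analysis by funding type would proceed the same way, pooling each funding stratum separately and comparing the stratum-specific estimates.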


Updating the Handbook

The preface of the handbook indicates that the handbook is a “living document” and “the IRIS program will update the IRIS handbook as needed for major shifts in approaches based on emerging science and experience gained through its application to a broader spectrum of assessments” (EPA, 2020a, p. xiv). Given the current complexity of the handbook and EPA’s existing review process, it is difficult to see how the handbook could be updated in a timely manner. The handbook does not describe a process for updating; such a process would include proposed timelines, criteria for identifying major changes that would need to go through external peer review, and a mechanism for more quickly updating details of evolving methods (e.g., study evaluation and software tools) that could be linked to the handbook without the need for external review.

Anticipating Future Types of Studies

A handbook chapter on preparedness for incorporating future types of studies would demonstrate EPA’s involvement in the development of emerging methods. Currently, human epidemiology and animal toxicology studies are the main focus of the handbook. A preparedness chapter could identify additional types of studies that EPA is considering for future IRIS assessments (e.g., high-throughput, non-rodent vertebrate, and in silico studies). If NAMs were to be recognized as eventual replacements for animal-intensive toxicology studies, emerging methodologies for NAMs of potential future relevance to IRIS could be highlighted. Evaluation of chemical mixtures is a gap in the handbook and may be beyond the current scope of IRIS assessments. The handling of life stage studies, including developmental stage studies, in IRIS assessments is not fully developed in the handbook. The handbook does not recognize new methodological developments in those areas and does not target them for possible future use.

In addition, systematic reviews of human, animal, mechanistic, or other types of relevant data are becoming more common in the general field of environmental health. Incorporating systematic reviews from other sources into IRIS assessments could enhance process efficiency.

Quality Assurance

Previous National Academies reports have noted weaknesses in the quality assurance of the IRIS process. NRC (2014) recommended that EPA provide a quality management plan that included clear methods for continuous evaluation of the quality of the IRIS process. NASEM (2018) found that IRIS management had taken multiple steps to ensure high quality, including structures to improve the quality of IRIS assessments.

The handbook can play a key role in the quality assurance of the IRIS assessment process because it can standardize the practices of IRIS program staff and contractors working on IRIS assessments. However, the handbook lacks a section on the overall quality assurance of the assessment process, how quality will be monitored, and how EPA staff and consultants will be trained, as needed, to meet the quality assurance standards.


The handbook includes quality assurance procedures related to some individual steps in the process. For example, handbook Chapter 6 indicates that the quality assurance of the study evaluation process is accomplished by having each study evaluation generally conducted independently by at least two reviewers, with a process for comparing and resolving differences. However, the handbook provides the option to use only one reviewer, which is not standard practice for systematic reviews, and fails to note which of the described methods have been tested empirically. Empirically based methods that are valid and reliable could establish a minimum set of standards for quality assurance. Alternatively, pragmatic standards, developed collaboratively by methodologists and those who conduct IRIS assessments, could be established. For example, Cochrane and others have developed methodological expectations for the conduct, reporting, and updating of systematic reviews (Schaefer and Myers, 2017; Higgins and Thomas, 2019; Whaley et al., 2020). The Cochrane standards are categorized as “mandatory” or “desirable” for systematic review publication.
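One pragmatic way to monitor a dual-review process of the kind described above is to track chance-corrected agreement between reviewers. The sketch below computes Cohen's kappa for two hypothetical reviewers' study evaluation ratings; the ratings are illustrative only and are not drawn from any IRIS assessment.

```python
from collections import Counter

# Hypothetical confidence ratings assigned independently by two reviewers
# to the same ten studies; values are illustrative only.
reviewer_1 = ["high", "high", "medium", "low", "high",
              "medium", "medium", "low", "high", "medium"]
reviewer_2 = ["high", "medium", "medium", "low", "high",
              "medium", "low", "low", "high", "high"]

def cohens_kappa(r1, r2):
    """Chance-corrected agreement between two raters (Cohen's kappa)."""
    n = len(r1)
    # Proportion of studies on which the two reviewers agree outright.
    observed = sum(a == b for a, b in zip(r1, r2)) / n
    # Agreement expected by chance, from each reviewer's rating frequencies.
    c1, c2 = Counter(r1), Counter(r2)
    expected = sum((c1[c] / n) * (c2[c] / n) for c in set(c1) | set(c2))
    return (observed - expected) / (1 - expected)

print(f"Observed agreement: {sum(a == b for a, b in zip(reviewer_1, reviewer_2)) / 10:.2f}")
print(f"Cohen's kappa:      {cohens_kappa(reviewer_1, reviewer_2):.2f}")
```

A kappa well below the observed agreement indicates that much of the apparent consistency is attributable to chance; a program could set a minimum kappa as one empirical quality assurance standard, triggering reviewer calibration exercises when agreement falls below it.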

FINDINGS AND RECOMMENDATIONS

General Finding: The handbook reflects the significant improvements EPA has made in its IRIS assessment process. For instance, the process includes sophisticated, state-of-the-art methods related to using systematic evidence maps for scoping and systematic review methods for hazard identification. Moreover, the IRIS Program is clearly helping to advance the science of systematic review methods as applied to hazard identification. EPA staff are actively involved in the ongoing development of methods, such as study evaluation and use of mechanistic data. The IRIS assessment methods can serve as a model for other parts of EPA that are implementing systematic review methods.

Findings and Tier 1 Recommendations

Finding: The objectives of the handbook are to improve transparency about the IRIS assessment process and provide operationalizable instruction for those conducting IRIS assessments. Achieving both of these objectives in a single document may be difficult.

Recommendation 2.1: EPA should engage a professional editor with specific expertise in developing handbook-like materials to assist with the handbook revision. The editor should enhance the transparency and ease of use of the handbook by focusing the material on the concepts, definitions, and instructions needed to complete the main steps in the IRIS assessment process; eliminating unnecessary repetition among the chapters; and ensuring that terminology is used consistently across chapters. [Tier 1]

Recommendation 2.2: EPA should add a glossary to the handbook for defining key terms. Single definitions should be provided for concepts, and the definitions should be applied consistently throughout the handbook. [Tier 1]


Finding: The handbook uses unconventional terminology, with a tendency to use terms with pre-established definitions in the area of evidence synthesis to designate various IRIS-specific processes and products. Examples include the use of the term “protocol” to describe the general plan for an IRIS assessment, rather than a detailed plan for conducting a specific systematic review; the terms “preliminary literature survey,” “systematic evidence map,” and “literature inventory” to refer to an evidence database, when all three terms have different established meanings; and “scoping review” to refer to a complex process that includes engaging stakeholders in the identification of programmatic needs, rather than a survey of a body of evidence conducted to inform a research planning process.

The handbook suggests at some points that systematic review methods are being used when they are not. The IRIS program applies systematic review methods appropriately to parts of the process (e.g., evidence evaluation and synthesis for hazard identification), but the handbook does not clearly differentiate when systematic methods (such as a systematic literature search or another method applied systematically) rather than systematic review methods are being used throughout the IRIS process.

Recommendation 2.3: The handbook should use terminology in a manner that is consistent with existing, accepted definitions in related fields. When alternative definitions are used for the IRIS assessment process, the handbook should provide explicit justification. [Tier 1]

Recommendation 2.4: When systematic review methods are being used for parts of the IRIS assessment process, this should be stated and the relevant methodological literature should be referenced. [Tier 1]

Finding: The handbook does not adequately describe the overall process or flow for developing IRIS assessments and the iterative nature of some of the steps in the process.

Recommendation 2.5: EPA should create new graphical and tabular depictions of the IRIS assessment process as it is currently practiced, and should not feel constrained to mirror the process depicted by the 2014 National Academies report Review of EPA’s Integrated Risk Information System (IRIS) Process or any other report (including this one). A professional editor could be of assistance here. [Tier 1]

Finding: The handbook includes very detailed information on some methods that may undergo rapid development (e.g., data extraction). At the same time, the handbook lacks specific examples from relevant IRIS assessments and examples of the software used by EPA, such as the Health Assessment Workspace Collaborative (HAWC), that are needed to fully understand the IRIS program’s methods.

Recommendation 2.6: The handbook should incorporate more examples from relevant IRIS assessments and examples of software used by EPA, such as HAWC. EPA could provide them as supplementary material or links to other content. [Tier 1]


Finding: Publication bias and funding bias are mentioned sparingly, or not at all, in the handbook, although empirical evidence from a variety of fields shows that funding bias and publication bias can alter effect estimates when evidence is synthesized. Funding bias might also have an effect on the confidence of study ratings from evidence evaluation. Tools and methods to detect and assess these biases are available.

Recommendation 2.7: The handbook should describe how to detect and assess the effect of funding bias on the confidence of study ratings from evidence evaluation or effect estimates from synthesis. [Tier 1]

Recommendation 2.8: The handbook should describe how to detect and assess the effect of publication bias on effect estimates from synthesis. [Tier 1]
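One widely used method of the kind this recommendation refers to is Egger’s regression test for funnel-plot asymmetry, which regresses standardized effect estimates on their precisions; a nonzero intercept suggests that small studies report systematically different effects, a common signature of publication bias. The sketch below is purely illustrative, uses hypothetical study data, and is not a method prescribed by the handbook or this report.

```python
# Illustrative sketch of Egger's regression test for funnel-plot
# asymmetry, one common screen for publication bias. All data below
# are hypothetical; real assessments would extract effect estimates
# and standard errors from the synthesized evidence base.

def egger_test(effects, ses):
    """Regress standardized effects (effect/SE) on precision (1/SE)
    by ordinary least squares; return the intercept and its standard
    error. An intercept far from zero suggests asymmetry."""
    z = [e / s for e, s in zip(effects, ses)]   # standardized effects
    prec = [1.0 / s for s in ses]               # precisions
    n = len(z)
    mean_x = sum(prec) / n
    mean_y = sum(z) / n
    sxx = sum((x - mean_x) ** 2 for x in prec)
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(prec, z))
    slope = sxy / sxx
    intercept = mean_y - slope * mean_x
    # Approximate standard error of the intercept from the residuals
    resid = [y - (intercept + slope * x) for x, y in zip(prec, z)]
    s2 = sum(r ** 2 for r in resid) / (n - 2)
    se_int = (s2 * (1.0 / n + mean_x ** 2 / sxx)) ** 0.5
    return intercept, se_int

# Hypothetical pattern: smaller studies (larger SE) report larger
# effects, the classic funnel-plot asymmetry.
effects = [0.90, 0.80, 0.60, 0.50, 0.45, 0.40]
ses     = [0.40, 0.35, 0.20, 0.15, 0.10, 0.08]
b0, se0 = egger_test(effects, ses)
print(f"Egger intercept = {b0:.2f} (SE {se0:.2f})")
```

A positive intercept here flags the asymmetry built into the hypothetical data; in practice such a screen would be paired with a funnel plot and sensitivity analyses rather than used alone.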

Findings and Tier 2 Recommendations

Finding: The roles of mechanistic and toxicokinetic (TK) information throughout the IRIS assessment process are not clearly described in the handbook. For instance, the main discussion of the use of mechanistic data is deferred until Chapter 10, so discussions of those types of data earlier in the handbook require call-forwards and lack adequate context to be clearly understandable. In addition, it is not clear when mechanistic data could be treated as a separate data stream for hazard identification, could be used to increase or decrease confidence in animal or human studies based on biological plausibility, or could be used to inform key science considerations.

Recommendation 2.9: The handbook should include introductory material clarifying the possible roles of mechanistic and TK information in the IRIS assessment process. As it is recognized that methods and approaches in this area are continuing to evolve and be refined, the handbook need not (and cannot) specify every possibility but rather should focus on the most common roles for such data in the assessment process (see Table 2-1 of this report). EPA should obtain a professional editor’s assistance (see Recommendation 2.1) in determining how best to organize the more detailed discussions of mechanistic and TK information throughout the relevant sections of the handbook. [Tier 2]

Finding: The handbook recognizes the importance of assessing potentially susceptible populations by encouraging their consideration throughout the process. There are multiple places in the handbook that describe how the evidence base for susceptible populations should be incorporated into IRIS assessments. However, the handbook treats this area of hazard identification as a special case evaluation that may not be required unless animal, human, or mechanistic evidence points to a particular susceptibility or life stage. Discussions that define what constitutes evidence of susceptibility and describe the types of data that may inform such susceptibility are not provided in one place in the handbook, but those topics are alluded to in various places throughout the document.


Recommendation 2.10: The handbook should include introductory material summarizing how susceptible populations are to be identified, how the relevant literature is to be sought and catalogued, and how this information is to be used to refine PECO (populations, exposures, comparators, and outcomes) statements and the assessment protocol. [Tier 2]

Finding: The handbook chapter on selection of studies for toxicity value determination does not follow directly from the earlier handbook chapters. For example, it is unclear how the results of the systematic reviews conducted for hazard identification are used to select studies for the dose-response assessment. As described, hazard identification and toxicity value determination appear to be disconnected processes. Thus, earlier portions of the handbook are difficult to evaluate without an adequately described connection to how studies will ultimately be selected for toxicity value determination.

Recommendation 2.11: The handbook should include introductory material describing the criteria for using the results of hazard identification and other considerations for the selection of studies for dose-response. [Tier 2]

Finding: The handbook lacks a section on the overall quality assurance of the IRIS assessment process.

Recommendation 2.12: The handbook should include a section on ensuring the overall quality of the IRIS assessment process that establishes a minimum set of standards for quality assurance and provides procedures for monitoring quality and training EPA staff and contractors, as needed, to meet the quality assurance standards. [Tier 2]

Finding: Given the current complexity of the handbook and EPA’s existing review process, it is difficult to see how the handbook could be updated in a timely manner. The handbook does not describe a process for its updating, which would include proposed timelines, how to identify major changes that would need to go through external peer review, and how to more quickly update details of evolving methods (e.g., study evaluation and software tools) that could be linked to the handbook without need for external review.

Recommendation 2.13: The handbook should include content describing the timeline and process for updating the handbook. [Tier 2]

Findings and Tier 3 Recommendation

Finding: Human epidemiology and animal toxicology studies are the main focus of the handbook. The handbook does not indicate the types of studies that might be considered for future IRIS assessments (e.g., high throughput, non-rodent vertebrate studies, and in silico). Consideration of chemical mixtures is a gap in the handbook that may be beyond the current scope of IRIS assessments. The committee urges EPA to plan for including additional types of studies in IRIS assessments.

Finding: Systematic reviews in environmental health are becoming more common, but the handbook does not describe how EPA will identify, evaluate, and incorporate systematic reviews from other sources.

Recommendation 2.14: The handbook should include discussions that recognize new types of research and describe how EPA is preparing for possible inclusion of additional study types into IRIS assessments. To enhance the efficiency of the IRIS assessment process, EPA should consider how it will identify, evaluate, and incorporate systematic reviews from other sources, as they become more available. [Tier 3]
