This chapter presents the committee’s overall conclusions about the approach used by the U.S. Department of Defense (DOD) to develop a proposed occupational exposure level (OEL) for trichloroethylene (TCE), and highlights responses to the objectives specified in the Statement of Task. The numbers that appear in brackets after the headings in this chapter correspond to the five areas that the committee was asked to comment on:
- Provide an analysis of the overall approach and suggestions for individual components of the report (i.e., literature review, evidence synthesis based on weight of evidence [WOE], point-of-departure derivation, use of physiologically based pharmacokinetic modeling, use of extrapolation tools [e.g., uncertainty factors], and assessment of the cancer exposure-response) that may lead to improvements in the accuracy of the proposed process.
- Determine whether the process for deriving an OEL for TCE, including the quantitative WOE approach used to determine the relevance of controlled laboratory studies and the overall approach of corroborating alternative lines of evidence, is scientifically sound.
- Determine whether the derived OEL value is supported by the available toxicity information and has followed the WOE approach outlined in the report, and provide a final summary opinion of the approach and the scientific support for the derivation of the OEL.
- Determine whether the development of a range of cancer risk levels was appropriately supported.
- Due to the controversial nature of the evidence for developmental defects, determine whether the DOD report considered this evidence in an unbiased manner that was consistent with its use of other toxicological evidence and used sound professional judgment in its evaluation of this evidence.
DOD derived candidate OELs for six noncancer end points: neurological, liver, kidney, immunological, reproductive, and developmental effects. DOD then selected the lowest candidate OEL, which was based on immunological effects, as the overall OEL of 0.9 parts per million (ppm). Selecting the most sensitive OEL was consistent with DOD’s process (see Figure 2-2 in Chapter 2).
DOD’s proposed OEL is more conservative than existing OELs established by the Occupational Safety and Health Administration (OSHA 2019), the National Institute for Occupational Safety and Health (NIOSH 2007), and the American Conference of Governmental Industrial Hygienists (ACGIH 2017).
TCE is also considered a human carcinogen (EPA 2011; IARC 2014) and DOD estimated the cancer risk posed by the proposed OEL. DOD’s cancer assessment focused on TCE-induced kidney cancer and non-Hodgkin’s lymphoma (NHL) (Sussan et al. 2019). Cancer risk levels were determined by DOD for the effects of kidney cancer alone, as well as for the combined effects of kidney cancer and NHL. In DOD’s draft report, the risk for kidney cancer was estimated as 1 in 1,000 for 45 years of occupational exposure at the proposed OEL of 0.9 ppm (Sussan et al. 2019). The committee raised several concerns about DOD’s cancer analysis because the data were not evaluated to the same extent as the noncancer data were (see Chapter 5).
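DOD’s draft reports a kidney cancer risk of 1 in 1,000 for 45 years of occupational exposure at 0.9 ppm. If linear low-dose extrapolation is assumed (a common default in cancer risk assessment, stated here as an assumption rather than as DOD’s actual method), concentrations corresponding to other risk levels in a range scale proportionally, as in this minimal sketch:

```python
# Illustrative arithmetic only: assumes low-dose linearity, a common default
# for cancer risk extrapolation. The reference values come from DOD's draft
# (risk of 1e-3 at 0.9 ppm for 45 years); DOD's actual modeling may differ.

def concentration_at_risk(target_risk, ref_risk=1e-3, ref_conc_ppm=0.9):
    """Linearly scale the reference risk-concentration pair to a target risk."""
    return ref_conc_ppm * (target_risk / ref_risk)

# Concentrations corresponding to a range of excess kidney cancer risk levels
for risk in (1e-3, 1e-4, 1e-5, 1e-6):
    print(f"risk {risk:.0e}: {concentration_at_risk(risk):.5f} ppm")
```

Under this assumption, a 1 in 10,000 risk level would correspond to roughly 0.09 ppm, one-tenth of the proposed OEL.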
The committee commends DOD for proactively developing an OEL to meet its needs. When compared with other occupational exposure guidelines, DOD’s proposed OEL for noncancer effects has the potential to be more health protective than existing guidelines for DOD workers.
Chapters 3 through 5 provide the committee’s detailed discussion of the approach used by DOD to derive the OEL for TCE. The committee identified several areas in which the process and its specific application to TCE could be strengthened. The most notable concerns are highlighted in this chapter.
Although DOD expressed a goal of using systematic review principles to derive an OEL,1 the committee found multiple critical deficiencies in the approach described in the draft report (see Chapter 4 and further elaboration below). DOD also referred to its approach as a “systematic literature search” and other terms, which made it unclear what type of review was performed. Systematic review and systematic literature search are not synonymous terms. Systematic review has become a term of art within the scientific community, and standards for conducting a systematic review for clinical applications have been defined by the Institute of Medicine and others (IOM 2011). Many of the IOM standards have been adapted for addressing environmental health questions (Woodruff and Sutton 2014; NTP 2019) and could readily be applied to TCE to meet the standard of a systematic review.
1 For example, the report states: “This report describes a method designed to identify the most robust and relevant scientific information and is consistent with systematic review principles” (Sussan et al. 2019, p. 4).
Evaluation of DOD’s systematic review using AMSTAR 2 (Shea et al. 2017) and the accompanying online checklist2 reveals that DOD produced a critically low-quality systematic review (see Table 6-1). This rating is driven by several factors, including the lack of a systematic review protocol, inadequate methods to assess risk of bias, and incomplete description of individual studies. Systematic reviews that lack these three items but include all other items in the AMSTAR 2 checklist would be rated no better than low quality, which illustrates the importance of these elements. Some of these deficiencies could be easily corrected. For example, several tools have been developed for the assessment of risk of bias in toxicology studies (e.g., NTP 2019). Evidence tables could also provide a useful framework for presenting key study design features and findings in a consistent manner (NRC 2011, 2014).
TABLE 6-1 Committee’s AMSTAR 2 Evaluation of DOD’s Systematic Review

| AMSTAR 2 Item | Rating |
| --- | --- |
| 1. Did the research questions and inclusion criteria for the review include the components of the Population, Exposure, Comparator, and Outcome statement? | Yes |
| 2. Did the report of the review contain an explicit statement that the review methods were established prior to the conduct of the review and did the report justify any significant deviations from the protocol? | No |
| 3. Did the review authors explain their selection of the study designs for inclusion in the review? | Yes |
| 4. Did the review authors use a comprehensive literature search strategy? | Partial Yes |
| 5. Did the review authors perform study selection in duplicate? | Yes |
| 6. Did the review authors perform data extraction in duplicate? | No |
| 7. Did the review authors provide a list of excluded studies and justify the exclusions? | No |
| 8. Did the review authors describe the included studies in adequate detail? | Partial Yes |
| 9. Did the review authors use a satisfactory technique for assessing the risk of bias (RoB) in individual studies that were included in the review? | No |
| 10. Did the review authors report on the sources of funding for the studies included in the review? | No |
| 11. If meta-analysis was performed, did the review authors use appropriate methods for statistical combination of results? | Not applicable |
| 12. If meta-analysis was performed, did the review authors assess the potential impact of RoB in individual studies on the results of the meta-analysis or other evidence synthesis? | Not applicable |
| 13. Did the review authors account for RoB in individual studies when interpreting/discussing the results of the review? | No |
| 14. Did the review authors provide a satisfactory explanation for, and discussion of, any heterogeneity observed in the results of the review? | No |
| 15. If they performed quantitative synthesis, did the review authors carry out an adequate investigation of publication bias (small study bias) and discuss its likely impact on the results of the review? | Not applicable |
| 16. Did the review authors report any potential sources of conflict of interest, including any funding they received for conducting the review? | No |
NOTE: The questions were slightly modified to align with the goals of DOD’s TCE systematic review. Two committee members independently evaluated DOD’s systematic review and reached consensus on the answers. See https://amstar.ca/Amstar_Checklist.php.
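The published AMSTAR 2 guidance (Shea et al. 2017) designates items 2, 4, 7, 9, 11, 13, and 15 as critical domains and derives the overall confidence rating from the number of critical flaws and non-critical weaknesses. Applied to the committee’s answers in Table 6-1 (treating “Partial Yes” and “Not applicable” as adequate, per the guidance), the rule can be sketched as:

```python
# Sketch of the AMSTAR 2 overall confidence rule (Shea et al. 2017) applied
# to the answers in Table 6-1. Items 2, 4, 7, 9, 11, 13, and 15 are the
# "critical domains"; "Partial Yes" and "Not applicable" are treated as
# adequate, following the published guidance.

CRITICAL = {2, 4, 7, 9, 11, 13, 15}

answers = {  # item number -> committee's rating from Table 6-1
    1: "Yes", 2: "No", 3: "Yes", 4: "Partial Yes", 5: "Yes", 6: "No",
    7: "No", 8: "Partial Yes", 9: "No", 10: "No", 11: "Not applicable",
    12: "Not applicable", 13: "No", 14: "No", 15: "Not applicable", 16: "No",
}

def overall_confidence(answers):
    critical_flaws = sum(1 for i, a in answers.items()
                         if i in CRITICAL and a == "No")
    noncritical = sum(1 for i, a in answers.items()
                      if i not in CRITICAL and a == "No")
    if critical_flaws > 1:
        return "Critically low"
    if critical_flaws == 1:
        return "Low"
    return "High" if noncritical <= 1 else "Moderate"

print(overall_confidence(answers))  # -> Critically low
```

With four critical flaws (items 2, 7, 9, and 13), the rule yields the “critically low” rating reported above.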
The committee agrees with DOD’s use of the U.S. Environmental Protection Agency (EPA) Integrated Risk Information System (IRIS) assessment of TCE (EPA 2011) as a starting point for its assessment but has concerns regarding its application. DOD’s intent was to increase efficiency by limiting the scope of DOD’s systematic review to the more recent studies. Using EPA’s IRIS assessment to support scoping and problem formulation is an appropriate use of this document (e.g., identify the target organ systems). However, DOD inadequately describes how the literature and analyses provided in the IRIS assessment were used to select older studies for consideration in deriving an OEL. This lack of documentation is especially troubling because all of the key studies used by DOD to derive candidate OELs were from EPA’s review.
Updating EPA’s narrative review of TCE with a systematic review does not produce an updated systematic review.3 The use of EPA’s TCE assessment for this purpose4 is therefore questionable because the assessment predates EPA’s adoption of systematic review methods in the IRIS program and shares weaknesses found in IRIS assessments of other chemicals produced around the same time. For example, weaknesses identified in the IRIS assessment of formaldehyde included incomplete documentation of assessment methods, a lack of clear inclusion and exclusion criteria, insufficient use of evidence tables, and a lack of uniform approaches to evaluating the strengths and weaknesses of critical studies (NRC 2011).
3 This caveat would apply to all narrative reviews that do not meet accepted standards for a systematic review.
4 The committee acknowledges uncertainty as to whether this was indeed DOD’s goal. Additional clarification of the goal of updating the IRIS assessment would have helped resolve this uncertainty.
The committee does not suggest that a systematic review is required for OEL development. Many occupational toxicity values, including some developed by the National Academies of Sciences, Engineering, and Medicine, have relied on narrative reviews of the literature. Standard operating procedures were established by the National Academies for determining and documenting how exposure guidelines were developed (NRC 1992, 2000, 2001; NASEM 2016). These procedures included guidance on the types of evidence to consider, selection of studies, determinations about points of departure, approaches to extrapolate data, exposure adjustments, and consideration of uncertainties. These methods helped to ensure consistency in how toxicity values were determined and, when appropriate, could be adapted for the derivation of an OEL.
DOD’s approach to developing an OEL recognizes that there are strengths in the systematic review process absent in some narrative reviews (Ferrari 2015). These strengths include scoping, problem formulation, development of a Population, Exposure, Comparator, and Outcome (PECO) statement, use of explicit inclusion and exclusion criteria, and assessment of study quality, among others. These aspects are strengths because they increase the transparency and reproducibility of a review. DOD could incorporate some of these strengths into a narrative review used to develop an OEL. Developing formal procedures would further increase the scientific rigor and transparency of the process. Important elements of a formal procedure include:
- Use of pre-defined eligibility criteria;
- Documentation of literature search methods;
- Evaluation of a comprehensive range of literature;
- Critical evaluation of all studies; and
- Summarization of key studies.
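The first element above, pre-defined eligibility criteria, is often operationalized as a PECO statement applied uniformly during study screening. A minimal sketch follows; the field values are hypothetical illustrations, not DOD’s actual criteria:

```python
# Hypothetical sketch of pre-defined eligibility criteria expressed as a
# PECO statement, with a simple screening check. The specific values shown
# are illustrative, not DOD's actual inclusion criteria.

from dataclasses import dataclass

@dataclass(frozen=True)
class PECO:
    population: tuple   # eligible study populations
    exposure: str       # agent of interest
    comparator: str     # required comparison group
    outcomes: tuple     # target end points

peco = PECO(
    population=("human", "rat", "mouse"),
    exposure="trichloroethylene",
    comparator="unexposed or lower-exposed group",
    outcomes=("neurological", "liver", "kidney", "immunological",
              "reproductive", "developmental"),
)

def screen(study, peco):
    """Apply the pre-defined inclusion criteria to one candidate study."""
    return (study["species"] in peco.population
            and study["agent"] == peco.exposure
            and any(o in peco.outcomes for o in study["end_points"]))

candidate = {"species": "rat", "agent": "trichloroethylene",
             "end_points": ("immunological",)}
print(screen(candidate, peco))  # -> True
```

Writing the criteria down in this explicit, machine-checkable form (or its prose equivalent in a protocol) is what makes screening decisions transparent and reproducible.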
DOD could also consider using a well-conducted narrative review that includes certain elements found in systematic reviews, updating an existing systematic review, or performing a new systematic review for developing an OEL. Each of these choices would yield a more comprehensive and rigorous approach than many other methods for developing OELs. This decision could be influenced by a number of factors, including resource availability, time constraints, and the decision context that the assessment will support.
DOD Study Applicability Tool
As discussed in Chapter 4, DOD created a study applicability tool to evaluate individual animal studies (Sussan et al. 2019). The tool scores each study on a 100-point scale, summing scores from nine domains grouped into four areas: study quality (40%), strength of results (25%), relevance/applicability (25%), and consistency of data (10%). The tool was applied to each animal study considered during DOD’s dose-response assessment, covering 40 noncancer inhalation and 16 oral toxicity studies. Table 6 in DOD’s assessment (Sussan et al. 2019, p. 49) provides cutoffs for the four categories5 used for the study applicability scores. DOD did not establish a threshold for the elimination of individual studies.
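The mechanics of the score can be sketched as follows. The domain-group weights and the category cutoffs come from the draft report (Sussan et al. 2019, Table 6); the subscores in the example are hypothetical. The committee’s concerns with this kind of quantitative summary scoring are discussed below; the sketch only documents how the draft’s scores appear to be assembled.

```python
# Sketch of DOD's 100-point study applicability score: a weighted sum of
# four domain-group subscores, mapped to the category cutoffs given in the
# draft report. Subscores below are hypothetical placeholders.

WEIGHTS = {"study_quality": 0.40, "strength_of_results": 0.25,
           "relevance": 0.25, "consistency": 0.10}

def applicability_score(subscores):
    """Combine 0-100 subscores for each domain group into a weighted total."""
    return sum(WEIGHTS[k] * subscores[k] for k in WEIGHTS)

def category(score):
    if score >= 80: return "high applicability"
    if score >= 60: return "moderate applicability"
    if score >= 50: return "low applicability"
    return "unreliable"

example = {"study_quality": 70, "strength_of_results": 80,
           "relevance": 60, "consistency": 50}
s = round(applicability_score(example), 1)
print(s, category(s))  # -> 68.0 moderate applicability
```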
The committee identified several concerns related to the use of this tool. First, the tool was not applied to animal studies of cancer end points, and no justification was given for treating cancer and noncancer data differently. Second, the tool mixes domains that evaluate individual studies with domains that evaluate a body of evidence. In addition, assigning numerical scores to individual studies is not a best practice in systematic review and has been shown to falsely imply a relationship between quality scores and effect estimates (Jüni et al. 1999; Greenland and O’Rourke 2001; Herbison et al. 2006; Higgins and Green 2011). Cochrane specifically recommends against the use of scales that produce a quantitative summary score (Higgins and Green 2011). Finally, no assessment tool was developed or applied for epidemiological studies, and DOD provided little documentation on how the quality and relevance of epidemiologic studies were evaluated.
The committee recommends that DOD abandon the use of this study applicability tool in favor of established tools to assess risk of bias of animal and human studies (e.g., NTP 2019). If DOD chooses to continue to develop the tool, it should separate assessments of individual study quality from evaluations of the body of evidence. Tools should be applied to epidemiological and toxicological studies to ensure that the different lines of evidence are evaluated with the same degree of rigor. Best practices should be followed to avoid the use of scales resulting in a quantitative summary score.
DOD’s Cancer Evaluation
The committee has concerns with DOD’s estimation of cancer risks in the draft report because cancer studies were not evaluated to the same extent as noncancer studies. Specifically, the epidemiological studies that form the basis of DOD’s cancer assessment were not evaluated with tools to assess study quality, relevance, or risk of bias. Because DOD’s cancer risk evaluation is based on dose-response data from a study that did not undergo a risk of bias evaluation, and because no comparison was made with cancer slopes from other studies, it is unclear whether the most appropriate study was used to derive the cancer slope factors. The committee recommends that DOD include cancer end points in all steps of its hazard identification and dose-response assessment (see Chapters 4 and 5). Additional (or different) cancer end points might also be identified if a full hazard assessment of cancer were performed. Addressing dose-response of both
5 Categories and their respective suggested relative value: high applicability (80-100); moderate applicability (60-79); low applicability (50-59); and unreliable (<50).
noncancer and cancer end points within a unified framework is consistent with recommendations of past National Academies committees (e.g., NRC 2009).
Use of Human Studies in the Assessment
DOD developed a PECO statement that included human studies, and the associated literature search yielded 58 human studies published since 2010 that met the inclusion criteria (Sussan et al. 2019, Figure 1).6 Figure 2-2 in Chapter 2 provides a graphical representation of DOD’s process for deriving an OEL for TCE. This figure shows that human evidence can inform identification of toxic end points of concern. DOD’s assessment states:
Due to the generally limited quantitative information on exposure assessment from human epidemiologic studies as well as the known and unknown co-exposures typically inherent in human exposure studies, epidemiologic studies were considered, as mentioned below, as alternative lines of evidence in the selection of the PODs. (Sussan et al. 2019, p. 7)
Dismissing the human evidence in this way is inconsistent with recommended best practices (NRC 2014; NASEM 2017). A previous National Academies committee has demonstrated how a synthesis and determination of certainty of evidence can be conducted for human evidence (NASEM 2017).
The committee notes that DOD’s process does not include a step to assess the study quality or risk of bias of human studies, as was done for animal studies. The committee recommends that DOD assess the risk of bias of human studies and include this evidence stream in the hazard assessments. A National Academies report (NASEM 2017) illustrates the use of one tool for the evaluation of human studies. DOD assessments could include separate synthesis and determination of certainty of evidence for animal, human, and, when appropriate, mechanistic evidence. DOD’s approach for developing an OEL should then also include methods for integrating the evidence streams to reach a final causal determination of hazard. These measures will strengthen DOD’s assessment by allowing rigorous evaluation and integration of the robust information on TCE.
Use of PBPK Models and Bayesian Approaches in the Dose-Response Assessment
The committee commends DOD for the use of a physiologically based pharmacokinetic (PBPK) model and Bayesian approaches in its method for deriving an OEL for TCE. The use of a well-characterized PBPK model to support OEL development was a strength of DOD’s process. The PBPK model was used to
6 An unknown number of human studies may also have been identified as relevant from the IRIS TCE assessment; however, details regarding which studies may have been considered are lacking.
perform route-to-route extrapolations (e.g., oral to inhalation) that increased the confidence in points of departure derived from oral studies. The PBPK model was also used to adjust for the different inhalation exposures used in the animal studies. DOD should consider using the PBPK model to evaluate the oral data available for all six target end points, rather than just immunological, reproductive, and developmental effects, to take advantage of the robust database on TCE.
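To make the idea of route-to-route extrapolation concrete, the following is a deliberately simplified, back-of-the-envelope version that equates absorbed daily dose across routes. DOD’s actual extrapolation used a PBPK model with internal dose metrics, which this sketch does not capture; the body weight, shift ventilation rate, and absorption fractions below are hypothetical placeholders.

```python
# Simplified route-to-route illustration: converting an oral point of
# departure (mg/kg-day) to an equivalent 8-hour air concentration by
# equating absorbed dose. NOT DOD's PBPK-based method; all exposure
# parameters here are hypothetical defaults for illustration.

MW_TCE = 131.39                  # g/mol, molecular weight of TCE
MGM3_PER_PPM = MW_TCE / 24.45    # mg/m3 per ppm at 25 C and 1 atm

def oral_pod_to_air_ppm(pod_mg_per_kg_day, body_weight_kg=70,
                        shift_ventilation_m3=10, f_inh=0.5, f_oral=1.0):
    """Air concentration (ppm) giving the same absorbed daily dose."""
    absorbed = pod_mg_per_kg_day * body_weight_kg * f_oral   # mg/day
    conc_mgm3 = absorbed / (shift_ventilation_m3 * f_inh)    # mg/m3
    return conc_mgm3 / MGM3_PER_PPM

# Equivalent shift-average air concentration for a 0.5 mg/kg-day oral POD
print(round(oral_pod_to_air_ppm(0.5), 2))
```

A PBPK model improves on this arithmetic by simulating absorption, distribution, and metabolism over time, so that the extrapolation can be anchored to a biologically relevant internal dose rather than total absorbed dose.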
The committee also endorses DOD’s use of Bayesian approaches to guide the selection of uncertainty factors. The inclusion of Bayesian approaches is in accordance with recommendations made by the NRC (2014). Bayesian approaches allow for systematic integration of uncertainties arising from multiple sources in the derivation of a toxicity value. DOD also included analyses performed using default uncertainty factors, allowing for direct comparisons between the two approaches.
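The contrast between default and probabilistic uncertainty factors can be illustrated with a small Monte Carlo sketch: each uncertainty factor is represented as a lognormal distribution, the factors are combined by sampling, and a protective lower percentile of the resulting distribution yields a candidate value. The distribution parameters below are illustrative assumptions, not DOD’s.

```python
# Sketch of a probabilistic alternative to multiplying default uncertainty
# factors (in the spirit of NRC 2014). Each factor is modeled as lognormal,
# combined by Monte Carlo, and a lower percentile of POD / combined factor
# gives a candidate value. All parameters are illustrative assumptions.

import math
import random

random.seed(0)

def lognorm(median, p95_ratio):
    """Sample a lognormal factor with given median; p95_ratio = P95/median."""
    sigma = math.log(p95_ratio) / 1.645
    return math.exp(math.log(median) + sigma * random.gauss(0, 1))

def candidate_oel(pod_ppm, n=100_000):
    samples = []
    for _ in range(n):
        # e.g., interspecies and intraspecies factors, each with median 3
        uf = lognorm(3, 2.0) * lognorm(3, 2.0)
        samples.append(pod_ppm / uf)
    samples.sort()
    return samples[int(0.05 * n)]  # 5th percentile: protective candidate

# Lower (more protective) than the deterministic 10 / (3 * 3) of about 1.1
print(round(candidate_oel(10.0), 2))
```

The payoff of the probabilistic approach is that it quantifies how much of the final value is driven by each source of uncertainty, which the fixed default factors cannot do.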
Congenital Heart Defects
DOD tasked the committee with assessing whether its evaluation of the evidence for developmental defects was unbiased and consistent with its proposed approach. The relationship between TCE and congenital heart defects (CHDs) has been a source of controversy, so the committee focused on this end point when addressing this objective. A focal point of the debate is a study conducted by Johnson et al. (2003), which evaluated pregnant rats and fetuses following gestational exposure to TCE via drinking water. The study reported a variety of non-dose-related CHDs in rat fetuses at drinking-water concentrations as low as 250 parts per billion. Had DOD selected this study for derivation of an OEL, it would have led to an appreciably lower point of departure and OEL for developmental effects. Appendix C in the draft DOD report is devoted to assessing this study and the rationale for excluding it, such as deviations from accepted scientific methods and lack of corroboration with other inhalation developmental toxicity studies. The basis for singling out the Johnson study for exclusion appears arbitrary and inconsistent with the process used to evaluate other studies. Most notably, the Johnson et al. (2003) study did not receive an applicability score, suggesting that it was not evaluated using DOD’s study applicability tool but was excluded from consideration earlier in the process. Other studies assessed by DOD for deriving an OEL also had one or more methodological concerns (see Appendix D in Sussan et al. 2019). In addition, corroborating data from other studies evaluating similar end points were not always available for the key studies used to derive OELs.
To reduce bias, DOD should identify severe experimental methodologic shortcomings beforehand (preferably in a protocol) that would preclude the inclusion of studies in the hazard identification or the dose-response analysis. Previous National Academies committees have termed these shortcomings “fatal flaws” (NRC 2014). Examples of such exclusion criteria include instability of the test compound, inappropriate animal models, inadequate or no controls (or comparison group), or invalid measures of exposure or outcome (NRC 2014). An alternative approach would be to include all relevant studies that meet the predetermined
eligibility requirements in the hazard identification and then have a subsequent step where studies with high risk of bias are excluded from dose-response assessment, according to a pre-defined protocol. Based on documentation provided in Appendix C, neither of these approaches was used by DOD.
The committee concurs that the Johnson et al. (2003) study has multiple study design flaws (e.g., lack of concurrent controls, groups evaluated over a 6-year period, unequal group sizes), poor documentation (portions of study conduct cannot be reconstructed), and reporting errors (republication of data from a previous study for comparison with new data without acknowledgment or discussion) that suggest the study would be at high risk of bias. Similar concerns have been raised by Makris et al. (2016) and Wikoff et al. (2018). In the absence of fatal flaws identified beforehand that could be used to exclude studies, the committee suggests that the Johnson et al. (2003) study, along with all other studies, be assessed for risk of bias using an appropriate tool. An impartial risk of bias assessment would appropriately categorize the study’s suitability and its contribution to the overall evidence integration and generation of a point of departure. This approach was used by both Wikoff et al. (2018) and Makris et al. (2016), who assessed the Johnson study for risk of bias and then considered the appropriateness of the study for inclusion in subsequent steps.
Mechanistic evidence may also shed light on the relationship between CHDs in offspring and oral and inhalation gestational exposure to TCE. Mechanistic studies (e.g., in vitro or avian in ovo) were not considered in the evaluation of TCE in relation to the pathogenesis of CHDs. The committee recommends consideration of these studies in DOD’s hazard identification or evidence synthesis for this outcome. The failure to consider mechanistic (e.g., in vitro) data for CHDs illustrates a broader issue: mechanistic data were rarely and inconsistently used by DOD despite being included in its process (see Figure 2-2 in Chapter 2).
The results from DOD’s review of human evidence are similar to assessments conducted previously (Bukowski 2014; Makris et al. 2016; Wikoff et al. 2018); no causal relationship has been identified between TCE exposure during human pregnancy and CHDs. However, DOD did not handle the human studies consistently. For instance, three epidemiological studies conducted by Bove et al. (1995; see Bove 1996), Lagakos et al. (1986), and Ruckart et al. (2013) were included by DOD in the assessment of other outcomes (such as cancers of the immune system and other developmental abnormalities) but were not included in the CHD assessment, despite these studies having outcomes relevant to this end point. Furthermore, DOD did not assess the human studies for risk of bias. While DOD’s overall conclusion about the human studies related to TCE and CHDs might be appropriate, the assessment would be improved if all relevant human studies were assessed for risk of bias using an appropriate tool.
The committee’s evaluation of the presented CHD data also revealed some additional inconsistencies in how DOD’s systematic review was performed. A literature search was performed on November 20, 2017, and covered evidence published between January 2010 and November 2017 (Sussan et al. 2019). Appendix C nevertheless references a rat study sponsored by the Halogenated Solvents Industry Alliance, Inc., and performed by Charles River Laboratories (Coder 2018) that attempted to replicate the Johnson et al. (2003) study. Including this document is inconsistent with the search terms and dates used to query the literature, and an explanation is needed for why this study was considered.
The committee found both strengths and weaknesses in DOD’s approach to developing an OEL for TCE. If DOD implements the recommendations of this report, it will strengthen the transparency of its process and improve confidence in the final OEL value. The committee recognizes that implementing some of the recommendations will dramatically change the process used by DOD, which will take time and may require additional resources. Thus, DOD could consider using the proposed OEL as an interim value while these improvements occur. In the short term, DOD could focus efforts on the recurring concerns that touched on nearly all steps in DOD’s approach, namely a lack of transparency that arises from incomplete description of the methods used and inconsistent application of the methods across different data streams (e.g., animal versus human studies, oral versus inhalation studies, noncancer versus cancer effects).
References
Arito, H., M. Takahashi, and T. Ishikawa. 1994. Effect of subchronic inhalation exposure to low-level trichloroethylene on heart rate and wakefulness-sleep in freely moving rats. Sangyo Igaku. 36(1):1-8.
Bove, F.J. 1996. Public drinking water contamination and birthweight, prematurity, fetal deaths, and birth defects. Toxicol. Ind. Health 12(2):255-266.
Bove, F.J., M.C. Fulcomer, J.B. Klotz, J. Esmart, E.M. Dufficy, and J.E. Savrin. 1995. Public drinking water contamination and birth outcomes. Am. J. Epidemiol. 141(9): 850-862.
Bukowski, J. 2014. Critical review of the epidemiologic literature regarding the association between congenital heart defects and exposure to trichloroethylene. Crit. Rev. Toxicol. 44(7):581-589.
Coder, P.S. 2018. An Oral (Drinking Water) Study of the Effects of Trichloroethylene (TCE) on Fetal Heart Development in Sprague Dawley Rats. Charles River Laboratories Ashland, LLC [as cited in Sussan et al. 2019].
Ferrari, R. 2015. Writing narrative style literature reviews. Medical Writing 24:230-234. doi: 10.1179/2047480615Z.000000000329.
Greenland, S., and K. O’Rourke. 2001. On the bias produced by quality scores in meta-analysis, and a hierarchical view of proposed solutions. Biostatistics 2(4):463-471.
Herbison, P., J. Hay-Smith, and W.J. Gillespie. 2006. Adjustment of meta-analyses on the basis of quality scores should be abandoned. J. Clin. Epidemiol. 59(12):1249-1256.
Higgins, J.P.T., and S. Green (eds). 2011. Cochrane Handbook for Systematic Reviews of Interventions Version 5.1.0 [updated March 2011]. The Cochrane Collaboration [online]. Available: https://training.cochrane.org/handbook [accessed July 3, 2019].
Johnson, P.D., S.J. Goldberg, M.Z. Mays, and B.V. Dawson. 2003. Threshold of trichloroethylene contamination in maternal drinking waters affecting fetal heart development in the rat. Environ. Health Perspect. 111(3):289-292.
Jüni, P., A. Witschi, R. Bloch, and M. Egger. 1999. The hazards of scoring the quality of clinical trials for meta-analysis. JAMA. 282(11):1054-1060.
Kjellstrand, P., B. Holmquist, N. Mandahl, and M. Bjerkemo. 1983. Effects of continuous trichloroethylene inhalation on different strains of mice. Acta Pharmacol. Toxicol. (Copenh). 53(5):369-374.
Lagakos, S.W., B.J. Wessen, and M. Zelen. 1986. An analysis of contaminated well water and health effects in Woburn, Massachusetts. J. Am. Stat. Assoc. 81(395):583-596.
Makris, S.L., C. Siegel Scott, J. Fox, T.B. Knudsen, A.K. Hotchkiss, X. Arzuaga, S.Y. Euling, C.M. Powers, J. Jinot, K.A. Hogan, B.D. Abbott, E.S. Hunter, III, and M.G. Narotsky. 2016. A systematic evaluation of the potential effects of trichloroethylene exposure on cardiac development. Reprod. Toxicol. 65:321-358.
NASEM (National Academies of Sciences, Engineering, and Medicine). 2016. Refinements to the Methods for Developing Spacecraft Exposure Guidelines. Washington, DC: The National Academies Press.
NASEM. 2017. Application of Systematic Review Methods in an Overall Strategy for Evaluating Low-Dose Toxicity from Endocrine Active Chemicals. Washington, DC: The National Academies Press.
NASEM. 2018. Progress Toward Transforming the Integrated Risk Information System (IRIS) Program: A 2018 Evaluation. Washington, DC: The National Academies Press.
NRC (National Research Council). 1992. Guidelines for Developing Spacecraft Maximum Allowable Concentrations for Space Station Contaminants. Washington, DC: National Academy Press.
NRC. 2000. Methods for Developing Spacecraft Water Exposure Guidelines. Washington, DC: National Academy Press.
NRC. 2001. Standing Operating Procedures for Developing Acute Exposure Guideline Levels for Hazardous Chemicals. Washington, DC: National Academy Press.
NRC. 2009. Science and Decisions: Advancing Risk Assessment. Washington, DC: The National Academies Press.
NRC. 2011. Review of the Environmental Protection Agency’s Draft IRIS Assessment of Formaldehyde. Washington, DC: The National Academies Press.
NRC. 2014. Review of EPA’s Integrated Risk Information System (IRIS) Process. Washington, DC: The National Academies Press.
NTP (National Toxicology Program). 2019. Handbook for Conducting a Literature-Based Health Assessment Using OHAT Approach for Systematic Review and Evidence Integration. Research Triangle Park, NC: Office of Health Assessment and Translation, Division of the National Toxicology Program, National Institute of Environmental Health Sciences. March 4, 2019 [online]. Available: https://ntp.niehs.nih.gov/pubhealth/hat/review/index-2.html [accessed July 3, 2019].
Ruckart, P.Z., F.J. Bove, and M. Maslia. 2013. Evaluation of exposure to contaminated drinking water and specific birth defects and childhood cancers at Marine Corps Base Camp Lejeune, North Carolina: A case-control study. Environ. Health 12:104.
Sanders, V., A.N. Tucker, K.L. White, B.M. Kauffmann, P. Hallett, R.A. Carchman, J.F. Borzelleca, and A.E. Munson. 1982. Humoral and cell-mediated immune status in mice exposed to trichloroethylene in drinking water. Tox. Appl. Pharm. 62:358-368.
Shea, B.J., B.C. Reeves, and G. Wells. 2017. AMSTAR 2: A critical appraisal tool for systematic reviews that include randomised or non-randomised studies of healthcare interventions, or both. BMJ 358:j4008.
Sussan, T.E., G.J. Leach, T.R. Covington, J.M. Gearhart, and M.S. Johnson. 2019. Trichloroethylene: Occupational Exposure Level for the Department of Defense. January 2019. U.S. Army Public Health Center, Aberdeen Proving Ground, MD.
Whiting, P., J. Savovic, J.B. Higgins, D.M. Caldwell, B.C. Reeves, B. Shea, P. Davies, J. Kleijnen, and R. Churchill. 2016. ROBIS: A new tool to assess risk of bias in systematic reviews was developed. J. Clin. Epidemiol. 69:225-234.
Wikoff, D., J.D. Urban, S. Harvey, and L.C. Haws. 2018. Role of risk of bias in systematic review for chemical risk assessment: A case study in understanding the relationship between congenital heart defects and exposures to trichloroethylene. Int. J. Toxicol. 37(2):125-143.
Woodruff, T.J., and P. Sutton. 2014. The Navigation Guide systematic review methodology: A rigorous and transparent method for translating environmental health science into better health outcomes. Environ. Health Perspect. 122(10):1007-1014.
Woolhiser, M.R., S.M. Krieger, J. Thomas, and J.A. Hotchkiss. 2006. Trichloroethylene (TCE): Immunotoxicity potential in CD rats following a 4-week vapor inhalation exposure. Midland, MI: Dow Chemical Company.
Wu, K.L., and T. Berger. 2007. Trichloroethylene metabolism in the rat ovary reduces oocyte fertilizability. Chem. Biol. Interact. 170(1):20-30.