Moving Beyond the Current State of Practice
The committee found that the Environmental Protection Agency (EPA) draft Integrated Risk Information System (IRIS) assessment could be improved in several ways. Such changes are not necessary for completing the current assessment but should be considered when tetrachloroethylene is re-evaluated. They include improved presentation and organization of information, greater transparency in documenting the procedures used to identify and select studies, and the use of evolving approaches to uncertainty analysis. Guidance in many of these areas is provided in the recent National Research Council report Science and Decisions (NRC 2009), which discusses advancing risk-assessment practices.
ORGANIZATION AND APPROACH
There is a vast amount of literature on tetrachloroethylene, and drafting of the IRIS assessment was hampered by the need to manage such a large volume of material. EPA should consider ways of reorganizing the document to streamline the presentation of data and analyses. The current organization requires that some information be duplicated in various places. Parts of the document also appear to be aimed at controversies over the interpretation of some aspects of the data. In several instances, the committee found that EPA had spent more time debunking others’ positions than bolstering its own arguments.
Although the draft provides a comprehensive review of the available data, it is not clear whether studies were evaluated case by case or whether a consistent set of criteria was applied. To ensure consistent and transparent analysis of the data, criteria for identifying, analyzing, and selecting studies should be established in advance to guide the assessment in focusing on the most relevant studies. Study design and methods are the most important factors in study selection. Other factors, such as exposure considerations and outcomes, will also play a role in selection.
Consideration of the quality of an assessment is predicated not only on its content but also on the process by which it was prepared. There should be a preassessment discussion of problem formulation and issue identification that indicates the extent of reliance on previous reviews, the scope of the planned effort, and the specific issues on which the assessment is likely to focus. (Guidance on the design of a risk assessment in its formative stages is provided by the NRC [2009].) That discussion would serve as a basis for soliciting external multidisciplinary input at an early stage on such critical matters as mode of action and evaluation of information on specific end points (including both toxicologic and epidemiologic data). It would include a priori delineation and weighting of criteria for evidence of hazard and options analysis for dose-response assessment and associated uncertainties. Attention to specifying evaluation criteria and the options considered is expected to contribute considerably to transparency in the separation of science judgment from science-policy choices.
To increase transparency, accountability, and defensibility and to improve the content and process of assessments, the committee offers the following recommendations regarding future assessments of tetrachloroethylene:
- The nature of, timeframe for, and extent of consideration of relevant data should be clearly framed and stated (for example, standard searching of identified electronic sources with criteria specified, a cutoff date past which no additional data were considered, and identification of current studies by reviewers).
- Exclusion criteria for particular studies should be clearly identified and explained (for example, unpublished or published after a particular date). In particular, there should be a description of the steps taken to ensure that studies identified after the original search were selected without bias from the totality of the available data.
- The methods used for qualitative characterization of uncertainties should be clearly identified, explained, and documented. Qualitative assessment of uncertainty involves (WHO 2008) evaluation of the level of uncertainty of each specified source according to a scoring method, identification and description of the major sources of uncertainty, appraisal of the knowledge base associated with each major source of uncertainty, identification of controversial sources of uncertainty, evaluation of the subjectivity of choices regarding controversial sources of information, and iteration until the output reflects the current state of knowledge.
- The specific nature of the process of preparing and reviewing the assessment—including identification of authors and reviewers, timeline and nature of peer input, consultation, and peer review—should be set forth.
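The qualitative uncertainty appraisal described in the recommendations above can be sketched programmatically. The following Python fragment is a hypothetical illustration only: the uncertainty sources, the scoring dimensions, and the three-level scale are all invented for the example and are not the WHO (2008) scheme itself.

```python
# Illustrative qualitative uncertainty-scoring sketch in the spirit of
# WHO (2008). All source names, dimensions, and levels are hypothetical.
from dataclasses import dataclass

LEVELS = {"low": 1, "medium": 2, "high": 3}

@dataclass
class UncertaintySource:
    name: str
    level: str           # overall level of uncertainty
    knowledge_base: str  # strength of supporting knowledge (low = weak)
    subjectivity: str    # degree of subjective choice involved

sources = [
    UncertaintySource("exposure reconstruction", "high", "low", "high"),
    UncertaintySource("interspecies extrapolation", "medium", "medium", "medium"),
    UncertaintySource("dose-response model form", "high", "medium", "high"),
]

def is_major(s: UncertaintySource) -> bool:
    # Flag sources with high uncertainty paired with a weak knowledge
    # base or strongly subjective choices, as candidates for explicit
    # documentation and iteration.
    return LEVELS[s.level] == 3 and (
        LEVELS[s.knowledge_base] == 1 or LEVELS[s.subjectivity] == 3
    )

major = [s.name for s in sources if is_major(s)]
print("Major sources needing explicit appraisal:", major)
```

A tabulation of this kind makes the subjective choices visible and gives reviewers a concrete object to critique, which is the point of the recommendation.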
UNCERTAINTY ASSESSMENT
Scientific Needs
Beginning as early as the 1980s (NRC 1983), expert scientific advisory groups have been recommending that risk analyses include a clear discussion of the uncertainties in risk estimation. The National Research Council (NRC 1994; 2009) stated the need to describe uncertainty and to capture variability in risk estimates. The Presidential/Congressional Commission on Risk Assessment and Risk Management (PCCRARM 1997) recommended against a requirement or need for a “bright-line,” or single-number, level of risk. Regulatory science often requires selection of a limit for a contaminant, but the limit always contains uncertainty as to how protective it is. Explicit quantification of uncertainty enables decisions regarding degree of protection to be made in the policy arena rather than buried among assumptions of a technical analysis. Risk characterization became EPA policy in 1995, and the principles of transparency, clarity, consistency, and reasonableness are explicated in the 2000 Risk Characterization Handbook (EPA 2000). Criteria for transparency, clarity, consistency, and reasonableness require analysts to describe and explain the uncertainties, variability, and known data gaps in a risk analysis and imply that decision-makers should explain how they affect resulting decision-making processes (EPA 1992, 1995, 2000).
On numerous occasions, the National Research Council has explicitly called for the use of probabilistic risk assessment (NRC 2006b, 2007). In 1983, it formalized the risk-assessment paradigm that includes dose-response analysis as a key component (NRC 1983). In 1989, it recommended that EPA consider the distribution of exposure and sensitivity of response in the population (NRC 1989). In 1991, it stated that when assessing human exposure to air pollutants, EPA should present model results with estimated uncertainties. In 1993, it recommended that EPA thoroughly discuss uncertainty and variability in the context of ecologic risk assessment (NRC 1993). In 1994, in a major review of risk-assessment methodology, it stated that “uncertainty analysis is the only way to combat the ‘false sense of certainty,’ which is caused by a refusal to acknowledge and (attempt to) quantify the uncertainty in risk predictions” (NRC 1994). And in 2002, it suggested that EPA’s estimation of health benefits was not wholly credible, because the agency failed to deal formally with uncertainties in its analyses (NRC 2002).
EPA’s Science Advisory Board (SAB) has made recommendations similar to those of the National Research Council. It urged EPA to characterize variability and uncertainty more fully and more systematically and to replace single-point uncertainty factors with a set of distributions by using probabilistic methods (EPA SAB 2007). EPA has developed numerous internal handbooks on conducting quantitative analysis of uncertainties in various contexts (e.g., EPA 1995, 1997, 1998, 2000, 2001). In 2009, it provided a detailed overview of the current use of probabilistic risk analysis in the agency (including 16 detailed case-study examples), an enumeration of the relevance of probabilistic risk analysis to decision-making, common challenges faced by decision-makers, an overview of probabilistic risk-analysis methodology, and recommendations on how probabilistic risk analysis can support regulatory decision-making. EPA’s National Exposure Research Laboratory has recently explored methodologic issues in dealing with uncertainty quantitatively when air-quality, exposure, and dose models are coupled (Ozkaynak et al. 2008).
There are numerous texts on analysis of uncertainty (e.g., Morgan and Henrion 1990; Cullen and Frey 1999; Vose 2008). The World Health Organization (WHO) has recently released guidance on qualitative and quantitative methods of uncertainty analysis in the context of exposure assessment (WHO 2008). Its guidelines have been used by EPA to support uncertainty assessments related to exposure to and health effects of criteria pollutants under the National Ambient Air Quality Standards. Hence, the framework is a general one. In particular, WHO proposed guiding principles that are adapted as follows:
- Uncertainty analysis should be an integral part of the assessment.
- The objective and level of detail of the uncertainty analysis should be based on a tiered approach and be consistent with the overall scope and purpose of the assessment.
- Sources of uncertainty and variability should be systematically identified.
- The presence or absence of moderate to strong dependence of one input on another should be discussed and appropriately accounted for.
- Data, expert judgment, or both should be used to inform the specification of uncertainties in scenarios, models, and inputs.
- Sensitivity analysis should be an integral component of the assessment.
- Uncertainty analyses should be fully and systematically documented in a transparent manner, including quantitative aspects pertaining to data, methods, inputs, models, and outputs; sensitivity analysis; qualitative aspects; and interpretation of results.
- The results of the assessment, including uncertainty, should be subject to an evaluation process that may include peer review, model comparison, quality assurance, or comparison with relevant data or independent observations.
- Where appropriate for an assessment objective, assessments should be iteratively refined to incorporate new data and methods to reduce uncertainty and to improve the characterization of variability.
- Communication of assessment uncertainties to stakeholders should reflect the needs of different audiences in a transparent and understandable manner.
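Several of the principles above (systematic specification of uncertain inputs, propagation, and sensitivity analysis) can be illustrated with a minimal Monte Carlo sketch. The exposure model and every distribution parameter below are invented for illustration and do not come from the assessment or from WHO (2008).

```python
# Minimal Monte Carlo sketch: uncertain inputs specified as distributions,
# propagated through a simple (hypothetical) inhalation-intake model, and
# screened with a one-at-a-time sensitivity analysis.
import random

random.seed(1)

def sample_inputs():
    return {
        "C": random.lognormvariate(0.0, 0.5),   # air concentration (mg/m3), illustrative
        "IR": random.lognormvariate(2.7, 0.2),  # inhalation rate (m3/day), illustrative
        "BW": random.gauss(70.0, 10.0),         # body weight (kg), illustrative
    }

def intake(x):
    return x["C"] * x["IR"] / x["BW"]  # mg/kg-day

draws = sorted(intake(sample_inputs()) for _ in range(10_000))
median = draws[len(draws) // 2]
p95 = draws[int(0.95 * len(draws))]

# One-at-a-time sensitivity: perturb each input by +10% at a nominal
# point and report the relative change in the intake estimate.
base = {"C": 1.0, "IR": 15.0, "BW": 70.0}
for name in base:
    perturbed = dict(base, **{name: base[name] * 1.1})
    change = intake(perturbed) / intake(base) - 1.0
    print(f"{name}: {change:+.1%} change in intake for +10% input")
print(f"median = {median:.3g}, 95th percentile = {p95:.3g} mg/kg-day")
```

Reporting a median and an upper percentile, rather than a single point, is exactly the shift from point estimates to distributional ranges that the guidance calls for.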
Decision-Making Context for Use of Uncertainty Assessment
EPA decision-makers face scientifically complex problems that entail uncertainty. A risk assessment includes exposure assessment, dose-response assessment, and risk characterization. Methods for quantifying uncertainty in exposure assessment are well accepted and widely applied (e.g., Cullen and Frey 1999). Risk can be characterized for a population (for example, the expected number of excess cancers) or an individual (for example, the incremental lifetime risk of excess cancer). The need for characterization of uncertainty in risk characterization is supported by numerous National Research Council studies (for example, NRC 1994). The decision context of risk assessment includes setting priorities for assessment activities, developing data that characterize and, where possible, reduce uncertainty, and managing risk. Decision-makers often want to know who is at risk, the magnitude of the risk, and the tradeoffs between risk-management alternatives. Examples of specific questions that decision-makers may ask include the following (Bloom et al. 1993; Krupnick et al. 2006):
- How representative is the estimate (for example, what is the variability around an estimate)?
- What are the major gaps in knowledge, and what major assumptions are used in the assessment? How reasonable are the assumptions?
- Is it likely that additional data collection and research would lead to a different decision? How long would it take to collect the information, how much would it cost, and would the resulting decision be substantially different?
Moving Beyond the Current State of Practice
EPA’s assessment of tetrachloroethylene follows a traditional approach for developing “cancer slope factors” and “hazard indexes” that takes uncertainties into account qualitatively and through uncertainty factors. Although EPA claims to have introduced a new method for uncertainty analysis in the context of the dose-response assessment of tetrachloroethylene, the only differences between the draft IRIS assessment for tetrachloroethylene and those for other chemicals are the consideration of multiple end points and the limited use of bootstrap simulation for only a portion of the uncertainties. The various alternative dose-response estimates developed represent inter-end-point variability, not uncertainty.
The well-accepted default-based approach to developing dose-response relationship estimates leads to point estimates, not distributional ranges. The choice of point estimates is based on default assumptions regarding uncertainty factors and default inference methods for fitting and interpreting dose-response functions. Therefore, such estimates do not depict uncertainty quantitatively in conjunction with the final result, and they are based on assumptions that may mix policy judgments about degree of protection with the scientific goal of developing a best estimate. Thus, the state of practice does not fully meet the spirit of principles, guidelines, and recommendations that have accrued over the years from such science advisory bodies as EPA’s SAB, WHO, and most recently the National Research Council (NRC 2009). Today, the approach that EPA has taken is considered standard practice but not state-of-the-art practice. For example, although uncertainty factors are used to account for such issues as extrapolation from subchronic to chronic exposure, interspecies extrapolation, and adjustments from lowest-observed-adverse-effect levels to no-observed-adverse-effect levels, the use of such factors does not characterize uncertainty. There is a lack of transparency as to the basis of those factors and whether they mix policy-based assumptions with science-based assessments. Furthermore, a user of the resulting dose-response estimates has no information regarding the quantitative range of uncertainty.
Others have illustrated methods that could be used to quantify uncertainty in dose-response assessment, but such techniques are not reviewed, considered, or applied in EPA’s draft assessment of tetrachloroethylene. We mention a few illustrative examples of techniques that others have explored. Evans et al. (1994) demonstrated a probability-tree method for quantifying uncertainty associated with low-dose cancer risk. IEc (2006) demonstrated a method for quantifying uncertainty in concentration-response functions for fine particulate matter that is based on a formal, systematic approach to eliciting subjective probability distributions from multiple carefully selected experts. Small (2008) outlines an approach that, if implemented, would advance the state of practice in combining multiple sources of uncertainty, including combination based on judgment and data. In this approach, a prior distribution is postulated over the options for a key assumption, such as mode of action (MOA), or a key choice, such as candidate data sets. Each final risk estimate is the result of a combined set of assumptions and choices propagating through the risk-assessment process tree and is assigned a probability derived from the prior probabilities assigned to each associated assumption and choice. The collection of all final risk estimates thus covers all admissible combinations of assumptions and choices and forms a probabilistic distribution that quantifies the full range of variation of the risk estimates. Additionally, this probabilistic distribution of risk estimates can be used, with the incorporation of new data, to obtain posterior probabilities for the assumptions and choices involved in each step of risk estimation. With a distribution of risk estimates that reflects the overarching uncertainties and variations, regulatory policy can be less dependent on a principal study or a few data sets. In fact, the risk-management process can use the distributional properties to choose and justify a final risk estimate in the context of this full range of uncertainties and variations.
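The probability-tree logic described above can be sketched schematically. In the fragment below, all branch names, prior probabilities, risk values, and likelihoods are invented for illustration; the sketch shows only the mechanics of propagating priors over an MOA assumption and a data-set choice into a discrete distribution of risk estimates and then updating it with hypothetical new evidence via Bayes’ rule.

```python
# Schematic probability-tree sketch (after the approach attributed to
# Small 2008 in the text). Every number here is hypothetical.
from itertools import product

prior_moa = {"genotoxic": 0.6, "nongenotoxic": 0.4}
prior_data = {"dataset_A": 0.5, "dataset_B": 0.5}

# Risk estimate (per unit dose, arbitrary units) produced by each
# combination of assumption and data choice.
risk = {
    ("genotoxic", "dataset_A"): 3e-3,
    ("genotoxic", "dataset_B"): 1e-3,
    ("nongenotoxic", "dataset_A"): 5e-4,
    ("nongenotoxic", "dataset_B"): 2e-4,
}

# Joint prior over branches (assumes MOA and data choice are independent).
joint = {b: prior_moa[b[0]] * prior_data[b[1]]
         for b in product(prior_moa, prior_data)}

# Hypothetical new study: likelihood of observing it under each MOA.
likelihood = {"genotoxic": 0.2, "nongenotoxic": 0.8}
post = {b: joint[b] * likelihood[b[0]] for b in joint}
z = sum(post.values())
post = {b: w / z for b, w in post.items()}

mean_risk = sum(post[b] * risk[b] for b in post)
print({b: round(w, 3) for b, w in post.items()}, mean_risk)
```

The posterior-weighted collection of branch estimates is the "probabilistic distribution of risk estimates" the text describes: no single study or assumption dominates, and new data shift weight among branches rather than forcing a binary choice.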
Hence, EPA in general and the IRIS program in particular should explore methods for adoption or adaptation to improve the qualitative and quantitative characterization of uncertainty. In general, there should be both well-structured qualitative assessment of uncertainties and quantitative assessment wherever possible. Preference should be given to quantitative assessment, and justification for the use of qualitative instead of quantitative approaches should be provided. For example, if quantitative methods of uncertainty analysis are not used, it should be explained why the state of the science is adequate to characterize a point estimate but not a range of uncertainty.
A key way forward in quantifying uncertainty is to accept the role of expert scientific judgment. Such judgment is used routinely to make inferences regarding hazard identification and to develop dose-response characterizations of chemicals. The examples of Evans et al. (1994), IEc (2006), and Small (2008) rely to varying degrees on encoding expert judgment as subjective probability distributions. The appropriate selection and application of methods for quantifying uncertainty in dose-response relationships are undergoing development and need additional research from which guidance on best practices can be derived. As an example of the exploratory nature of dealing with uncertainty in dose-response relationships, the 2007 Resources for the Future workshop “Uncertainty Modeling in Dose Response: Dealing with Simple Bioassay Data, and Where Do We Go from Here?” explored a variety of methods for quantifying uncertainty and the needed role of qualitative assessment in dealing with aspects of dose-response modeling that are believed not to be amenable to quantification. Among the quantitative techniques explored were bootstrap simulation and probabilistic inversion with isotonic regression and Bayesian model averaging to deal with uncertainty in model structure. Although there is not yet a default method for quantifying uncertainty in dose-response relationships, EPA can and should review and adopt or adapt the various methods being explored in the scientific community, taking particular note of the possibilities for combining expert judgment and data with Bayesian approaches.
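As a deliberately simplified illustration of the bootstrap technique mentioned above, the following sketch resamples a synthetic bioassay data set and refits a simple linear dose-response slope to obtain a distribution, rather than a single point estimate, of the slope. The data, the linear-through-origin model, and the seed are all hypothetical stand-ins.

```python
# Bootstrap sketch for dose-response slope uncertainty.
# Synthetic bioassay: (dose, observed extra response fraction) pairs.
import random

random.seed(7)

data = [(0.0, 0.00), (1.0, 0.04), (1.0, 0.07), (2.0, 0.09),
        (2.0, 0.13), (4.0, 0.18), (4.0, 0.24), (8.0, 0.41)]

def fit_slope(points):
    # Least-squares slope for a line constrained through the origin.
    num = sum(d * r for d, r in points)
    den = sum(d * d for d, _ in points)
    return num / den

slopes = []
for _ in range(5000):
    resample = [random.choice(data) for _ in data]
    if any(d > 0 for d, _ in resample):  # skip degenerate all-zero-dose draws
        slopes.append(fit_slope(resample))

slopes.sort()
lo, mid, hi = (slopes[int(q * len(slopes))] for q in (0.05, 0.5, 0.95))
print(f"slope: median {mid:.3f}, 90% interval ({lo:.3f}, {hi:.3f})")
```

The resulting interval is exactly the kind of quantitative range of uncertainty that, as noted above, a user of a single default-derived slope factor never sees.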