Science and Decisions: Advancing Risk Assessment (2009)

Chapter 4: Uncertainty and Variability: The Recurring and Recalcitrant Elements of Risk Assessment

Suggested citation: National Research Council. 2009. Science and Decisions: Advancing Risk Assessment. Washington, DC: The National Academies Press. doi: 10.17226/12209.

INTRODUCTION TO THE ISSUES AND TERMINOLOGY

Characterizing uncertainty and variability is key to the human health risk-assessment process, which must engage the best available science in the presence of uncertainties and difficult-to-characterize variability to inform risk-management decisions. Many of the topics in the committee's statement of task (Appendix B) address in some way the treatment of uncertainty or variability in risk analysis. Some of those topics have existed since the early days of environmental risk assessment. For example, Risk Assessment in the Federal Government: Managing the Process (NRC 1983), referred to as the Red Book, addressed the use of inference guidelines or default assumptions. Science and Judgment in Risk Assessment (NRC 1994) provided recommendations on defaults, use of quantitative methods for uncertainty propagation, and variability in exposure and susceptibility. The role of expert elicitation in uncertainty analysis has been considered in other fields for decades, although it has been examined and used by the Environmental Protection Agency (EPA) only in select recent cases. Other topics identified in the committee's charge whose improvement requires new consideration of the best approaches for addressing uncertainty and variability include cumulative exposures to contaminant mixtures involving multiple sources, exposure pathways, and routes; biologically relevant modes of action for estimating dose-response relationships; models of environmental transport and fate, exposure, physiologically based pharmacokinetics, and dose-response relationships; and the linking of ecologic risk-analysis methods to human health risk analysis.

Much has been written on the taxonomy of uncertainty and variability and on the need and options for addressing them separately (Finkel 1990; Morgan et al. 1990; EPA 1997a,b; Cullen and Frey 1999; Krupnick et al. 2006). There are also several useful guidelines on the mechanics of uncertainty analysis. However, there is an absence of guidelines on the appropriate degree of detail, rigor, and sophistication needed in an uncertainty or variability analysis for a given risk assessment. The committee finds this to be a critical issue.

In presentations to the committee (Kavlock 2006; Zenick 2006) and recent evaluations of emerging scientific advances (NRC 2006a, 2007a,b), there is the promise of improved capacity for assessing risks posed by new chemicals and risks to sensitive populations that are left unaddressed by current methods. The reach and depth of risk assessment are sure to improve with expanding computer tools, additional biomonitoring data, and new toxicology techniques. But such advances will bring new challenges and an increased need for wisdom and creativity in addressing uncertainty and variability. New guidelines on uncertainty analysis (NRC 2007c) can help enormously in the transition, facilitating the introduction of new knowledge and techniques into agency assessments.

Characterizing each stage in the risk-assessment process, from environmental release to exposure to health effect (Figure 4-1), poses analytic challenges and includes dimensions of uncertainty and variability. Consider trying to understand the possible dose received by individuals and, on the average, by a population from the application of a pesticide. The extent of release during pesticide application may not be well characterized. Once the pesticide is released, the exposure pathways leading to an individual's exposure are complex and difficult to understand and model. Some of the released substance may be transformed in the environment into a more or less toxic substance. The resulting overall exposure of the community near where the pesticide is released can vary substantially among individuals by age, geographic location, activity patterns, eating habits, and socioeconomic status. Thus, there can be considerable uncertainty and variability in how much pesticide is received. Those factors make it difficult to establish reliable exposure estimates for use in a risk assessment, and they illustrate how the characterization of exposure with a single number can be misleading.

Understanding the dose-response relationship (the relationship between the dose and risk boxes in Figure 4-1) is as complex and similarly involves issues of uncertainty and variability. Quantifying the relationship between chemical exposure and the probability of an adverse health effect is often complicated by the need to extrapolate results from high doses to the lower doses relevant to the population of interest and from animal studies to humans. Finally, there are interindividual differences in susceptibility that are often difficult to portray with confidence. Those issues can delay the completion of a risk assessment (for decades in the case of dioxin) or undermine the confidence of the public and of those who use risk assessments to inform and support their decisions.

Discussions of uncertainty and variability involve specific terminology. To avoid confusion, the committee defines key terms in Box 4-1 as it has used them.

The importance of evaluating uncertainty and variability in risk assessments has long been acknowledged in EPA documents (EPA 1989a, 1992, 1997a,b, 2002a, 2004a, 2006a) and National Research Council reports (NRC 1983, 1994). From the Red Book framework and the committee's emphasis on the need to consider risk-management options in the design of risk assessments (Chapters 3 and 8), it is evident that risk assessors must establish procedures that build confidence in the risk assessment and its results.
EPA builds confidence in its risk assessments by ensuring that the assessment process handles uncertainty and variability in ways that are predictable, scientifically defensible, consistent with the agency's statutory mission, and responsive to the needs of decision-makers (NRC 1994). For example, several environmental statutes speak directly to the issue of protecting susceptible and highly exposed people (EPA 2002a, 2005c, 2006a). EPA has accordingly developed risk-assessment practices for implementing these statutes, although, as noted below and in Chapter 5, the overall treatment of uncertainty and variability in risk assessments can be insufficient. Box 4-2 provides examples of why uncertainty and variability are important to risk assessment.

FIGURE 4-1 Illustration of key components evaluated in human health risk assessment, tracking pollutants from environmental release to health effects.

In the sections below, the committee first reviews approaches to addressing uncertainty and variability and comments on whether and how the approaches have been applied in EPA risk assessments. The committee then focuses on uncertainty and variability as applied to each stage of the risk-assessment process (as illustrated in Figure 4-1, which expands beyond the four steps of the Red Book to consider subcomponents of risk assessment). The chapter concludes by articulating principles for uncertainty and variability analysis, leaving detailed recommendations on specific aspects of the risk-assessment process to Chapters 5 through 7. The committee notes that elements of exposure assessment are not addressed as extensively in later chapters as other steps in the risk-assessment process. In the committee's judgment, previous reports have sufficiently addressed many key elements of exposure assessment, and the exposure-assessment methods that EPA has developed and used in recent risk assessments generally reflect good technical practice, apart from the overarching issues related to uncertainty and variability analysis and decisions about the appropriate analytic scope for the decision context.

BOX 4-1
Terminology Related to Uncertainty and Variability

Accuracy: Closeness of a measured or computed value to its "true" value, where the "true" value is obtained with perfect information. Owing to the natural heterogeneity and stochastic nature of many biologic and environmental systems, the "true" value may exist as a distribution rather than a discrete value.

Analytic model: A mathematical model that can be solved in closed form. For example, some model algorithms that are based on relatively simple differential equations can be solved analytically to provide a single solution.

Bias: A systematic distortion of a model result or value due to measurement technique or model structure or assumptions.

Computational model: A model that is expressed in formal mathematics with equations, statistical relationships, or a combination of the two and that may or may not have a closed-form representation. Values, judgment, and tacit knowledge are inevitably embedded in the structure, assumptions, and default parameters, but computational models are inherently quantitative, relating phenomena through mathematical relationships and producing numerical results.

Deterministic model: A model that provides a single solution for the stated variables. This type of model does not explicitly simulate the effects of uncertainty or variability; changes in model outputs are due solely to changes in model components.

Domain (spatial and temporal): The limits of space and time that are specified in a risk assessment or risk-assessment component.

Empirical model: A model whose structure is based on experience or experimentation and is not necessarily informed by a causal theory of the modeled process. This type of model can be used to develop relationships that are useful for forecasting and describing trends in behavior but may not be mechanistically relevant. Empirical dose-response models can be derived from experimental or epidemiologic observations.

Expert elicitation: A process for obtaining expert opinions about uncertain quantities and probabilities. Typically, structured interviews and questionnaires are used in such elicitation. Expert elicitation may include "coaching" techniques to help the expert conceptualize, visualize, and quantify the quantity or understanding being sought.

Model: A simplification of reality that is constructed to gain insights into select attributes of a particular physical, biologic, economic, or social system. Mathematical models express the simplification in quantitative terms.

Parameters: Terms in a model that determine the specific model form. For computational models, these terms are fixed during a model run or simulation, and they define the model output. They can be changed in different runs as a method of conducting sensitivity analysis or to achieve calibration goals.

Precision: The quality of a measurement that is reproducible in amount or performance. Measurements can be precise in that they are reproducible yet can be inaccurate and differ from "true" values when biases exist. In risk-assessment outcomes and other forms of quantitative information, precision refers specifically to variation among a set of quantitative estimates of outcomes.

Reliability: The confidence that (potential) users should have in a quantitative assessment and in the information derived from it. Reliability is related to both precision and accuracy.

Sensitivity: The degree to which the outputs of a quantitative assessment are affected by changes in selected input parameters or assumptions.

Stochastic model: A model that involves random variables (see definition of variable below).

Susceptibility: The capacity to be affected. Variation in risk reflects susceptibility. A person can be at greater or lesser risk than the person in the population who is at median risk because of such characteristics as age, sex, genetic attributes, socioeconomic status, prior exposure to harmful agents, and stress.

Uncertainty: Lack or incompleteness of information. Quantitative uncertainty analysis attempts to analyze and describe the degree to which a calculated value may differ from the true value; it sometimes uses probability distributions. Uncertainty depends on the quality, quantity, and relevance of data and on the reliability and relevance of models and assumptions.

Variability: True differences in attributes due to heterogeneity or diversity. Variability is usually not reducible by further measurement or study, although it can be better characterized.

Variable: In mathematics, a variable represents a quantity that has the potential to change. In the physical sciences and engineering, a variable is a quantity whose value may vary over the course of an experiment (including simulations), across samples, or during the operation of a system. In statistics, a random variable is one whose observed outcomes may be considered outcomes of a stochastic or random experiment; its probability distribution can be estimated from observations. Generally, when a variable is fixed to take on a particular value for a computation, it is referred to as a parameter.

Vulnerability: The intrinsic predisposition of an exposed element (person, community, population, or ecologic entity) to suffer harm from external stresses and perturbations; it is based on variations in disease susceptibility, psychological and social factors, exposures, and adaptive measures to anticipate and reduce future harm and to recover from an insult.

Note: Compiled or adapted from NRC (2007d) and IPCS (2004).

UNCERTAINTY IN RISK ASSESSMENT

Uncertainty is foremost among the recurring themes in risk assessment. In quantitative assessments, uncertainty refers to lacking, incomplete, or incorrect information. Uncertainty in a risk assessment depends on the quantity, quality, and relevance of data and on the reliability and relevance of the models and inferences used to fill data gaps.

BOX 4-2
Some Reasons Why It Is Important to Quantify Uncertainty and Variability

Uncertainty
• Characterizing uncertainty in risk informs the affected public about the range of possible risks from an exposure that they may be experiencing. Risk estimates sometimes diverge widely.
• Characterizing the uncertainty in risk associated with a given decision informs the decision-maker about the range of potential risks that result from the decision. That helps in evaluating any decision alternative on the basis of the possible risks, including the most likely and the worst ones; it also informs the public.
• Mathematically, it is often not possible to understand what may occur on average without understanding what the possibilities are and how probable they are.
• The value of new research or of alternative research strategies can be assessed by considering how much the research is expected to reduce the overall uncertainty in the risk estimate and how the reduction in uncertainty leads to different decision options.
• Although the committee is not aware of any research to prove it, there is a strong sense among risk assessors that acknowledging uncertainty adds to the credibility and transparency of the decision-making process.

Variability
• Assessing variability in risk enables the development of risk-management options that focus on the people at greatest risk rather than on population averages. For example, the risk from exposure to particular vehicle emissions varies in a population and can be much higher in those close to roadways than the population average. That has implications for zoning and school-siting decisions.
• Understanding how the population may vary in risk can facilitate understanding of the shape of the dose-response curve (see Chapter 5). Greater use of genetic markers for factors contributing to variability can support this effort.
• It is often not possible to estimate an average population risk without knowing how risk varies among individuals in the population.
• On the basis of understanding how different exposures may affect risk, people might alter their own level of risk, for example, by filtering their drinking water or eating fewer helpings of swordfish (which is high in methyl mercury).
• The aims of environmental justice are furthered when it becomes clear that some community groups are at greater risk than the overall group and policy initiatives are undertaken to rectify the imbalance.

For example, the quantity, quality, and relevance of data on dietary habits and on a pesticide's fate and transport will affect the uncertainty of the parameter values used to assess population variability in the consumption of the pesticide in food and drinking water. The assumptions and scenarios applied to address a lack of data on how frequently a person eats a particular food affect the mean and variance of the estimated intake and of the resulting risk distribution. It is the risk assessor's job to communicate not only the nature and likelihood of possible harm but also the uncertainty in the assessment.
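To make the preceding point concrete, the following is a minimal Python sketch, with entirely invented numbers, of how an assumed eating-frequency distribution propagates into the mean, spread, and upper percentile of estimated intake. The residue, serving-size, and frequency distributions are hypothetical illustrations, not values from any EPA assessment.

# Minimal sketch (hypothetical numbers): how an assumed eating-frequency
# distribution propagates into the mean and variance of pesticide intake.
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

residue = rng.lognormal(mean=np.log(0.05), sigma=0.8, size=n)  # mg/kg food
serving = rng.normal(250, 60, size=n).clip(min=0)              # g/day when eaten

# Two alternative assumptions about how often the food is eaten:
freq_a = rng.binomial(7, 0.10, size=n) / 7.0   # eaten on ~10% of days
freq_b = rng.binomial(7, 0.30, size=n) / 7.0   # eaten on ~30% of days

for label, freq in [("10% of days", freq_a), ("30% of days", freq_b)]:
    intake = residue * serving / 1000.0 * freq  # mg/day, averaged over the week
    print(f"{label}: mean={intake.mean():.4f} mg/day, "
          f"sd={intake.std():.4f}, 95th pct={np.percentile(intake, 95):.4f}")

Under these assumptions, tripling the assumed eating frequency roughly triples the mean intake and shifts the upper percentiles accordingly, which is exactly the sensitivity to scenario assumptions described above.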
One of the more significant types of uncertainty in EPA risk assessments can be characterized as "unknown unknowns," factors that the assessor is not aware of. These uncertainties cannot be captured by standard quantitative uncertainty analyses; they can be addressed only with an interactive approach that allows timely and effective detection, analysis, and correction.

EPA's practices in uncertainty analysis are reviewed below. The discussion begins with EPA's use of defaults; an expanded treatment of uncertainty beyond defaults requires additional techniques.

Specific analytic techniques that EPA has used or could use in these contexts are discussed below, including Monte Carlo analysis for quantitative uncertainty analysis, expert elicitation, methods for addressing model uncertainty, and approaches to uncertainty in risk comparisons. In parallel, the conduct of assessments (including uncertainty analysis) that are appropriate in complexity for risk-management decisions is discussed, with considerations for uncertainty analyses used to support risk-risk, risk-benefit, and cost-benefit comparisons and tradeoffs.

The Environmental Protection Agency's Use of Available Methods for Addressing Uncertainty

EPA's treatment of uncertainty is evident both in its guidance documents and in a review of important risk assessments that it has conducted (EPA 1986, 1989a,b, 1997a,b,c, 2001, 2004a, 2005b). The agency's guidance follows in large part from recommendations in the Red Book (NRC 1983) and other National Research Council reports (for example, NRC 1994, 1996).

Use of Defaults

As described in the Red Book, because of large inherent uncertainties, human health risk assessment "requires judgments to be made when the available information is incomplete" (NRC 1983, p. 48). To ensure that the judgments are consistent, explicit, and not unduly influenced by risk-management considerations, the Red Book recommended that so-called "inference guidelines," commonly referred to as defaults, be developed independently of any particular risk assessment (p. 51). Science and Judgment in Risk Assessment (NRC 1994) reaffirmed the use of defaults as a means of facilitating the completion of risk assessments. EPA often relies on default assumptions when "the chemical- and/or site-specific data are unavailable (i.e., when there are data gaps) or insufficient to estimate parameters or resolve paradigms . . . to continue with the risk assessment" (EPA 2004a, p. 51). Defaults that are the focus of controversy and debate are often needed to complete cancer-hazard identification and dose-response assessment. Because of their importance and the need to address some of the above concerns, the committee devotes Chapter 6 to default assumptions. Consideration is given there to how risk assessments can use emerging methods to characterize uncertainties more explicitly while conveying the information needed to inform near-term risk-management decisions.

Some approaches based on defaults lead to confusion about levels of uncertainty. For example, EPA estimates cancer risk from the results of animal studies on the basis of default assumptions and then applies likelihood methods to fit models to tumor data, characterizing the dose-response relationship with the lower 95% confidence bound, typically on a dose that causes a 10% tumor response beyond background (see Chapter 5). In the past, it estimated the upper 95% confidence bound on the linear term in the multistage polynomial, that is, the "cancer potency." It usually does not show the opposite bound or other points in the distribution. EPA's approach is reasonable, but it can lead to misunderstanding when the bounds on the final risk calculations are overinterpreted, for example, when the bounds are discussed as characterizing the full range of uncertainty in the assessment. When a new study shows a higher upper bound on the potency or a lower bound on the risk-specific dose, it may appear that uncertainty has increased with further study.
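The benchmark-dose calculation described above can be illustrated schematically. The following Python sketch uses invented tumor counts and a simple one-hit model rather than the multistage model and EPA's BMDS software; it fits the model by maximum likelihood and then profiles the slope parameter to obtain a lower 95% confidence bound (the BMDL) on the dose producing 10% extra risk over background.

# Schematic illustration (invented data; not EPA's BMDS procedure): fit a
# one-hit model P(d) = 1 - exp(-(q0 + q1*d)) to tumor counts by maximum
# likelihood, then profile q1 to obtain a lower 95% bound on the benchmark
# dose (BMD) for 10% extra risk.
import numpy as np
from scipy.optimize import minimize, minimize_scalar
from scipy.stats import chi2

dose   = np.array([0.0, 10.0, 30.0, 100.0])   # hypothetical dose groups
n_anim = np.array([50, 50, 50, 50])
tumors = np.array([2, 6, 12, 30])

def nll(params):
    # Negative binomial log-likelihood across dose groups
    q0, q1 = params
    p = 1.0 - np.exp(-(q0 + q1 * dose))
    p = np.clip(p, 1e-12, 1 - 1e-12)
    return -np.sum(tumors * np.log(p) + (n_anim - tumors) * np.log(1 - p))

fit = minimize(nll, x0=[0.05, 0.005], bounds=[(1e-9, None), (1e-9, None)])
q0_hat, q1_hat = fit.x
bmd = -np.log(0.9) / q1_hat          # extra risk 1 - exp(-q1*d) = 0.10

# Profile likelihood: raise q1 (which lowers the BMD) until the deviance
# reaches the one-sided 95% cutoff, re-optimizing q0 at each step.
cutoff = chi2.ppf(0.90, df=1)        # one-sided 95% bound

def profile_dev(q1):
    r = minimize_scalar(lambda q0: nll([q0, q1]),
                        bounds=(1e-9, 1.0), method="bounded")
    return 2.0 * (r.fun - fit.fun)

q1_hi = q1_hat
while profile_dev(q1_hi) < cutoff:
    q1_hi *= 1.01
bmdl = -np.log(0.9) / q1_hi          # lower confidence bound on the BMD

print(f"BMD (MLE) = {bmd:.1f}, BMDL (lower 95% bound) = {bmdl:.1f}")

The point of the sketch is that the reported BMDL is a one-sided bound derived from the likelihood surface, not a full characterization of the uncertainty in the dose-response relationship.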
From a strictly Bayesian perspective, additional information can never increase uncertainty if the underlying distributional structure of uncertainty is correctly specified. However, when mischaracterized and misunderstood, the framework for defaults used by EPA can make it appear that uncertainty is increasing.

For example, suppose that there was an epidemiologic study of the effects of an environmental contaminant, and suppose that the degree of overall uncertainty is incorrectly characterized by the parameter uncertainty in fitting a dose-response slope to the results of that single study. If a second study caused EPA to select an alternative value for the dose-response slope, the risk estimate would change, but the uncertainty conditional on one or the other causal model may or may not change. Chapters 5 and 6 suggest approaches to the establishment of defaults and to uncertainty characterization that may encourage research that could reduce key uncertainties.

Quantitative Uncertainty Analysis

In a quantitative uncertainty analysis (QUA), both uncertainty and variability in different components of the assessment (emissions, transport, exposure, pharmacokinetics, and the dose-response relationship) are combined by using an uncertainty-propagation method, such as Monte Carlo simulation, with two-stage Monte Carlo analysis used to separate uncertainty and variability to the extent possible (a minimal sketch of the two-stage approach appears below). This approach has been referred to as probabilistic risk assessment, but the committee prefers to avoid that term because of its association with fault-tree analysis in engineering. The use of the term QUA to encompass variability as well as uncertainty is awkward, but we use the term here to be consistent with its usage elsewhere.

In the federal government, an early user of QUA was the Nuclear Regulatory Commission, which in the mid-1970s used QUA involving considerable expert judgment to characterize the likelihood of nuclear-reactor failure (USNRC 1975). QUA became more commonly used in EPA in the late 1980s. EPA has since been encouraging the use of QUA in many programs, and the computational methods required have become more readily available and practicable.

An example of the evolution of the use of QUA in EPA is its risk-assessment guidance for Superfund. The 1989 Risk Assessment Guidance for Superfund (RAGS), Volume 1 (EPA 1989a) and supporting guidance describe a point-estimate (single-value) approach to risk assessment. The output of the risk equation is a point estimate that could be a central-tendency exposure estimate of risk (for example, the mean or median risk) or a reasonable-maximum-exposure (RME) estimate of risk (for example, the risk expected if the RME occurred), depending on the input values used in the risk equation. In contrast, RAGS, Volume 3, Part A (EPA 2001) describes a probabilistic approach that uses probability distributions for one or more variables in a risk equation to characterize variability and uncertainty quantitatively. The common practice of choosing high-percentile values (ensuring one-sided confidence) for multiple uncertain variables yields results that are probably above the median but at an unknown percentile of the risk distribution (EPA 2002a). QUA techniques, such as those in RAGS, Volume 3, can address this issue in part, but a few major concerns regarding their use in EPA remain. First, they require training to be used appropriately. Second, even if they are used appropriately, their outputs may not be easily understood by decision-makers; training is therefore recommended not only for risk assessors but for risk managers (see recommendations in Chapter 2).
Third, and perhaps most important, in many contexts the data may not be available to characterize all input distributions fully, in which case the assessment either involves subjective judgments or systematically omits key uncertainties. For formal QUA to be most informative, the treatment of uncertainty should, to the extent feasible, be homologous among the components of the risk assessment (exposure, dose, and dose-response relationship).
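The separation of uncertainty from variability mentioned above is commonly implemented with nested sampling loops. The following is a minimal Python sketch of two-stage Monte Carlo with invented distributions; the linear low-dose model and all parameter values are assumptions of the sketch, not EPA defaults.

# Minimal sketch (invented distributions): two-stage Monte Carlo. The outer
# loop samples uncertain quantities (e.g., a dose-response slope); the inner
# loop samples variability across people (e.g., intake). The result is a
# distribution of population percentiles rather than a single risk number.
import numpy as np

rng = np.random.default_rng(7)
n_outer, n_inner = 500, 2000

pop95 = np.empty(n_outer)
for i in range(n_outer):
    # Uncertainty: slope factor known only to within roughly a factor of 3
    slope = rng.lognormal(mean=np.log(1e-3), sigma=1.1)
    # Variability: daily intake differs across individuals (mg/kg-day)
    intake = rng.lognormal(mean=np.log(0.2), sigma=0.6, size=n_inner)
    risk = slope * intake                # linear low-dose model, for the sketch
    pop95[i] = np.percentile(risk, 95)   # risk to the 95th-percentile person

lo, mid, hi = np.percentile(pop95, [5, 50, 95])
print(f"Risk to the 95th-percentile person: {mid:.2e} "
      f"(90% uncertainty interval {lo:.2e} to {hi:.2e})")

The output is not a single risk number but an uncertainty distribution for the risk experienced by a highly exposed (95th-percentile) person, which keeps the two concepts distinct for the decision-maker.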

The differential treatment of uncertainty among the components of a risk assessment makes the communication of overall uncertainty difficult and sometimes misleading. For example, in EPA's regulatory impact analysis for the Clean Air Interstate Rule (EPA 2005c), formal probabilistic uncertainty analysis was conducted with the Monte Carlo method, but it considered only sampling variability in the epidemiologic studies used for dose-response functions and in valuation studies. EPA used expert elicitation for a more comprehensive characterization of dose-response uncertainty, but this was not integrated into a single output distribution. Within the quantitative uncertainty analysis, emissions and fate-and-transport modeling outputs were assumed to be known with no uncertainty. Although EPA explicitly acknowledged the omitted uncertainty in a qualitative discussion, it was not addressed quantitatively. The 95% confidence intervals reported did not reflect the actual confidence level, because the important uncertainties in other components were not included. The training mentioned above therefore should not only cover the mechanical aspects of software packages but should address issues of interpretability and the goal of treating uncertainty consistently among all components of risk assessment.

An earlier National Research Council committee (NRC 2002) and the EPA Science Advisory Board (EPA SAB 2004) also raised concerns about the inconsistent approach to uncertainty characterization. However, it is important to recognize that some uncertainties in environmental and health risk assessments defy quantification (even by expert elicitation) (IPCS 2006; NRC 2007d) and that inconsistency in approach will be an issue to grapple with in risk characterization for some time to come. The call for homologous treatment of uncertainty should not be read as a call for "least-common-denominator" uncertainty analysis, in which the difficulty of characterizing uncertainty in one dimension of the analysis leads to the omission of formal uncertainty analysis in other components.

Use of Expert Judgment

It often happens in practice that empirical evidence on some components of a risk assessment is insufficient to establish uncertainty bounds and that evidence on other components captures only a fraction of the total uncertainty. When large uncertainties result from a combination of lack of data and lack of conceptual understanding (for example, of a mechanism of action at low dose), some regulatory agencies have relied on expert judgment to fill the gaps or establish default assumptions. (The term expert judgment is used here as analogous to expert elicitation.) Expert judgment involves asking a set of carefully selected experts a series of questions related to a specific array of potential outcomes, usually providing them with extensive briefing material, training activities, and calibration exercises to help in the determination of confidence intervals. Formal expert judgment has been used in risk analysis since the 1975 Reactor Safety Study (USNRC 1975), and there are multiple examples in the academic literature (Spetzler and von Holstein 1975; Evans et al. 1994; Budnitz et al. 1998; IEc 2006). EPA applications have been more limited, perhaps in part because of institutional and statutory constraints, but interest is growing in the agency. The 2005 Guidelines for Carcinogen Risk Assessment (EPA 2005b, p. 3-32) state that "these cancer guidelines are flexible enough to accommodate the use of expert elicitation to characterize cancer risks, as a complement to the methods presented in the cancer guidelines."
A recent study of the health effects of particulate matter used expert elicitation to characterize uncertainties in the concentration-response function for mortality from fine particulate matter (IEc 2006). Expert elicitation can provide interesting and potentially valuable information, but some critical issues remain to be addressed.

It is unclear precisely how EPA can use this information in its risk assessments. For example, in its regulatory impact analysis of the National Ambient Air Quality Standard for PM2.5 (particulate matter no larger than 2.5 μm in aerodynamic diameter), EPA did not use the outputs of the expert elicitation to determine the confidence interval for the concentration-response function for uncertainty propagation but instead calculated alternative risk estimates corresponding to each individual expert's judgment, with no weighting or combining of judgments (EPA 2006b). It is unclear how that type of information can be used productively by a risk manager, inasmuch as it does not convey any sense of the likelihood of various values, although seeing the range and commonality of individual experts' judgments may be enlightening. Formally combining the judgments can obscure the degree of their heterogeneity, and there are important methodologic debates on the merits of weighting expert opinions on the basis of their performance in calibration exercises (Evans et al. 1994; Budnitz et al. 1998). Two other problems are the need to combine incompatible judgments or models and the technical issue of training and calibration when there is a fundamental lack of knowledge and no opportunity for direct observation of the phenomenon being estimated (for example, the risk of a particular disease at an environmental dose). Although methods have been developed to address various biases in expert elicitation, expert mischaracterization is still expected (NRC 1996; Cullen and Small 2004). Some findings about judgment in the face of uncertainty that can apply to experts are provided in Box 4-3. Other practical issues are the cost of and time required for expert elicitation, the management of conflicts of interest, and the need for a substantial evidence base on which the experts can draw to make expert elicitation useful.

Given all of those limitations, there are few settings in which expert elicitation is likely to provide information necessary for discriminating among risk-management options. The committee suggests that expert elicitation be kept in the portfolio of uncertainty-characterization options available to EPA but that it be used only when necessary for decision-making and when evidence to support its use is available. The general concept of determining the level of sophistication in uncertainty analysis (which could include expert elicitation or complex QUA) on the basis of decision-making needs is outlined in more detail below.

BOX 4-3
Cognitive Tendencies That Affect Expert Judgment

Availability: The tendency to assign greater probability to commonly encountered or frequently mentioned events.

Anchoring and adjustment: The tendency to be overinfluenced by the first information seen or provided in an initial problem formulation.

Representativeness: The tendency to judge an event by reference to another that in the eye of the expert resembles it, even in the absence of relevant information.

Disqualification: The tendency to ignore data or strongly discount evidence that contradicts strongly held convictions.

Belief in the "law of small numbers": The tendency of scientists to believe small samples from a population to be more representative than is justified.

Overconfidence: The tendency of experts to overestimate the probability that their answers are correct.

Source: Adapted from NRC 1996; Cullen and Small 2004.

Level of Uncertainty Analysis Needed

The discussion of the variety of ways in which EPA has dealt with uncertainty, from defaults to standard QUA to expert elicitation, raises the question of the level of analysis that is needed for any given problem. A careful assessment of when a detailed analysis of uncertainty is needed may avoid placing additional analytic burdens on EPA staff or limiting their ability to complete timely assessments. Formal QUA is not necessary, and not recommended, for all risk assessments. For example, for a risk assessment conducted to inform a choice among various control strategies, if a simple (but informative and comprehensive) evaluation of uncertainties reveals that the choice is robust with respect to key uncertainties, there is no need for a more formal treatment of uncertainty. More complex characterization of uncertainty is necessary only to the extent that it is needed to inform specific risk-management decisions.

It is important to address the extent and nature of the uncertainty analysis needed in the planning and scoping phase of a risk assessment (see Chapter 3). For many problems, an initial sensitivity analysis can help determine the parameters whose uncertainty might most affect a decision and thus require a more detailed uncertainty analysis. One valuable approach uses tornado diagrams, in which individual parameters are varied one at a time while all other uncertain parameters are held fixed (a minimal sketch follows Box 4-4). The output of this exercise is a graphical ranking of the parameters that have the largest influence on the final risk calculation. This both provides a visual representation of the sensitivity analysis, helpful for communication to risk managers and other stakeholders, and identifies the subset of parameters that could be carried forward into more sophisticated QUA.

"Tiers" or "levels" of sophistication in QUA in risk assessment have been discussed. Paté-Cornell (1996) proposed six levels ranging from level 0 (hazard detection and failure-mode identification) to level 5 (QUA with multiple risk curves reflecting variability at different levels of uncertainty). Similarly, in its draft report on the treatment of uncertainty in exposure assessment, the International Programme on Chemical Safety (IPCS 2006) proposed four tiers for addressing uncertainty and variability in exposure assessment, from the use of default assumptions to sophisticated QUA. The IPCS tiers are shown in Box 4-4.

BOX 4-4
Levels of Uncertainty Analysis

Tier 0: Default assumptions; single value of result.
Tier 1: Qualitative but systematic identification and characterization of uncertainty.
Tier 2: Quantitative evaluation of uncertainty making use of bounding values, interval analysis, and sensitivity analysis.
Tier 3: Probabilistic assessment with single or multiple outcome distributions reflecting uncertainty and variability.

Source: IPCS 2006.
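As a concrete illustration of the screening step discussed above, the following Python sketch performs the one-at-a-time parameter swings that a tornado diagram summarizes. The multiplicative risk model and the parameter ranges are hypothetical.

# Minimal sketch (hypothetical model and ranges): one-at-a-time sensitivity
# analysis of the kind summarized in a tornado diagram. Each parameter is
# swung across its low/high range while the others stay at base values;
# parameters are then ranked by the resulting swing in the risk estimate.
def risk(conc, intake_rate, slope, body_wt):
    return conc * intake_rate * slope / body_wt

base = {"conc": 2.0, "intake_rate": 1.5, "slope": 1e-3, "body_wt": 70.0}
ranges = {"conc": (0.5, 8.0), "intake_rate": (0.8, 3.0),
          "slope": (2e-4, 5e-3), "body_wt": (50.0, 90.0)}

swings = []
for name, (lo, hi) in ranges.items():
    vals = dict(base)
    vals[name] = lo
    r_lo = risk(**vals)
    vals[name] = hi
    r_hi = risk(**vals)
    swings.append((abs(r_hi - r_lo), name, r_lo, r_hi))

# Largest swing first: these are the parameters worth carrying forward
for swing, name, r_lo, r_hi in sorted(swings, reverse=True):
    print(f"{name:12s} risk {r_lo:.2e} .. {r_hi:.2e} (swing {swing:.2e})")

Sorting by swing size puts the most influential parameter at the top, the horizontal-bar layout that gives the tornado diagram its name; only the top-ranked parameters would need to be carried into a higher-tier QUA.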

The committee does not endorse any specific ranking approach but favors the up-front consideration of levels of sophistication in uncertainty analyses and notes that there is a continuum of approaches rather than a set of discrete options. The characterization of uncertainty and variability in a risk assessment should be planned and managed and matched to the needs of the stakeholders involved in risk-informed decisions. In evaluating the tradeoff between the higher level of effort needed to conduct a more sophisticated analysis and the need to make timely decisions, EPA should take into account both the level of technical sophistication needed to identify the optimal course of action and the negative impacts that will result if the optimal course of action is incorrectly identified. If a relatively simple analysis of uncertainty (for example, a nonprobabilistic assessment of bounds) is sufficient to identify one course of action as clearly better than all the others, there is no need for further elucidation. In contrast, when the best choice is not so clear and the consequences of a wrong choice would be serious, EPA can proceed in an iterative manner, making the analysis progressively more sophisticated until the optimal choice is sufficiently clear. In so doing, EPA should be mindful that one of the greatest costs of more sophisticated analysis can be the time involved, during which populations may continue to be exposed to an agent or costs may be incurred unnecessarily. Related to these issues, in planning the uncertainty analysis and interpreting lower-tier uncertainty analyses, it is preferable to have up-front agreement on terms of reference. For example, calls for "central tendencies," "best estimates," or "plausible" upper or lower bounds of risk are of little value if these terms are not clearly defined.

EPA has an opportunity and a responsibility to develop guidelines for uncertainty analysis, both to define terms of reference and to offer insight into the appropriate tailoring of sophistication and level of practice to individual risk-management decisions. EPA has limited resources and should not be expected to treat all issues with a single approach or process. The tiered approach to uncertainty analysis gives EPA the opportunity to match the degree of sophistication in uncertainty analysis to the level of concern for a specific risk problem and to the decision-making needs for addressing that problem. Lower-tier uncertainty-analysis methods can be used in a screening step to determine whether the information is adequate to make decisions and to identify situations in which more intensive quantitative methods would be necessary.

Special Concerns about Uncertainty Analysis for Risk or Cost-Benefit Tradeoffs

In making risk comparisons or cost-benefit determinations, consistency in addressing uncertainty in the risks, costs, and benefits being compared is particularly important, and fuller descriptions of uncertainty than an upper confidence limit provides are also important. The approaches described above are typically applied to develop confidence bounds and a probability distribution for a single risk. Although assessors commonly analyze one risk at a time, many assessments are done to support analyses of various options for controlling a hazard. They can involve considering more than one uncertain quantity at the same time with respect to

• Which of several risks deserves higher priority.
• The net risk of an environmental control action (reduction in risk less any increases in risk because of substitution or risk transfer).
• The net benefits of an action (reduction in risk less any costs incurred).
• The total benefits of an action (the monetized reduction in risk in light of the baseline level of risk even if costs are ignored).

Two issues make uncertainty analyses for risk-risk and risk-benefit or cost comparisons more informative but also more difficult to do properly than single-item QUA. First, uncertainty in multiple risks means that simply stating that one risk is or is not larger than another, or that the benefits are or are not larger than the costs, is not a well-formulated comparison; the key is to determine the probability that one risk is larger or that one action is preferable. Second, there is the question of how large the uncertainty in a comparison is relative to the uncertainty in the individual risks (Finkel 1995b). If the uncertainties in the items being compared are related, the uncertainty in the comparison can be less than that in an individual risk. But usually the uncertainties will be independent. For example, uncertainty in a risk estimate based on exposure and toxicologic information will generally be completely independent of the cost estimates for reducing the risk, which may be based on consumer and producer behavior. As a result, the uncertainty in a comparison can exceed the uncertainty in the items being compared, an important issue that has implications for developing and using risk estimates.

Box 4-5 provides a simple but informative example of comparing two uncertain quantities. The quantities there are risks, but they could be any measurable quantities of interest; the examples include a comparison of discrete and continuous probabilities. The example reveals the need to address confidence intervals both when assessing risk and when comparing risks.

This discussion illustrates that statements regarding risk comparisons, or costs versus benefits, are better made in probabilistic than in deterministic terms. The question "Do the benefits exceed the costs?" can be given an unequivocal yes only if virtually all possible values of the net-benefit distribution are positive. This does not necessarily imply that EPA must use sophisticated QUA whenever risk-risk or benefit-cost comparisons are required. An iterative approach, as proposed earlier, can allow a determination of whether benefits clearly exceed costs (or vice versa) with a relatively simple analysis of uncertainty, or whether more detailed analyses are required to make the comparison interpretable. These efforts would benefit from EPA guidance on uncertainty and on the concept of statistical significance as applied to cost-benefit and risk-comparison analyses, with a specific emphasis on the use of a tiered uncertainty-analysis approach in this context.
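A minimal Python sketch of the probabilistic framing follows. The lognormal distributions assumed for monetized benefits and costs are invented for illustration; the informative output is the probability that the net benefit is positive, not a single yes-or-no comparison of point estimates.

# Minimal sketch (assumed distributions): why comparisons are better made in
# probabilistic terms. Net benefit = monetized risk reduction minus cost;
# the useful output is P(net benefit > 0), not a single point estimate.
import numpy as np

rng = np.random.default_rng(11)
n = 200_000

benefit = rng.lognormal(mean=np.log(20.0), sigma=0.7, size=n)  # $M, uncertain
cost    = rng.lognormal(mean=np.log(15.0), sigma=0.4, size=n)  # $M, uncertain
net = benefit - cost

print(f"Median net benefit: {np.median(net):.1f} $M")
print(f"P(net benefit > 0) = {np.mean(net > 0):.2f}")

With these assumptions the median net benefit is positive, yet there remains a nontrivial probability that costs exceed benefits, which is precisely the information that a deterministic comparison hides.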

BOX 4-5
Examples of Uncertainties for Comparisons of Discrete and Continuous Possibilities

Example 1: Discrete

Consider two quantities, A and B. They could be two disparate risks being compared, a "target" risk and an "offsetting" risk, or a benefit estimate (A) and the corresponding cost estimate (B). In any case, we are fairly confident (80%) that A has the value 20 but believe with 10% probability each that we might have over- or underestimated A by a factor of 2 (that is, A can be 10 with probability 0.1 or 40 with probability 0.1). Similarly, we are fairly confident (80%) that B has the value 15, but with 10% probability each it could be a factor of 3 higher or lower. Given the 3 possible discrete values of A and the 3 possible values of B, there are 9 possible true values of the ratio A/B, as given in the following table. Treating A and B as independent random variables with the marginal distributions specified, for example, P(A=10) = P(A=40) = 0.10 and P(A=20) = 0.80, leads immediately to the joint distribution below, since the joint distribution of independent random variables is the product of their marginal distributions.

Ratio of A to B for different values and probabilities of A and B:

                              Value of A [prob(A)]
                     10 (10%)         20 (80%)         40 (10%)
Value of B [prob(B)]   A/B   prob       A/B   prob       A/B   prob
 5 (10%)               2     1%         4     8%         8     1%
15 (80%)               0.67  8%         1.3   64%        2.7   8%
45 (10%)               0.22  1%         0.44  8%         0.89  1%

In this case, although the highest possible value of A differs from its lowest possible value by a factor of 4, and the extreme values of B differ from each other by a factor of 9, the ratio A/B can be as low as 0.22 or as high as 8, a factor-of-36 difference. The uncertainty in the comparison exceeds the uncertainty in either quantity. A is "probably" greater than B, but for four of the nine possibilities, with a total likelihood of 18%, B is in fact greater than A.

Example 2: Continuous

Now suppose that A and B are both lognormally distributed with identical probability density functions but are uncorrelated with each other. Assume that the median value is 10 and the logarithmic standard deviation is 1.0986 (a geometric standard deviation of exactly e^1.0986, or 3). In this case, the distribution of A/B has an exact solution: it too is lognormal, with a median of 1.0 (the median of A divided by the median of B) and a logarithmic standard deviation of 1.554 (the square root of the sum 1.0986^2 + 1.0986^2). On the basis of the median values, we could say that A and B are equal, but that statement would be highly uncertain. In fact, there is a 5% chance that A/B is 12.9 or larger (the median, 1, times exp[(1.554)(1.645)], the upper 95th percentile point) and a corresponding 5% chance that A/B is 0.078 or smaller. Note that although the 90% interval for A alone spans a factor of 37, as does the 90% interval for B alone, the ratio is even more uncertain: 12.9 divided by 0.078 equals about 165. Even though the typical values of the two risks are "equal," it would be incorrect to report that they are equal (or that the net benefit is zero, or that the substitution risk cancels out the primary risk). In fact, this analysis tells us that we cannot confidently determine which quantity is greater, which is quite different from being able to pronounce them equal.
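The numbers in Box 4-5 can be verified directly. The following Python sketch enumerates the discrete joint distribution of Example 1 and samples the lognormal ratio of Example 2; it assumes only the distributions stated in the box.

# Numerical check of the Box 4-5 examples. Example 1 enumerates the discrete
# joint distribution of A/B; Example 2 samples the ratio of two independent
# lognormals (median 10, logarithmic standard deviation 1.0986).
import numpy as np

# Example 1: discrete enumeration
A = {10: 0.1, 20: 0.8, 40: 0.1}
B = {5: 0.1, 15: 0.8, 45: 0.1}
p_b_greater = sum(pa * pb for a, pa in A.items()
                  for b, pb in B.items() if b > a)
print(f"P(B > A) = {p_b_greater:.2f}")           # 0.18, as in the box

# Example 2: continuous simulation
rng = np.random.default_rng(3)
n = 1_000_000
a = rng.lognormal(np.log(10), 1.0986, size=n)
b = rng.lognormal(np.log(10), 1.0986, size=n)
ratio = a / b
p5, p95 = np.percentile(ratio, [5, 95])
print(f"5th pct = {p5:.3f}, 95th pct = {p95:.1f}, "
      f"width factor = {p95 / p5:.0f}")          # ~0.078, ~12.9, ~165

The enumeration reproduces the 18% chance that B exceeds A, and the simulation reproduces the roughly 165-fold spread of the 90% interval for the ratio.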

The balance between detailed probabilistic modeling and scenario and sensitivity evaluation is determined by the purpose of the model and the specific needs of a given risk assessment—another matter that would benefit from guidance. Finally, with respect to the third option of default modeling, the National Research Council (NRC 2007d, pp. 26-27) observed that models of natural systems are necessarily never complete and that in regulatory modeling "assumptions and defaults are unavoidable as there is never a complete data set to develop a model." It also noted that the fundamental uncertainties and limitations, although "critical to understand when using environmental models . . . do not constitute reasons why modeling should not be performed. When done in a manner that makes effective use of existing science and that is understandable to stakeholders and the public, models can be very effective for assessing and choosing amongst environmental regulatory activities and communicating with decision-makers and the public." The present committee agrees.

Committee Observations Regarding the Treatment of Uncertainty

Although EPA has developed methods for addressing parameter uncertainty, particularly for exposure assessment, the remaining challenge is to address uncertainties that are difficult to capture with probability distributions and to provide guidance on the level of detail needed to capture and communicate key uncertainties. Many decision-makers tend to believe that with sufficient resources, science and technology will provide an obvious and cost-effective solution to the problems of protecting human health and the environment. In reality, however, there are many sources of uncertainty, and many uncertainties cannot be reduced or even quantified (see Box 4-6 for a discussion of model and parameter uncertainty). The committee's review of uncertainty reveals that developing quantitative risk estimates in the face of substantial uncertainty, and appropriately characterizing the degree of confidence in the results, are recurring challenges in risk assessment that must be addressed over the coming decade.

As noted above, there are different strategies (or levels of sophistication) for addressing uncertainty. Regardless of which level is selected, it is important to provide the decision-maker with information to distinguish reducible from irreducible uncertainty, to separate individual variability from true scientific uncertainty, to address margins of safety, and to consider benefits, costs, and comparable risks when identifying and evaluating options. To make risk assessment consistent with such an approach, EPA should incorporate formal and transparent treatment of uncertainties in each component of the risk-characterization process and develop guidelines to advise assessors on how to proceed.

The methods of addressing uncertainty vary widely in their implementation, their expected formality, and their cost and time requirements. The options for uncertainty analysis also vary considerably in how readily they can be understood by decision-makers and other parties. Although it is not stressed in the technical literature on uncertainty analysis, it is worth remembering that the product of risk assessment is in the end primarily a communication product (see Chapter 3).
Therefore, perhaps the most appropriate measure of quality in the uncertainty analysis is whether it improves the capacity of the primary decision-maker to make informed decisions in the presence of substantial, inevitable, and irreducible uncertainty. Another important measure of quality is whether it improves the understanding of other stakeholders and thus fosters and supports the broader public interests in the decision-making process. The choice of methods of expressing uncertainty is important and is clearly a design problem that requires careful attention to objectives.

BOX 4-6
Expressing and Distinguishing Model and Parameter Uncertainty

Choosing which uncertainties to leave unaddressed and which to express, and deciding how best to express them, can be daunting tasks. As a simple example of expressing uncertainty, consider two distinct sources of uncertainty in generating an estimate of risk.

• Fundamental causal uncertainty: uncertainty about the existence of critical cause-effect relationships, for example, uncertainty about whether a particular compound causes cancer.
• Uncertainty in the strength of the causal relationship: the degree to which the cause results in the effect, for example, how much cancer is caused by a given dose of the compound.

The latter uncertainty is typically more easily expressed than the former in quantitative terms, with a probability distribution. But it should be noted that there are quantitative aspects to the causal uncertainty (hazard) in that there are statistical thresholds around positive findings from toxicity experiments. The two types of uncertainty can be addressed in a cause-effect model that takes on a value of zero to represent the absence of a causal relationship and nonzero values to characterize the strength of the relationship. With such a representation, the outcomes of the overall model can have a multimodal distribution in which some finite probability at zero represents no causal relationship and a range of nonzero values represents the uncertainty in the strength of the relationship. That could be made more complex while allowing different mathematical forms to represent different possible ways that the effect is caused, for example, whether the compound causes the effect by a mechanism that is linear or nonlinear at low doses.

It is often difficult to assign probabilities to different mathematical relationships. As an alternative, causal scenarios could be used, with each scenario representing a distinct theory of causality. In the example here, one scenario would be no causal relationship, another would be a linear dose-response relationship, and a third would be a nonlinear dose-response relationship. Each scenario would have a corresponding conditional uncertainty analysis: each model would be assumed true, and the likely range of model values in it could be derived. In this scenario approach, the individual uncertainty analyses are much simpler and may be more widely applied and understood. However, decision-making that is directed toward reducing important sources of uncertainty may be misguided by a focus on readily quantifiable uncertainties (for example, How much water is consumed by specific subpopulations?) when the global uncertainty may well be dominated by causal uncertainties whose collective impact is not quantified (for example, Are children disproportionately sensitive to the contaminant? Which of many possible adverse effects does the contaminant cause? Is exposure by inhalation an important contributor to total risk?). Efforts to measure a subset of readily quantifiable uncertainties when fundamental causal uncertainties dominate the overall uncertainty may therefore not be justifiable.

VARIABILITY AND VULNERABILITY IN RISK ASSESSMENT

There are important variations among individuals in a population with respect to susceptibility and exposure. Many of the statistical techniques and general concepts described above in relation to uncertainty analysis are applicable to variability analysis.
For example, probabilistic approaches, such as Monte Carlo methods, can be used to propagate variability throughout all components of a risk assessment; expert elicitation can be used to characterize various percentiles of a distribution; and the level of analytic sophistication should be matched to the problem at hand. But the key difference between uncertainty analysis and variability analysis is that variability can only be better characterized, not reduced, so it often must be addressed with strategies different from those used to address uncertainty. For example, the strategy that a policy-maker uses to address uncertainty about whether a rodent carcinogen is a human carcinogen differs from the strategy to address the variability in cancer susceptibility between children and adults. The latter is a case in which the variability can be represented by a probability distribution, but likely a mixed (bimodal) distribution rather than a standard normal distribution.
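As a toy illustration of such a mixed distribution, the sketch below samples susceptibility from a two-component mixture. The subgroup proportion and the lognormal parameters are invented for the example and are not drawn from this report:

```python
# Sketch: sampling a bimodal (mixture) susceptibility distribution.
# The 20%/80% split and the lognormal parameters are purely illustrative.
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
is_sensitive = rng.random(n) < 0.20        # e.g., a more sensitive subgroup
# Relative susceptibility: sensitive subgroup centered ~10x higher than the rest.
susceptibility = np.where(
    is_sensitive,
    rng.lognormal(mean=np.log(10.0), sigma=0.5, size=n),
    rng.lognormal(mean=np.log(1.0), sigma=0.5, size=n),
)
print(np.percentile(susceptibility, [50, 95, 99]))  # long upper tail from the mixture
```

The resulting population distribution has two modes, and its upper percentiles are driven almost entirely by the sensitive subgroup, which is why a single "standard" distribution can misstate high-end risk.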

This section briefly describes key concepts and methods, EPA's treatment of variability in general, and the basis of the committee's recommendations related to variability in each component of risk assessment.

People differ in susceptibility to the toxic effects of a given chemical exposure because of such factors as genetics, lifestyle, predisposition to diseases and other medical conditions, and other chemical exposures that influence underlying toxic processes. Examples of factors that affect susceptibility are shown in Table 4-1, along with some estimates of increased sensitivity that have been reported in the literature.

TABLE 4-1 Examples of Factors Affecting Susceptibility to Effects of Environmental Toxicants
(Entries give the ratio of the sensitive case to "normal," the case, and the reference.)

Genetic
  10:1 — "While the risk of cancer following irradiation may be elevated up to 100-fold in some heritable cancer disorders a single best estimate of a 10-fold increase in risk is appropriate for the purposes of modeling radiological impact." (ICRP 1998; Tawn 2000)
  >10:1 — Wilson's heterozygotes (about 1% of population) and copper sensitivity. (NRC 2000)

Predisposing exposures
  20:1 — Greater sensitivity to arsenic-induced lung cancer in smokers than in nonsmokers. (CDHS 1990)
  10-20:1 — Greater sensitivity to lung cancer due to radon in smokers than in nonsmokers. (ATSDR 1992)
  20-100:1 — Suggestive evidence that low-iodide female smokers are much more sensitive to perchlorate-induced thyroid hormone disruption than "normal" adults. (Blount et al. 2006)
  10-30:1 — Liver-cancer risk from aflatoxin in those with vs without hepatitis. (Wu-Williams et al. 1992)

Physiologic and pharmacokinetic
  >10:1 — Difference in sensitivity to 4-aminobiphenyl (median vs upper 2 percentile of population) due to physiologic and pharmacokinetic differences (modeled). (Bois et al. 1995)

Lifestage
  5-10:1 — Breast-cancer risk. Radiation exposure of pubescent girls and those before first completed pregnancy vs younger girls. (Bhatia et al. 1996)

Stochastic
  100:1 — Estimated with two-stage clonal model. Increased liver-cancer risk due to stochastic effects (in 0.1% of population compared with median). (Heidenreich 2005)

Overall
  50:1 — Modeled heterogeneity in cancer risk—95th percentile compared with median—from age-specific incidence curves for the two most common human tumors (lung and colorectal). (Finkel 1995a, 2002)
  2-110:1 — Differences between median and 98th percentile in noncancer effects at site of contact; responses differ with end point and toxicant. (Hattis et al. 1999)

The factors are similar to effect modifiers in epidemiology, in that they modify the effect of another factor on a disease. The first column in Table 4-1 should be interpreted with caution, as there are notable differences in the percentiles used to characterize the size of the susceptible population. Susceptibility factors are broadly considered to include any factor that increases (or decreases) the response of an individual to a dose relative to a typical individual in the population. The distribution of disease in a population can result not only from differences in susceptibility but from disproportionate distributions of exposures of individuals and subgroups in a population. Taken together, variations in disease susceptibility and exposure potential give rise to potentially important variations in vulnerability to the effects of environmental chemicals. Figure 4-2 illustrates how variations in exposure result in variations in risk. Individuals may be more vulnerable than others because they have or are exposed to

• Factors that increase biologic sensitivity or reduce resilience to exposures (such as age, pre-existing disease, and genetics).
• Prior or concurrent exposures to substances that increase a person's susceptibility to the effects of additional exposures.
• Factors that contribute to greater potential for exposure, including personal behavior patterns, the built environment, and modified environmental conditions in locations where time is spent (such as community, home, work, and school).
• Social and economic factors that may influence exposure and biologic responses.

Variability can be more important when independent susceptibility factors interact to increase susceptibility. For example, genetic and other predisposing conditions interact in ultraviolet-radiation-induced melanoma. Low DNA-repair capacity, measured in lymphocytes, was not by itself observed to increase the risk of melanoma, but statistically significant interactions and large increases in the risk of melanoma were observed in people with low DNA-repair capacity and either low tanning capacity or dysplastic nevi (Landi et al. 2002).

[FIGURE 4-2 Factors contributing to variability in risk in the population. The figure shows variability in susceptibility from endogenous factors (age, gender, genetics, pre-existing disease), variability in susceptibility from exogenous factors (exposures to other agents), and variability in exposure potential combining to produce the overall variability in risk relative to a median or baseline risk for a population.]

Alcohol consumption, obesity, and diabetes can affect the expression of metabolizing enzymes, such as CYP2E1, whose expression is also under the influence of genetic factors (Ingelman-Sundberg et al. 1993, 1994; Micu et al. 2003; Sexton and Hattis 2007). Interactions are expected to be common but remain unknown in many diseases caused or exacerbated by environmental chemicals.

Environmental Protection Agency's Approach to Variability in Health-Effects Assessments

EPA's approach to variability assessment is described in its recent Risk Assessment Principles and Practices: Staff Paper (EPA 2004a) and guidelines. The staff paper emphasizes that EPA focuses on characterizing variability in exposure, particularly high-end exposures, using as an example the maximally exposed individual in its hazardous air pollutant program. The committee observes that over the last several years some EPA programs have advanced considerably in their efforts to characterize variability in exposure. However, variability in susceptibility and vulnerability has received less detailed evaluation in most EPA health-effects assessments, although there are notable exceptions, such as lead, ozone, and sulfur oxides. EPA efforts are considered, and options for further improvements presented, below.

To address variability in vulnerability to noncancer end points, EPA assumes population-threshold dose-response behavior and assigns uncertainty (adjustment) factors. EPA also endorses such an approach for low-dose nonlinear cancer end points but has been inconsistent in whether and how it is applied. For human-to-human variability in noncancer end points, the default "uncertainty" factor is typically 10, but it can be reduced or increased with sufficient supporting data, often by partitioning it into pharmacokinetic and pharmacodynamic factors. The agency has done that in a few assessments based on human data. Only six cases in the Integrated Risk Information System (IRIS) database rely on human occupational data; of these, three had a human intraspecies factor of 10, two had a factor of 3, and one, beryllium, had a factor of 1 because it was assumed that the most sensitive group was included in the occupational study. Thus, in all but four cases in IRIS a default human intraspecies factor of 10 was assumed, and in no case was a value greater than 10 used (EPA 2007a).

The 2005 Guidelines for Carcinogen Risk Assessment (EPA 2005b) recognize a number of the factors in Table 4-1 as contributing to cancer susceptibility. Indeed, the guidelines call for the derivation of "separate estimates for susceptible populations and life stages so that these risks can be explicitly characterized" (p. 3-27). The guidelines also lay out a number of reasons why risk estimates derived from occupational studies may not be representative of the general population, including the healthy-worker effect, lack of representation of some subpopulations (for example, fetuses and the young), and underrepresentation of others (for example, women). Guidance for addressing the generalizability of risk estimates derived from occupational studies to the general population is not provided. Similarly, the 2005 guidelines point out that animal studies are conducted in relatively homogeneous groups, in contrast with the heterogeneous human population to which the study results are applied.
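The partitioning of the default intraspecies factor described above can be made concrete with a small sketch. Splitting the factor of 10 into roughly equal pharmacokinetic and pharmacodynamic halves (10^0.5 ≈ 3.16 each) is common practice; the point of departure and the data-derived pharmacokinetic value below are hypothetical:

```python
# Sketch: reference-dose arithmetic with a partitioned intraspecies factor.
# The point of departure (POD) is hypothetical; the 10 = 3.16 x 3.16 split
# reflects common practice for dividing human variability into
# pharmacokinetic (PK) and pharmacodynamic (PD) components.

pod = 5.0                  # hypothetical point of departure, mg/kg-day
uf_interspecies = 10.0     # animal-to-human extrapolation (default)
uf_pk = 10 ** 0.5          # human PK variability (~3.16)
uf_pd = 10 ** 0.5          # human PD variability (~3.16)

rfd_default = pod / (uf_interspecies * uf_pk * uf_pd)
# With chemical-specific PK data, uf_pk might be replaced by a measured spread,
# here assumed (hypothetically) to be a factor of 2.
rfd_with_pk_data = pod / (uf_interspecies * 2.0 * uf_pd)

print(f"default RfD:                 {rfd_default:.3f} mg/kg-day")
print(f"RfD with chemical-specific PK: {rfd_with_pk_data:.3f} mg/kg-day")
```

The point of the partition is that data can replace one half of the default without disturbing the other, which is how the agency has adjusted the factor in the few human-data-based assessments noted above.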
To address variability in susceptibility, the 2005 guidelines (EPA 2005b) call for

• Development of a separate risk estimate for those who are susceptible "when there is an epidemiologic study or animal bioassay that reports quantitative results for susceptible individuals" (p. 3-28).
• Adjustment of the general-population estimate for susceptible individuals on the basis of risk-related parameters, for example, pharmacokinetic modeling using pharmacokinetic parameters corresponding to susceptible groups compared with the general population.

• Use of general information, in the absence of agent-specific information about early life-stage susceptibility, as outlined in Supplemental Guidance for Assessing Susceptibility from Early-Life Exposure to Carcinogens (EPA 2005a) and whatever updates follow.

Committee Observations and Comments on Environmental Protection Agency's Approach to Variability

The guidelines provide a useful starting point, but given the agency's limited experience in implementing the 2005 guidelines, it is unclear how EPA practice will develop to account for variability. The committee has some concerns based on the guideline language and recent EPA assessments and draft guidance (EPA 2004a, 2005a,b).

With regard to life stages, the 2005 guidelines note that susceptibility naturally differs among life stages, and the committee agrees that this should be given formal consideration. In an example of late and early life-stage susceptibilities, repair of ultraviolet-damaged DNA declines at 1% per year in subjects 20-60 years old (Grossman 1997), but misrepair in those overexposed when very young has a much longer time to be manifested as cancer. The 2005 guidelines and the supplemental guidance that developed generic factors for early-life susceptibility were steps in the right direction. The supplemental guidance provides weighting factors for exposures to mutagenic compounds in the early postnatal and juvenile period. However, in utero periods and nonmutagenic chemicals were not covered, and in practice EPA treats the prenatal period as devoid of sensitivity to carcinogenicity, although it has funded research to explore this issue (Hattis et al. 2004, 2005). That stands in contrast with the language in the 2005 guidelines: "Exposures that are of concern extend from conception through adolescence and also include pre-conception exposures of both parents" (EPA 2005b, p. 1-16). EPA needs methods for explicitly considering, in cancer risk assessment, in utero exposure and chemicals that do not meet the threshold of evidence that the agency is considering for judging whether a chemical has a mutagenic mode of action (EPA 2005b). Special attention should be given to hormonally active compounds and to genotoxic chemicals that do not meet the threshold-of-evidence requirements.

The committee encourages EPA to quantify more explicitly variations in exposure and in dose-response relationships. The tiered approach to variability assessment discussed in the 2005 guidelines, with multiple risk descriptions for different susceptible subgroups, is a step in the right direction but falls short of what is needed. The guidelines embrace a default of no variability in the absence of chemical-specific evidence to the contrary. When there is evidence, the focus is on differences between groups. It is important at a minimum to address people who fall into groups that have identified susceptibility. But the guidelines adopt the rather narrow view that variation comes solely from the identified factors that are used to "group" people (for example, a polymorphism) and that are established as important for the chemical under study, not from other factors, such as age, ethnic group, socioeconomic status, or other attributes that affect individuals and only incidentally make them part of a new "group." It will also be important, however, to describe and estimate variability among individuals and the extent of individual differences.
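As an illustration of the early-life weighting factors mentioned above, the sketch below applies the supplemental guidance's default age-dependent adjustments for mutagenic carcinogens (10-fold before age 2 and 3-fold from ages 2 to 16); the slope factor and dose are hypothetical:

```python
# Sketch: age-dependent adjustment factors (ADAFs) from EPA's 2005
# supplemental guidance for early-life exposure to mutagenic carcinogens.
# The slope factor and daily dose below are hypothetical.

slope_factor = 0.05        # hypothetical cancer slope factor, (mg/kg-day)^-1
dose = 0.001               # hypothetical lifetime-constant dose, mg/kg-day
lifetime_years = 70

# (years in age window, default ADAF): 10x before age 2, 3x for ages 2-16, 1x after.
age_windows = [(2, 10.0), (14, 3.0), (54, 1.0)]

risk = sum(slope_factor * dose * adaf * years / lifetime_years
           for years, adaf in age_windows)
unadjusted = slope_factor * dose
print(f"adjusted lifetime risk:   {risk:.2e}")
print(f"unadjusted lifetime risk: {unadjusted:.2e}")  # adjustment raises risk ~66%
```

Note that, as discussed above, no comparable default exists for in utero exposure or for nonmutagenic chemicals, which are effectively assigned a weight of zero.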
There is therefore a need for a nonzero default to address the variation in the population expected in the absence of chemical-specific data. The reliance on agent-specific data for all but the early-life assessments of susceptibility is problematic: because of the lack of data, formally addressing variability in cancer risk assessment is feasible only for the most data-rich compounds. That echoes the concern raised earlier about the need to develop more simplified approaches for uncertainty analysis that are tailored to the problems under study: more generalized approaches must be developed to address variability in cancer risk to avoid

analyses in which uncharacterized sources of variability are implicitly presumed to have zero effect on individual and population risk. In Chapter 5, the committee proposes an alternative framework for both cancer and noncancer end points that accounts more explicitly for variations in susceptibility and background disease processes and that includes approaches for compounds without substantial data. The framework provides the needed quantitative descriptions of variability in risk for both cancer and noncancer end points.

UNCERTAINTY AND VARIABILITY IN SPECIFIC COMPONENTS OF RISK ASSESSMENT

Each component of a risk assessment includes uncertainty and variability, some explicitly characterized and some unidentified. For each component, current approaches used by EPA to characterize uncertainty and variability are discussed below, and potential improvements are considered.

Hazard Identification

Hazard assessment makes a classification regarding toxicity, for example, whether a chemical is "carcinogenic to humans" or "likely to be" (EPA 2005b), is a neurotoxicant (EPA 1998), or is a potential reproductive hazard (EPA 1996). This gives rise to both quantitative and qualitative uncertainties in hazard characterization. Hazard-identification activities at EPA and other agencies (such as the International Agency for Research on Cancer) focus on protocols for making consistent and transparent classifications but not on a formal treatment of uncertainty. In contrast with the other components of risk assessment, the hazard-identification stage often involves uncertainty about the existence of critical cause-effect relationships, which leads to categorically distinct classifications. This type of uncertainty is distinct from uncertainty about such factors as dose-response or exposure-source relationships, which have an inherent confidence interval. In this case, one element of an uncertainty analysis involves the issue of misclassification, that is, assigning the wrong outcome to a substance.

EPA and the International Agency for Research on Cancer (IARC) have relied on weight-of-evidence classifications (IARC: 1, 2A, 2B, 3, and 4; EPA: "likely to be carcinogenic to humans") to express uncertainty in hazard classifications. Because hazard assessment typically involves a statement or classification regarding the potential for harm, the uncertainty in hazard is not captured well by probability distributions. A formal analysis of hazard uncertainty often requires expert elicitation and discrete probabilities to communicate uncertainty. Another option is the use of fuzzy sets (Zadeh 1965) or possibility theory (Dubois and Prade 2001), which is a special case of fuzzy-set theory. Fuzzy sets and possibility theory were introduced to represent and manipulate data that have "membership" uncertainty. An element of a fuzzy set, such as a toxic characteristic, has a grade of membership, for example, membership in the set "carcinogen" or "not carcinogen." The grade of membership differs in concept from probability: membership is a quantitative, noncommittal measure of imperfect knowledge. The advantage of these methods is that they can characterize nonrandom uncertainties arising from vagueness or incomplete information and give an approximate estimate of the uncertainties.
The limitations of fuzzy methods are that they (1) cannot provide a precise estimate of uncertainty, only an approximate one, (2) might not be applicable to situations involving uncertainty resulting from random sampling error, and (3) create difficulties in communication because set memberships or possibilities do not necessarily add to 1. The committee does not endorse any of these specific methods to address uncertainty in hazard assessment but notes in Chapter 3 the need to consider how fine distinctions between the labels describing uncertainty in the weight-of-evidence classification (for example, known vs likely) affect the overall use of risk information.
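As a purely hypothetical illustration of the expert-elicitation-with-discrete-probabilities option mentioned above (again, the committee endorses no specific method, and the experts, weights, and values below are invented), elicited judgments over hazard classes might be pooled as follows:

```python
# Sketch: pooling hypothetical expert-elicited probabilities over discrete
# hazard classes. Experts, weights, and probabilities are all invented.

classes = ["carcinogenic", "likely carcinogenic", "inadequate evidence"]

# Each expert assigns a probability to each class (each row sums to 1).
expert_judgments = [
    [0.60, 0.30, 0.10],
    [0.40, 0.40, 0.20],
    [0.70, 0.25, 0.05],
]
weights = [0.5, 0.25, 0.25]   # e.g., weights derived from calibration questions

pooled = [sum(w * judgment[i] for w, judgment in zip(weights, expert_judgments))
          for i in range(len(classes))]
for label, p in zip(classes, pooled):
    print(f"P({label}) = {p:.2f}")
```

A pooled discrete distribution of this kind expresses classification uncertainty directly, rather than collapsing it into a single categorical label.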

Emissions

The first key step in linking pollutant sources to impacts in risk assessments, particularly those used to discriminate among various control options, involves characterizing emissions by relevant sources, both under baseline conditions and with implementation of controls. In a few situations (for example, in evaluating sulfur dioxide emissions from power plants in the Acid Rain Program), continuous monitoring data are readily available and can be used to characterize baseline emissions with little uncertainty and to characterize the benefits of controls with relatively low uncertainty. But in most cases there are few source-specific emission measurements, so risk assessors must rely on interpretations based on limited data and emission models.

For example, EPA provides emission factors for stationary sources through the AP-42 database (EPA 2007b). Typically, information on source configuration, fuel composition, control technologies, and other items is used to determine an emission factor based on extrapolation from a limited number of field measurements and known characteristics of the fuel and technology. Uncertainty is included through an emission-factor quality rating, scaled from A to E, that is not quantitatively interpretable and conflates uncertainty and variability. For example, an emission-factor quality rating of A (excellent) is awarded when data are taken from many randomly selected facilities in the source category. But the degree of uncertainty related to measurement techniques is ignored, and the variability among facilities is not carried forward to the overall risk characterization. Because information on variability is not retained and uncertainty is not quantified, EPA in effect treats emission estimates as known quantities in risk assessments. That leads to multiple problems, including mischaracterization of total uncertainty or variability in the assessment and an inability to determine whether improvements in emission estimation are necessary to better inform risk-management decisions (that is, within a value-of-information context). More generally, the AP-42 database has many entries that have not been updated in decades, which raises the question of whether the emission factors accurately capture current technologies (and adds an unacknowledged source of uncertainty). A final issue is the difficulty of estimating how emissions will change once a risk-management decision is applied; this requires an assessment of the performance of the regulated parties with regard to compliance and noncompliance.

Many risk assessments in EPA use emission models other than those found in AP-42, but most emission estimates suffer from similar issues related to limited validation and unacknowledged uncertainty and variability. For example, traffic emissions are characterized with models, such as MOBILE6, in which the estimates are derived from traffic-flow data and calibrated with dynamometer studies on specific vehicles. However, those studies may not represent true driving-cycle conditions, and estimates for some pollutants (such as particulate matter) may be more uncertain than for others.
In spite of the potentially larger uncertainties associated with emission models, in such analyses as the regulatory impact analysis of nonroad diesel emissions (EPA 2004b), the benefits of controls are presented with up to six significant digits of precision, and no uncertainty is incorporated into the benefits analysis; indeed, in a table titled “Primary Sources of Uncertainty in Benefits Analysis” (EPA 2004b, Table 9A-17), emissions are not even mentioned as a source of uncertainty. EPA and other practitioners should take care to present data with an appropriate number of significant figures, no greater than the smallest number of significant figures reasonably available in the input data, and should formally address emissions as a key source of uncertainty.
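A minimal sketch of the significant-figures point (all values hypothetical): propagating even a modest lognormal uncertainty in an emission factor shows why reporting six significant digits is unwarranted.

```python
# Sketch: propagating hypothetical emission-factor uncertainty.
import numpy as np

rng = np.random.default_rng(2)
activity = 1.2e4          # hypothetical fuel use, Mg/year (assumed well known)
ef_median = 0.35          # hypothetical emission factor, kg pollutant per Mg fuel
ef_gsd = 1.5              # hypothetical geometric standard deviation

emissions = activity * rng.lognormal(np.log(ef_median), np.log(ef_gsd), size=100_000)
low, mid, high = np.percentile(emissions, [5, 50, 95])
# With a 90% interval spanning roughly a factor of 4, only 1-2 significant
# figures of the central estimate are meaningful.
print(f"emissions: {mid:.2g} kg/year (90% interval {low:.2g}-{high:.2g})")
```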

For emission characterization, the committee sees an important opportunity for EPA to address variability and uncertainty in emissions explicitly and quantitatively. Doing so will require EPA to evaluate existing models to better characterize the uncertainty and variability of individual emission estimates. The committee recognizes that site-specific emissions data are lacking for many situations, resulting in continued reliance on emission models, but it encourages EPA to pursue emission-evaluation studies when plausible and to make more regular refinements in emission-model structures.

Transport, Fate, and Exposure Assessment

Exposure assessment is the process of measuring and modeling the magnitude, frequency, and duration of contact between the potentially harmful agent and a target population, including the size and characteristics of that population (IPCS 2000; Zartarian et al. 2005). For risk assessments, exposure assessment should characterize the sources, routes, and pathways linking source to dose, and the attendant uncertainties. It is common for assessors to pose exposure scenarios to define plausible pathways for human contact. Recognition of the multiple possible exposure pathways highlights the importance of a multimedia, multipathway exposure framework, in which the omission of key exposure pathways (potentially because of data limitations) can contribute an exposure-assessment uncertainty that is often difficult to quantify formally.

Given the framework of exposure assessment in the context of risk assessment, critical inputs include emissions data (described above), fate and transport models to characterize environmental concentrations (both indoors and outdoors), and methods for estimating human exposure given assumed or estimated concentrations. It is also necessary to relate exposure to intake and intake to dose. Further analytic efforts related to modeling human dose are considered later.

The number of transport, fate, and exposure models in active use in EPA or elsewhere is too large to evaluate them individually or to make general statements about their utility and reliability (see the Council for Regulatory Environmental Modeling Web site for a current list [EPA 2008]). Transport, fate, and exposure models can vary substantially in their level of detail, geographic scope, and geographic resolution. Some models are based on environmental parameters that are "archetypal" and provide values that are typical of some regions or populations but not representative of any specific geographic area. These models are used to understand the likely behavior of pollutants as a function of basic chemical properties (Mackay 2001; McKone and MacLeod 2004) and are typically used for comparative assessments of pollutants and for interpreting how partitioning properties and degradability determine transport and fate. Site-specific models apply to releases at specific locations and often track pollutant transport with much more spatial and temporal detail than regional mass-balance models. They are used in a broad array of decision-support activities, including screening-level assessments; setting goals for air emissions, water quality, and soil-cleanup standards; assessing the regional and global fate of persistent organic chemicals; and assessing life-cycle impacts. There have been many more performance evaluations of transport, fate, and exposure models than of emission models (see, for example, Cowan et al. 1995; Fenner et al. 2005).
Although their reliability can vary widely with the chemicals considered and the spatial and temporal scale of application, a large body of literature, methods, and software is available to characterize their uncertainty and sensitivity when they are used in risk assessments. A critical insight that should be recognized by EPA and other practitioners is that there is no "ideal" transport, fate, or exposure model that can be used under all circumstances.
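As an illustration of the screening-level ("archetypal") end of this spectrum, a one-compartment mass-balance sketch is shown below; the compartment volume, emission rate, and loss-rate constant are hypothetical:

```python
# Sketch: a one-box, steady-state mass-balance fate model of the kind used
# for screening-level assessment. All parameter values are hypothetical.

emission = 2.0e3      # g/day released to a well-mixed compartment
volume = 1.0e9        # m^3 (e.g., a regional air or water volume)
k_loss = 0.15         # 1/day, combined degradation + advection loss rate

# At steady state, dC/dt = E/V - k*C = 0, so C = E / (k * V).
concentration = emission / (k_loss * volume)
print(f"steady-state concentration: {concentration:.2e} g/m^3")

# Characteristic response time ~ 1/k: how fast the box approaches steady state.
print(f"characteristic time: {1 / k_loss:.1f} days")
```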

Some models may be considered to have greater fidelity than others, given the degree to which they capture theoretical constructs and have been evaluated against field measurements, but this does not necessarily imply that the more detailed model should be used under all circumstances. A model with lower resolution (and more uncertainty) but more timely outputs may have greater utility in some decision contexts, especially if the uncertainty can be reasonably characterized to determine its influence on the decision process. Similarly, a model that is highly uncertain with respect to maximum individual exposure but characterizes population-average exposures well may be suitable if the risk-management decision is driven by the latter. That reinforces a recurring theme of this report regarding the selection of appropriate risk-assessment methods in light of the competing demands and constraints described in Chapter 3.

With respect to human exposure modeling, EPA has placed increasing emphasis over the last 25 years on quantitative characterization of uncertainty and variability in its exposure assessments. Exposure assessments and exposure models have evolved from simple assessments that addressed only conditions of maximum exposure to assessments that focus explicitly on exposure variation in a population with a quantitative uncertainty analysis. For example, EPA guidelines for exposure assessment issued in 1992 (EPA 1992) called for both high-end and central-tendency estimates for the population: the high end was considered to be what could occur for the 90th percentile or higher of exposed people, and the central tendency might represent an exposure near the median or mean of the distribution of exposed people. Through the 1990s, there was increasing emphasis on an explicit and quantitative characterization of the distinction between interindividual variability and uncertainty in exposure assessments. There was also growing interest in and use of probabilistic simulation methods, such as those based on Monte Carlo or closely related methods, as the basis of estimation of differences in exposure among individuals or, in some cases, of the uncertainty associated with any particular exposure estimate. That effort has been aided by a number of comprehensive studies in the United States and Europe that have used individual personal monitoring in conjunction with ambient and indoor measurements (Wallace et al. 1987; Özkaynak et al. 1996; Kousa et al. 2001, 2002a,b). Expanded use of biomonitoring will provide an opportunity both to evaluate and to expand the characterization of exposure variability in human populations.

The committee anticipates expanded efforts by EPA to quantify uncertainty in exposure estimates and to separate uncertainty and population variability in these estimates. Decisions about controlling exposures are typically based on protecting a particular group of people, such as a population or a highly exposed subpopulation (for example, children), because different individuals have different exposures (NRC 1994). The transparency afforded by probabilistic characterization and separation of uncertainty and variability in exposure assessment offers potential benefits for increasing common understanding as a basis of greater convergence in methodology (IPCS 2006).
To date, however, probabilistic exposure assessments have focused on the uncertainty and variability associated with variables in an exposure-assessment model. Missing from the EPA process are guidelines for addressing how model uncertainty and data limitations affect overall uncertainty in exposure assessment. In particular, probabilistic methods have provided estimates of exposure to a compound at, for example, the 99th percentile of variability in the population but have often not considered how model uncertainty affects the reliability of the estimated percentiles. That is an important subject for improvement in future efforts. EPA should also strive for continual enhancement of the databases used in exposure modeling, focusing attention on evaluation (that is, personal exposure measurements vs predicted exposures) and applicability to subpopulations of interest. Such documents as the Exposure Factors Handbook (EPA 1997d) provide crucial data for such analyses and should be regularly revised to reflect recommended improvements.
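A minimal sketch of the variability-only probabilistic exposure calculation discussed above, using the standard average-daily-dose form; all input distributions are hypothetical, and chronic exposure is assumed so that the frequency and duration terms cancel:

```python
# Sketch: Monte Carlo variability analysis of average daily dose,
# ADD = C * IR / BW (chronic case, EF*ED/AT = 1). Distributions hypothetical.
import numpy as np

rng = np.random.default_rng(3)
n = 100_000
conc = rng.lognormal(np.log(2.0), 0.6, n)      # mg/L in drinking water
intake = rng.lognormal(np.log(1.4), 0.4, n)    # L/day water ingestion
bw = rng.normal(70.0, 12.0, n).clip(min=30)    # kg body weight

add = conc * intake / bw                       # mg/kg-day
print(np.percentile(add, [50, 90, 99]))        # central tendency and high end
```

Note that this captures only interindividual variability; as the text emphasizes, it says nothing about how model uncertainty affects the reliability of the reported percentiles.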

Dose Assessment

Assessment of doses of chemicals in the human population relies on a wide array of tools and techniques with varied applications in risk assessment. Monitoring and modeling approaches are used for dose assessment, and important uncertainties and variability are linked to them. Many of the above conclusions for exposure assessment are applicable to dose assessment, but with the recognition that there will be greater variability in doses than in exposures across the population, as well as greater uncertainty in characterizing those doses.

For monitoring, there have been limited but important efforts in recent years to develop comprehensive databases of tissue burdens of chemicals in representative samples of the human population (for example, the National Health and Nutrition Examination Survey [NHANES], the Center for Health Assessment of Mothers and Children of Salinas, and the National Children's Study). There are also efforts to conduct systematic biomonitoring programs in the European Union and in California. Biomonitoring data can provide valuable insight into the degree of variability in internal doses in the population, and analyses of these data can help to determine factors that contribute to dose variability or that modify the exposure-dose relationship. But there are limits to how much variability can be assessed from these data. For example, NHANES is a database of representative samples for the entire U.S. population but does not capture geographic subgroups. A discussion of the limitations of NHANES can be found in NRC (2006a). Even with these emerging biomonitoring data, it remains a challenge to assess the contribution of a single source or set of sources to measures of internal dose, and this can limit the risk-management applicability of the data. In addition, there is the challenge of interpreting what the biomonitoring data mean in terms of potential risk to human health (NRC 2006a). Issues related to the value of data obtained through biomonitoring programs are considered in more detail in Chapter 7 in the context of cumulative risk assessment.

Dose modeling is commonly based on physiologically based pharmacokinetic (PBPK) models. PBPK models are used as a means of addressing species, route, and dose-dependent differences in the ratio of tissue-specific dose to applied dose and thus serve as an alternative to default assumptions for extrapolation that link dose to outcome. PBPK models may address some of the uncertainty associated with extrapolating dose-response data from an animal model to humans, but they often fail to capture fully the variability of pharmacokinetics and dose in human populations. Toxicologic research can be used to suggest the structure of PBPK models, and sensitive subpopulations or differing sensitivities within the population might be described in terms of some attributes through pharmacokinetic modeling (see Chapter 5, 4-aminobiphenyl case study).

A number of issues related to uncertainty and variability in pharmacokinetic models were addressed in a 2006 workshop (EPA 2006a; Barton et al. 2007). Because the present committee determined that the workshop was a timely and comprehensive review of the issues, its key findings are summarized here.
The 2006 workshop considered both short-term and long-term goals for incorporating uncertainty and variability into PBPK models. In particular, Barton et al. (2007) reported the following short-term goals: multidisciplinary teams to integrate deterministic and nondeterministic statistical models; broader use of sensitivity analyses, including those of structural and global (rather than local) parameter changes; and enhanced transparency and reproducibility through more complete documentation of

model structures and parameter values, the results of sensitivity and other analyses, and supporting, discrepant, or excluded data. The longer-term needs reported by Barton et al. (2007) included theoretical and practical methodologic improvements for nondeterministic and statistical modeling; better methods for evaluating alternative model structures; peer-reviewed databases of parameters and covariates and their distributions; expanded coverage of PBPK models for chemicals with different properties; and training and reference materials, such as case studies, tutorials, bibliographies and glossaries, model repositories, and enhanced software.

Many recent examples of PBPK models applied in toxicology have been for volatile organic chemicals and have used similar structures. PBPK models are needed for a broader array of chemical species (for example, from low to high volatility and from low to high log Kow, where Kow is the octanol-water partition coefficient, the ratio of the concentration of a chemical in octanol to that in water at equilibrium and at a specified temperature). Methods for rapidly comparing alternative model structures with available data would facilitate testing of new structural ideas, provide perspective on model uncertainty, and help to address chemicals on which data are sparse. Ultimately, the recognition that models of various degrees of complexity may all describe the available data reasonably well will encourage the acquisition of data to differentiate between competing models.

Mode of Action and Dose-Response Models

Many of the most substantial issues related to both uncertainty and variability arise in the realm of dose-response assessment for both cancer and noncancer end points. Historically, risk assessments for carcinogenic end points have been conducted very differently from noncancer risk assessments. In reviewing the issue of mode of action, the committee recognized a clear and important need for a consistent and unified approach to dose-response modeling. For carcinogens, it has generally been assumed that there is no threshold of effect, and risk assessments have focused on quantifying potency, which is the low-dose slope of the dose-response relationship. For noncancer risk assessment, the prevailing assumption has been that homeostatic and other repair mechanisms in the body result in a population threshold or low-dose nonlinearity that leads to inconsequential risk at low doses, and risk assessments have focused on defining the reference dose or concentration that is sufficiently below the threshold or threshold-like dose to be deemed safe ("likely to be without an appreciable risk of deleterious effects") (EPA 2002b, p. 4-4). Noncancer risk assessments simply compare observed or predicted doses with the reference dose to yield a qualitative conclusion about the likelihood of harm.

The committee finds substantial deficiencies in both approaches with respect to core concepts and the treatment of uncertainty and variability. Cancer risk assessments often provide estimates of the population burden of disease or the fraction of the population likely to be above a defined risk level. But there is no explicit treatment of uncertainty associated with such factors as interspecies extrapolation, high-dose to low-dose extrapolation, and the limited ability of dose-response studies to capture all relevant information. Moreover, there is essentially no consideration of variations in susceptibility and vulnerability in the population other than consideration of the increased susceptibility of infants and children.
The noncancer risk-assessment paradigm remains one of defining a reference value with no formal quantification of how disease incidence varies with exposure. Human heterogeneity is accommodated with a "default" factor, and it is often unclear when the evidence is sufficient to deviate from such defaults. The structure of the reference dose also omits any formal quantification of uncertainty.

And the current approach does not address compounds for which thresholds are not apparent (for example, fine particulate matter and lead) or not expected (for example, in the case of background additivity). To address the issue of improving dose-response modeling, both from the perspective of uncertainty and variability characterization and in the context of new information on mode of action, the committee has developed a unified and consistent approach to dose-response modeling (Chapter 5).

Beyond toxicologic studies of chemicals, there are multiple examples in which uncertainty and variability have been treated more explicitly. For example, two National Research Council reports prepared by the Committee on Biological Effects of Ionizing Radiation (NRC 1999, 2006b) have provided examples of addressing dose-response uncertainty for ionizing radiation. Both the BEIR VI report dealing with radon (NRC 1999) and the BEIR VII report dealing with low linear energy transfer (LET) ionizing radiation (NRC 2006b) provided a quantitative analysis of the uncertainties associated with estimates of radiation cancer risks.

More generally, epidemiologic studies provide enhanced mechanisms for characterizing uncertainty and variability, sometimes providing information that is more relevant for human health risk assessment than dose-response relationships derived by extrapolating laboratory-animal data to humans. Emerging disciplines, such as health tracking, molecular epidemiology, and social epidemiology, provide opportunities to improve resolution in linking exposure to disease, which may enhance the ability of epidemiologists to uncover both main effects and effect modifiers and thereby provide greater insight into human heterogeneity in response. A more detailed discussion of the role of these emerging epidemiologic disciplines from the perspective of cumulative risk assessment is provided in Chapter 7.

An additional consideration in the treatment of uncertainty and variability in dose-response modeling is how to combine information across multiple publications, especially in the context of epidemiologic evidence. Various meta-analytic techniques have been employed both to provide pooled central estimates with uncertainty bounds and to evaluate factors that could explain variability in findings across studies (Bell et al. 2005; Ito et al. 2005; Levy et al. 2005). Although these approaches will not be applicable in most contexts, because they require a sufficiently large body of epidemiologic literature to allow pooled analyses, the methods can be used to reduce the uncertainty associated with selection of a single epidemiologic study for a dose-response function, to characterize the uncertainty associated with application of a pooled estimate to a specific setting, and to determine factors that contribute to variability in dose-response functions. EPA should consider these and other meta-analytic techniques, especially for risk-management applications tied to specific geographic areas.

Principles for Addressing Uncertainty and Variability

EPA and policy analysts are not constrained by a lack of methods for conducting uncertainty analysis but can be paralyzed by the absence of guidance on what levels of detail and rigor are needed for a particular risk assessment.
That creates situations that splinter the parties involved into those who favor application of the most sophisticated methods in all cases and those who would rather ignore uncertainty completely and simply rely on point estimates of parameters and defaults for all models. But risk assessment often requires something in between. To confront the issue, EPA should develop guidance for conducting uncertainty and variability analyses and for establishing the level of detail required in them for various risk assessments. To foster optimal treatment of variability in its assessments, the agency could develop general guidelines or further supplemental guidance to its health-effects (for example, EPA 2005a) and exposure guidance used in its various programs. To support the effort, the committee offers the principles presented in Box 4-7.

BOX 4-7
Recommended Principles for Uncertainty and Variability Analysis

1. Risk assessments should provide a quantitative, or at least qualitative, description of uncertainty and variability consistent with available data. The information required to conduct detailed uncertainty analyses may not be available in many situations.
2. In addition to characterizing the full population at risk, attention should be directed to vulnerable individuals and subpopulations that may be particularly susceptible or more highly exposed.
3. The depth, extent, and detail of the uncertainty and variability analyses should be commensurate with the importance and nature of the decision to be informed by the risk assessment and with what is valued in a decision. This may best be achieved by early engagement of assessors, managers, and stakeholders in the nature and objectives of the risk assessment and terms of reference (which must be clearly defined).
4. The risk assessment should compile or otherwise characterize the types, sources, extent, and magnitude of variability and substantial uncertainties associated with the assessment. To the extent feasible, there should be homologous treatment of uncertainties among the different components of a risk assessment and among different policy options being compared.
5. To maximize public understanding of and participation in risk-related decision-making, a risk assessment should explain the basis and results of the uncertainty analysis with sufficient clarity to be understood by the public and decision-makers. The uncertainty assessment should not be a significant source of delay in the release of an assessment.
6. Uncertainty and variability should be kept conceptually separate in the risk characterization.

The principles in Box 4-7 are consistent with and expand on the "Principles for Risk Analysis" originally established in 1995, noted as useful by the National Research Council (NRC 2007c), and recently re-released by the Office of Management and Budget and the Office of Science and Technology Policy (OMB/OSTP 2007). They are derived from the more detailed discussions above. In particular, they are based on the following issues:

• Qualitative thinking about uncertainty, which may reveal that, despite the uncertainty, one can have confidence in which risk-management option to pick without quantifying further.
• The need to ensure that uncertainty and variability are addressed in such a way that the risk is not underestimated.
• Characterization of a variety of risks and their corresponding confidence intervals.

Depending on the risk-management options, a quantitative treatment of uncertainty and variability may be needed to differentiate among the options for making an informed decision. Uncertainty analysis is important for both data-rich and data-poor situations, but confidence in the analysis will vary according to the amount of information available. Because EPA resources are limited, it is important to match the level of effort to the extent to which a more detailed analysis may influence an important decision. If an uncertainty analysis will not substantially influence outcomes of importance to the decision-maker, resources should not be expended on a detailed uncertainty analysis (for example, two-dimensional Monte Carlo analysis).
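For reference, the structure of the two-dimensional Monte Carlo analysis mentioned above separates an outer loop over uncertain inputs from an inner loop over interindividual variability. The sketch below uses hypothetical distributions and is meant only to show why this design is comparatively resource-intensive:

```python
# Sketch: two-dimensional (nested) Monte Carlo separating uncertainty from
# variability. All distributions are hypothetical.
import numpy as np

rng = np.random.default_rng(4)
n_uncertainty, n_variability = 200, 5_000
p95_estimates = []

for _ in range(n_uncertainty):
    # Outer loop: draw one realization of the uncertain model inputs.
    median_exposure = rng.lognormal(np.log(1.0), 0.3)   # uncertain central tendency
    gsd = rng.uniform(1.5, 3.0)                         # uncertain population spread
    # Inner loop: simulate interindividual variability given those inputs.
    population = rng.lognormal(np.log(median_exposure), np.log(gsd), n_variability)
    p95_estimates.append(np.percentile(population, 95))

# Output: uncertainty about a variability statistic, i.e., a confidence band
# on the 95th percentile of exposure rather than a single number.
print(np.percentile(p95_estimates, [5, 50, 95]))
```

The output keeps uncertainty and variability conceptually separate (principle 6 in Box 4-7), but at the cost of running the inner population simulation once per uncertainty realization.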
In developing guidance for uncertainty analysis, EPA first should develop guidelines that “screen out” risk assessments that focus on risks that do not warrant the use of substantial analytic resources. Second, the guidelines should

describe the level of detail that is warranted for "important" risk assessments. Third, the analysis should be tailored to the decision-rule outcome by addressing what is valued in a decision; for example, if the decision-maker is interested only in the 5% most-exposed or most at-risk members of a population, there is little value in structuring an uncertainty analysis that focuses on uncertainty and variability in the full population.

The risk assessor should consider the uncertainties and variabilities that accrue in all stages of the risk assessment—in emissions or environmental-concentration data, fate and exposure assessment, dose and mechanism of action, and the dose-response relationship. It is important to identify the largest sources of uncertainty and variability and to determine the extent to which there is value in focusing on other components. This approach should be based on a value-of-information (VOI) strategy even when resources for a fully quantitative VOI analysis are limited (see discussion in Chapter 3). For example, when uncertainty gives rise to risk estimates that are spread across one or more key decision points, such as a range that includes both acceptable and unacceptable levels of risk, there is value in addressing uncertainty in other components when this information provides more insight into whether one choice of action for reducing risk is better than another.

When the goal of a risk assessment is to discriminate among various options, the uncertainty analysis supporting the evaluation should be tailored to provide sufficient resolution to make the discriminations (to the extent that it can). It is important to distinguish when and how to engage an uncertainty analysis to characterize one-sided confidence (confidence that the risk does not exceed X, confidence that all or most individuals are protected from harm, and so on) or richer descriptions of the uncertainty (for example, two-sided confidence bounds or the full distribution). Depending on the options being considered, a fuller description may be needed to understand tradeoffs. When a "safe" level of risk is being established, without consideration of costs or countervailing risks, a single-sided (bounding) risk estimate or lower-bound acceptable dose may be sufficient.

Recommendations

This chapter has addressed the need to consider uncertainty and variability in an interpretable and consistent manner among all components of a risk assessment and to communicate them in the overall risk characterization. The committee focused on more detailed and transparent methods for addressing uncertainty and variability, on specific aspects of uncertainty and variability in key computational steps of risk assessment, and on approaches to help EPA decide what level of detail to use in characterizing uncertainty and variability to support risk-management decisions and public involvement in the process. The committee recognizes that EPA has the technical capability to do two-stage Monte Carlo and other very detailed and computationally intensive analyses of uncertainty and variability. But such analyses are not necessary in all decision contexts, given that transparency and timeliness are also desirable attributes of a risk assessment and that some decisions can be made with less complex analyses. The question is often not about better ways to do these analyses but about developing a better understanding of when to do them.
To address those issues, the committee provides the following recommendations:

• EPA should develop a process to address and communicate the uncertainty and variability that are part of any risk assessment. In particular, the process should encourage risk assessments to characterize and communicate uncertainty and variability in all key computational steps of risk assessment—emissions, fate-and-transport modeling, exposure assessment, dose assessment, dose-response assessment, and risk characterization.

• EPA should develop guidance to help analysts determine the level of detail needed in uncertainty and variability analyses to support decision-making. The principles of uncertainty and variability analysis above provide a starting point for this guidance, which should include approaches both for analysis and for communication.

• In the short term, EPA should adopt a “tiered” approach for selecting the level of detail used in uncertainty and variability assessment. A discussion of the level of detail used for uncertainty analysis and variability assessment should be an explicit part of problem formulation and of planning and scoping.

• In the short term, EPA should develop guidelines that define key terms of reference used in the presentation of uncertainty and variability, such as central tendency, average, expected, upper bound, and plausible upper bound. In addition, because risk-risk and benefit-cost comparisons pose unique analytic challenges, the guidelines could provide insight into and advice on uncertainty characterizations that support risk decision-making in those contexts.

• Improving the characterization of uncertainty and variability in risk assessment comes at a cost, and additional resources and training of risk assessors and risk managers will be required. In the short term, EPA should build the capacity to provide guidance on addressing and implementing the principles of uncertainty and variability analysis.
