2

Risk Assessment and Uncertainty

As discussed in Chapter 1, a number of factors play a role in the decisions made by the U.S. Environmental Protection Agency (EPA). This chapter discusses the uncertainty in the data and the analyses associated with one of those factors, human health risk estimates. There has been a great deal of progress over the past few decades in developing methods to assess and quantify uncertainty in estimates of exposure, adverse effects, and overall risks (EPA, 2004). In this chapter the committee provides a broad overview of the nature of the main uncertainties in the characterization of risks. Later chapters offer discussions of how EPA should incorporate those uncertainties into its decisions and communicate them. This chapter begins with background information on risk assessments and then summarizes various approaches to characterizing the uncertainties in risk estimates. Examples of EPA's risk assessments and the uncertainty analyses in them are then discussed.

RISK ASSESSMENT

The mandate of EPA is broad. It includes regulating the releases and human exposures arising at any stage of manufacture, distribution, use, and disposal of any substances that pose environmental risks. In the context of the EPA's mandate, the various risks to health arise because of the presence of chemicals and other agents, such as radiation-emitting substances and pathogenic microorganisms, in different media, including air, water, and soils. The chemicals of interest include industrial products of diverse types and by-products of chemical manufacturing, chemical use, and energy production.



The National Academies | 500 Fifth St. N.W. | Washington, D.C. 20001
Copyright © National Academy of Sciences. All rights reserved.





BOX 2-1
Definitions

Human health risk assessment is a systematic framework within which scientific information relating to the nature and magnitude of threats to human health is organized and evaluated. The typical goal of a human health risk assessment is to develop a statement regarding the likelihood, or probability, that exposures arising from a given source, or in some cases from multiple sources, will harm human health (NRC, 1983). The risks to a given population are a function of the hazards of a given chemical and the exposure that the population experiences.

Risk communication "is an interactive process of exchange of information and opinion among individuals, groups, and institutions. It involves multiple messages about the nature of risk and other messages, not strictly about risk, that express concerns, opinions, or reactions to risk messages or to legal and institutional arrangements for risk management" (NRC, 1989, p. 21).

Risk management refers to the process whereby the results of a risk assessment are considered, together with the results of other technical analyses and nonscientific factors, to reach a decision about the need for and extent of risk reduction to be sought in particular circumstances and of the means for achieving and maintaining that reduction (NRC, 1983). At EPA, risk management is typically linked to a regulatory decision, whereas risk assessment involves the evaluation of the scientific evidence about risks that informs that regulatory decision. As discussed in Science and Decisions: Advancing Risk Assessment (NRC, 2009), a conceptual distinction between risk assessment and risk management is maintained, as it is "imperative that risk assessments used to evaluate risk-management options not be inappropriately influenced by the preferences of [decision makers]" (p. 12).
The EPA uses a health risk assessment and risk-management model to identify the nature and estimate the magnitude of risks from chemicals and other agents and to determine the best way to manage or mitigate those risks (EPA, 2004). As discussed in Chapter 1, the process of risk assessment and its use in regulatory decisions was first described in the seminal 1983 National Academy of Sciences report Risk Assessment in the Federal Government: Managing the Process (NRC, 1983) (hereafter the Red Book)1 and in a series of expert reports issued since that time (NRC, 1994, 1996, 2007, 2009). All of those reports emphasize the need for a conceptual distinction between risk assessment and risk management. Box 2-1 offers descriptions of some of the important terms in this area.

1 The National Research Council study that led to the Red Book was congressionally mandated and was requested "to strengthen the reliability and objectivity of scientific assessment that forms the basis for federal regulatory policies applicable to carcinogens and other public health hazards" (NRC, 1983, p. iii).

The scientific information about the hazards used in risk assessments is derived largely from observational epidemiology and experimental animal studies of specific substances or combinations of substances that are designed to identify their hazardous properties (that is, the types of harm they can induce in humans) and the conditions of exposure under which those harms are observed (that is, the dose and duration). (Box 2-2 provides a description of how those data are used for non-cancer endpoints, and Box 2-3 gives a description for cancer endpoints.)

BOX 2-2
Development of Estimates of Human Health Risks for Non-Cancer Endpoints^a

When assessing the risks to human health from a chemical for a non-cancer endpoint, EPA typically develops a reference dose (RfD).^b EPA defines an RfD as an "estimate (with uncertainty spanning perhaps an order of magnitude) of a daily oral exposure to the human population (including sensitive subgroups) that is likely to be without an appreciable risk of deleterious effects during a lifetime" (EPA, 2012b). The RfD is based on the assumption that a certain dose must be exceeded before toxicity is expressed. The RfD is derived from a no-observed-adverse-effect level (NOAEL), a lowest-observed-adverse-effect level (LOAEL), or a benchmark dose from animal or epidemiology studies. The NOAEL is the "highest exposure level at which there are no biologically significant increases in the frequency or severity of adverse effect between the exposed population and its appropriate control" (EPA, 2012b). The LOAEL is the "lowest exposure level at which there are biologically significant increases in frequency or severity of adverse effects between the exposed population and its appropriate control group" (EPA, 2012b). The benchmark dose is a "dose or concentration that produces a predetermined change in response rate of an adverse effect (called the benchmark response or BMR) compared to background" (EPA, 2012b). In general, NOAELs and LOAELs are derived from animal data; benchmark doses are derived from epidemiologic studies.
In developing the RfD, the NOAEL, LOAEL, or benchmark dose is generally divided by uncertainty factors (UFs), usually multiples of 10, to account for limitations and incompleteness in the data. Those limitations include incomplete knowledge of interspecies variability and the expectation that variability in response in the general population is likely to be much greater than that present in the populations (human or animal) from which the NOAEL, LOAEL, or benchmark dose is derived. Whether standard defaults or data-based uncertainty factors are used, the accuracies of the UFs are largely unknown, so quantitative characterization of the uncertainties associated with any given RfD is generally not possible.

^a The processes for non-cancer and cancer risk assessments are not static. A number of reports, including Science and Decisions: Advancing Risk Assessment (NRC, 2009), have recommended harmonizing the processes used for non-cancer and cancer risk assessments.
^b A reference concentration (RfC) is developed for inhalation toxicants.
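The arithmetic behind an RfD, though not the judgment behind the factors, is simple. The sketch below is a toy illustration of the division described in Box 2-2; the point of departure and the uncertainty factors are hypothetical, not from any EPA assessment.

```python
# Toy sketch of the RfD arithmetic described in Box 2-2.
# The point of departure and uncertainty factors below are hypothetical;
# real assessments select them from the data and EPA guidelines.

def reference_dose(point_of_departure, uncertainty_factors):
    """Divide a NOAEL, LOAEL, or benchmark dose (mg/kg-day) by the product of UFs."""
    product = 1
    for uf in uncertainty_factors:
        product *= uf
    return point_of_departure / product

# Hypothetical NOAEL of 50 mg/kg-day from an animal study, with the common
# 10x factors for animal-to-human extrapolation and for human variability:
rfd = reference_dose(50.0, [10, 10])
print(rfd)  # 0.5 mg/kg-day
```

Adding a third factor of 10 (for example, when only a LOAEL is available) would lower this hypothetical RfD to 0.05 mg/kg-day, which shows how quickly stacked defaults can drive the result.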

BOX 2-3
Development of Estimates of Human Health Risks for Cancer Endpoints^a

In March 2005, EPA updated its guidelines for estimating the human health risks associated with a carcinogen (EPA, 2005a).^b Those guidelines are briefly summarized here. The first step in a cancer risk assessment is to characterize the hazard using a "weight of evidence narrative."^c The narrative describes the available evidence, including its strengths and limitations, and "provides a conclusion with regard to human carcinogenic potential" (EPA, 2005a, p. 1-12). The data available for each tumor type are then used to derive a point of departure (POD), that is, "an estimated dose (usually expressed in human-equivalent terms) near the lower end of the observed range, without significant extrapolation to lower doses" (EPA, 2005a, p. 1-13). Data from epidemiology studies are used if available and of sufficient quality. In the absence of such epidemiology data, data from animal studies are used and, when possible and appropriate, toxicokinetic data are used to inform cross-species dose scaling to estimate the human-equivalent dose. The POD is generally "the lower 95% confidence limit on the lowest dose level that can be supported for modeling by the data" (EPA, 2005a).

Once the POD is established, extrapolation is used to model the dose–response relationship at exposures lower than the POD. Depending on how much is known about the mode of action^d of the agent, one of two methods is used for the extrapolation: linear or nonlinear extrapolation. A linear extrapolation is used in the "absence of sufficient information on modes of action" or when "the mode of action information indicates that the dose-response curve at low dose is or is expected to be linear" (EPA, 2005a, p. 1-15). For a linear extrapolation, "a line should be drawn from the POD to the origin, corrected for background" (EPA, 2005a, p. 3-23).
The slope of that line, called the slope factor, is considered "an upper-bound estimate of risk per increment of dose" (EPA, 2005a, p. 3-23) and is used to estimate risks at different exposure levels.

Information from these studies is used to develop the hazard identification and dose–response (where "response" is the harm or adverse effect) components of a risk assessment. The data used to develop these components typically arise from diverse sources and types of study designs and frequently lack strong consistency in methods, so reaching valid conclusions about them requires both careful scientific evaluations and experienced judgments. A hallmark of the modern risk-assessment framework is the expectation not only that the scientific evidence is described, but also that the evaluation of the evidence and any judgments by the risk assessors about the quality and relevance of the evidence are thoroughly and clearly described (OMB and OSTP, 2007).
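Computationally, the linear extrapolation described in Box 2-3 is a one-line model: a straight line from the origin to the POD. The sketch below uses hypothetical numbers, not values from any EPA assessment.

```python
# Toy sketch of linear low-dose extrapolation (Box 2-3): a line from the
# point of departure (POD) to the origin, after background correction.
# All values are hypothetical.

def slope_factor(pod_dose, extra_risk_at_pod):
    """Upper-bound risk per unit dose: the slope of the POD-to-origin line."""
    return extra_risk_at_pod / pod_dose

def extra_risk(dose, sf):
    """Estimated extra lifetime risk at a dose below the POD."""
    return sf * dose

# Hypothetical POD of 1.0 mg/kg-day associated with 10% extra risk:
sf = slope_factor(pod_dose=1.0, extra_risk_at_pod=0.10)
print(extra_risk(0.001, sf))  # about 1e-4, i.e., roughly 1-in-10,000 extra risk
```

Because the slope factor is derived from an upper confidence bound on the POD, the risks it projects at low doses are upper-bound estimates, not expected values.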

A nonlinear approach is used "when there are sufficient data to ascertain the mode of action and conclude that it is not linear at low doses and the agent does not demonstrate mutagenic or other activity consistent with linearity at low doses." "[F]or nonlinear extrapolation the POD is used in the calculation of a reference dose [RfD] or reference concentration [RfC]" (EPA, 2005a, p. 3-16), similar to how an RfD or RfC is estimated for non-cancer endpoints, as described in Box 2-2. Depending on the amount of information available about potentially susceptible populations and susceptibility during different life stages, adjustments to the estimates or separate assessments are recommended in the guidelines. Concurrent with the release of the general cancer risk assessment guidelines, EPA released supplemental guidelines that provide "specific guidance on procedures for adjusting cancer potency estimates only for carcinogens acting through a mutagenic mode of action" (EPA, 2005a, p. 1-19).

^a The processes for non-cancer and cancer risk assessments are not static. A number of reports, including Science and Decisions: Advancing Risk Assessment (NRC, 2009), have recommended harmonizing the processes used for non-cancer and cancer risk assessments.
^b The "cancer guidelines are not intended to provide the primary source of, or guidance for, the Agency's evaluation of the carcinogenic risks of radiation" (EPA, 2005a, p. 1-6).
^c EPA recommends using one of the five standard hazard descriptors: Carcinogenic to Humans, Likely to Be Carcinogenic to Humans, Suggestive Evidence of Carcinogenic Potential, Inadequate Information to Assess Carcinogenic Potential, and Not Likely to Be Carcinogenic to Humans (EPA, 2005a).
^d Mode of action "is defined as a sequence of key events and processes, starting with interaction of an agent with a cell, proceeding through operational and anatomical changes, and resulting in cancer formation.
A 'key event' is an empirically observable precursor step that is itself a necessary element of the mode of action or is a biologically based marker for such an element. Mode of action is contrasted with 'mechanism of action,' which implies a more detailed understanding and description of events, often at the molecular level, than is meant by mode of action" (EPA, 2005a, p. 1-10).

Assessing exposure requires an evaluation of the nature of the population that is incurring exposures to the substances of interest and the conditions of exposure that it is experiencing (such as the dose and duration of exposure) (NRC, 1991). In effect, risk to the exposed population is understood by examining the exposure the population experiences (its "dose") relative to the hazard and dose–response information described above. Risk characterization consists of a statement regarding the "response" (risk of harm) expected in the population under its exposure conditions, together with a description of uncertainties (NRC, 1983). Risk assessments are frequently used by EPA to characterize health risks under existing exposure conditions and also to examine how risks will change if actions are taken to alter exposures (EPA, 2012b). A clear description of the confidence that can be placed in the risk-assessment result—that is, a statement regarding the scientific uncertainties associated with the assessment—should be a feature of all risk assessments.

UNCERTAINTY AND RISK ASSESSMENT

Uncertainties are inherent in all scientific undertakings and cannot be avoided. The extent to which uncertainties in data and analyses can be measured and expressed in highly quantitative terms depends upon the types of investigations used to develop scientific knowledge. Highly controlled experiments, usually conducted in a laboratory or clinical setting, if well designed and conducted, can provide the clearest information regarding uncertainties. Even in many experimental studies, however, it is not always possible to quantify uncertainties. Controlled clinical trials, for example, still contain uncertainties and variability that cannot necessarily be predicted or accurately quantified. Using available knowledge with its inherent uncertainties to make predictions about as-yet unobserved—and perhaps inherently unobservable—states is even more uncertain, but it is critical to many important social decisions, including EPA's decisions related to human health protection (EPA, 2012b). Risk assessments can address such questions as whether a risk to health will be reduced if certain actions are taken and, if so, by what magnitude, and whether new risks might be introduced when such actions are taken. However, the scientific uncertainties associated with such predictive efforts include not only the uncertainty associated with the available knowledge but also uncertainty related to the predictive nature of estimates (for example, predicting how much of a decrease in air pollution different control technologies will produce or predicting how many lung cancer cases will be avoided by a given decrease in air pollution).
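Predictive uncertainty of the kind just described is often explored with simple Monte Carlo simulation: propagate the uncertainty in an input through the calculation and report an interval rather than a single number. The sketch below is a toy; the population size, the pollution reduction, and the spread assumed for the concentration–response coefficient are all hypothetical.

```python
# Toy Monte Carlo sketch of predictive uncertainty: avoided cases =
# beta * reduction * population, where the concentration-response
# coefficient beta is uncertain. All numbers are hypothetical.
import random

random.seed(0)

population = 1_000_000   # hypothetical exposed population
reduction = 5.0          # hypothetical pollution decrease, ug/m^3

avoided = []
for _ in range(10_000):
    # beta: excess annual cases per person per ug/m^3, sampled from a
    # lognormal spread to reflect uncertainty in the coefficient itself
    beta = random.lognormvariate(-13.0, 0.5)
    avoided.append(beta * reduction * population)

avoided.sort()
print("median avoided cases:", round(avoided[5_000], 1))
print("95% interval:", round(avoided[250], 1), "to", round(avoided[9_750], 1))
```

The width of the resulting interval, not just its midpoint, is the information a decision maker needs; a point estimate alone hides exactly the uncertainty this section describes.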
The Red Book highlighted many of the unknowns in a risk assessment, including a lack of understanding of the mechanisms that underlie different adverse effects (NRC, 1983). The presence of uncertainty in data and analyses, however, is not unique to the chemical risk-assessment world and should not preclude a regulatory decision. For instance, drugs are often used even without a thorough understanding of their underlying mechanism of action. Understanding Risk: Informing Decisions in a Democratic Society (NRC, 1996) emphasizes the importance to decision making of recognizing uncertainties in risk assessments, pointing out that decision makers should attempt to consider "both the magnitude of uncertainty and its sources and character" (p. 5). The report further emphasizes, however, that "unrecognized sources of uncertainty—surprise and fundamental ignorance about the basic processes that drive risk—are often important sources of uncertainty" (NRC, 1996, p. 5). Because of that, the report argues that the limitations in uncertainty analyses should be recognized and considered and that the focus of any such analysis should be on the uncertainties that most affect the decision, and it criticizes characterizations of risks that do not focus on the questions of greatest impact to the decision outcome.

Uncertainties in data and analyses can enter the risk-assessment process at every step; the sources of the largest uncertainties include the use of observational studies, extrapolation from studies in animals to humans, extrapolation from high- to low-dose exposures, and interindividual variability. Box 2-4, which briefly describes the evidence on the degreasing solvent trichloroethylene (TCE), provides an example of how uncertainties arise in risk assessments and of the challenges that those uncertainties present to decision makers.

Studies in humans that evaluate whether exposure to a substance causes specific adverse effects can provide the most relevant information on hazards and dose response. Clinical trials have a greater chance of yielding unambiguous results regarding causality than do observational studies (Gray-Donald and Kramer, 1988). However, it is not ethical to intentionally expose people to chemicals at exposure concentrations that are likely to cause adverse effects, even for a short duration of exposure. Moreover, clinical trials are costly and typically are designed to capture the short-term effects of an intervention, whereas many adverse effects of chemicals can take decades to develop. Except under highly limited conditions, clinical trials should not be used to study the adverse health effects of substances regulated by EPA (NRC, 2004). Most studies evaluating risks in humans, therefore, are observational in nature; that is, they investigate some aspect of the physical world "as it is." Observational studies can have significant limitations.
Because many such studies do not provide evidence that meets the criteria typically used to establish causation rather than association—that is, the Hill criteria, such as demonstrating a dose response and a temporal relationship between exposure and effect (Hill, 1965)—the results from individual observational studies on their own can, at best, be used to establish associations. For example, in many situations the only information available is whether or not participants were exposed to a given chemical, and nothing is known about the magnitude of individual exposures or whether there was differential exposure among individuals, which makes it very difficult to determine dose–response relationships. Observational studies often capture exposures and health outcomes retrospectively, so that the temporal relationship between the exposure and the outcome cannot be determined. Furthermore, regardless of the study type, inconsistent results in a group or body of studies examining a given chemical are common and contribute to uncertainties regarding causality. The types of uncertainties associated with the interpretation of results from observational studies may be described quantitatively—for example, through an analysis that estimates the effect together with a quantitative assessment of the likelihood of that effect—but the uncertainties are usually expressed in qualitative language, such as describing the range of relative risks across studies and the quality of the individual studies.

BOX 2-4
Trichloroethylene Risk Assessment: An Example of the Uncertainties Present in a Cancer Risk Assessment and How They Could Affect Regulatory Decisions

Trichloroethylene (TCE) is a degreasing solvent used in many industries and a contaminant in all environmental media (air, water, soil). The issues in the TCE risk assessment illustrate several uncertainties and related choices that risk assessors and decision makers face when evaluating the risk potential of environmental carcinogens, as well as the delays that such uncertainties can lead to. They also highlight the resulting reliance on assumptions and models in the absence of definitive data, the need for choices among the options that exist due to unknowns and uncertainties, and the role of these uncertainties and choices in shaping regulatory decisions. The evidence related to TCE, described briefly below, has been summarized previously (EPA, 2009; NRC, 2005). Sources of uncertainty in the assessment include the following:

• Although human studies provide evidence of associations between occupational exposures to TCE and liver and kidney cancers and non-Hodgkin's lymphoma (see EPA, 2009, Chapter 4), there is uncertainty about whether those associations are causal.

• If the associations are assumed to be causal, there is uncertainty in the cancer potency (that is, the risk of cancer per unit of exposure to TCE). Estimates of cancer potency based on data from different human studies differ by up to 100-fold (EPA, 2009; NRC, 2005).
Data from experimental studies in animals and from a variety of in vitro systems are commonly used, in part, to overcome the limitations in observational epidemiology studies. Experimental studies allow researchers to acquire information about hazards and dose response and, if well designed and well performed, can yield information about causality. Results from such studies, however, can have significant uncertainties regarding their generalizability to humans. Differences in the metabolism and the mode of action of a chemical in animals compared with humans underlie many of the differences between animals and humans, but uncertainty often exists about the magnitude of those differences. For example, it is not currently possible to quantify the extent to which disease processes observed in animal experiments apply to humans, and differences in longevity have to be taken into account when considering the duration of exposure for animal studies. It is important to note, however, that despite those limitations enough is known about the similarities and differences between humans and experimental animals to make animal studies relevant to and critical for assessing human health risks (EPA, 2011a).

There is also uncertainty associated with exposure information.

• Data from animal studies indicate that TCE can induce liver and lung cancer (mice) and kidney and testicular cancers (rats). Estimates of cancer potencies derived from the animal data differ by over 500-fold (EPA, 2009).

• Potency differences based on animal data are explained in part by the use of different models for low-dose extrapolation, but the current understanding of the biological mechanisms of cancer induction is too limited to allow a selection of the optimal model (EPA, 2009; NRC, 2005).

• The biological reasons for the differences in response between animals and humans are only partially understood, resulting in uncertainty about which studies (animal or human) and which potency estimates (at the lower or higher end of the range) are more reliable, and about the nature and extent of possible human risk in populations exposed through the environment (EPA, 2009).

The choices risk assessors make when interpreting the data in light of the uncertainty influence the size of the risk estimate and, in turn, the decision whether or not to regulate TCE and, if so, the nature of the regulatory standards that are based on the risk assessment. For example, if assessors use potency values at the lower end of the range, the assessment may indicate a low likelihood of cancer risk in humans and obviate the need for regulatory action. By contrast, if assessors use potency values at the higher end of the range, the assessment may indicate a high likelihood of cancer risk in humans and be the basis for a more stringent regulatory standard.
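The model dependence of low-dose extrapolation flagged in Box 2-4 is easy to illustrate with hypothetical numbers: two dose–response models that agree exactly at an observed high dose can diverge by orders of magnitude at environmentally relevant doses. The models and values below are illustrative only.

```python
# Hypothetical illustration of low-dose extrapolation model dependence.
# Both models are anchored to the same observed point: 10% extra risk
# at a dose of 1.0 (arbitrary units).

POD = 1.0
RISK_AT_POD = 0.10

def linear(dose):
    """Risk proportional to dose (the default absent mode-of-action data)."""
    return RISK_AT_POD * (dose / POD)

def quadratic(dose):
    """A sublinear shape, one of many plausible nonlinear alternatives."""
    return RISK_AT_POD * (dose / POD) ** 2

low = 0.001  # a dose 1,000-fold below the observed range
print(linear(low))     # about 1e-4
print(quadratic(low))  # about 1e-7, some 1,000-fold below the linear estimate
```

Neither answer can be ruled out on the high-dose data alone, which is exactly why model choice, and not just data quality, drives the size of the risk estimate.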
One such uncertainty comes from extrapolating from exposures in studies to the exposures experienced by the public. There are instances in which the exposure incurred by the population that is the subject of a risk assessment (that is, the target population) is close to, or even in the same range as, that for which hazard and dose–response data have been collected. For example, studies of exposures to the primary air pollutants ozone, lead, mononitrogen oxides, sulfur oxide gases, and particulate matter often cover the same ranges of exposures as occur in the general population (Dockery et al., 1993; Pope et al., 1995). In many instances, however, the exposure incurred by the target population is only a small fraction—sometimes a very tiny fraction—of the exposures for which it has been possible to collect hazard and dose–response information. Studies of occupational cohorts, for example, typically involve exposures well in excess of general population exposures, and animal studies similarly involve high-dose effects. For the risks to the target population to be described, a method or model must be used to extrapolate from the high-dose scientific findings to infer the risks at much lower doses. That extrapolation can create large uncertainties in risk assessment. The biological bases for selecting among different models for extrapolation are not well established, and different models can yield different estimates of low-dose risk.

In other cases very little might be known about the actual exposures in the target population, adding additional uncertainty. Individuals within a population also vary with respect to both their exposures and their responses to hazardous substances. Reliable, quantitative information that allows an understanding of the magnitudes of that variability can be difficult, if not impossible, to acquire (Samoli et al., 2005). Risk assessments need to account for possible differences in response between the populations that were studied to understand hazards and dose response and the target population, which typically is more diverse than the populations studied (Pope, 2000). Studies of exposure in limited human populations cannot simply be applied to other, more diverse populations without considering the uncertainties introduced by the differences between those populations. Those uncertainties are part of almost all risk assessments conducted by EPA (EPA, 2004).
Additional uncertainties related to the effects of chemicals at different life stages and with different comorbidities, the effects of exposures to complex mixtures, and the effects of chemicals that have received very little toxicological study are also introduced in many assessments (EPA, 2004). In many cases the analyst or scientist who is conducting the risk assessment is able only to describe those uncertainties in largely qualitative terms, and formulating scientifically rigorous statements about the effects of these uncertainties on a risk result is beset with difficulties (EPA, 2004).

THE HISTORY OF UNCERTAINTY ANALYSIS

The 1983 NAS Report, Uncertainties and the Use of Defaults

Given the EPA's mandate to protect human health, the agency has had to find a way to make decisions that take into account the scientific uncertainty discussed above. The Red Book emphasized that the uncertainties inherent in risk assessment were so pervasive that virtually no risk assessment could be completed without the use of assumptions or some types of models for extrapolations (NRC, 1983). Moreover, it recognized that there was little or no scientific basis for discriminating among the range of assumptions or models that might be used in a given case. Given that situation, risk assessments were not likely to achieve any degree of consistency and, indeed, might be easily "tailored" to meet any risk-management objective. The report argued that some degree of general scientific understanding, though limited, exists in each of the areas of uncertainty that attend risk assessment. It further argued that, in many of the areas of uncertainty, a range of plausible scientific inferences might be made, although none could be claimed to be generally correct (that is, correct for all or most specific cases). If an agency conducting risk assessments could select, for each step where one was needed, the "best supported" option or inference and could apply that inference to all of its risk assessments, then it could be possible to be consistent in risk assessment and to minimize case-by-case manipulations. Determining the "best" option cannot be based upon science alone; it also requires a policy choice, and agencies needed to specify clearly the scientific and policy bases for their choices among available options. The report further stated that the selected set of inference options for risk assessment should not only be justified, but also be set down in written guidelines for the conduct of risk assessments, so that they could be visible to all (NRC, 1983). As recommended, EPA has developed guidelines for the conduct of risk assessments for many types of adverse effects, and those guidelines include recommendations about what uncertainty factors to use when there are specific uncertainties (EPA, 1986, 1992, 1997a,b,d, 1998a, 2004, 2005a). The selected sets of inference options have come to be called uncertainty factors, or defaults.
In practice, when review of the scientific information available on a specific substance or exposure reveals significant gaps in knowledge or information, agency human health risk assessors adopt the relevant default specified in the guidelines. For example, to account for uncertainties in how to extrapolate from animal data to risks in humans, the default uncertainty factor is 10. EPA therefore divides the dose at which no effect is seen in animals by a factor of 10 to estimate a dose at which an effect would not be seen in humans. If there are data on the extent of toxicokinetic differences between animals and humans, then EPA might use a data-derived uncertainty factor rather than the default.

The Problems with Default-Driven Risk Assessments

In addition to helping make risk assessments consistent across agencies, the use of prespecified, generic defaults has a number of other advantages. First, although the uncertainties and limitations in the estimate should be characterized for the decision maker, the use of a default does allow the
62 ENVIRONMENTAL DECISIONS IN THE FACE OF UNCERTAINTY

Arsenic in Drinking Water

The regulation of arsenic in drinking water illustrates the quantitative approach EPA has used in estimating health risks. In 1976 EPA proposed an interim maximum contaminant level (MCL) for arsenic of 50 micrograms per liter (µg/L). In 1996, as part of a review of that MCL, EPA requested that "the National Research Council (NRC) independently review the arsenic toxicity data base and evaluate the scientific validity of EPA's 1988 risk assessment for arsenic in drinking water" (NRC, 1999, p. 1). The resulting 1999 report, Arsenic in Drinking Water (NRC, 1999), concluded that "the current EPA MCL for arsenic in drinking water of 50 µg/L does not achieve EPA's goal for public-health protection and, therefore, requires downward revision as promptly as possible" (p. 9). It further recommended sensitivity analyses to examine the uncertainty arising from the choice of dose–response model and to evaluate the uncertainty from measurement error, confounding, and nutritional factors. On January 22, 2001, EPA issued a final rule for arsenic in drinking water, with a pending standard for arsenic of 10 µg/L (EPA, 2001b). That standard was developed by relying on the scientific information in the 1999 NRC report and was set based on risks of bladder and lung cancer. The agency estimated risks by using a linear extrapolation of data from an epidemiological study of exposures in southwestern Taiwan. To explore the uncertainty created by using different models of the dose–response relationship, EPA compared estimates calculated by Morales et al. (2000) using 10 different models and chose the model that did not result in a supralinear extrapolation because there was no biological basis for such an extrapolation. There was also uncertainty about exposures caused by variability in how much water people drink, including differences between the U.S. population and the Taiwanese population in the study and differences within the U.S. population. A Monte Carlo analysis was used to estimate distributions of water intake, accounting for age, sex, and weight and adjusting water intake to account for the high consumption of water from cooking in Taiwan, and the mortality data in Taiwan were converted to expected incidence data in the United States. EPA also stated that, because of the increased intake of water on a per-body-weight basis in infants fed formula, it intended to issue a health advisory on the use of low-arsenic water in the preparation of infant formula. On April 23, 2001, under a new administration, the agency announced that it would delay the effective date of the arsenic in drinking water rule and that it had asked the NAS to review the data, including any data published since Arsenic in Drinking Water (NRC, 1999), on the health effects of arsenic exposure (EPA, 2001c). The agency also asked the National Drinking Water Advisory Council to review the cost estimates for
the rule and its Science Advisory Council to review the arsenic rule benefits analysis. (See Chapter 3 for further discussion of cost and benefit analyses.) The NAS report Arsenic in Drinking Water: Update 2001 (NRC, 2001) confirmed that EPA's human health risk assessment should focus on bladder and lung cancer and that it should be based on the epidemiologic data from southwestern Taiwan. The report recommended using an "additive Poisson model with a linear term used for dose" (p. 215) to extrapolate from the doses in the epidemiology study to the lower exposures seen in the United States. Based on a determination that the available information on mode of action did not indicate an appropriate method for extrapolating, it recommended that a default linear extrapolation be used. It noted, however, that the choice to use a linear extrapolation is, in part, a policy decision (NRC, 2001). The report also discussed the effects of other uncertainties and evaluated the effect of using different studies (for example, one with data on populations in Chile); different statistical models, including a model-weighting approach; different background incidence rates between populations; and different water intakes and measurement error. The report presented maximum-likelihood estimates (that is, central tendencies), not upper-bound or worst-case estimates. The report (NRC, 2001) concluded "that recent studies and analyses enhance the confidence in risk estimates" (p. 14) and that the results of the updated assessment "are consistent with the results presented in the NRC's 1999 Arsenic in Drinking Water report and suggest that the risks for bladder and lung cancer incidence are greater than the risk estimates on which EPA based its January 2001 pending rule" (p. 14).
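The kind of analysis described above, a linear low-dose extrapolation combined with Monte Carlo sampling of intake variability, can be sketched roughly as follows. All numerical parameters (the unit-risk slope and the intake and body-weight distributions) are invented for illustration and are not the values EPA or NRC used.

```python
# Toy Monte Carlo sketch of exposure variability combined with a linear
# dose-response extrapolation. All numerical parameters are hypothetical.
import random

random.seed(12345)

SLOPE_PER_UG_KG_DAY = 1e-3  # hypothetical linear unit-risk slope
CONC_UG_L = 10.0            # arsenic concentration at the 10 ug/L standard

def simulate_lifetime_risk(n_people=10_000):
    """Sample intake and body weight; return (median, 95th-percentile) risk."""
    risks = []
    for _ in range(n_people):
        intake_l_day = max(0.1, random.gauss(1.2, 0.5))    # water intake, L/day
        body_weight_kg = max(30.0, random.gauss(70.0, 15.0))
        dose = CONC_UG_L * intake_l_day / body_weight_kg   # ug/kg-day
        risks.append(SLOPE_PER_UG_KG_DAY * dose)           # linear model
    risks.sort()
    return risks[n_people // 2], risks[int(0.95 * n_people)]

median_risk, p95_risk = simulate_lifetime_risk()
```

Reporting both a central and an upper-percentile value mirrors the distinction the report drew between central-tendency and upper-bound estimates.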
The 2001 report also discussed the uncertainty that could come from variability in arsenic metabolism, different exposures, nutritional parameters, and interactions between arsenic and smoking that could affect the dose–response curve. On October 31, 2001, EPA announced that it would set the arsenic in drinking water standard at 10 µg/L and not delay the implementation schedule first established in the January 22, 2001, regulation (EPA, 2001a). The example of arsenic in drinking water illustrates the broad spectrum of uncertainty and sensitivity analyses that can be conducted when estimating human health risks. Those evaluations provide a broader view of how uncertainty about background rates of cancer, water intake, model choice, and data from studies can affect the risk estimates. They also, however, indicate that those uncertainties do not always affect the estimates to an extent that would change the overall decision. For example, despite the uncertainties listed, the different health risk estimates presented, and the additional work by a second NRC committee, the new data and analyses supported the original 10 µg/L standard promulgated in January 2001. The example also illustrates the role that political factors can play in EPA's decisions. Despite the characterization and quantification of
uncertainty in the 1999 report (NRC, 1999) on which EPA's January 2001 rule was based, a new administration called into question the scientific basis of the rule and required a reevaluation of the science.

Clean Air Interstate Rule

In 2005, EPA published its regulatory impact analysis (RIA) for the Clean Air Interstate Rule (CAIR), a rule developed to implement requirements of the Clean Air Act (CAA) concerning the transport of air pollution across state boundaries (EPA, 2005b). A December 2008 court ruling directed EPA to issue a new rule but did not vacate CAIR.2 In response to that ruling, in July 2011, EPA issued the Cross-State Air Pollution Rule (CSAPR) to implement the cross-state pollution transport requirements of the CAA. In August 2012, the U.S. Court of Appeals for the D.C. Circuit ruled that the CSAPR violated federal law and must be vacated because (1) "EPA has used the good neighbor provision to impose massive emissions reduction requirements on upwind States without regard to the limits imposed by the statutory text" (p. 7), and (2) "when EPA quantified States' good neighbor obligations, it did not allow the States the initial opportunity to implement the required reductions with respect to sources within their borders. Instead, EPA quantified States' good neighbor obligations and simultaneously set forth EPA-designed Federal Implementation Plans, or FIPs, to implement those obligations at the State level. By doing so, EPA departed from its consistent prior approach to implementing the good neighbor provision and violated the Act" (p. 7).3 EPA is reviewing that court decision, and CAIR remains in place (EPA, 2012a). The committee discusses the uncertainty analyses contained in the RIA below. The 2005 RIA presented the benefits and the costs of the rule and the comparative costs of implementing CAIR in 2010 and 2015 (EPA, 2005b). As discussed by Krupnick et al.
(2006), EPA conducted a number of uncertainty and sensitivity analyses in support of that rulemaking. EPA used two different approaches to characterizing uncertainties in health benefits: one based on "the classical statistical error expressed in the underlying health effects and economic valuation studies used in the benefits modeling framework" (p. 1-6) and one using the results of a "pilot expert elicitation project designed to characterize key aspects of uncertainty in the ambient PM2.5/mortality relationship,

2 State of North Carolina v. Environmental Protection Agency, 05-1244, U.S. App. D.C. (2008) (http://www.EPA.gov/airmarkets/progsregs/cair/docs/CAIRRemandOrder.pdf [accessed June 8, 2012]).
3 EME Homer City Generation L.P. v. Environmental Protection Agency, et al., 11-1302, U.S. App. D.C. (2012) (http://www.cadc.uscourts.gov/internet/opinions.nsf/19346B280C78405C85257A61004DC0E5/$file/11-1302-1390314.pdf [accessed June 8, 2012]).
and augments the uncertainties in the mortality estimate with the statistical error reported for other endpoints in the benefit analysis" (EPA, 2005b, p. 1-6). EPA also used two different social discount rates (3 percent and 7 percent) to estimate the social benefits and costs of the rule. The agency pointed out a number of uncertainties that were not captured in the analyses, including model specification, emissions, air quality, the likelihood that particulate matter causes premature mortality, and other health effects. In reviewing the analysis and presentation of uncertainty in the RIA, Krupnick et al. (2006) noted that three pages of the executive summary of the RIA are devoted to discussing uncertainties but criticized the report because "the summary tables do not include ranges for estimates of benefits or indicate that the reported numbers represent a mean of a distribution, nor does the section reporting out health benefits include any mention of uncertainty" (p. 58). They also point out that, as in many RIAs, EPA qualitatively discusses "uncertainties in each section but leav[es] any quantitative information in the appendices" (Krupnick et al., 2006, p. 58). The uncertainty analyses, however, focus to a large extent on the uncertainties in the health benefits and not on the uncertainties in costs and technological factors. As EPA (2005b) states,

the cost estimates assume that all States in the CAIR region fully participate in the cap and trade programs that reduce SO2 and NOx emissions from EGUs. The cost projections also do not take into account the potential for advancements in the capabilities of pollution control technologies for SO2 and NOx removal and other compliance strategies, such as fuel switching, or the reductions in their costs over time.
EPA projections also do not take into account demand response (i.e., consumer reaction to electricity prices) because the consumer response is likely to be relatively small, but the effect on lowering private compliance costs may be substantial. Costs may be understated since an optimization model was employed and the regulated community may not react in the same manner to comply with the rules. The Agency also did not factor in the costs and/or savings for the government to operate the CAIR program as opposed to other air pollution compliance programs and transactional costs and savings from CAIR's effects on the labor supply. (p. 1-5)

Methylmercury

Mercury (Hg) is converted to methylmercury by aquatic biota, and it bioaccumulates in aquatic food webs. Methylmercury can cause neurotoxic effects in humans, and consumption of large, predatory fish is the major source of human exposure to methylmercury. Under the CAA Amendments of 1990,4 EPA had to determine whether it is "appropriate

4 CAA Amendments of 1990, Pub. L. No. 101-549 Sec. 112(n)(1)(A) (1990).
and necessary" to regulate the release of "air toxics" from electric-utility steam-generating units5 (hereafter, power plants) before regulating the release of Hg from those plants. In 1997 EPA published a Mercury Study Report to Congress (EPA, 1997c), and in 1998 it published a Study of Hazardous Air Pollutant Emissions from Electric Utility Steam Generating Units (EPA, 1998b). The former examined "mercury emissions by source, the health and environmental implications of those emissions, and the availability and cost of control technologies" (EPA, 1997c, p. O-1). The latter includes "(1) a description of the industry; (2) an analysis of emissions data; (3) an assessment of hazards and risks due to inhalation exposures to 67 hazardous air pollutants (HAPs); (4) assessments of risks due to multipathway (inhalation plus non-inhalation) exposures to four HAPs (radionuclides, mercury, arsenic, and dioxins); and (5) a discussion of alternative control strategies" (EPA, 1998b, p. ES-2). However, because of gaps in the scientific data regarding Hg toxicity, Congress directed EPA6 to have the NAS conduct a study on the health effects of Hg. Specifically, NAS was to evaluate EPA's reference dose (RfD) estimating the health effects of methylmercury. When NAS began its study, EPA, the U.S. Food and Drug Administration (FDA), and the Agency for Toxic Substances and Disease Registry (ATSDR) had all published risk assessments that used different methods and relied on different studies for their estimates of health risks. The estimates of a "safe" level of exposure from the three agencies were an RfD of 0.1 microgram/kg/day from EPA, an action level of 0.5 microgram/kg/day from FDA, and a minimal risk level of 0.3 microgram/kg/day from ATSDR. In its evaluation NAS focused on three epidemiologic studies and evaluated their strengths and weaknesses in detail (NRC, 2000).
Two studies—one conducted in the Faroe Islands (Grandjean et al., 1997) and one conducted in New Zealand (Kjellström et al., 1986, 1989)—concluded that there was an association between in utero exposure to methylmercury from maternal fish consumption and an increased risk of poor scores on neurobehavioral test batteries in early childhood. A third study—conducted in the Seychelles (Davidson et al., 1998)—concluded that no such association existed. NAS identified and analyzed a number of uncertainties in the scientific evidence, including the following (NRC, 2000):

5 An electric-utility steam-generating unit was defined as "any fossil-fuel–fired combustion unit of more than 25 megawatts electric (MWe) that serves a generator that produces electricity for sale."
6 Departments of Veterans Affairs and Housing and Urban Development, and Independent Agencies Appropriations Act of 1999, Pub. L. No. 105-276 (1999).
• Uncertainty related to benchmark doses. To compare benchmark doses generated by the three different studies, the committee analyzed the data for multiple endpoints from each of the three studies using the same statistical techniques and presented the range of benchmark doses generated from the different analyses. To compare the effect on the benchmark dose of using a single study versus analyzing data from all three studies together, the committee estimated and presented a benchmark dose by conducting an integrative analysis using Bayesian statistical approaches.
• Uncertainty related to default factors. To determine whether to use a default uncertainty-adjustment factor (a factor of 10 is the default) to account for variability among humans, the committee reviewed the toxicokinetic data on methylmercury measurements. After examining the scientific evidence, the committee recommended against using the default uncertainty-adjustment factor for toxicokinetic variability and recommended a factor of two to three instead.
• Uncertainty related to human variability among subpopulations. After looking at potentially sensitive populations, the committee highlighted the need to consider susceptible populations, including pregnant women and subsistence fishermen, in the assessment and subsequent decisions (NRC, 2000).

Since the publication of that report, EPA has conducted a regulatory impact analysis and published a rule to regulate the release of mercury and other toxic substances from coal-fired power plants (EPA, 2011b). As detailed in EPA's regulatory impact analysis in support of the final standards, the agency used a Bayesian hierarchical statistical model that integrates the data from the epidemiology studies for its dose–response model. That model, which was published by Axelrad et al. (2007), draws on the integrative analysis conducted by NAS (NRC, 2000).
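As a rough illustration of what integrating dose–response estimates across studies involves, the sketch below pools hypothetical per-study slopes by inverse-variance weighting (a fixed-effect simplification, not the Bayesian hierarchical model actually used) and derives a linear benchmark dose from the pooled slope. All values are invented.

```python
# Simplified stand-in for an integrative dose-response analysis:
# inverse-variance pooling of study-specific slopes, then a linear
# benchmark-dose calculation. Slopes and standard errors are hypothetical.

def pooled_slope(estimates):
    """estimates: list of (slope, standard_error) pairs, one per study."""
    weights = [1.0 / se ** 2 for _, se in estimates]
    total = sum(weights)
    return sum(w * s for w, (s, _) in zip(weights, estimates)) / total

def linear_bmd(slope, bmr=0.05):
    """Dose at which a linear dose-response model reaches the
    benchmark response (BMR)."""
    return bmr / slope

# Three hypothetical studies: (slope per unit dose, standard error).
studies = [(0.010, 0.003), (0.012, 0.004), (0.002, 0.006)]
slope = pooled_slope(studies)
bmd = linear_bmd(slope)
```

A full hierarchical treatment would also estimate between-study variability; this sketch only shows why pooling pulls the combined estimate toward the more precise studies.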
EPA's analyses also showed the effect of including and excluding a potential outlier in one of the studies. The analyses in the regulatory impact analysis also included risk and benefit calculations related to concomitant decreases in emissions of particles less than 2.5 micrometers in diameter (PM2.5) from the emission-control technologies that would be put in place. Unlike the NAS report (NRC, 2000), however, EPA does not present the effects of various choices on the estimates of health risks. Given that in the regulatory impact analysis much of the monetized benefits come from co-benefits due to decreased PM2.5-related premature mortalities, the lack of detailed uncertainty analyses for mercury might be appropriate. The estimated benefits from PM2.5 reductions are presented as a range ($37 billion to $90 billion + B), with the lower and higher benefits calculated using mortality estimates from two
different published studies, and B representing an amount from benefits that were not quantified. Although few detailed, quantitative uncertainty analyses are presented for the risk estimates for mercury exposures, the regulatory impact analysis does note a number of uncertainties in the analysis. Those uncertainties include "selection of IQ as a primary endpoint when there may be other more sensitive endpoints, selection of the blood-to-hair ratio for mercury, [sic] the dose–response estimates from the epidemiological literature [, and c]ontrol for confounding from the potentially positive cognitive effects of fish consumption and, more specifically, omega-3 fatty acids" (EPA, 2011b, pp. E-17–E-18). The regulatory impact analysis also discusses, and in some cases analyzes, uncertainties in factors other than health risk estimates that contribute to EPA's decisions, such as economic, technological, and social factors (EPA, 2011b); those are discussed in Chapter 3.

KEY FINDINGS

• Uncertainty in the data and analyses that are used in the assessment of risks is inescapable. Decision makers need to understand—either quantitatively or qualitatively—the types and magnitude of the uncertainty present in order to make an informed decision.
• Consideration of uncertainty analyses for the human health risk assessment should begin during the initial stages of considering a decision to help ensure that the analyses are appropriate to the decision.
• Although the use of agent-specific research-based adjustments is preferable, it is sometimes necessary and acceptable to use default adjustment factors to account for uncertainty in human health risk assessments. For example, defaults might need to be used when research-based analysis could lead to prolonged delays in regulatory decisions.
• Regardless of whether agent-specific research-based factors or default adjustment factors are used, communicating the basis of adjustment factors and their impact on human health risk estimates to decision makers and stakeholders is critical for regulatory decisions.
• EPA has made great strides in assessing the uncertainties in risk estimates, for example, by developing and applying probabilistic techniques and Monte Carlo analysis to uncertainty analysis.
• Although some uncertainty analyses are required by statute, the analyses conducted are not always helpful in agency decisions, and
in some cases, such as dioxin, striving to analyze every uncertainty might delay regulatory decisions.
• Consideration of uncertainty analysis should include the perspectives of stakeholders and should be useful to the decision makers.

RECOMMENDATION 1

To better inform the public and decision makers, U.S. Environmental Protection Agency (EPA) decision documents7 and other communications to the public should systematically

• include information on what uncertainties in the health risk assessment are present and which need to be addressed,
• discuss how the uncertainties affect the decision at hand, and
• include an explicit statement that uncertainty is inherent in science, including the science that informs EPA decisions.

REFERENCES

Axelrad, D. A., D. C. Bellinger, L. M. Ryan, and T. J. Woodruff. 2007. Dose–response relationship of prenatal mercury exposure and IQ: An integrative analysis of epidemiologic data. Environmental Health Perspectives 115(4):609–615.
Davidson, P. W., G. J. Myers, C. Cox, C. Axtell, C. Shamlaye, J. Sloane-Reeves, E. Cernichiari, L. Needham, A. Choi, and Y. Wang. 1998. Effects of prenatal and postnatal methylmercury exposure from fish consumption on neurodevelopment. Journal of the American Medical Association 280(8):701–707.
Dockery, D. W., C. A. Pope, X. Xu, J. D. Spengler, J. H. Ware, M. E. Fay, B. G. Ferris Jr., and F. E. Speizer. 1993. An association between air pollution and mortality in six US cities. New England Journal of Medicine 329(24):1753–1759.
EPA (Environmental Protection Agency). 1986. Guidelines for carcinogen risk assessment. Washington, DC: Risk Assessment Forum, EPA.
———. 1992. Guidelines for exposure assessment. Washington, DC: Risk Assessment Forum, EPA.
———. 1997a. Guidance on cumulative risk assessment: Part 1 planning and scoping. http://www.EPA.gov/brownfields/html-doc/cumrisk2.htm (accessed January 14, 2012).
———. 1997b. Guiding principles for Monte Carlo analysis. Washington, DC: Risk Assessment Forum, EPA.
———. 1997c. Mercury study report to Congress. Washington, DC: EPA.
———. 1997d. Policy for use of probabilistic analysis in risk assessment. http://www.EPA.gov/osa/spc/pdfs/probpol.pdf (accessed April 15, 2012).
———. 1998a. Guidelines for neurotoxicity risk assessment. Washington, DC: Risk Assessment Forum, EPA.
———. 1998b. Study of hazardous air pollutant emissions from electric utility steam generating units—final report to Congress.

7 The committee uses the term "decision document" to refer to EPA documents that go from EPA staff to the decision maker and documents produced to announce an agency decision.
———. 2001a. EPA announces arsenic standard for drinking water of 10 parts per billion. Washington, DC: EPA.
———. 2001b. National primary drinking water regulations; arsenic and clarifications to compliance and new source contaminants monitoring. Federal Register 66(14):6976–7066.
———. 2001c. National primary drinking water regulations; arsenic and clarifications to compliance and new source contaminants monitoring. Federal Register 66(78):20580–20584.
———. 2004. An examination of EPA risk assessment principles and practices. Washington, DC: Office of the Science Advisor, EPA.
———. 2005a. Guidelines for carcinogen risk assessment. Washington, DC: Risk Assessment Forum, EPA.
———. 2005b. Regulatory impact analysis for the final Clean Air Interstate Rule. Washington, DC: Office of Air and Radiation, EPA.
———. 2009. Toxicological review of trichloroethylene: In support of summary information on the Integrated Risk Information System (IRIS) [external review draft]. Washington, DC: EPA.
———. 2011a. Draft—guidance for applying quantitative data to develop data-derived extrapolation factors for interspecies and intraspecies extrapolation.
———. 2011b. Regulatory impact analysis for the final mercury and air toxics standards. Research Triangle Park, NC: EPA, Office of Air Quality Planning and Standards, Health and Environmental Impacts Division.
———. 2012a. Clean Air Interstate Rule (CAIR). http://www.EPA.gov/cair/index.html (accessed November 11, 2012).
———. 2012b. EPA risk assessment—basic information. http://EPA.gov/riskassessment/basicinformation.htm#arisk (accessed June 12, 2012).
GAO (Government Accountability Office). 2006. Human health risk assessment: EPA has taken steps to strengthen its process, but improvements needed in planning, data development, and training. Washington, DC: GAO.
Grandjean, P., P. Weihe, R. F. White, F. Debes, S. Araki, K. Yokoyama, K. Murata, N. Sørensen, R. Dahl, and P. J. Jørgensen. 1997. Cognitive deficit in 7-year-old children with prenatal exposure to methylmercury. Neurotoxicology and Teratology 19(6):417–428.
Gray-Donald, K., and M. Kramer. 1988. Causality inference in observational vs. experimental studies. American Journal of Epidemiology 127(5):885–892.
Haber, L. T., J. S. Dollarhide, A. Maier, and M. L. Dourson. 2001. Noncancer risk assessment: Principles and practice in environmental and occupational settings. In Patty's toxicology. John Wiley & Sons, Inc.
Hill, A. B. 1965. The environment and disease: Association or causation? Proceedings of the Royal Society of Medicine 58:295–300.
Kjellström, T., P. Kennedy, S. Wallis, and C. Mantell. 1986. Physical and mental development of children with prenatal exposure to mercury from fish. Stage 1: Preliminary tests at age 4. Report 3080. Solna, Sweden: National Swedish Environmental Protection Board.
Kjellström, T., P. Kennedy, S. Wallis, and C. Mantell. 1989. Physical and mental development of children with prenatal exposure to mercury from fish. Stage II: Interviews and psychological tests at age 6. Solna, Sweden: National Swedish Environmental Protection Board.
Krupnick, A., R. Morgenstern, M. Batz, P. Nelson, D. Burtraw, J. Shih, and M. McWilliams. 2006. Not a sure thing: Making regulatory choices under uncertainty. Washington, DC: Resources for the Future.
Meek, M., A. Renwick, E. Ohanian, M. Dourson, B. Lake, B. Naumann, and V. Vu. 2002. Guidelines for application of chemical-specific adjustment factors in dose/concentration–response assessment. Toxicology 181:115–120.
Morales, K. H., L. Ryan, T.-L. Kuo, M.-M. Wu, and C.-J. Chen. 2000. Risk of internal cancers from arsenic in drinking water. Environmental Health Perspectives 108(7):655–661.
NRC (National Research Council). 1983. Risk assessment in the federal government: Managing the process. Washington, DC: National Academy Press.
———. 1989. Improving risk communication. Washington, DC: National Academy Press.
———. 1991. Human exposure assessment for airborne pollutants: Advances and opportunities. Washington, DC: National Academy Press.
———. 1994. Science and judgment in risk assessment. Washington, DC: National Academy Press.
———. 1996. Understanding risk: Informing decisions in a democratic society. Washington, DC: National Academy Press.
———. 1999. Arsenic in drinking water. Washington, DC: National Academy Press.
———. 2000. Toxicological effects of methylmercury. Washington, DC: National Academy Press.
———. 2001. Arsenic in drinking water: 2001 update. Washington, DC: National Academy Press.
———. 2002. Estimating the public health benefits of proposed air pollution regulations. Washington, DC: The National Academies Press.
———. 2004. Intentional human dosing studies for EPA regulatory purposes: Scientific and ethical issues. Washington, DC: The National Academies Press.
———. 2005. Risk and decisions about disposition of transuranic and high-level radioactive waste. Washington, DC: The National Academies Press.
———. 2006. Health risks from dioxin and related compounds. Washington, DC: The National Academies Press.
———. 2007. Scientific review of the proposed risk assessment bulletin from the Office of Management and Budget. Washington, DC: The National Academies Press.
———. 2009. Science and decisions: Advancing risk assessment. Washington, DC: The National Academies Press.
———. 2011. Review of the Environmental Protection Agency's draft IRIS assessment of formaldehyde. Washington, DC: The National Academies Press.
OMB (Office of Management and Budget) and OSTP (Office of Science and Technology Policy). 2007. Updated principles for risk analysis: Memorandum for the heads of executive departments and agencies. http://www.whitehouse.gov/omb/memoranda/fy2007/m07-24.pdf (accessed January 4, 2012).
Pope, C. A., M. J. Thun, M. M. Namboodiri, D. W. Dockery, J. S. Evans, F. E. Speizer, and C. W. Heath, Jr. 1995. Particulate air pollution as a predictor of mortality in a prospective study of US adults. American Journal of Respiratory and Critical Care Medicine 151(3):669–674.
Pope III, C. A. 2000. Review: Epidemiological basis for particulate air pollution health standards. Aerosol Science & Technology 32(1):4–14.
Samoli, E., A. Analitis, G. Touloumi, J. Schwartz, H. R. Anderson, J. Sunyer, L. Bisanti, D. Zmirou, J. M. Vonk, and J. Pekkanen. 2005. Estimating the exposure–response relationships between particulate matter and mortality within the APHEA multicity project. Environmental Health Perspectives 113(1):88.
Zeger, S., F. Dominici, A. McDermott, and J. M. Samet. 2004. Bayesian hierarchical modeling of public health surveillance data: A case study of air pollution and mortality. In Monitoring the health of populations: Statistical principles and methods for public health surveillance. Oxford: Oxford University Press. Pp. 267–288.