Appendix A

Approaches to Accounting for Uncertainty

This appendix discusses a number of different approaches available for analyzing uncertainty and considering uncertainty in decisions. It provides greater detail on the approaches to analyzing or accounting for uncertainties in decisions that are discussed in Chapter 5. Most environmental problems require the use of multiple approaches to uncertainty analysis. For example, most environmental decisions involve variability and heterogeneity as well as model and parameter uncertainty. As a result, it is necessary to apply a mix of statistical analyses and expert judgments.

HEALTH ONLY

When assessing human health risks, the main uncertainties arise in projecting exposures and health effects in the baseline case—that is, absent a change in a risk management strategy—and in projecting the effects of a given management intervention (for example, a proposed regulatory action, such as the implementation of an emission standard). Variability and heterogeneity can occur because of variability both in exposures and in sensitivity to the exposure among subgroups of the population, and existing data may be inadequate to accurately characterize the underlying heterogeneity. Further uncertainty can arise when using models that combine multiple health effects into a single outcome measure.



The National Academies | 500 Fifth St. N.W. | Washington, D.C. 20001
Copyright © National Academy of Sciences. All rights reserved.

THE USE OF DEFAULTS

Many U.S. Environmental Protection Agency (EPA) decisions consider only health factors. EPA's primary general approach for considering uncertainty in this class of problems has been to use safety or default adjustment factors. (See Chapter 2 for a discussion of the use of default adjustment factors, or defaults.) The decision rule for these approaches is to set a standard or regulation that is highly protective by applying defaults. These approaches are health protective in nature, widely used, and sometimes embodied in statutes. As discussed in Science and Decisions (NRC, 2009), many of the defaults that EPA uses were developed on a scientific basis and can be adequate and acceptable to use in some risk assessments. Defaults can be used, for example, when there is not adequate information or when the potential uncertainties are such that the use of defaults rather than quantitative uncertainty analyses is unlikely to affect a decision.

One of the main objections that decision analysts have to using default factors is that they incorporate implicit judgments by analysts or scientists who do not make the regulatory decision. Furthermore, those judgments and their implications are not always independent and are not always explained to decision makers, which makes it difficult for the decision makers to properly interpret the assessment in the context of other factors.

Using health-protective (called conservative) analytic or default approaches to account for multiple uncertainties can result in an overestimation of health risks and a level of precaution in excess of one based on expected values (Nichols and Zeckhauser, 1988; Viscusi et al., 1997).
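The mechanics of applying stacked defaults can be shown in a minimal sketch. The factor names and every numeric value below are illustrative assumptions, not values tied to any particular EPA assessment: a reference dose is derived by dividing a point of departure (here, a hypothetical NOAEL) by the product of the default factors.

```python
# Sketch of applying default adjustment ("uncertainty") factors.
# All factor names and numbers are hypothetical illustrations.

DEFAULT_FACTORS = {
    "interspecies (animal-to-human)": 10.0,
    "intraspecies (human variability)": 10.0,
    "subchronic-to-chronic study": 10.0,
}

def reference_dose(point_of_departure_mg_per_kg_day, factors=DEFAULT_FACTORS):
    """Divide the point of departure by the combined default factors."""
    combined = 1.0
    for factor in factors.values():
        combined *= factor  # defaults multiply, which is how conservatism compounds
    return point_of_departure_mg_per_kg_day / combined

# A hypothetical NOAEL of 50 mg/kg-day, divided by a combined 1,000-fold
# adjustment, yields a reference dose of 0.05 mg/kg-day.
rfd = reference_dose(50.0)
```

Because the factors multiply, each additional default compounds the overall margin, which is the arithmetic behind the compounding-conservatism concern discussed next.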
When this happens—a situation sometimes referred to as compounding conservatism—the level of precaution in each individual analysis might be such that the marginal cost of precaution equals or slightly outweighs the marginal health benefit, but when multiple analyses using that level of precaution are combined, the overall marginal cost far exceeds the overall marginal benefit. It is unclear, however, how extensive that problem is in practice, and, as discussed by the Government Accountability Office, EPA has taken steps to improve such analyses and avoid some of the problems of compounding conservatism (GAO, 2006). Cullen (1994) evaluated the effects of potential compounding conservatism and found that "there exist cases in which conservatism compounds dramatically, as well as those for which the effect is less notable" (p. 392). To the extent that the probability distribution function is flat and wide (that is, it has "fat tails"; see Farber, 2007, for discussion) rather than tall and single-peaked, the safety factor will be high relative to the expected value. Conversely, if the probabilities of adverse outcomes are very

low, the cost may be low. Basing decisions on values in the left or right tail of a distribution rather than on the mean of the distribution will typically result in a suboptimal allocation of scarce resources, even in the presence of uncertainty. Such decisions can also lead to opportunity costs in the form of wasted resources, as well as to cynicism about the benefits of regulation.

VARIABILITY AND HETEROGENEITY

Variability and heterogeneity (that is, randomness) are often seen in environmental conditions, exposure levels, and the susceptibility of individuals. When there is information about this randomness with which to conduct statistical analyses, sometimes including extreme value analyses, it is appropriate to use such analyses to generate distributions. Randomness typically occurs in an equation's error term, in the parameters describing the relationships, or both. There may also be structural differences in the relationships among population subgroups (Table 5-1, first row, first column). There are important considerations that the statistician or epidemiologist should address when designing appropriate statistical models and procedures, such as the choice of the assumptions underlying a specific random process (for example, a binomial or Poisson process) or the assumption of independent sampling. Those issues are well covered by textbooks on statistics.

The use of statistics-based probabilistic risk analysis is an alternative to using safety or uncertainty factors. Probabilistic risk analysis uses probability distributions to quantify uncertainty at each step of the risk assessment. For example, a probabilistic risk analysis would quantify the uncertainties about a dose–response relationship for exposure to fine particulates by using epidemiological studies to construct a probability distribution around the slope of the dose–response function.
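A slope distribution of this kind is often propagated by Monte Carlo simulation. The sketch below assumes a linear dose–response with a lognormally distributed slope; the distribution parameters, exposure level, and risk threshold are all hypothetical.

```python
import random

# Monte Carlo sketch of a probabilistic risk analysis: treat the slope of a
# linear dose-response function as uncertain (here lognormally distributed)
# and propagate that uncertainty to the risk at a given exposure level.
# The distribution parameters, exposure, and threshold are hypothetical.

random.seed(1)  # reproducible sketch

def sample_risks(exposure, mu=-12.0, sigma=0.5, n=10_000):
    """Draw risk = slope * exposure, with slope ~ lognormal(mu, sigma)."""
    return [random.lognormvariate(mu, sigma) * exposure for _ in range(n)]

def exceedance_probability(risks, threshold):
    """Fraction of sampled risks above a tolerable-risk threshold."""
    return sum(r > threshold for r in risks) / len(risks)

risks = sample_risks(exposure=1.0)
p_exceed = exceedance_probability(risks, threshold=1e-5)  # e.g., 1 in 100,000
```

The resulting exceedance probability is exactly the kind of quantity the decision rule discussed next operates on: an option is acceptable if the probability of exceeding the tolerable risk is low enough.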
The typical decision rule in a probabilistic risk analysis is to select a standard or regulatory option that satisfies a specific probability criterion. Thus probabilistic risk analysis and decision analysis1 are well suited to dealing with the challenge of ensuring an acceptable margin of safety. For example, if a distribution describes the lifetime cancer risk of a maximally exposed individual, analysts can provide the decision maker with the probability that cancer will occur in such an individual—for example, 1 chance in 100, 1 in 10,000, 1 in 100,000, 1 in 1 million, or 1 in 10 million—for a variety of different regulatory options. The decision maker can then decide which of those probabilities is acceptable and choose a regulatory option that reduces the probability to that level.

1 Decision analysis uses a systematic approach to make decisions in the face of uncertainty. In contrast, probabilistic risk analysis estimates risks, in this case human health risks, using probabilistic statistical methods.

The advantage of using such analytic

approaches and such a decision rule is that it characterizes the whole range of the probability distribution of outcomes and there is an explicit choice of the level of risk that is tolerable. This enables the decision maker to explain the decision process, including his or her values, quite clearly. For example, the estimated health risks associated with multiple concentrations of a chemical could be presented, and a decision could then be made as to what health risk is acceptable. As discussed in Chapter 2, such an approach was used in the National Research Council's (NRC's) report Arsenic in Drinking Water: 2001 Update (NRC, 2001). That report presented the estimated risks of bladder and lung cancer at 3, 5, 10, and 20 ppb of arsenic in drinking water, leaving it for EPA to decide, taking other factors such as costs into consideration, what cancer risk was acceptable and, therefore, what level of arsenic should be allowed in drinking water.

Sometimes the standard tools and techniques for assessing uncertainty can be difficult to use when faced with the uncertainty depicted by the first two rows of Table 5-1 (that is, variability and heterogeneity, or model and parameter uncertainty). Probabilistic risk analysis often makes assumptions about the underlying probability distributions that describe the risk (for example, whether a particular distribution is normal or log-normal). The possibility of nonregular probability distributions makes performing a probabilistic risk analysis more difficult, especially when the underlying distribution for a parameter has what is termed a "fat tail." Distributions of extreme events can have fat tails (for discussion, see Farber, 2007), and under those circumstances the "fat tail" has to be modeled. For example, in the aftermath of Hurricane Katrina's devastation of New Orleans, Southwell and von Winterfeldt (2008) noted that when the U.S.
Army Corps of Engineers was designing and building levees and floodwalls in the 1970s and 1980s, the Corps estimated that a Category 4 hurricane would occur once in 100 years in New Orleans (U.S. Army Corps of Engineers, 1984). That estimate was made despite the fact that two hurricanes rated Category 3 or higher had hit Louisiana not far from New Orleans in the 20 years before the estimate was made: In 1965, while still in the Gulf of Mexico, Hurricane Betsy peaked at a force just below a Category 5 storm, and when its eye made landfall southeast of New Orleans at Grand Isle, Louisiana, Betsy was rated a Category 3; and in 1969 Hurricane Camille made landfall in Mississippi as a Category 5 storm. The fact that Hurricane Gustav, a Category 3 hurricane, hit New Orleans only 3 years after the near-Category 4 Hurricane Katrina raised more questions about the 100-year estimate for the occurrence of Category 4 hurricanes and about whether the statistical methods used for the prediction were appropriate. Statisticians (Cooke et al., 2011) and decision theorists (Bier et al., 1999) have proposed the use of methods that emphasize extremes in order

to adjust for a fat-tail distribution when dealing with certain rare but potentially catastrophic events. However, such an approach assumes that the inaccurate predictions are the result of a stable underlying distribution that has been somehow mis-specified. But it is certainly possible that the inaccurate predictions are instead the result of a structural change and, therefore, that the probability distributions drawn from the historical record are no longer relevant. For example, the levels of protection provided by the levees and floodwalls around New Orleans decreased over time because of natural and man-made changes, and those decreases were not accounted for in the models. As a practical matter, however, it may not be possible to distinguish between the two sources of error.

MODEL AND PARAMETER UNCERTAINTY

Expert elicitations are often used when dealing with uncertainty about what statistical model to use and which parameters to use in the model (second row, first column in Table 5-1). An expert elicitation is "a formal systematic process to obtain quantitative judgments on scientific questions (to the exclusion of personal or social values and preferences)" (EPA, 2011, p. 23).2 For example, the U.S. Food and Drug Administration and the Food Safety and Inspection Service used expert elicitation to rank foods according to their ability to support the growth of Listeria monocytogenes (see Chapter 4 for discussion) (FDA, 2003). Expert elicitation and statistical analysis are not mutually exclusive, and a decision maker may choose to use both methods. Expert elicitation processes have been designed and used to quantify model and parameter uncertainties (Hora, 2007).
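One common way to turn several experts' quantitative judgments into a single distribution is an equal-weight linear opinion pool, which simply averages the probabilities the experts assign to each outcome. The expert judgments below are illustrative assumptions, and equal weighting is only one of several aggregation rules.

```python
# Equal-weight linear opinion pool: average the probability vectors that
# several experts assign to the same set of outcomes. The three experts'
# judgments below (for low / medium / high risk) are illustrative.

def linear_opinion_pool(expert_probs, weights=None):
    """Weighted average of per-expert probability vectors over shared outcomes."""
    n_experts = len(expert_probs)
    if weights is None:
        weights = [1.0 / n_experts] * n_experts  # equal weights by default
    n_outcomes = len(expert_probs[0])
    return [
        sum(w * probs[i] for w, probs in zip(weights, expert_probs))
        for i in range(n_outcomes)
    ]

experts = [
    [0.60, 0.30, 0.10],
    [0.40, 0.40, 0.20],
    [0.50, 0.25, 0.25],
]
pooled = linear_opinion_pool(experts)  # approximately [0.50, 0.317, 0.183]
```

How to weight experts, and whether to pool at all rather than elicit a group consensus, is among the disputed methodological questions noted in this section.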
For example, EPA had a formal expert elicitation conducted to incorporate "expert judgments into uncertainty analyses for the benefits of air pollution rules" concerning particles less than 2.5 micrometers in diameter, using "carefully structured questions about the nature and magnitude of the relationship between changes in annual average PM2.5 and annual, adult, all-cause mortality in the U.S." (Industrial Economics Incorporated, 2006, p. ii). Expert elicitations can also be used in combination with Bayesian updating models. For instance, elicitations can be used to set or revise initial probability distributions that reflect the state of existing knowledge before new evidence is incorporated through a Bayesian analysis, rather than relying on data alone. Although many issues are still disputed—such as whether to elicit experts individually or in groups, how to aggregate different opinions when experts are elicited individually, and how to combine expert opinion with the results of statistical analysis if both techniques are used (Leal et

2 Expert elicitation is described in detail elsewhere (see EPA, 2011; Slottje et al., 2008).

al., 2007)—the process of using experts and Bayesian updating is becoming more widely used (Choy et al., 2009; Kuhnert et al., 2010). Such an approach has been applied in various areas, including health care resource allocation (Griffin et al., 2008), conservation science (Martin et al., 2012; O'Leary et al., 2008), ecological models (Kuhnert, 2011), coral reef protection (Bouma et al., 2011), and animal diseases (Garabed et al., 2008).

The uncertainty that results from analytical model uncertainty—that is, from not knowing what statistical model should be used to estimate relationships such as the dose–response relationship—is difficult to quantify, but in some cases it can have a more pronounced effect on estimates of health risk than parameter uncertainty. Rhomberg (2000), for example, showed the wide range of risk estimates that could result from applying different models to the results of studies of trichloroethylene exposures in mice. Similarly, the choice of which statistical model to use to extrapolate from a high-exposure occupational study to low-dose exposures can have an order-of-magnitude effect on estimates of health risk. As discussed by NRC (2006), to explore the effects of model choice, EPA estimated in its risk assessment the point of departure for determining an acceptable dose (in this case, one that produces a 1 percent change in the risk of cancer, or ED01) using data from a dioxin occupational exposure study (Steenland et al., 2001) but using two different statistical models.3 The point of departure estimated using a power statistical model was 1.38 ng/kg, but the corresponding value estimated with a linear model was 18.6 ng/kg (NRC, 2006).

3 The piecewise linear model function was e^(bx); the power model function was x/background (EPA, 2003; NRC, 2006).
The point of departure was more than an order of magnitude higher when the extrapolation to a low dose was done with a linear model rather than a power model; the choice between the two extrapolation models could thus shift the resulting regulatory standard by more than an order of magnitude. Disagreements about which model is appropriate for low-dose extrapolations of the cancer risks of dioxin have resulted in extensive delays in finalizing the dioxin health risk assessment. Although expert elicitation might be able to guide the decision of which model is most appropriate, there are only a handful of examples of expert elicitation processes being used for this purpose in the environmental setting. One example is the work of Ye et al. (2008), who used expert elicitation to weight five models developed for the Death Valley regional flow system.

Developing models to deal with multiple datasets is an active area of research. For example, the National Cancer Institute funds the Cancer Intervention and Surveillance Modeling Network (CISNET), whose goal is to develop tools to assist in synthesizing cancer-related evidence (NCI, 2012).
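Weighting competing models, as in the Ye et al. example above, amounts in its simplest form to model averaging: each model's risk estimate is weighted by an elicited probability that the model is correct. The two model forms, the weights, and all numeric values below are illustrative assumptions, not the dioxin models discussed above.

```python
# Sketch of addressing model uncertainty by model averaging: weight the risk
# estimates from competing dose-response models by elicited probabilities
# that each model is correct. Model forms, weights, and numbers are
# hypothetical.

def linear_risk(dose, slope=1e-3):
    """Linear low-dose extrapolation: risk proportional to dose."""
    return slope * dose

def power_risk(dose, coeff=5e-5, exponent=1.7):
    """Power-model extrapolation: risk rises nonlinearly with dose."""
    return coeff * dose ** exponent

def averaged_risk(dose, weights=None):
    """Average the models' estimates using elicited model weights."""
    if weights is None:
        weights = {"linear": 0.6, "power": 0.4}  # elicited, hypothetical
    estimates = {"linear": linear_risk(dose), "power": power_risk(dose)}
    return sum(weights[name] * estimates[name] for name in estimates)

# The averaged estimate always lies between the individual model estimates.
risk = averaged_risk(10.0)
```

Reporting the individual estimates alongside the average preserves the information about how much the model choice itself matters.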

In other instances expert committees have recommended that, in the absence of a biological rationale for choosing one particular model—such as a well-established mode or mechanism of action that indicates how a low dose of a chemical initiates or induces cancer—the choice of a model should be driven by the fit of the existing data to a statistical model and the biological plausibility of the model. For example, Arsenic in Drinking Water: 2001 Update (NRC, 2001) found the available information about how arsenic might cause cancer (that is, the mode-of-action data) "insufficient to guide the selection of a specific dose–response model," and recommended an "additive Poisson model with a linear term in dose [because it] is a biologically plausible model that provides a satisfactory fit to the epidemiological data and represents a reasonable model choice for use in the arsenic risk assessment" (p. 209). That report also assessed the impact of model choice on risk estimates in order to provide information on whether model choice is an important source of uncertainty.

If there is insufficient time or insufficient consensus information for expert elicitation and subjective model weighting or averaging to estimate model uncertainty, or if such uncertainty analyses are not required given the context of the decision, then one can choose to use a structured system of model defaults and criteria for departures from those defaults. Science and Decisions (NRC, 2009) highlights the delays that have occurred because of disagreements about the "adequacy of the data to support a default or an alternative approach" (p. 7), and recommends that EPA "describe specific criteria that need to be addressed for the use of alternatives to each particular default assumption" (p. 8).
TECHNOLOGY AVAILABILITY

As discussed in Chapters 1 and 3, in addition to considering estimates of health risks, some EPA decisions consider the availability—either current availability or availability expected in the foreseeable future—of the technology necessary to achieve a desired exposure reduction and health outcome (second column in Table 5-1). In such cases, two questions arise in addition to the questions related to estimates of health risks. First, which technologies are available, or likely to be available soon, that can achieve the desired reductions in risk? Second, if several technologies might achieve the health objectives, which one or ones are most suitable? The choice of technology can depend on several factors, including current versus future availability, effectiveness in decreasing exposures and improving health outcomes, and the cost of investments in the new technology, irrespective of how those costs are borne. As discussed in Chapter 3, the availability of new technology is not independent of rulemaking. Rulemaking can, in effect, create a market for technologies. If entrepreneurs believe there will

be a market for a new product, they will be more likely to invest in the research and development of such a product. Questions about whether those potential markets will spur the development of new technologies by the time a regulation is implemented add to the uncertainty concerning the technology. Expert elicitations and expert judgments can provide much of the needed information concerning technology availability.

In answering the first question—which appropriate technologies are available or soon will be—a key issue is the uncertainty about the likelihood that a relatively new technology can be successfully deployed and, if it is successfully deployed, how well it will perform. In discussing how to ameliorate the effects of the growing amount of carbon dioxide in the atmosphere, for example, one might ask which carbon sequestration technologies can be successfully deployed and, of those, which will have the best performance. Decision tree analysis, one of several analytic techniques used in the field of decision analysis (Clemen, 1998; Raiffa, 1968), is useful for answering such questions. In this case the branches of the decision tree would represent a particular technology being available or not available at some specified time in the future, with probabilities attached to each outcome.

Suppose that the decision involves mutually exclusive choices among technologies, not all of which are available at the time the decision is being made. The decision tree lays out the initial decision options (technologies), and for those technologies that do not yet exist the option is followed by nodes that represent the chance of success or failure in implementing the technology.
Success and, separately, failure are each followed by additional decisions that represent, for example, adjustments to the technology that are likely to occur in response to the initial success or failure. These are followed by final chance nodes,4 such as indicators of actual performance, and by the utility (or gain measured in some other way) associated with each performance level. One technology may be more promising than another, in which case its actual performance level will be higher—as will the associated utility. EPA's estimates of technological advances are further complicated because, although the agency establishes standards and conducts analyses of the costs and benefits of technologies, other sectors (for example, industry) will develop and use the particular technology in question.

When technologies vary on multiple dimensions—for example, cost, performance, and reliability—an analysis of the trade-offs among these dimensions is needed. Such analyses are discussed below.

4 A chance node is an event or point in a decision tree where a degree of uncertainty exists.
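Rolling back such a decision tree can be sketched in a few lines: each success/failure chance node collapses to an expected utility, and the decision rule picks the option with the highest value. The option names, probabilities, and utilities below are entirely hypothetical.

```python
# Minimal expected-utility rollback of a technology decision tree: an unproven
# technology is followed by a chance node for deployment success or failure,
# each branch ending in a utility. All names and numbers are hypothetical.

def expected_utility(p_success, utility_success, utility_failure):
    """Roll back a single success/failure chance node."""
    return p_success * utility_success + (1 - p_success) * utility_failure

options = {
    # A proven technology with known, middling performance:
    "proven scrubber": 0.70,
    # An unproven technology: 60% chance of successful deployment
    # (utility 1.0), otherwise a fallback outcome with utility 0.3:
    "novel capture process": expected_utility(0.6, 1.0, 0.3),
}

# Decision rule: choose the option with the highest expected utility.
best = max(options, key=options.get)
```

With these illustrative numbers the unproven technology's expected utility (0.72) edges out the proven one (0.70); a richer tree would add the downstream adjustment decisions and performance chance nodes described above.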

COST–BENEFIT COMPARISONS

Cost–benefit trade-offs can be analyzed by cost-effectiveness or cost–benefit analysis (referred to as economic factors in Figure 1-2 and defined and discussed at length in Chapter 3; see also the third row in Table 5-1). Cost-effectiveness analysis is used much more widely than cost–benefit analysis for decisions involving personal and public health (Gold, 1996; Sloan and Hsieh, 2012). By contrast, cost–benefit analysis is much more widely used in business5 and is the focus of EPA's environmental applications, although the Office of Management and Budget has recommended that EPA also use cost-effectiveness analyses (OMB, 2012). A major advantage of cost–benefit analysis is that nonhealth benefits can be included along with health benefits in the benefit calculation.

A number of studies have evaluated the quality of cost–benefit analyses conducted to support regulatory decisions, including environmental decisions. Agency cost–benefit analyses have been criticized for not consistently providing a range of total benefits and costs and information on net benefits (Ellig and McLaughlin, 2012; Hahn and Dudley, 2004; Hahn and Tetlock, 2007). A number of studies have assessed the extent to which the outcomes of those analyses affected the regulatory decisions for which they were performed, often finding that it was not clear exactly how the analyses were considered in making the decision (Hahn and Dudley, 2007; Hahn and Tetlock, 2008). It does seem, however, that such analyses are becoming more widely used in regulatory decisions (Ellig and McLaughlin, 2012).

As discussed in Chapter 3, one uncertainty when evaluating costs and benefits is which costs and benefits to include in the analyses. It is important to determine during the problem-formulation phase, at the start of the decision-making process, which costs and benefits should be included.
For example, there needs to be a decision on whether the economic analysis should include employment losses secondary to an environmental disaster. Here the analyst relies on the decision maker's input in defining program objectives and limiting the scope of the analysis, a situation that highlights the importance of having both the decision makers and the analysts involved in this first phase.

The decision rule in cost–benefit analyses is to select the regulatory option with the largest expected net social benefit. As with other decision rules, the decision maker is using predicted estimates of the future; the decisions, therefore, are based on estimates of future values. These estimates reflect the underlying probability distributions of potential costs and benefits, and the approaches require an explicit consideration of the underlying

5 Cost–benefit analyses for business applications typically are not made public and are conducted to provide information to maximize profits. Environmental cost–benefit analyses take social benefits and costs into account.

probabilities of various outcomes and of the costs and benefits (or utilities) associated with each outcome. When the outcomes of decisions are reasonably well defined and the underlying probability distributions are reasonably well characterized, these techniques work reasonably well. Problems may arise in practice because, for example, the heterogeneity of the dose response is not adequately characterized, issues arise concerning whether existing data can be generalized to the decision problem, or there is disagreement about how to value an endpoint. For most of these issues, additional research may be the answer. The value-of-information approach described below provides a formal way of determining the benefit of further research.

There are also circumstances in which the decision rules do not work well. First, there may be substantial investment costs associated with certain policy options. If for some reason the investment returns are much less than expected, the large cost of the investment must still be paid. The widely used analytic decision-making tools are quite useful when it is possible to make midcourse corrections in response to new information gained with experience. If the consequences of implementing a policy are irreversible or very costly to reverse, however, basing decisions on expected values may be highly inadvisable, and it may be more appropriate to give more weight in the decision to the unfavorable (and irreversible) outcomes, which is a health-protective approach.

The decision rule in multiattribute utility analysis is to select the regulatory option that maximizes expected utility. It has been used in various health applications (Feeny et al., 2002; Orme et al., 2007), and multiattribute utility analysis and other multicriteria decision analysis tools have also been applied to decisions related to environmental issues (for a review, see Kiker et al., 2005).
For example, Merkhofer and Keeney (1987) applied multiattribute utility analysis to help the Department of Energy determine a storage location for nuclear waste. It has also been used for decisions related to the management of the spruce budworm in Canadian forests (Levy et al., 2000) and the selection of a management approach for the Missouri River (Prato, 2003).

In multiattribute utility analysis, utility is a function of each attribute taken individually as well as of the attributes' interactions with one another (Clemen and Lacke, 2001; Keeney and Raiffa, 1976; Morris et al., 2007). In the context of public policy decision making, attributes, or the probabilities of various attributes, are associated with each policy option. For instance, cleaning up a site may have several types of payoffs, such as improving various health outcomes and, if factors other than health are considered in the decision, fostering the development of new approaches to cleanup, promoting local economic development, and providing recreational opportunities.
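The simplest multiattribute form, an additive model without interaction terms, can be sketched for a cleanup decision like the one just described. The attribute names, weights, and single-attribute scores below are illustrative assumptions; a full analysis would elicit them from decision makers or stakeholders.

```python
# Additive multiattribute utility sketch for a site-cleanup decision.
# Weights and 0-1 attribute scores are hypothetical illustrations.

WEIGHTS = {"health": 0.5, "economic development": 0.3, "recreation": 0.2}

def multiattribute_utility(scores, weights=WEIGHTS):
    """Weighted sum of single-attribute utilities (weights sum to 1)."""
    return sum(weights[a] * scores[a] for a in weights)

options = {
    "full excavation": {"health": 0.9, "economic development": 0.4, "recreation": 0.8},
    "capping in place": {"health": 0.6, "economic development": 0.7, "recreation": 0.5},
}

utilities = {name: multiattribute_utility(s) for name, s in options.items()}
# Decision rule: select the option that maximizes (expected) utility.
best = max(utilities, key=utilities.get)
```

The additive form makes the embedded judgments visible: changing the weights can reverse the ranking, which is exactly why the weights should be stated explicitly.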

Multiattribute utility analysis is particularly useful when valid and reliable utility-of-attribute weights are available from an existing source, as is the case for a number of health applications. Such utility weights may not be as readily available for environmental decision-making applications, in which case the weights would have to be derived as part of the analytic process. Such weights typically vary among individuals and groups, so there is the question of whose weights to use—those of the decision maker, of the stakeholders, or of the members of the public at large—and how best to elicit those weights from the various groups. The weights assigned might also vary among different people and over time. For example, the weights for additional lives saved might vary for different people over time; the value of years of life saved would be different for, say, a 90-year-old than for a newborn baby. At the individual patient level, medical decision makers attempt to maximize expected utility for the patient, rather than simply maximizing the patient's expected life-years. At the population level, medical decision-making guidelines use quality-adjusted life-years (QALYs), computed as a population average, to weight lives saved (Gold et al., 2002).

A number of characteristics of multiattribute utility analysis make it useful to environmental decision makers. First, it can explicitly address the uncertainties of the regulatory problem, including uncertainties concerning the performance of the technologies (discussed on page 238), the risks and the risk reduction that is achievable, and other factors important in the evaluation of technologies. Second, it uses judgments of decision makers to quantify reasonable and defensible trade-offs among the impacts of technology options, which can be informed by—but are not necessarily equal to—those obtained from market studies and surveys.
Third, it can account for risk aversion in length of life or other outcomes—that is, it uses a nonlinear utility function over outcomes. In other words, the analyst working with the decision maker can define the utilities for the analysis in a way that best reflects the decision maker's preferences or the preferences of others who have an important voice in the decision. Even such concepts as equity can be assigned a utility value. Although cost–benefit analyses can use nonlinear utility functions, in practice a cost–benefit analysis typically employs a linear utility function. Some research has explored ways to include inequality and inequity in benefits analysis (Levy et al., 2006).

One problem with the use of aggregate analysis—and multiattribute utility analysis in particular—is that the output numbers can be difficult to interpret. Many judgments are typically buried within those numbers, such as the relative weights given to different parameters. The relative weights should be explicitly stated when using such analyses, but it is a challenge to figure out how to display that and other embedded information.
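The mechanics can be sketched in a few lines. In this hypothetical example, the options, attributes, scores, and weights are all invented for illustration; a real analysis would elicit them from the decision maker or stakeholders. A concave single-attribute utility encodes risk aversion; substituting a linear function would recover the behavior typical of cost–benefit analysis.

```python
import math

# Hypothetical cleanup options, each scored on a 0-1 scale for three
# attributes. All names, scores, and weights are invented for illustration.
OPTIONS = {
    "full_remediation": {"health": 0.9, "economy": 0.4, "recreation": 0.8},
    "partial_capping":  {"health": 0.6, "economy": 0.8, "recreation": 0.5},
}

# Elicited importance weights (sum to 1). Whose weights to use, and how
# to elicit them, is itself a judgment call, as discussed in the text.
WEIGHTS = {"health": 0.6, "economy": 0.25, "recreation": 0.15}

def single_attribute_utility(x, risk_aversion=2.0):
    """Concave (risk-averse) utility over a 0-1 attribute score.

    u(0) = 0 and u(1) = 1; a linear function would instead value
    every increment equally.
    """
    return (1 - math.exp(-risk_aversion * x)) / (1 - math.exp(-risk_aversion))

def additive_utility(scores, weights):
    """Weighted sum of single-attribute utilities. The additive form
    assumes the attributes contribute independently; interactions among
    attributes would require extra terms."""
    return sum(w * single_attribute_utility(scores[a]) for a, w in weights.items())

for name, scores in OPTIONS.items():
    print(f"{name}: {additive_utility(scores, WEIGHTS):.3f}")
```

Because the weights and utility shapes drive the ranking, an analyst reporting such a score should state them explicitly, as the text recommends.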

DEEP UNCERTAINTY

Deep uncertainty is uniquely challenging for decision makers. The traditional analytic approaches discussed above, which focus on the probability of certain consequences resulting from different regulatory decisions, cannot typically be used when too little is known about—or there is substantial disagreement concerning—variability, heterogeneity, the appropriate model for the data, or the parameters that should be input into a model (Lempert, 2002). Cox (2012) emphasizes the shortcomings of those traditional methods in cases when there is uncertainty or disagreement about (1) what regulatory alternatives are available; (2) the full range of possible consequences; (3) the correct model for a consequence, given a particular decision; and (4) the values and preferences that should be used to evaluate potential consequences, such as how much value should be given to future generations having a certain resource. In other words, those traditional analytical approaches are not particularly useful when deep uncertainty is pervasive, and judgment calls are necessary.

Robust management strategies, including adaptive management strategies, can be useful when deep uncertainty is present (Cox, 2012; Flüeler, 2001; Lempert, 2002; Lempert and Collins, 2007). Adaptive management strategies characterize uncertainty by using multiple representations of the future rather than a single set of probability distributions, as in optimum expected utility analysis (Lempert and Collins, 2007). An adaptive strategy might give up some level of "optimal performance for less sensitivity to violated assumptions," or be designed to "perform reasonably well over a wide range of plausible futures" (Lempert and Collins, 2007, p. 1016), or it might leave multiple options open.
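The trade-off between optimal performance and robustness can be made concrete with a toy payoff table (all numbers hypothetical). A strategy tuned to the expected future wins on expected value but is highly sensitive to a violated assumption; a diversified strategy performs reasonably well across all the plausible futures.

```python
# Toy payoff table: rows are strategies, columns are plausible futures.
# All numbers are hypothetical outcome scores, not real estimates.
FUTURES = ["future_A", "future_B", "future_C"]
PAYOFFS = {
    "optimized":   {"future_A": 10, "future_B": 2, "future_C": 1},  # tuned to A
    "diversified": {"future_A": 7,  "future_B": 6, "future_C": 6},  # hedged
}

def expected_value(strategy, probs):
    """Classic expected-value ranking under a single probability set."""
    return sum(probs[f] * PAYOFFS[strategy][f] for f in FUTURES)

def worst_case(strategy):
    """Performance if the least favorable plausible future obtains."""
    return min(PAYOFFS[strategy].values())

# If we were confident that future_A is very likely, the optimized
# strategy wins on expected value...
probs = {"future_A": 0.8, "future_B": 0.1, "future_C": 0.1}
print(expected_value("optimized", probs), expected_value("diversified", probs))

# ...but the diversified strategy is far less sensitive to a mistaken
# assumption about which future actually unfolds.
print(worst_case("optimized"), worst_case("diversified"))  # 1 6
```

Diversification of a financial portfolio, the example cited in the text, follows the same logic: some upside is traded for insensitivity to which future obtains.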
In other words, with an adaptive strategy, a decision maker might "choose strategies that can be modified to achieve better performance as one learns more about the issues at hand and how the future is unfolding" rather than choosing a strategy on the basis of a certain risk estimate (CCSP, 2009). Diversification of financial portfolios is one example of an adaptive strategy (CCSP, 2009). Another important characteristic of such strategies is that they are likely to be adaptable once additional information has been received (Lempert et al., 2003). Two tools that can provide information that will help decision makers develop adaptive strategies—scenarios and value-of-information assessments—are discussed below.

THE USE OF SCENARIOS

One of the methods for making decisions in the face of deep uncertainty is the use of scenarios that specify alternative outcomes based on alternative assumptions about the future (Lempert, 2002). As discussed by Jarke et al. (1998), a scenario describes the set of events that could, within reason, take place. Developing scenarios stimulates participants to identify what situations might occur, the assumptions involved in those situations, what opportunities and risks are associated with the different situations, and what actions could be taken under the different situations. Rather than asking what is most likely to occur, as traditional analytic approaches do, scenarios explore "questions of what are the consequences and most appropriate responses under different circumstances" (Duinker and Greig, 2007, p. 209). In other words, traditional analytical approaches seek to estimate the likelihood of an event or consequence, while scenarios serve to replace unknowns with conceptually feasible but hypothetical events.

The scenarios can span the range of possible future worlds, and, given the set of scenarios, analysts can use traditional methods to assess the likely risks and impacts under each scenario. Instead of maximizing expected value or expected utility, as is done when uncertainty can be quantified, the goal of a scenario analysis is to find a solution that performs well compared to alternative options under a number of dissimilar, albeit plausible, scenarios depicting the future. As discussed in Chapter 4, scenarios have been used to assess the expected results of different regulatory options for controlling bovine spongiform encephalopathy and Listeria monocytogenes.

In the scenario approach, attaching probabilities to individual scenarios is discouraged. Instead, the goal is to find regulatory solutions that assure that the risks will be contained even if the worst-case scenario comes true. However, even in a worst-case scenario the option selected may not be the most protective one because protection comes at a cost and complete containment may be wasteful.
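With entirely hypothetical outcome scores, the worst-case logic just described can be sketched as a maximin rule, shown alongside the related minimax-regret criterion. Note that the two robustness criteria need not agree, which is one reason the choice of decision rule is itself a judgment call.

```python
# Hypothetical outcome scores (higher is better) for three regulatory
# options under four scenarios. No probabilities are attached to the
# scenarios, in keeping with the scenario approach.
SCENARIOS = ["baseline", "high_exposure", "disputed_value", "worst_case"]
OUTCOMES = {
    "strict_standard":   [5, 8, 7, 6],
    "moderate_standard": [7, 6, 6, 4],
    "no_action":         [9, 1, 2, 0],
}

def maximin_choice(outcomes):
    """Pick the option whose worst scenario outcome is best, so that risk
    is contained even if the worst-case scenario comes true."""
    return max(outcomes, key=lambda opt: min(outcomes[opt]))

def minimax_regret_choice(outcomes):
    """Related robustness criterion: minimize the largest shortfall from
    the best achievable outcome in each scenario."""
    n = len(SCENARIOS)
    best = [max(outcomes[opt][s] for opt in outcomes) for s in range(n)]
    regret = {opt: max(best[s] - outcomes[opt][s] for s in range(n))
              for opt in outcomes}
    return min(regret, key=regret.get)

print(maximin_choice(OUTCOMES))         # strict_standard
print(minimax_regret_choice(OUTCOMES))  # moderate_standard
```

Here maximin favors the strict standard while minimax regret favors the moderate one, illustrating the text's point that worst-case containment may not select the most protective, or least wasteful, option.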
Some departments, such as the Department of Defense, have moved away from examining the worst-case scenario and focus instead on the more likely scenarios. By examining a number of different scenarios in human health risk assessments, including scenarios using defaults, EPA could examine the effects of the different scenarios, and risk management decision makers could choose the scenario that produces their desired level of precaution for the decision context.

The scenarios are constructed as part of the process of evaluating uncertainty. When there are disputed values—including when stakeholders disagree about what a value should be—the scenarios examined can include ones that incorporate disputed values, thus incorporating into the scenarios the uncertainty that surfaced during the deliberative processes with stakeholders.

Computers make it possible to evaluate a large number of scenarios. This general approach has been used for assessing long-term global economic growth (Lempert et al., 2003), public and private investment in hydrogen and fuel-cell technologies (Mahnovski, 2007), managing the risk of a catastrophic event involving pollution of a pristine lake (Lempert et al., 2006), and the potential effects of various climate-change assessments (Kandlikar et al., 2005).

One of the limitations of using scenarios as a robust approach is that it typically leads to relatively conservative strategies because if a risk-management strategy is to be robust, it has to perform reasonably well even for worst-case scenarios. The approach also requires large computational capabilities compared with more traditional decision analysis methodologies, and it requires the ability to determine the potential consequences of the different scenarios (Lempert et al., 2006). Furthermore, in some cases, depending on the nature of the decision and the evidence that is available, there are no robust solutions, that is, "no amount of effort will suggest strategies that perform reasonably well across all or most plausible states of the world" (Lempert et al., 2006, p. 238). In other words, scenarios and robust decision making cannot solve every problem.

WHEN IS UNCERTAINTY DEEP UNCERTAINTY?

Although in decision making it is useful to recognize when there is deep uncertainty, it is also important to remember that the line between deep uncertainty and other types of uncertainty is not always absolute and can change over time. For example, when dealing with nuclear waste management, the time horizons are on the order of 10,000 to 100,000 years, and it is not possible to know what will happen over that time frame, especially given the possibility of long-term geological issues, seismology issues, volcanic activity, climate change, and future human intrusions (DOE, 2002). As time goes on, however, some of those uncertainties might become more or less deep.
There is no operational definition for when a lack of consensus about the appropriate models for a particular decision-making problem becomes a case of deep uncertainty in which robust decision-making tools would be helpful. A solution that may work in some cases would be to use both traditional decision-making analysis techniques (such as expected utility) and robust decision making that is based on scenarios of possible future states of the world. However, all uncertainty analysis—and deep uncertainty analysis in particular—is costly. So as part of the initial problem-formulation phase of decision making, one should consider whether such uncertainty analyses are likely to affect the decision and thus be worth including in the process. Although this will not always identify the best method, in some instances it can eliminate some options from consideration.

REFERENCES

Bier, V. M., Y. Y. Haimes, J. H. Lambert, N. C. Matalas, and R. Zimmerman. 1999. A survey of approaches for assessing and managing the risk of extremes. Risk Analysis 19(1):83–94.
Bouma, J. A., O. Kuik, and A. G. Dekker. 2011. Assessing the value of earth observation for managing coral reefs: An example from the Great Barrier Reef. Science of the Total Environment 409(21):4497–4503.
CCSP (U.S. Climate Change Science Program). 2009. Best practice approaches for characterizing, communicating, and incorporating scientific uncertainty in climate decision making. Washington, DC: National Oceanic and Atmospheric Administration.
Choy, S. L., R. O'Leary, and K. Mengersen. 2009. Elicitation by design in ecology: Using expert opinion to inform priors for Bayesian statistical models. Ecology 90(1):265–277.
Clemen, R. T. 1998. System models for decision making. In System models for decision making, edited by R. C. Dorf. Boca Raton, FL: CRC Press.
Clemen, R. T., and C. J. Lacke. 2001. Analysis of colorectal cancer screening regimens. Health Care Management Science 4(4):257–267.
Cooke, R. M., D. Nieboer, and J. Misiewicz. 2011. Fat-tailed distributions: Data diagnostics and dependence. Washington, DC: Resources for the Future.
Cox, L. A. T., Jr. 2012. Confronting deep uncertainties in risk analysis. Risk Analysis 32(10):1607–1629.
Cullen, A. C. 1994. Measures of compounding conservatism in probabilistic risk assessment. Risk Analysis 14(4):389–393.
DOE (U.S. Department of Energy). 2002. Final environmental impact statement for a geologic repository for the disposal of spent nuclear fuel and high-level radioactive waste at Yucca Mountain, Nye County, Nevada (DOE/EIS-0250). Washington, DC: Department of Energy.
Duinker, P. N., and L. A. Greig. 2007. Scenario analysis in environmental impact assessment: Improving explorations of the future. Environmental Impact Assessment Review 27(3):206–219.
Ellig, J., and P. A. McLaughlin. 2012. The quality and use of regulatory analysis in 2008. Risk Analysis 32(5):855–880.
EPA (U.S. Environmental Protection Agency). 2003. Part III: Integrated summary and risk characterization for 2,3,7,8-tetrachlorodibenzo-p-dioxin (TCDD) and related compounds. In Exposure and human health reassessment of 2,3,7,8-tetrachlorodibenzo-p-dioxin (TCDD) and related compounds. National Academies review draft. Washington, DC: EPA.
EPA. 2011. Expert Elicitation Task Force white paper. Washington, DC: EPA. http://www.epa.gov/stpc/pdfs/ee-white-paper-final.pdf (accessed January 3, 2013).
Farber, D. A. 2007. Modeling climate change and its impacts: Law, policy, and science. Texas Law Review 86:1655.
FDA (U.S. Food and Drug Administration). 2003. Quantitative assessment of relative risk to public health from foodborne Listeria monocytogenes among selected categories of ready-to-eat foods. Washington, DC: U.S. Department of Health and Human Services and U.S. Department of Agriculture.
Feeny, D., W. Furlong, G. W. Torrance, C. H. Goldsmith, Z. Zhu, S. DePauw, M. Denton, and M. Boyle. 2002. Multiattribute and single-attribute utility functions for the Health Utilities Index Mark 3 system. Medical Care 40(2):113.
Flüeler, T. 2001. Options in radioactive waste management revisited: A proposed framework for robust decision making. Risk Analysis 21(4):787–800.

GAO (U.S. Government Accountability Office). 2006. Human health risk assessment: EPA has taken steps to strengthen its process, but improvements needed in planning, data development, and training. Washington, DC: Government Accountability Office.
Garabed, R., W. Johnson, J. Gill, A. Perez, and M. Thurmond. 2008. Exploration of associations between governance and economics and country-level foot-and-mouth disease status by using Bayesian model averaging. Journal of the Royal Statistical Society: Series A (Statistics in Society) 171(3):699–722.
Gold, M. R. 1996. Cost-effectiveness in health and medicine. New York: Oxford University Press.
Gold, M. R., D. Stevenson, and D. G. Fryback. 2002. HALYs and QALYs and DALYs, oh my: Similarities and differences in summary measures of population health. Annual Review of Public Health 23(1):115–134.
Griffin, S., K. Claxton, and M. Sculpher. 2008. Decision analysis for resource allocation in health care. Journal of Health Services Research and Policy 13(Suppl 3):23–30.
Hahn, R., and P. Dudley. 2004. How well does the government do cost-benefit analysis? AEI–Brookings Joint Center Working Paper 04-01. https://www.law.upenn.edu/academics/institutes/regulation/papers/hahn_paper.pdf (accessed January 3, 2013).
———. 2007. How well does the government do cost-benefit analysis? Review of Environmental Economics 1(2):192–211.
Hahn, R. W., and P. C. Tetlock. 2007. Has economic analysis improved regulatory decisions? AEI–Brookings Joint Center Working Paper 07-08. http://papers.ssrn.com/sol3/papers.cfm?abstract_id=982233 (accessed January 3, 2012).
———. 2008. Has economic analysis improved regulatory decisions? Journal of Economic Perspectives 22(1):67–84.
Hora, S. 2007. Eliciting probabilities from experts. In Advances in decision analysis: From foundations to applications, edited by W. Edwards, R. F. Miles, and D. von Winterfeldt. Cambridge, UK: Cambridge University Press. Pp. 129–153.
Industrial Economics Incorporated. 2006. Expanded expert judgment assessment of the concentration-response relationship between PM2.5 exposure and mortality. Research Triangle Park, NC: EPA.
Jarke, M., X. T. Bui, and J. M. Carroll. 1998. Scenario management: An interdisciplinary approach. Requirements Engineering 3(3):155–173.
Kandlikar, M., J. Risbey, and S. Dessai. 2005. Representing and communicating deep uncertainty in climate-change assessments. Comptes Rendus Geosciences 337(4):443–455.
Keeney, R. L., and H. Raiffa. 1976. Decisions with multiple objectives. Cambridge, UK: Cambridge University Press.
Kiker, G. A., T. S. Bridges, A. Varghese, T. P. Seager, and I. Linkov. 2005. Application of multicriteria decision analysis in environmental decision making. Integrated Environmental Assessment and Management 1(2):95–108.
Kuhnert, P. M. 2011. Four case studies in using expert opinion to inform priors. Environmetrics 22(5):662–674.
Kuhnert, P. M., T. G. Martin, and S. P. Griffiths. 2010. A guide to eliciting and using expert knowledge in Bayesian ecological models. Ecology Letters 13(7):900–914.
Leal, J., S. Wordsworth, R. Legood, and E. Blair. 2007. Eliciting expert opinion for economic models: An applied example. Value in Health 10(3):195–203.
Lempert, R. J. 2002. A new decision sciences for complex systems. Proceedings of the National Academy of Sciences of the United States of America 99(Suppl 3):7309–7313.
Lempert, R. J., and M. T. Collins. 2007. Managing the risk of uncertain threshold responses: Comparison of robust, optimum, and precautionary approaches. Risk Analysis 27(4):1009–1026.

Lempert, R. J., S. W. Popper, and S. C. Bankes. 2003. Shaping the next one hundred years: New methods for quantitative, long-term policy analysis. Santa Monica, CA: RAND.
Lempert, R. J., S. W. Popper, D. Groves, and S. C. Bankes. 2006. A general, analytic method for generating robust strategies and narrative scenarios. Management Science 52(4):514–528.
Levy, J. K., K. W. Hipel, and D. M. Kilgour. 2000. Using environmental indicators to quantify the robustness of policy alternatives to uncertainty. Ecological Modelling 130(1):79–86.
Levy, J., S. Chemerynski, and J. Tuchmann. 2006. Incorporating concepts of inequality and inequity into health benefits analysis. International Journal for Equity in Health 5(1):2.
Mahnovski, S. 2007. Robust decisions and deep uncertainty: An application of real options to public and private investment in hydrogen and fuel cell technologies. Santa Monica, CA: RAND.
Martin, T. G., M. A. Burgman, F. Fidler, P. M. Kuhnert, S. Low-Choy, M. McBride, and K. Mengersen. 2012. Eliciting expert knowledge in conservation science. Conservation Biology 26(1):29–38.
Merkhofer, M. W., and R. L. Keeney. 1987. A multiattribute utility analysis of alternative sites for the disposal of nuclear waste. Risk Analysis 7(2):173–194.
Morris, S., N. J. Devlin, and D. Parkin. 2007. Economic analysis in health care. Hoboken, NJ: Wiley.
NCI (National Cancer Institute). 2012. CISNet: Funding history and goals. http://cisnet.cancer.gov/about/history.html (accessed November 20, 2012).
Nichols, A. L., and R. J. Zeckhauser. 1988. The perils of prudence: How conservative risk assessments distort regulation. Regulatory Toxicology and Pharmacology 8(1):61–75.
NRC (National Research Council). 2001. Arsenic in drinking water: 2001 update. Washington, DC: National Academy Press.
———. 2006. Health risks from dioxin and related compounds: Evaluation of the EPA reassessment. Washington, DC: The National Academies Press.
———. 2009. Science and decisions: Advancing risk assessment. Washington, DC: The National Academies Press.
O'Leary, R. A., J. V. Murray, S. J. Low Choy, and K. L. Mengersen. 2008. Expert elicitation for Bayesian classification trees. Journal of Applied Probability and Statistics 3(1):95–106.
OMB (Office of Management and Budget). 2012. Draft 2012 report to Congress on the benefits and costs of federal regulations and unfunded mandates on state, local, and tribal entities. Washington, DC: Office of Management and Budget.
Orme, M., J. Kerrigan, D. Tyas, N. Russell, and R. Nixon. 2007. The effect of disease, functional status, and relapses on the utility of people with multiple sclerosis in the UK. Value in Health 10(1):54–60.
Prato, T. 2003. Multiple-attribute evaluation of ecosystem management for the Missouri River system. Ecological Economics 45(2):297–309.
Raiffa, H. 1968. Decision analysis—Introductory lectures on choices under uncertainty. Reading, MA: Addison-Wesley.
Rhomberg, L. R. 2000. Dose-response analyses of the carcinogenic effects of trichloroethylene in experimental animals. Environmental Health Perspectives 108(Suppl 2):343–358.
Sloan, F. A., and C.-R. Hsieh. 2012. Health economics. Cambridge, MA: MIT Press.
Slottje, P., J. Van der Sluijs, and A. B. Knol. 2008. Expert elicitation: Methodological suggestions for its use in environmental health impact assessments. Letter report 630004001/2008. Utrecht: RIVM (National Institute for Public Health and the Environment). http://www.nusap.net/downloads/reports/Expert_Elicitation.pdf (accessed January 4, 2012).
Southwell, C., and D. von Winterfeldt. 2008. A decision analysis of options to rebuild the New Orleans flood control system. CREATE Research Archive. http://research.create.usc.edu/cgi/viewcontent.cgi?article=1052&context=published_papers (accessed January 4, 2013).

Steenland, K., G. Calvert, N. Ketchum, and J. Michalek. 2001. Dioxin and diabetes mellitus: An analysis of the combined NIOSH and Ranch Hand data. Occupational and Environmental Medicine 58(10):641–648.
U.S. Army Corps of Engineers. 1984. Lake Pontchartrain, Louisiana, and vicinity hurricane protection project: Reevaluation study. New Orleans, LA: U.S. Army Corps of Engineers. http://www.iwr.usace.army.mil/docs/hpdc/docs/19840700_Reevaluation_Study_Vol_1_Main_Report_Final_Sup_I_to_the_EIS.pdf (accessed January 4, 2013).
Viscusi, W., J. Hamilton, and C. Dockins. 1997. Conservative versus mean risk assessments: Implications for Superfund policies. Journal of Environmental Economics and Management 34(3):187–206.
Ye, M., K. F. Pohlmann, and J. B. Chapman. 2008. Expert elicitation of recharge model probabilities for the Death Valley regional flow system. Journal of Hydrology 354(1):102–115.