As outlined in Chapter 1, the committee focused on the uncertainty in three types of factors that can play a role in the decisions of the U.S. Environmental Protection Agency (EPA): health, technological, and economic. Historically, uncertainties in health estimates have received the most attention (see Chapter 2). Uncertainties in technological and economic factors have received less attention (see Chapter 3). In this chapter, the committee presents a framework to help EPA incorporate uncertainty in the three factors into its decisions. Where possible, the committee incorporates the lessons from other public health agencies discussed in Chapter 4.
Science and Decisions: Advancing Risk Assessment (hereafter Science and Decisions) (NRC, 2009) recommended a three-phase decision-making framework consisting of problem-formulation, assessment, and management phases. Both Science and Decisions and the framework it proposes emphasize the need to do a better job of linking the assessment of health risks to the particular problem that EPA is facing, as well as the importance of stakeholder involvement in each stage.
In this chapter this committee begins by building on that three-phase framework, incorporating into it uncertainty in the three factors (health risk estimates, technology availability, and economics) that play a role in EPA’s decisions. As with the framework from Science and Decisions (NRC, 2009) and other decision-making frameworks (see, for example, Gregory, 2011; Gregory et al., 1996; Spetzler, 2007), this committee’s framework emphasizes the importance of interactions among decision makers, stakeholders, and analysts. The modified framework is presented in Figures 5-1a and 5-1b and is discussed below. After introducing the framework, the report then discusses different approaches to handling uncertainty in health, technological, and economic factors and describes ways that stakeholder engagement may be encouraged.

[Figures 5-1a and 5-1b. SOURCE: Modified from NRC, 2009.]
The need for EPA to make a regulatory decision might arise from concerns about a potential environmental hazard, a legal requirement to review an existing or potential environmental regulation, or concerns following a specific event, such as an oil spill or the siting of a new source of pollution. Regardless of why a decision is needed, when approaching a regulatory decision EPA should first identify and characterize the question or problem that underlies the regulatory decision. In other words, it first needs to perform a problem formulation and scoping.

1 Due to a production error, Figure 5-1b was inadvertently left out of the prepublication copy of this report.
Science and Decisions (NRC, 2009) highlights the importance of the problem formulation phase, which includes identifying the environmental concerns, planning and determining the scope of decision making, and identifying potential regulatory options and the criteria for selecting among those options. This committee agrees with the earlier report that planning for a risk assessment and anticipating issues in advance are key to conducting useful and high-quality assessments (such as assessments of human health risks, cost–benefit assessments, and assessments of technology availability), and the committee further emphasizes the importance of identifying uncertainties that affect the decision and determining how those uncertainties should be assessed and considered in the decision-making process. Identifying potential regulatory actions during this first phase will facilitate identifying the uncertainty surrounding the consequences of the regulatory actions, in order to plan any assessments of those uncertainties. Although not all stakeholders will necessarily agree with a regulatory decision—some will refuse to support any increase in regulation, for example, while others will refuse to support any decrease in regulation—an enhanced problem formulation will help to ensure that the different participants are aware of the different perspectives and that many of the potential uncertainties are identified. Identifying and characterizing the problem and potential regulatory options as well as planning for the uncertainty analysis are discussed below.
Identifying and Characterizing the Problem and Potential Regulatory Options
As discussed in Understanding Risk (NRC, 1996), the assessments of health risks and other factors should be decision driven, that is, driven by the context of the decision. Stakeholders, however, often have different views and perspectives on what problem underlies or caused the need for a decision, what information is available and should be considered when making a decision, and what uncertainties could affect a decision (Koppenjan and Klijn, 2004). Those different views and perspectives could, in part, determine the most appropriate way to assess the factors in the decision (such as health risks, costs, and technology). All participants, therefore, need to be aware of and understand the views and perspectives of others, as well as have a common understanding of the problem to be addressed, the purpose of the assessments, and the potential regulatory options.
Complex decisions that affect multiple stakeholders benefit from a formal process that ensures that the problem and the solutions are adequately
characterized and agreed upon by all parties. Problem-structuring methods address unstructured problems, which, like many of the problems EPA faces, have multiple actors and perspectives, incommensurable or conflicting interests, important intangibles, and key uncertainties. Such methods provide a “way of representing the situation … that will enable participants to clarify their predicament, converge on a potentially actionable mutual problem or issue within it, and agree [on] commitments that will at least partially resolve it” (Mingers and Rosenhead, 2004, p. 531). The interaction among stakeholders that occurs with problem-structuring methods typically helps not only to build a consensus about a problem, but also to build social trust (see Chapter 6 for further discussion of social trust).
A small but growing literature from operations research provides guidance on problem structuring (see Gregory, 2011; Gregory and Keeney, 2002; Gregory et al., 1996; Hammond et al., 1984; Rosenhead, 1996; von Winterfeldt and Fasolo, 2009). According to this literature, in order to structure a problem one should (1) focus on the decision, that is, on the policy or regulatory choices and objectives; (2) maintain a broad perspective, that is, do not narrow down decision alternatives or objectives too early; and (3) involve a broad range of stakeholders to assist in identifying alternatives and objectives, thereby creating a legitimate framing of the policy or regulatory problem.
For environmental policy and regulation, the policy or regulatory objectives could include reducing health risks, improving the environment, minimizing direct implementation costs, minimizing indirect and long-term socioeconomic impacts, or identifying a solution that maximizes the net benefit. Some studies favor structuring the problem in terms of net benefits (that is, total benefits, for example from health improvements, minus total costs) rather than in terms of risk reduction (Stinnett and Mullahy, 1998).
The planning of assessments should include not only assessments of health risks and benefits, but also assessments of the other factors that might be considered in a decision, in particular, technological and economic factors. Keeney and Raiffa (1976) and Keeney (1996) discussed how to generate a comprehensive set of objectives, including identifying which direct and indirect costs should be considered part of the objectives. Garber and Phelps’s (1992) work on the near equivalence of benefit–cost analysis and cost-effectiveness analysis, along with the method of cost-acceptability curves (Fenwick et al., 2001), leads to a larger framework for analyzing uncertainty in both benefit–cost and cost-effectiveness contexts. The metrics that will be used to measure the objectives should also be defined as part of this phase.
Planning for the Uncertainty Analysis
As can be seen in Figure 5-1a, planning for the analyses of uncertainty should begin during the problem-formulation phase. A major challenge is determining whether and how uncertainties should be quantified and how they should be taken into account in a regulatory decision. The type and complexity of uncertainty analysis that is appropriate will depend on, among other things, the context of the decision (for example, whether it is made in an emergency, the level of controversy and scientific disagreement surrounding it, and whether it would be easily reversible); the nature of the risks and benefits (for example, complex quantitative uncertainty analyses might not be warranted if the human health risks involve minor adverse events, but might be warranted if they involve a fatal, irreversible disease); the factors considered in the decision (for example, economic, technological, or social factors); and the type (for example, variability, model uncertainty, or deep uncertainty) and magnitude of the uncertainty. In particular, environmental statutes distinguish between decision contexts that are based solely on health considerations and those that consider technological feasibility or availability, cost–benefit trade-offs, or some combination of the three types of considerations. It is important, therefore, that EPA identify in the problem-formulation phase of its decision-making process those factors it needs to consider in the decision and the nature or type of the uncertainty in those factors. That identification could involve listing the items that contribute to uncertainty, such as limited data, alternative models, or disagreements among experts.
For some factors, the process may include providing ranges of estimates from the literature or some preliminary representations of uncertainty, such as event trees, influence diagrams, or belief nets (see Box 5-1).
There is no “one-size-fits-all” approach for an agency to make decisions in the face of uncertainty, nor is a particular approach to uncertainty analysis appropriate for all decisions, but, in general, certain types of approaches and analyses lend themselves to certain types and sources of uncertainty. In Table 5-1, as a guide for EPA, the committee presents a typology of decision situations which indicates when different approaches to handling uncertainty might be appropriate. Those approaches are discussed in more detail in Appendix A.
The legal context determines which factors—health, technology, and economics—can be considered in EPA’s decisions and, therefore, should be assessed (shown in the columns in Table 5-1). Each of those three factors can exhibit any or all of the three types of uncertainty to different extents (shown in the rows in Table 5-1), and each combination of factor and type of uncertainty lends itself to a different type of uncertainty analysis.
BOX 5-1 Preliminary Representations of Uncertainty

Belief nets “represent the causal and noncausal structure of inferences going from data and inference elements to the main hypothesis or event about which an inference is made” (von Winterfeldt, 2007).

Event trees start with an initiating event and trace it to the “fault or problem event” (von Winterfeldt, 2007).

Influence diagrams are graphical or visual representations of a decision situation. Conventionally, uncertain variables, decision nodes, and value nodes are shown as ellipses, rectangles, and rounded rectangles, respectively (von Winterfeldt, 2007).
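The representations described in Box 5-1 can be given a concrete, minimal form. The sketch below, with entirely hypothetical events and probabilities, enumerates the paths of a two-stage event tree for an accidental release; a small belief net over discrete variables can be evaluated by the same kind of enumeration.

```python
# Hypothetical event tree: an accidental release (the initiating event)
# either is contained or escapes; an escaped release causes either minor
# or severe exposure. All probabilities are illustrative only.
P_RELEASE = 0.01      # P(initiating event)
P_CONTAINED = 0.90    # P(containment works | release)
P_SEVERE = 0.20       # P(severe exposure | release escapes)

def event_tree_outcomes():
    """Enumerate the tree's paths and return P(outcome) for each leaf."""
    p_escape = P_RELEASE * (1 - P_CONTAINED)
    return {
        "no_release": 1 - P_RELEASE,
        "contained": P_RELEASE * P_CONTAINED,
        "minor_exposure": p_escape * (1 - P_SEVERE),
        "severe_exposure": p_escape * P_SEVERE,
    }

outcomes = event_tree_outcomes()
# The four leaves are mutually exclusive and exhaustive.
assert abs(sum(outcomes.values()) - 1.0) < 1e-9
print(outcomes)
```

Because each leaf probability is a product of conditional probabilities along one path, the same enumeration immediately shows which branch assumptions drive the rare, severe outcomes.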
While the regulatory context specifies which factors EPA can consider in making decisions, many of EPA’s decisions will involve multiple types of uncertainty. EPA’s plans for assessing uncertainty, therefore, will involve multiple analyses and approaches. The committee does not present all possible analytic approaches; rather it presents a number of approaches as a starting point to indicate how EPA should plan its analyses during the first phase of its decision-making process.
Looking across the columns in Table 5-1 at the legal or regulatory context: if the context is narrow, such as cases in which only health effects may be taken into account (first column, Table 5-1), then the approaches to uncertainty would typically be limited to versions of using safety or default factors (see Chapter 2 for further discussion); health risk analysis, including extreme value analysis; and scenario analysis, depending on the type or nature of the uncertainty. If technological availability, such as the best available or best practicable technology, can be considered (second column, Table 5-1), then health effects analyses can be combined with an assessment of the availability or practicability of the technological option to reduce health effects, estimated using direct assessments or technological choice/risk analyses. If cost–benefit factors are allowed (third column, Table 5-1), appropriate analytic approaches include cost-effectiveness analysis, cost–benefit analysis, and multiattribute utility analysis. The choice among cost-effectiveness, cost–benefit, multiattribute utility, and decision analysis does not depend on whether the uncertainties arise from variability and heterogeneity or from models and parameters. In the case of deep uncertainty, there is a shift to scenario analysis and robust decision-making tools; when cost–benefit factors are allowed, such analysis would include deep uncertainty about all factors considered in the analysis. If model or parameter uncertainty is present, expert judgments or elicitations can be helpful in estimating human health risks, technology availability, and costs and benefits.

TABLE 5-1 Approaches to Handling Uncertainty by Regulatory Context and Type of Uncertainty

REGULATORY CONTEXT: FACTORS CONSIDERED IN THE DECISION (a)

| Type of Uncertainty | Health Effects Only | Technology Availability | Cost–Benefit |
| --- | --- | --- | --- |
| Variability and heterogeneity (b) | Safety or default factors; health risk analysis, including extreme value analysis | Health effects analyses combined with direct assessments of the availability or practicability of technological options | Cost-effectiveness, cost–benefit, multiattribute utility, and decision analysis |
| Model and parameter uncertainty (c) | Safety or default factors; expert judgment or elicitation | Technological choice/risk analyses; expert judgment or elicitation | Cost-effectiveness, cost–benefit, multiattribute utility, and decision analysis, supported by expert judgment or elicitation |
| Deep uncertainty (d) | Scenario analysis and robust decision-making methods | Scenario analysis and robust decision-making methods | Scenario analysis and robust decision-making methods |

NOTES: The most appropriate methods to evaluate, analyze, or account for uncertainty often depend on the types and sources of uncertainty that are present. The columns of the matrix show which methods are typically appropriate for different regulatory contexts, that is, what factors environmental laws and executive orders require the Environmental Protection Agency (EPA) to consider in a given decision. The rows of the matrix show the methods that are often appropriate for variability and heterogeneity, model and parameter uncertainty, and deep uncertainty.

a The regulatory (or legal) context determines, to a large extent, what factors EPA considers in its regulatory decisions.

b The goal of assessing uncertainty from variability and heterogeneity is to identify different populations (health), technologies and facilities (technology), or regulatory options (cost–benefit trade-offs) and to estimate (with uncertainty) the magnitude of the differences among them.

c The goal of assessing model and parameter uncertainty is to estimate (with uncertainty) the effect of model choice and parameter values on assessments of health risks, technological factors, and the cost–benefit trade-offs of different regulatory options.

d The goal is to identify deep uncertainties in the assessments, their potential effects on a decision, whether to conduct research to decrease the uncertainties, and when decisions should be revisited in light of those uncertainties. Both variability and heterogeneity, and model and parameter uncertainty, can be deep uncertainty.
Looking down the columns in Table 5-1 shows that different types of uncertainty lend themselves to different approaches to assessing and considering uncertainty. Statistical methods are appropriate for situations involving large amounts of data that allow uncertainty assessments by fitting standard probability distributions to data, that is, when uncertainties are primarily related to statistical variability and population heterogeneity. Expert judgment techniques or safety or default factors are needed when models and their parameters are uncertain and when data are sparse, for example, when the slope or shape of the dose–response function is uncertain or when extrapolation from animal data to humans is necessary. When facing deep uncertainties, probabilistic methods are more limited in use, and scenario analysis, sometimes coupled with robust decision-making methods, can help (see further discussion later in this chapter). Robust decision-making methods are those that provide acceptable outcomes for a range of possible scenarios, including pessimistic ones.
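One family of robust decision-making methods can be illustrated with a minimax-regret rule: for each option, compute its worst-case shortfall relative to the best option in each scenario, and choose the option whose worst-case shortfall is smallest. The options, scenarios, and net-benefit figures below are invented for illustration, and minimax regret is only one of several robust criteria.

```python
# Illustrative only: net benefits (arbitrary units) of three hypothetical
# regulatory options under three scenarios spanning deep uncertainty.
# Deliberately, no probabilities are assigned to the scenarios.
NET_BENEFIT = {
    "option_A": {"optimistic": 100, "baseline": 60, "pessimistic": -40},
    "option_B": {"optimistic": 70,  "baseline": 55, "pessimistic": 10},
    "option_C": {"optimistic": 40,  "baseline": 35, "pessimistic": 20},
}

def minimax_regret(table):
    """Pick the option whose worst-case regret (shortfall versus the best
    option in each scenario) is smallest -- one robust decision rule."""
    scenarios = next(iter(table.values())).keys()
    best = {s: max(row[s] for row in table.values()) for s in scenarios}
    worst_regret = {
        opt: max(best[s] - row[s] for s in scenarios)
        for opt, row in table.items()
    }
    return min(worst_regret, key=worst_regret.get), worst_regret

choice, regrets = minimax_regret(NET_BENEFIT)
print(choice, regrets)
```

Here the option that looks best in the baseline scenario is not the robust choice, because it performs poorly in the pessimistic scenario; the rule trades some expected performance for acceptable outcomes across the full range of scenarios.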
The goals in assessing the different types of uncertainty also differ. For variability or heterogeneity (first row, Table 5-1), the goal is to identify the subpopulations that are differentially affected, to estimate the magnitude of the differences among subpopulations and of the within-subpopulation variability, and to assess the uncertainty in those estimates. For model and parameter uncertainty (second row, Table 5-1), the goal is to compare results across model specifications with different functional forms and across simulations that use different assumptions about the parameters describing the relationships between key explanatory variables and the dependent variables. For deep uncertainty (third row, Table 5-1), the purpose is fundamentally different: scenarios of various adverse outcomes should be described, and an assessment should be made as to whether a proposed solution can eliminate the risks of those outcomes occurring.
When accounting for uncertainty in a regulatory decision, each analysis or approach is associated with a set of decision rules that identify the “best” regulatory decision if the decision maker were to follow the recommendations resulting from the analysis. For example, a decision rule for a cost–benefit analysis would be to select the regulatory option with the highest net social benefit.
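That cost–benefit decision rule can be stated compactly. The sketch below uses invented benefit and cost figures for hypothetical options and simply selects the option with the highest net social benefit:

```python
# Hypothetical benefit and cost estimates (present value, $ millions)
# for three regulatory options; all numbers are illustrative only.
OPTIONS = {
    "status_quo": {"benefits": 0.0,   "costs": 0.0},
    "moderate":   {"benefits": 120.0, "costs": 45.0},
    "stringent":  {"benefits": 150.0, "costs": 110.0},
}

def highest_net_benefit(options):
    """Cost-benefit decision rule: choose the option that maximizes
    net social benefit (benefits minus costs)."""
    return max(options, key=lambda o: options[o]["benefits"] - options[o]["costs"])

print(highest_net_benefit(OPTIONS))  # "moderate": net 75 beats 40 and 0
```

Note that the rule ranks options by net benefit, not by gross benefit: the most stringent option here delivers the largest benefits but is not selected because its incremental costs exceed its incremental benefits.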
Further details about the specific approaches to assessing and considering uncertainty in decisions are presented later in this chapter.
Once the decision makers, analysts, and stakeholders have a clear understanding of what assessments (human health risk assessments, economic analysis, or assessments of technology availability) are needed to inform a given decision, how those assessments should be conducted, and the uncertainties that need to be analyzed, the assessment phase begins. Assessment refers to the collection of data, modeling, and the estimation of impacts in order to determine how the regulatory options (including the status quo) perform with respect to the objectives specified in the problem-formulation stage (NRC, 2009). This is the factual part of the decision-making process and provides the analytic basis for the management phase, which involves evaluation, decision making, value-of-information analysis, and implementation.
The objective of the assessment phase is to analyze the available data or evidence and provide decision makers with the analyses in a way to inform the decision, including providing information about the uncertainties in the data and in the overall assessment. It is crucial that analysts do not lose sight of that objective when conducting uncertainty analyses. For example, they should not use extensive resources to analyze an uncertainty in a parameter or factor that has little relevance to the overall decision. It is also crucial that decision makers understand the implications of choices that analysts might make in the assessment process. For example, decision makers need to be aware of whether any default assumptions or models are embedded in an assessment and how those defaults might affect the assessment.
A main objective of EPA’s regulatory decisions is to reduce adverse human health and environmental outcomes. Human health risk assessment is a well-understood and mature activity at EPA and other regulatory agencies. As described in Risk Assessment in the Federal Government: Managing the Process (NRC, 1983), it includes hazard identification (determining which health and environmental impacts are pertinent to the decision under consideration, with more specificity than the broader objectives specified in the first phase), an exposure assessment (assessing the levels of exposure to environmental agents), a dose–response assessment (a quantitative analysis of the effect of a unit change in exposure to particular environmental agents on specific health and environmental outcomes), and risk characterization (the health and environmental outcomes expected at a specific level of exposure to an environmental hazard). Human health risk assessment is conducted for the base case (outcomes at a future date if no change in regulation is implemented) and for one or more regulatory options. Human health risk assessment is the tool that decision makers use to predict the degree of health improvement or protection expected from a decrease in one
or more exposures. Such risk assessments do not, however, indicate which intervention to use—that is, which is the best way to decrease exposure.
For the assessment phase, most previous NRC reports and EPA risk assessments have focused only on the assessment of health risks and their associated uncertainties. This committee, however, believes that the assessment phase should also include examinations of a number of nonhealth factors and their associated uncertainties. In particular, assessments should include technological factors and economic factors. The next section briefly describes where uncertainties can arise in the assessments of factors other than human health risks. For more details of assessments and assessment techniques for human health risks and for the various other factors, readers should refer back to Chapters 2 and 3, respectively.
It is worth noting that uncertainties are typically expressed as probabilities or probability distributions. While there is some discussion in the literature about the use of qualitative (verbal) versus quantitative (numerical) expressions of probability (von Winterfeldt, 2007), most studies of environmental uncertainties use quantitative probabilities because they lend themselves to a wide array of statistical and other analyses. There are also different schools of thought about what these probabilities represent, including the classical or logical view, the frequentist view, and the subjective or Bayesian view. Without taking a side in the debate among these schools, the committee takes it as given that probabilities are always based on logic, data, and judgment and that they should be revised as new information is obtained.
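The idea that probabilities should be revised as new information arrives can be illustrated with a minimal Bayesian update; the hypothesis, prior, and likelihoods below are entirely hypothetical.

```python
# Illustrative Bayesian revision: a prior judgment that a substance is
# hazardous, updated after a new study reports a positive result.
# All numbers are hypothetical.
def bayes_update(prior, p_data_if_true, p_data_if_false):
    """Return P(hypothesis | data) from P(hypothesis) and the two
    likelihoods P(data | hypothesis) and P(data | not hypothesis)."""
    marginal = prior * p_data_if_true + (1 - prior) * p_data_if_false
    return prior * p_data_if_true / marginal

prior = 0.10  # initial judgment: P(substance is hazardous)
posterior = bayes_update(prior, p_data_if_true=0.80, p_data_if_false=0.05)
print(round(posterior, 3))  # a positive study result raises P(hazardous)
```

The same update can be applied repeatedly, so that each new study or monitoring result moves the probability toward what the accumulated evidence supports.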
Typically, EPA’s decisions take into consideration the costs incurred by private parties as well as by public agencies. Private parties bear the cost of mitigation to reduce health and environmental impacts, while the public sector bears the costs of monitoring and enforcement as well as other costs. Both public and private costs are likely to be uncertain for several reasons. On the private side, the costs of mitigation alternatives are often uncertain. Technological development, whose outcome is often uncertain, may be required in many cases, and the uncertainties related to that development add to the uncertainty in the eventual technology costs. On the public side, there are choices to be made concerning the level of regulatory enforcement. An air standard can be enforced with more or less effort devoted to detection or prosecution; each choice implies expending a different amount of public resources. Changes in the level of enforcement are likely to lead to a change in the levels of benefits of the policy. For example, an unenforced standard may be of no benefit except perhaps to signal that some decision maker is sympathetic to a particular cause.
At the same time, public expenditures on enforcement may vary from those projected at the time the policy was implemented because individuals and firms in the private sector respond differently to the policy than
originally anticipated. Given the very imperfect information that regulatory agencies have about the investments that private-sector organizations will make in response to regulatory rules, private-sector costs are likely to be subject to considerable variability and thus to uncertainty.
Regulatory options also need to be assessed with respect to objectives other than reducing environmental or health risks. Even if the relevant statute requires that the regulation be based solely on health considerations, an analysis of other factors and their uncertainties can be useful for deciding among regulatory options that may have different associated costs, including such adverse consequences as loss of employment, or that are more or less feasible given current technologies or technologies that are likely to be developed in the foreseeable future.
The result of this state of affairs is that the assessments of both health risks and other factors are fraught with uncertainties. To help determine which assessments should explicitly include an uncertainty analysis, it is useful first to conduct a rudimentary, deterministic assessment using base-case values, in which model parameters and causal relationships are assumed to be exact. For example, one might assume in the base-case scenario that a 10 percent increase in a pollution level leads to a 1 percent increase in a specific type of mortality. The sensitivity of the estimates of health risks and other outcomes to changes in the base-case parameters and assumptions should then be explored. For example, the effect of the 10 percent increase in exposure might have a 95 percent credible interval of a 0.5 to 1.5 percent increase in mortality, and a sensitivity analysis can explore mortality changes at specific values within this range. Realistically, the distribution is often not nearly as tight as this example implies. Because the final estimates may come from a chain of models, each of which has uncertainties, the potential errors in each link propagate through the chain, increasing the uncertainty of the estimated final effects.
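The propagation of uncertainty through a chain of models can be sketched with a simple Monte Carlo simulation. The two-link chain and its distributions below are hypothetical; the base case reproduces the example above (a 10 percent emission increase yielding about a 1 percent mortality increase), and the simulated interval shows how the links' uncertainties compound.

```python
# Monte Carlo sketch of uncertainty propagating through a chain of models.
# Both links and their distributions are hypothetical: a fixed 10 percent
# emission increase is mapped to an exposure change and then to a
# mortality change, with an uncertain coefficient at each link.
import random

random.seed(1)

def simulate_mortality_change(n=100_000):
    """Draw each link's coefficient and propagate through the chain."""
    results = []
    for _ in range(n):
        emis_increase = 0.10                               # policy scenario (fixed)
        exposure_per_emis = random.gauss(1.0, 0.15)        # uncertain link 1
        mortality_per_exposure = random.gauss(0.10, 0.025) # uncertain link 2
        results.append(emis_increase * exposure_per_emis * mortality_per_exposure)
    return results

draws = sorted(simulate_mortality_change())
lo, med, hi = draws[2500], draws[50_000], draws[97_500]
print(f"95% interval for mortality change: {lo:.4f} to {hi:.4f}, median {med:.4f}")
```

The median stays near the deterministic base case of 0.01 (1 percent), but the 95 percent interval is considerably wider than either link's uncertainty alone, which is the compounding effect described above.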
Such sensitivity analysis serves two essential purposes. First, it helps identify the cases in which a formal analysis of uncertainty should be undertaken. This analytic decision depends in part on the extent to which conclusions are robust to changes in baseline assumptions and parameters (that is, the extent to which the conclusion is unchanged by the results of the sensitivity analysis). For example, if there is a high likelihood that the credible interval2 is narrow, the residual uncertainty about the magnitude of the effect is unlikely to change the decision. If assuming an effect of 1.5 percent instead of an effect of 0.5 percent does not change the decision, and if there is reasonable certainty that the credible interval is correct, there is no reason to conduct an additional analysis. If, however, there is credible reason to suspect that the interval may be much larger than that, it may be appropriate to carry out additional analysis in order to resolve the substantive issue. Second, to the extent that results are shown not to be robust, sensitivity analysis helps identify the assumptions and parameters that most influence the projected outcomes and therefore warrant further study. Although the example above assumes that there is only one parameter, typically there would be several parameters, each with an associated credible interval, as well as parameters for interaction terms, all of which greatly complicate the problem.

2 In this report the committee uses the term credible interval when making a statement about a hypothesis given the data; the term confidence interval is used only when a statement is being made about the data, given a hypothesis.
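One common way to identify the most influential assumptions is one-at-a-time sensitivity analysis, the basis of so-called tornado diagrams: each parameter is varied across its plausible range while the others are held at base-case values, and parameters are ranked by the swing they induce in the outcome. The toy outcome model and the parameter ranges below are hypothetical.

```python
# One-at-a-time sensitivity sketch for a toy outcome model.
# Model form, base-case values, and ranges are all illustrative only.
def outcome(exposure_coef, dose_slope, population_at_risk):
    """Toy model: excess cases = exposure * dose-response slope * population."""
    return exposure_coef * dose_slope * population_at_risk

BASE = {"exposure_coef": 1.0, "dose_slope": 0.01, "population_at_risk": 1e6}
RANGES = {
    "exposure_coef": (0.5, 1.5),
    "dose_slope": (0.005, 0.015),
    "population_at_risk": (9e5, 1.1e6),
}

def one_at_a_time(base, ranges):
    """Return each parameter's outcome swing, widest first."""
    swings = {}
    for name, (lo_val, hi_val) in ranges.items():
        results = [outcome(**dict(base, **{name: v})) for v in (lo_val, hi_val)]
        swings[name] = max(results) - min(results)
    return dict(sorted(swings.items(), key=lambda kv: -kv[1]))

print(one_at_a_time(BASE, RANGES))
```

Parameters with small swings can safely be fixed at base-case values, focusing any subsequent probabilistic analysis (and any further data collection) on the few parameters that actually drive the result.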
The previous sections have described potential analytic approaches for dealing with various types of uncertainty. Applying these approaches in the context of risk management and environmental decision making, however, is as much an art as a science. Risk management consists of evaluating the assessment results, making a decision, and implementing the decision, including monitoring the effects or outcomes of that decision or regulatory action. As is the case in the problem-formulation and assessment phases, uncertainty plays an important role in this phase.
Stakeholder engagement is also an important aspect of risk management. As discussed elsewhere in this report, considering the input of stakeholders, including both the variability in their views and how the uncertainty in different factors could affect different stakeholders, is important in environmental regulatory decision making. Another important aspect of stakeholder engagement during the management phase is communication of the decision and the estimated health risks, cost, and other consequences of the decision along with the uncertainty in those estimates. Communication is discussed further in Chapter 6.
Evaluation of Assessment Results
Decision making is more than the formal evaluation of alternatives; decision makers must consider the results of the assessments and the uncertainty in those results in the context of additional informal and non-quantifiable aspects of the decision problem. Their decision should depend not only on the results of the assessments (for example, the human health, technological, or economic assessment), but also on interpretation of those results in the context of the decision. Two components of the decision context that decision makers should consider are risk distribution and the potential consequences of the decision.
Government decision makers face the challenge of acting on risk information for populations that may have widely varying exposures and sensitivities, and they must take care that highly consequential risks are equitably distributed across those populations. In particular, if they consider only the average risk across an entire population, they might overlook a high risk in one group that is offset by a low risk in another group. The social factors that can affect health risk estimates should have been identified at the problem-formulation stage and analyzed during the assessment stage, and they should be presented to and considered by decision makers during this management phase of the decision-making process.
For example, a small number of individuals living near a petroleum refinery (the so-called maximally exposed individuals, or MEIs) might incur relatively large exposures to air pollutants, whereas much larger numbers of individuals might be exposed to significantly smaller levels (for example, the average level in an exposure distribution). The exposures of MEIs and average individuals will often differ by more than a factor of 10. When making regulatory decisions, EPA should consider whether the primary basis for action is the protection of the smaller number of individuals incurring the larger risk or of the much larger number incurring smaller risks, and it should also consider how the magnitude of uncertainty in the MEI estimate relative to the uncertainty in the average exposure estimate should affect the basis of the decision.
Furthermore, it is likely that because of sex, genetics, life stage, nutritional status, occupational status (for example, pesticide applicators), or other factors, some individuals within a population will be more susceptible to adverse effects than the average for the general population. Looking again at the trichloroethylene (TCE) example discussed in Chapter 2 (see Box 2-4 for details), the range of potential human health endpoints (including carcinogenicity, developmental toxicity, and reproductive toxicity), potential exposure scenarios (including air, water, and soil), and variability within populations (due, for example, to sex, age, or nutritional status) generates complex and variable estimates of exposure, each with its own uncertainties, for different subpopulations. Those differing scenarios raise the question of which scenario or scenarios should be used to assess risks and, eventually, be used as the primary basis for regulatory action, and they also raise the issue of whether the uncertainties in the different scenarios should be taken into account. Although risk assessors develop the risk estimates, it is up to the regulatory decision makers to decide which estimates provide the most appropriate basis for setting standards. For example, if one wished to protect highly exposed individuals who would not be fully protected by
a standard based on the lower estimates for the average population, one would choose to use exposure to the MEI population as the basis for the standard. Such a standard, however, would lead to more stringent regulatory controls and greater implementation costs than needed for the average population. By contrast, a standard geared to the average population might not provide adequate protection for the more highly exposed or more susceptible subpopulations. Decisions about which population is chosen to be the basis for setting the standard have both public health and economic consequences.
Potential Consequences of the Decision
Decisions that are made once and not reassessed are riskier than decisions that are revisited on a frequent, recurring basis. The ease with which a decision can be reversed or revisited at a later date or the degree to which a given decision precludes or enables additional choices at a later date will affect how uncertainty in risk estimates, cost–benefit analyses, technology assessments, and other factors are considered in the decision. The severity of the consequences of a decision—that is, whether the stakes of the decision are high or low—will also affect how that uncertainty is considered. For example, a decision that increases the likelihood of a nonsevere health outcome or that is not expensive (for example, with potential control costs that are small) will be easier to make in the face of uncertainty than a decision that increases the likelihood of a severe disease (such as cancer) or that will require expensive control technologies.
Using Uncertainty Analysis in Decision Making
Decision makers often want simple answers to questions like “Is this substance safe?” or “How can I set a regulatory standard that ensures absolute public safety?” Uncertainty analysis cannot provide unqualified answers to these questions. Instead, its results are usually stated in terms of ranges of numbers, likely or less likely outcomes, or probability distributions over health impacts. For example, an uncertainty analysis may result in an assessment that the most exposed individual has a very small risk, say 10⁻⁵, of contracting cancer over his or her lifetime, with a possible uncertainty range of 10⁻⁴ to 10⁻⁶. Another example may find that a population risk from exposure to fine particulates has a broad range, say, from 10 to 10,000 premature deaths, possibly further quantified by a probability distribution of outcomes over this range.
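For readers who want to see how such a range translates into a distribution, the sketch below (our own illustration, not part of the committee's example) reads the individual-risk range above as the 90 percent band of a lognormal distribution with the central estimate as its median, and computes the implied mean risk. Both the lognormal choice and the percentile interpretation are assumptions.

```python
import math

# Assumption: the stated range (1e-6 to 1e-4) is the 5th-95th percentile
# band of a lognormal distribution whose median is the central estimate.
central = 1e-5                  # central (median) individual lifetime risk
low, high = 1e-6, 1e-4          # stated uncertainty range

mu = math.log(central)          # lognormal location parameter (log of median)
z95 = 1.6449                    # standard normal 95th percentile
sigma = (math.log(high) - math.log(low)) / (2 * z95)

# With a spread this wide, the mean of the lognormal exceeds its median,
# so a decision based on the mean is more conservative than one based on
# the median alone.
mean_risk = math.exp(mu + sigma**2 / 2)
print(f"median risk: {central:.1e}, implied mean risk: {mean_risk:.1e}")
```

Under these assumptions the implied mean is roughly 2.7 times the median, which illustrates why reporting only a central estimate can understate the expected risk.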
Because it lacks simple and unqualified answers, uncertainty analysis often complicates decision making. Nevertheless an honest expression of scientific uncertainty is an important part of any analysis to support
decision making. Morgan and Henrion (1990) offer three reasons why it is important to perform an explicit treatment of uncertainty:
1. To identify important disagreements in a problem and to anticipate the unexpected;
2. To require experts to be explicit about what they know and about where they genuinely disagree, so that the experts and their often differing opinions can be understood; and
3. To update and adapt policy analyses that have been done in the past whenever new information becomes available, in order to improve decision making in the present.
Furthermore, it is important to provide responsible expressions of uncertainty to those making regulatory decisions because
1. Without understanding the uncertainty surrounding the key factors in a decision, decision makers may be tempted to use means or other central estimates and ignore unlikely, but extreme results.
2. Alternatively, without explicit expression of uncertainty, decision makers may be overly cautious and make decisions based on extremely conservative assumptions.
There are no simple rules for translating uncertainty information into a decision. However, a decision maker should be informed about and appreciate the range of uncertainty when making a decision. It is also helpful to present this information in a form that can assist decision making, such as by presenting probability distributions for health effects, examining extreme cases and tail probabilities, and incorporating these inputs into a more formal cost–benefit or decision analysis. Ultimately, decision makers have to make the decision in the face of uncertainty, a difficult job that involves weighing probabilities against consequences, protecting the average person as well as the most sensitive and most exposed ones, and providing assurance that the regulatory action will be protective even under unlikely scenarios and alternative model assumptions and parameter values.
The legal framework provides constraints on regulatory decisions (see the columns in Table 5-1), and how the uncertainty information is presented and used depends on the type of uncertainty (rows in Table 5-1). In the following sections we discuss the implications of uncertainty analysis for decision making for each of the three columns in Table 5-1, highlighting the differences between the three types of uncertainty (rows of Table 5-1) as appropriate.
As discussed previously, under some environmental laws EPA is charged with protecting public health and not with balancing public health against other factors such as cost. In this case uncertainty analysis is necessarily restricted to assessing the uncertainty about health effects and the likely reduction of health effects for different regulatory options. In the case of variability and heterogeneity, uncertainty is usually expressed by presenting tables of risks for different populations and environmental conditions, together with a statistical assessment of the relative likelihood of the associated population and environmental categories. Often, those tables demonstrate the extreme cases, such as the most sensitive subpopulation or the highest-exposure conditions or both. Decision makers then have to make a difficult judgment concerning how to set an appropriate level of protection for the population at large and for those most sensitive or those exposed to the most severe environmental conditions. In many cases two specific hypothetical cases are examined: the average population risk and the risk to the maximally exposed or most sensitive person. The various possible regulatory options are then compared in terms of the risks for both cases.
When model and parameter uncertainty come into play, there is additional uncertainty about health risks: even the average risk to the population under average environmental conditions is subject to uncertainty. The decision maker has to make important judgments about how credible the extreme risk estimates are and how much to rely on them in decision making. It is important to compare risk estimates and associated distributions across the regulatory options and to characterize the risks with uncertainty (probability) distributions for all options. In many cases this will reveal diminishing marginal risk reduction (in both the mean risk and the high-risk tail) as the risk-reduction effort increases. The decision makers’ task is then to weigh the shrinking marginal decrease in risk against the effort required to achieve it. When deep uncertainty is involved, tables of risks that characterize very different scenarios can help clarify the issues. For example, the Intergovernmental Panel on Climate Change (IPCC) has developed several reference scenarios that describe different future worlds based on assumptions about population growth, economic development, and patterns of production and consumption, especially in the energy area (IPCC, 2007). Those scenarios may involve both changes in the natural environment and major social and demographic changes. It is tempting to focus only on pessimistic scenarios (for example, a scenario in which climate change leads to higher levels of precipitation, population growth is larger than expected in some regions, and shifts to renewable and low-carbon forms of energy occur late). In the case of deep uncertainty, however, fair efforts should be made to identify a range of scenarios that together cover the plausible paths. Within each scenario, risk estimates can be provided for different regulatory options. That still leaves the decision makers with the difficult task of selecting regulatory options in the face of huge variations in scenarios. Decision makers should be informed by extensive sensitivity analyses and should examine the diminishing marginal risk reduction as the level of effort increases. The main goal is to find regulatory solutions that are effective over a broad spectrum of scenarios.
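One way to operationalize the search for options that are effective over a broad spectrum of scenarios is a minimax-regret comparison. The sketch below is hypothetical (the scenario names, options, and all numbers are invented, and minimax regret is only one of several possible robustness criteria):

```python
# Hypothetical estimates of annual premature deaths under three regulatory
# options in three deep-uncertainty scenarios (all numbers invented).
deaths = {
    "optimistic":  {"option_A": 50,  "option_B": 40,  "option_C": 45},
    "baseline":    {"option_A": 120, "option_B": 90,  "option_C": 95},
    "pessimistic": {"option_A": 400, "option_B": 500, "option_C": 300},
}

# Regret of an option in a scenario: deaths under that option minus the
# lowest deaths achievable in that scenario. Minimax regret picks the
# option whose worst-case regret across scenarios is smallest.
options = ["option_A", "option_B", "option_C"]
regret = {
    opt: max(row[opt] - min(row.values()) for row in deaths.values())
    for opt in options
}
robust_choice = min(regret, key=regret.get)
print(f"worst-case regrets: {regret}; robust choice: {robust_choice}")
```

In this invented example, option_C is never the outright winner in the favorable scenarios, but its worst-case regret is small across all three, which is the sense in which it is robust.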
The above comments were aimed at the use of uncertainty analysis directly in decision making. As Morgan and Henrion (1990) have pointed out, there are many ancillary benefits to an explicit and formal treatment of uncertainty. Uncertainty analysis can demonstrate the difference in risks for different subpopulations and environmental conditions, thus providing a basis for the debate about trading off the average population versus sensitive or highly exposed ones. Uncertainty analysis is also useful in determining where experts agree and disagree, and it provides a focus for debate as well as for efforts to collect further information. Uncertainty analysis also helps stakeholders clarify how extreme values of the probability distributions affect regulatory decisions.
Uncertainties About Technology Availability
In examining uncertainties about technology availability (column 2 of Table 5-1), we consider both health risks and the availability of technologies to reduce them, for example, by reducing air pollution from power plants or water pollution from chemical plants. Regulatory frameworks for this case often require the implementation of “best practicable” or “best available” technologies. Regarding the uncertainty analysis of health risks, the discussion above is still applicable. The new element is the uncertainty analysis of technological availability.
In assessing uncertainties about technology availability, the tools of technology assessment and technology risk analysis apply. Some technologies considered for implementation will be mature, proven, already in use, and immediately implementable at a reasonably well-known cost. Other technologies may only have been proven in principle and may have never been used for the purposes at hand. Assessing uncertainties about the likelihood of successfully developing the unproven technologies and the effectiveness of the technologies if they are successfully developed can inform the decision about which technology may be considered “best practicable” or “best available.”
The results of an uncertainty analysis of technology availability can inform decision makers in quantitative terms about the maturity of the technology. This is almost always an issue involving expert judgment, and it is most likely to involve model and parameter uncertainty. Technology
availability rarely involves variability or heterogeneity, and it involves deep uncertainty only for the most speculative technologies, for example, cold fusion.
Uncertainties About Cost–Benefit Analyses
In environmental regulatory contexts the benefits of regulations are reduced health and environmental risks. Health and environmental risks are uncertain and so are the benefits of reducing them with new regulations. The issue of using uncertainty analysis of health effects was discussed before. Here we discuss the use of uncertainty analysis of economic costs.
There are two types of economic costs. The first is the direct life-cycle cost of implementing a proposed regulation through new technologies and processes; these costs are often uncertain, especially with new technologies. The second is the broader economic impact of a proposed regulation, which raises questions like, Will this decrease the competitiveness of Industry A? or, Will it cost jobs, and how many? In this section we focus on direct economic costs.
Uncertainty analyses about direct costs can be used by decision makers in connection with uncertainty analysis about health risks and benefits to compare costs and benefits. These analyses will often show that both the health benefits and the costs are highly uncertain and that, as a result, the regulatory options are not easily differentiated. For example, in a graphical representation with costs on the x axis and health impacts on the y axis, a representation without uncertainties would show different regulatory options as points in that graph; when considering uncertainties, these points would be surrounded both by vertical error bands (reflecting uncertainty about health effects) and by horizontal error bands (reflecting uncertainty about direct costs). Rarely do these representations suggest simple and straightforward solutions, but they still need to be presented to the decision maker to properly reflect the uncertainty inherent in the problem.
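Error bands of the kind described above can be generated by Monte Carlo simulation. The following sketch is purely illustrative; the distributions, their parameters, and the units are all invented for the example:

```python
import math
import random

random.seed(42)  # fixed seed so the sketch is reproducible

# Hypothetical uncertain quantities for one regulatory option:
# annual cost (millions of dollars) and avoided premature deaths.
N = 20_000
costs = [random.lognormvariate(math.log(50), 0.4) for _ in range(N)]
benefits = [random.gauss(200, 60) for _ in range(N)]

def percentile(xs, p):
    """Simple empirical percentile (no interpolation)."""
    xs = sorted(xs)
    return xs[int(p * (len(xs) - 1))]

# The 5th-95th percentile bands play the role of the horizontal (cost)
# and vertical (health benefit) error bands in the graph described above.
cost_band = (percentile(costs, 0.05), percentile(costs, 0.95))
benefit_band = (percentile(benefits, 0.05), percentile(benefits, 0.95))
print(f"cost 90% band: ({cost_band[0]:.0f}, {cost_band[1]:.0f}); "
      f"benefit 90% band: ({benefit_band[0]:.0f}, {benefit_band[1]:.0f})")
```

Plotting the options as points with these bands makes visible how much the options overlap once uncertainty is acknowledged.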
The uncertainty about the macroeconomic impacts of proposed regulations (impacts on specific industries, gross domestic product, and employment) is often even more substantial. Here we find disagreement among experts about the way that regulation affects the economy, even when they use similar models—for example, input–output models or computable general equilibrium models. Providing ranges and sensitivity analyses is the most common way to express uncertainties in these models.
Value of Information
In typical environmental and health problems, EPA decision makers must select among regulatory options. However, they also have the choice
to defer a decision and to gather additional information prior to making a final decision. Value-of-information (VOI) methods can help determine whether gathering additional information is worthwhile before making a decision.3 VOI methods permit decision makers to compare alternative strategies for managing uncertainty: electing to proceed with currently available, uncertain information; electing to invest in better information that reduces the uncertainty prior to formulating a decision; or electing to ignore uncertainty entirely (NRC, 2009). In other words, VOI analysis “evaluates the benefit of collecting additional information to reduce or eliminate uncertainty in a specific decision making context” (Yokota and Thompson, 2004a, p. 635).
VOI analysis has been applied to business decision making (see Box 5-2) and medical decision making (Yokota and Thompson, 2004b).4 Although not yet widely applied to environmental decisions (Yokota and Thompson, 2004a), the use of VOI has been recommended for such decisions (Presidential/Congressional Commission on Risk Assessment and Risk Management, 1997) and has been applied to some questions about climate change (see, for example, Nordhaus and Popp, 1997; Rabl and van der Zwaan, 2009; Yohe, 1996). Yokota et al. (2004) applied the VOI approach in the Voluntary Children’s Chemical Evaluation Program initiated by EPA in 2000; working in the context of tiered chemical testing, they sought to answer the question of when information about the risks to children is sufficient.5 Their analysis demonstrated that knowledge about exposure levels and control costs is important for decisions about toxicity tests.
As discussed by Hammitt and Cave (1991), the guiding principle of a VOI approach is that additional information is valued not for its own sake, but rather for the potential benefit of making better, welfare-enhancing decisions in the future. Findings from additional research are not known beforehand, so it is the expected value of the improvement in welfare that is relevant. For example, if finding A is obtained, we wish to know the decision and the utility (or gain), however measured, that is associated with the outcome, and the same for finding B, and so on.
Research that is unlikely to change a decision is considered to have little value ex ante. In some cases it may be possible to address uncertainty about a key underlying assumption, the “weakest link.” If research can resolve
3 Value-of-information analysis is one tool from the field of decision analysis. See Howard (2007) for a discussion of the field of decision analysis in general.
4 Value-of-information analysis is referred to as expected value of information (EVI) analysis in medical decision making (Claxton et al., 2001).
5 In the Voluntary Children’s Chemical Evaluation Program (VCCEP), EPA asked the manufacturers or importers of 23 chemicals to which children have a high likelihood of exposure “to [voluntarily] provide information on health effects, exposure, risk, and data needs” (EPA, 2010).
Suppose that a small division of a large company has been asked to maximize the expected profits of its division. The division has to make a decision now between two options for next year. A safe option yields $500,000 in profits next year, and a risky option yields returns that are contingent on a future event, which gives the firm $1 million if the event occurs and no profit if it does not. The event has a 40 percent chance of occurrence. Then the expected profit of the risky option is $400,000, and if there is no opportunity to collect more information, the firm should pursue the safe option.
If the firm could collect “perfect information” in advance on whether the event will occur, it could take the risky option when the event is known to occur and the safe option when it is known not to occur, in which case the firm will realize an expected profit of $700,000 (0.40 × $1,000,000 + 0.60 × $500,000 = $700,000). Thus, the expected value of perfect information is $200,000 ($700,000, the profit with optimal decisions including the information, minus $500,000, the profit from the optimal decision without such information).
Similarly, the “expected regret” can be calculated to indicate the value of information. The regret of choosing the safe option is zero if the event does not occur and $500,000 if it does ($500,000 being the difference between the $1 million the division would get with the risky option and the $500,000 it is guaranteed with the safe option). Multiplying the regret value ($500,000) by the probability of the event (0.4) yields $200,000 in expected regret, the same value calculated above from the difference in expected profits. The first fundamental theorem of VOI holds that expected regret equals VOI.
this uncertainty, which is critical to a choice among policy options, it may be particularly valuable.
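The arithmetic in Box 5-2 can be written out directly. The short sketch below uses only the figures stated in the box and confirms that the expected-profit route and the expected-regret route give the same $200,000 value of perfect information:

```python
# Figures from Box 5-2.
p_event = 0.40                   # probability the favorable event occurs
safe_profit = 500_000            # guaranteed profit of the safe option
risky_profit = 1_000_000         # risky option's profit if the event occurs

# Without further information, choose the option with the higher expectation.
ev_risky = p_event * risky_profit                 # $400,000
best_without_info = max(safe_profit, ev_risky)    # safe option: $500,000

# With perfect advance knowledge, take the risky option only when the
# event will occur, and the safe option otherwise.
ev_with_info = p_event * risky_profit + (1 - p_event) * safe_profit

evpi = ev_with_info - best_without_info   # expected value of perfect information

# Expected-regret route: the safe choice's shortfall when the event occurs.
expected_regret = p_event * (risky_profit - safe_profit)

print(f"EVPI = ${evpi:,.0f}; expected regret = ${expected_regret:,.0f}")
```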
The goal of a VOI analysis is to determine the value of additional information in coming to a decision. Although formal VOI analyses involve complex modeling, the essence of what is calculated in a VOI analysis can be explained in relatively simple terms. When calculating the VOI, it is helpful to consider four different possible approaches to decision making under uncertainty: (1) making a decision that takes into account perfect information regarding the state of nature; (2) making a decision that takes into account imperfect additional information regarding the state of nature; (3) making a decision that takes into account current uncertainty regarding the state of nature; and (4) making a decision that ignores uncertainty regarding the state of nature. As can be seen in Figure 5-2, each of these decision approaches can be placed along a horizontal axis that represents
FIGURE 5-2 Schematic illustrating the values that can be calculated in a value-of-information analysis.
Abbreviations: EVIU = expected value of including uncertainty; EVPI = expected value of perfect information; EVSI = expected value of sample information.
the expected losses; the losses are least when acting optimally with perfect information and greatest when uncertainty is ignored. A number of different measures of the value of information can be calculated, including the expected value of perfect information (EVPI),6 the expected value of sample information (EVSI), and the expected value of including uncertainty (EVIU) (see Box 5-3 for a description). The calculation of EVPI, EVSI, and EVIU is illustrated beneath the horizontal axis in Figure 5-2. EVPI and EVSI compare the expected value of acting optimally with current information against the expected value of acting with eliminated or reduced uncertainty, respectively, while EVIU compares it against the expected value of ignoring uncertainty altogether. EVPI thus captures the value of eliminating uncertainty; EVSI, the value of reducing it; and EVIU, the value of including it rather than ignoring it.
A value-of-information analysis has a number of benefits. First, it captures the sensitivity of decisions to uncertainty, taking into explicit account the decision maker’s level of risk aversion, the inherent variability of the situation, and the current state of the evidence base. Second, it can serve as a guide to model selection and justification. In instances where EVIU greatly exceeds EVPI, for example, analysts may find it easier to proceed with current evidence, knowing that the inclusion of uncertainty in their models is what matters and that delaying the analysis in anticipation of
6 A more precise definition of the expected value of perfect information (EVPI) is the maximum amount a person or society would be willing to pay for the information, a definition that incorporates attitudes toward risk.
The expected value of including uncertainty (EVIU)
The EVIU is defined as the improvement in net benefit (or avoided harm) that can be achieved when the uncertainty surrounding a decision is taken into account. The EVIU is computed by taking the difference between the expected outcome that could be achieved by making optimal use of currently available information about uncertainty and the expected outcome that could be achieved by ignoring that uncertainty and simply treating all random variables as fixed at some central, point estimate value. The resultant figure provides an upper bound on the value of building uncertainty into the analysis in the first place.
The expected value of perfect information (EVPI)
EVPI is the improvement in net benefit (or avoided harm) that can be achieved if the uncertainty surrounding a decision is completely resolved. The EVPI is computed by taking the difference between the expected outcome that would result from making optimal use of perfect information and the expected outcome that would result from making optimal use of currently available information. The resultant figure provides an upper bound on the value of additional investment in information. It denotes the most a decision maker should be prepared to give up to learn the true state of nature. Although an EVPI analysis postulates the generally unattainable situation in which all uncertainty is eliminated, it offers decision makers useful insights into the extent to which current uncertainty reduces the quality of their decisions.
The expected value of sample information (EVSI)
The EVSI is closely related to the EVPI. Recognizing that it is almost never possible to completely eliminate uncertainty, the EVSI measures the improvement in net benefit (or avoided harm) that could be achieved if the uncertainty surrounding a decision could be reduced, rather than completely resolved. Here the expected outcome that could be achieved by making optimal use of currently available information is compared with the expected outcome that could be achieved by making optimal use of some specified level of additional information.
better information will confer little additional value. By contrast, in instances where EVIU is comparatively small, analysts may be more able to justify using fixed point estimates and ignoring parameter uncertainty. Third, high ratios of EVSI to EVPI may indicate the presence of efficient research investment opportunities, thus helping decision makers to identify priorities from among competing sources of uncertainty.
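To make the three measures concrete, here is a minimal two-state, two-action sketch. Everything in it (the payoffs, the prior probability, and the test's sensitivity and specificity) is invented for illustration; it is not an EPA analysis.

```python
# Hypothetical decision: regulate a pollutant or not, when the pollutant is
# either "harmful" or "benign". Payoffs are net benefits in millions of
# dollars (all numbers invented).
p_harm = 0.30                       # prior probability the pollutant is harmful
payoff = {                          # payoff[action][state]
    "regulate":  {"harmful": 100, "benign": -20},
    "no_action": {"harmful": -200, "benign": 0},
}

def expected(action, p):
    return p * payoff[action]["harmful"] + (1 - p) * payoff[action]["benign"]

def best_ev(p):
    return max(expected(a, p) for a in payoff)

# Optimal use of current information.
ev_current = best_ev(p_harm)

# Perfect information: choose the best action in each state, then average.
ev_perfect = (p_harm * max(payoff[a]["harmful"] for a in payoff)
              + (1 - p_harm) * max(payoff[a]["benign"] for a in payoff))
evpi = ev_perfect - ev_current

# Sample information: an imperfect test (sensitivity 0.9, specificity 0.8);
# update the probability on each test result, then act optimally.
sens, spec = 0.9, 0.8
p_pos = p_harm * sens + (1 - p_harm) * (1 - spec)
ev_sample = (p_pos * best_ev(p_harm * sens / p_pos)
             + (1 - p_pos) * best_ev(p_harm * (1 - sens) / (1 - p_pos)))
evsi = ev_sample - ev_current

# Ignoring uncertainty: act as if the most likely state ("benign") were
# certain, then evaluate that action under the true distribution.
naive_action = max(payoff, key=lambda a: payoff[a]["benign"])
eviu = ev_current - expected(naive_action, p_harm)

print(f"EVPI={evpi:.1f}, EVSI={evsi:.1f}, EVIU={eviu:.1f}")
```

In this invented example EVIU (76) greatly exceeds EVPI (14), the situation described above in which including uncertainty in the analysis matters far more than waiting for better information, and EVSI (2.2) is, as it must be, no larger than EVPI.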
The decision to gather more information in response to a VOI analysis can delay the actual decision, and that delay has costs associated with it. Those costs include (1) the costs of additional data collection and analysis;
(2) in many situations, more importantly, the cost of delaying action; and (3) the cost of modifying decisions once implemented. Some decisions are, for all practical purposes, irreversible. If the costs of data collection or research and delaying a decision are low and the costs of subsequently modifying a policy decision are high, decision makers may decide to seek further information to reduce uncertainty before making a decision.
Business decisions—where the value of information is the difference between the profits with and without the information—illustrate well the concept of VOI analysis (see Box 5-2 for an example). As can be seen in Box 5-2, the value of information is not a fixed number but rather a random variable that depends on the decision maker’s prior estimate of what the new information will reveal. For this reason, the term “expected value of information” is used, referring to what the additional information is expected to be worth on average before the new information is collected. If there is no possible outcome in which additional information gathering or research would change the decision, then the expected value of information is zero. If any decision is changed for the better after some result, then the value of information is positive. If the costs of obtaining the information (either data-gathering research costs or costs from delaying a decision, as might be the case with some regulatory options) are less than the expected value of information, then it is better to get the information.7
In the theoretical world of Box 5-2, it is assumed that the additional information is perfect. For example, there are no errors in predicting whether or not an event will occur. In practice, however, errors are made: a prediction may wrongly indicate that an event will occur, or wrongly indicate that it will not. In the real world of imperfect information, one can use Bayesian updating to incorporate the uncertainty inherent in the new information. In Bayesian updating, a weight is attached to new information, and a second weight is attached to the prior belief; the weights must sum to one. Thus, if the new information is thought to be particularly credible, it will be assigned a higher weight, with a correspondingly lower weight being placed on the prior belief. In the context of VOI, if the weight placed on the new estimate is low, then it will generally not pay to obtain the additional information (Hunink, 2001; Raiffa, 1968).
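In the standard conjugate-normal case the weighting just described has a closed form: the posterior mean is a precision-weighted average of the prior mean and the new estimate, and the two weights sum to one. The sketch below uses invented numbers for a generic risk parameter:

```python
# Hypothetical conjugate-normal update (all numbers invented).
prior_mean, prior_sd = 2.0, 1.0     # prior belief about a risk parameter
new_mean, new_sd = 5.0, 0.5         # new study's estimate and its uncertainty

# Precision weighting: more precise (lower-variance) information gets
# more weight, and the weights sum to one by construction.
w_new = (1 / new_sd**2) / (1 / new_sd**2 + 1 / prior_sd**2)
w_prior = 1 - w_new

posterior_mean = w_new * new_mean + w_prior * prior_mean
print(f"weight on new info: {w_new:.2f}, posterior mean: {posterior_mean:.2f}")
```

Because the new study here is twice as precise as the prior, it receives a weight of 0.8 and pulls the posterior mean most of the way toward its estimate.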
In a public context, such as EPA’s decision context, the value of information is calculated in terms of the anticipated net benefits rather than the anticipated profitability calculated in the business example. The calculation of net benefits can be very complex, and for many decisions the only
7 Conventionally, the costs of obtaining the information are not part of the value-of-information calculation but are instead compared against the computed value at the end. In any case, these costs and the cost of revising the decision once made must be considered in choosing whether or not to seek additional information and, if so, what type of information is to be sought.
practical way to assess the effect of a policy option, as well as its inherent uncertainty, is to implement the option and pursue an active strategy of monitoring its effects, with the monitoring being done, to the extent possible, in quantitative terms.
While conceptually appealing, in the context of EPA’s public decision making VOI faces two challenges that create fewer difficulties in private decisions, such as the one described in Box 5-2. First, expected profit is simpler to estimate than the net benefits involved in an EPA decision. Second, unlike the situation with private decisions, the rationale for EPA postponing a decision and seeking new information may have to be explained to various segments of the public. That explanation will be complicated by the fact that many of the costs and assumptions underlying VOI calculations, including the credibility weights in a Bayesian updating analysis, are subjective and difficult to defend. Despite those challenges, VOI can be a useful approach to help determine what information is worth gathering for future decisions.
Implementation of a regulatory decision is an important step in the management process. This step requires significant skills in addressing often competing stakeholder, legal, and political considerations surrounding the proposed decision.
Good decision making under uncertainty involves updating information through research, monitoring the implementation of regulatory action, and periodically revisiting and adapting the decision. A plan should be in place that outlines which uncertainties are being researched and when the decision will be reexamined to determine whether uncertainty has decreased to the point that the decision should be revised. As discussed earlier in this chapter, when decisions involve deep uncertainty, adaptive management approaches are particularly useful. Those approaches require increased monitoring and a plan for gathering more information and revisiting the decision.
As discussed in Chapter 1, other factors in addition to human health risks, economic factors, and technology availability play an important role in many of EPA’s decisions. Although the uncertainties in those factors are not traditionally thought of as quantifiable, they should nonetheless be considered in making, and communicating about, EPA’s decisions. The roles that some of those factors and the uncertainties in them play in EPA’s decisions are discussed below.
Special Populations and Equity
In some cases the regulatory problem is shaped by issues concerning special populations (e.g., the lead exposure of children) or by equity or environmental justice concerns, which have been labeled as priorities by various executive orders,8 although these orders do not have the weight of law. EPA recently issued a report, Plan EJ 2014: Legal Tools, that details the legal tools related to environmental justice that are available to the agency (EPA, 2011b).
These special considerations could influence the choice of analytical approaches since approaches that emphasize net aggregate costs and benefits do not typically address these concerns. In particular, these factors can add to the variability and heterogeneity in estimates of health risks and economic factors. If the formal approaches described above are used in these contexts, they must be disaggregated so that the impacts they have on special populations can be examined as well as the aggregate effects. In doing so, EPA will be able to see the effects that its decisions could have on different groups and will be able to include the potential effects on those groups in the rationale for its decision. That will allow stakeholders to better understand the agency’s decision.
The geographic scope of a decision problem may be global, national, regional, or local. Spatial or geographic considerations are likely to introduce special problems into assessing and accounting for uncertainty. For example, data on a local area may be inadequate to characterize exposure or the sensitivity of populations to the exposure. Given the inadequacy of data collected on a national basis for use in decisions limited to local areas, decision making may be improved by additional data collection and analysis. Furthermore, the preferences of the residents in a community may differ from national averages, and those preferences can affect the values that people assign to outcomes which, in turn, will affect the economic analyses. The goal of an uncertainty analysis is to characterize how these values differ, and doing so may require additional data collection. A characterization of such differences can be qualitative or quantitative.
If the scope of a problem is local, such as is the case for a Superfund problem, local stakeholders (including members of the public) may provide input at various times during the analysis phase. It is crucial, therefore, to obtain stakeholder involvement in the problem-formulation phase, particularly with regard to decisions about the endpoints to be included in the analyses. On the other hand, if the scope of a problem is national, as is the case when setting an ambient air quality standard, the type of stakeholder involvement will be driven more by the statutory framework and agency procedures. For national issues, the stakeholders who provide input are often representatives of groups with special interests (e.g., industry, or advocacy organizations focused on a particular disease) in addition to—or even rather than—being community members.
8 For example, Exec. Order No. 12898, 59 FR 7629 (February 16, 1994), and Exec. Order No. 13045, 62 FR 19885 (April 23, 1997).
Decisions applicable to a specific geographic area are well suited to the incorporation of public values. Even when the statutory directive is for the consideration of health effects, the implementation plans will often be of great interest to local communities. For this reason, EPA will often solicit input on implementation plans through written comments or at hearings held at locations across the country (EPA, 2012).
Identifying the effects of geographic scope on a decision in the initial, problem-formulation stage will help EPA identify important stakeholders and ensure that the variability in the perspectives can be addressed in the assessment and management phases of the decision. These concerns could affect the assessment of economic factors in particular.
Uncertainty analysis and more formal approaches to decision making have not always been applied to these factors in a systematic or rigorous way, but some of the analytic techniques described in Chapter 2 and Appendix A could be applied to them. For example, Arvai and Gregory (2003) used multiattribute utility analysis to evaluate different approaches to stakeholder involvement in a decision related to the cleanup of a contaminated site; one approach involved the presentation of scientific information, while the other involved the presentation of scientific information and “values-oriented information that seeks to improve the ability of nonexpert participants to make difficult trade-offs across a variety of technical and nontechnical concerns” (p. 1470). The importance of stakeholder engagement is discussed further below.
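In the spirit of the multiattribute utility analysis described above, the sketch below shows the simplest (additive) form of such an analysis applied to two hypothetical stakeholder-involvement approaches. The attribute names, weights, and single-attribute scores are all assumptions made for illustration; a real application would elicit them from decision makers and stakeholders.

```python
# Minimal sketch of an additive multiattribute utility calculation.
# Attributes, weights, and scores are hypothetical, not taken from
# Arvai and Gregory (2003).

weights = {"health_risk_reduction": 0.5, "cost": 0.3, "feasibility": 0.2}

# Single-attribute utilities on a 0-1 scale (1 = best) for two options.
options = {
    "info_only": {
        "health_risk_reduction": 0.6, "cost": 0.8, "feasibility": 0.9},
    "info_plus_values": {
        "health_risk_reduction": 0.9, "cost": 0.6, "feasibility": 0.7},
}

def utility(scores, weights):
    """Weighted additive utility; assumes attributes are
    preferentially independent and weights sum to 1."""
    return sum(weights[a] * scores[a] for a in weights)

for name, scores in options.items():
    print(f"{name}: {utility(scores, weights):.2f}")
```

The additive form is only appropriate when the attributes can be traded off independently; when they cannot, a multiplicative or other nonadditive form is needed (see Keeney and Raiffa, 1976).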
Agency decision-making processes that involve stakeholders, including dialogues with stakeholders about uncertainties, can demonstrate intentional transparency and can create, maintain, and enhance a relationship of trust between the agency and its stakeholders.9 In addition, a growing body of research demonstrates that the political aspects of stakeholder processes do not sacrifice decision quality (Beierle, 2000) and that public participation can in fact add information to and improve the quality and legitimacy of agencies’ decisions about the environment (NRC, 2008).10 Because decisions may ultimately affect stakeholders, a fair and democratic decision-making process must give stakeholders the opportunity to be involved in making those decisions, including decisions about which uncertainties need better elucidation. Early and continuous involvement of stakeholders can also prevent the delays that can occur when stakeholders are not engaged until late in the process, at which point they might resort to legal action.
EPA has issued extensive guidance on public and stakeholder involvement in its programs and activities (EPA, 1998, 2003, 2011a), and several regulations contain public involvement procedures for specific EPA programs and activities.11 EPA has also issued an agency-wide public involvement policy (reissued periodically with updates) that can be applied to all EPA programs and activities (EPA, 2003).12 The agency-wide policy is not mandatory, however. Despite the existing guidance, there has been repeated concern and criticism over EPA’s failure to engage stakeholders more systematically and adequately as part of its various regulatory mandates for environmental decision making (see, for example, NRC, 1996, 2008; Presidential/Congressional Commission on Risk Assessment and Risk Management, 1997). This concern was the justification for a recommendation in Science and Decisions (NRC, 2009) that EPA adopt formal provisions for stakeholder involvement across a three-phase framework for risk-based decision making (see Figure 5-1).13 That recommendation echoes the point made in other NRC reports (see, for example, NRC, 1996, 2008) that the technical and analytical aspects of the decision-making process be balanced with adequate involvement of interested and affected parties, and it is a point with which this committee concurs.
9 The terms used to refer to the parties that can be involved in environmental decision making are varied and include “stakeholders,” “the public,” “affected parties,” and “interested parties.” The definitions of these terms (i.e., the expertise, affiliations, and perspectives of the individuals and organizations they include) have also varied. Unless otherwise specified, in this report we use “stakeholder” to refer to any party interested in or affected by a decision-making authority’s activities. Stakeholders may include decision makers, industry groups, communities and community organizations, environmental organizations, scientists and technical specialists, individuals from the public, and others.
10 For a comprehensive review of research on public participation in environmental assessment and decision making, see NRC, 2008.
11 See, for example, 40 CFR Part 25—Public Participation in Programs under the Resource Conservation and Recovery Act, the Safe Drinking Water Act, and the Clean Water Act; 40 CFR Part 271—Requirements for Authorization of State Hazardous Waste Programs; and 40 CFR Part 300—National Oil and Hazardous Substances Pollution Contingency Plan, Subpart E—Hazardous Substance Response (which establishes methods and criteria for determining the appropriate extent of response authorized by CERCLA and CWA section 311(c)).
12 According to the guidance, the seven basic steps to effective public involvement are to (1) plan and budget for public involvement activities, (2) identify the interested and affected public, (3) consider providing technical or financial assistance to the public to facilitate involvement, (4) provide information and outreach to the public, (5) conduct public consultation and involvement activities, (6) review and use input and provide feedback to the public, and (7) evaluate public involvement activities (EPA, 2003).
Concerns about procedural fairness and trust are even more salient when scientific uncertainty is reported (NRC, 2008). Research has demonstrated that people show a heightened interest in evaluating the credibility of information sources when they perceive uncertainty (Brashers, 2001; Halfacre et al., 2000; van den Bos, 2001) and that in such situations they are also more likely to challenge the reliability and adequacy of risk estimates and to be less accepting of reassurances (Kroll-Smith and Couch, 1991; Rich et al., 1995). Thus, when EPA anticipates more uncertainty in the scientific aspects of a decision, the need for stakeholder involvement may be greater. Other research has pointed to the importance of describing the uncertainties in risk assessments as well, both to facilitate transparency and to increase public perceptions of agency honesty (Johnson and Slovic, 1995; Lundgren and McMakin, 2004; Morgan and Henrion, 1990; NRC, 1989).
Developing provisions for stakeholder involvement in decision making, including guidance on discussing with stakeholders the sources of uncertainty and how uncertainty is being managed, could lead to greater transparency and trust and also has the potential to result in better decision making. Stakeholders might be interested in how uncertainty can be dealt with in the analysis, the implications of uncertainties, and what can or cannot be done about the uncertainties. Stakeholders may also suggest new uncertainties not previously under consideration by EPA and, by expressing their values and concerns (cultural, religious, economic, and so on), help decision makers prioritize how the uncertainties are factored into decision making.
In discussions with stakeholders about uncertainty, it is important that EPA be proactive in engaging the full range of stakeholders whom a decision may affect. Science and Decisions (NRC, 2009) recommended that EPA provide incentives to allow for balanced participation of stakeholders, including affected communities and those stakeholders for whom participation is less likely because of competing priorities, fewer resources, a lack of knowledge, or other factors. Boeckmann and Tyler (2002) found that the public is more likely to participate “in their communities when they feel that they are respected members of those communities” (p. 2067). Showing respect, therefore, is important for stakeholder engagement. The resources required for such engagement must, however, be weighed against the need for it, given the context of the decision, including the potential health risks, the costs associated with the potential regulatory options, and the magnitude, sources, and nature or type of the uncertainty associated with the decision.
13 The three phases are (1) problem formulation and scoping, (2) planning and conduct of risk assessment, and (3) risk management (see Figure 5-1). As part of the framework, the report also suggests that stakeholder involvement have time limits so as not to delay decision making and that incentives be provided so that participation is more balanced and includes affected communities and less advantaged stakeholders (NRC, 2009).
• Incorporating uncertainty analysis into a systematic framework, such as a modified version of the decision framework in Science and Decisions (NRC, 2009), provides a process for decision makers, stakeholders, and analysts to discuss the appropriate and necessary uncertainty analyses.
• Involvement of decision makers in the planning and scoping of uncertainty analyses during the initial, problem-formulation phase will help ensure that the goals of the uncertainty analysis are consistent with the needs of the decision makers.
• Involvement of stakeholders in the planning and scoping of uncertainty analyses during the initial problem-formulation phase will help define analytic endpoints and identify population subgroups as well as heterogeneity and other uncertainties.
• Uncertainty analysis must be designed on a case-by-case basis. The choice of uncertainty analysis depends on the context of the decision, including the nature or type of uncertainty (that is, heterogeneity and variability, model and parameter uncertainty, or deep uncertainty), the factors considered in the decision (that is, health risk, technology availability, and economic, social, and political factors), and the data that are available.
• When assessing variability and heterogeneity:
  – Analyses of statistical distributions, including extreme-value analyses, are useful for assessing uncertainty in data on health effects (that is, estimates of risks). Safety or default factors developed using statistical methods can also be helpful under certain circumstances.
  – Direct assessments and technology-choice or risk analyses developed using statistical methods can be helpful for assessing technology availability.
  – Cost-effectiveness analysis, cost–benefit analysis, and multiattribute utility analysis developed using statistical methods can be useful for assessing costs and benefits.
• When assessing model and parameter uncertainty:
  – Expert elicitation and the analysis of probability distributions, including extreme-value analyses, can be useful for assessing health effects. Safety or default factors developed using expert judgments can also be helpful.
  – Formal expert elicitation to assess technology availability, as well as technology-choice and risk analysis using expert judgment, can be helpful in assessing technology factors.
  – Cost-effectiveness analysis, cost–benefit analysis, and multiattribute utility analysis developed using expert judgments can be useful for assessing costs and benefits.
• When assessing deep uncertainty:
  – Scenario analysis and robust decision-making methods can be helpful for assessing health effects, technology factors, and costs and benefits.
• The interpretation and incorporation of uncertainty into environmental decisions will depend on a number of characteristics of the risks and the decision. Those characteristics include the distribution of the risks, the decision makers’ risk aversion, and the potential consequences of the decision.
• The quality of the analysis and of the recommendations following from it will depend on the relationship between the analyst and the decision maker. The planning, conduct, and results of uncertainty analysis should not be isolated from the individuals who will eventually make the decisions. The success of a decision made in the face of uncertainty depends on the analysts having a good understanding of the context of the decision and of the information needed by the decision makers, and on the decision makers having a good understanding of the evidence on which the decision is based, including the uncertainty in that evidence.
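To give a concrete, purely illustrative sense of the kind of analysis of statistical distributions listed above, the sketch below propagates an uncertain model parameter (a potency slope) and a variable exposure (dose) through a simple linear risk model by Monte Carlo simulation and summarizes the resulting risk distribution with percentiles. The model form, distributions, and parameter values are all assumptions made for this sketch, not any EPA method.

```python
# Sketch: Monte Carlo propagation of parameter uncertainty and exposure
# variability through a hypothetical linear risk model (risk = slope * dose).
# Distributions and parameters are invented for illustration.

import random

random.seed(1)  # fixed seed so the sketch is reproducible

N = 50_000
risks = []
for _ in range(N):
    slope = random.lognormvariate(-7.0, 0.5)        # uncertain potency slope
    dose = random.triangular(0.5, 2.0, 1.0)          # variable daily dose
    risks.append(slope * dose)

risks.sort()
p05, p50, p95 = (risks[int(q * N)] for q in (0.05, 0.50, 0.95))
print(f"median risk {p50:.2e}, 90% interval [{p05:.2e}, {p95:.2e}]")
```

Reporting an interval rather than a single point estimate is what allows decision makers to see how much of the apparent risk difference between regulatory options is within the noise of the analysis; the same machinery extends to cost and technology parameters.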
Although some analysis and description of uncertainty is always important, the number and types of uncertainty analyses carried out should depend on the specific decision problem at hand. The effort devoted to analyzing specific uncertainties through probabilistic risk assessment or quantitative uncertainty analysis should be guided by the ability of those analyses to affect the environmental decision.
Arvai, J., and R. Gregory. 2003. Testing alternative decision approaches for identifying cleanup priorities at contaminated sites. Environmental Science and Technology 37(8):1469–1476.
Beierle, T. C. 2000. The quality of stakeholder-based decisions: Lessons from the case study record. Washington, DC: Resources for the Future.
Boeckmann, R. J., and T. R. Tyler. 2002. Trust, respect, and the psychology of political engagement. Journal of Applied Social Psychology 32(10):2067–2088.
Brashers, D. E. 2001. Communication and uncertainty management. Journal of Communication 51(3):477–497.
Claxton, K., P. Neumann, S. Araki, and M. Weinstein. 2001. Bayesian value-of-information analysis. An application to a policy model of Alzheimer’s disease. International Journal of Technology Assessment in Health Care 17(1):38–55.
EPA (U.S. Environmental Protection Agency). 1998. EPA stakeholder involvement action plan. Washington, DC: Environmental Protection Agency. http://www.epa.gov/publicinvolvement/siap1298.htm (accessed November 20, 2012).
———. 2003. Public involvement policy of the U.S. Environmental Protection Agency. Washington, DC: Environmental Protection Agency. http://www.epa.gov/publicinvolvement/pdf/policy2003.pdf (accessed January 3, 2013).
———. 2010. Voluntary Children’s Chemical Evaluation Program (VCCEP). http://www.epa.gov/oppt/vccep (accessed November 20, 2012).
———. 2011a. Expert Elicitation Task Force white paper. Washington, DC: Environmental Protection Agency. http://www.epa.gov/stpc/pdfs/ee-white-paper-final.pdf (accessed January 3, 2013).
———. 2011b. Plan EJ 2014: Legal tools. Washington, DC: Environmental Protection Agency. http://www.epa.gov/environmentaljustice/resources/policy/plan-ej-2014/ej-legal-tools.pdf (accessed January 3, 2013).
———. 2012. The plain English guide to the Clean Air Act. http://www.epa.gov/air/caa/peg/public.html (accessed May 24, 2012).
Fenwick, E., K. Claxton, and M. Sculpher. 2001. Representing uncertainty: The role of cost-effectiveness acceptability curves. Health Economics 10(8):779–787.
Garber, A. M., and C. E. Phelps. 1992. Economic foundations of cost-effectiveness analysis. NBER working paper series no. 4164. Cambridge, MA: National Bureau of Economic Research. http://www.nber.org/papers/w4164.pdf (accessed January 3, 2013).
Gregory, R. 2011. Structured decision making: A practical guide to environmental management choices. Hoboken, NJ: Wiley-Blackwell.
Gregory, R. S., and R. L. Keeney. 2002. Making smarter environmental management decisions. Journal of the American Water Resources Association 38:1601–1612.
Gregory, R., T. C. Brown, and J. L. Knetsch. 1996. Valuing risks to the environment. Annals of the American Academy of Political and Social Science 545:54–63.
Halfacre, A. C., A. R. Matheny, and W. A. Rosenbaum. 2000. Regulating contested local hazards: Is constructive dialogue possible among participants in community risk management? Policy Studies Journal 28(3):648–667.
Hammitt, J. K., and J. A. K. Cave. 1991. Research planning for food safety: A value of information approach. Washington, DC: RAND.
Hammond, K. R., B. F. Anderson, J. Sutherland, and B. Marvin. 1984. Improving scientists’ judgments of risk. Risk Analysis 4(1):69–78.
Howard, R. A. 2007. The foundations of decision analysis revisited. In Advances in decision analysis, edited by W. Edwards, R. F. Miles, and D. Von Winterfeldt. New York: Cambridge University Press. Pp. 32–56.
Hunink, M. G. M. 2001. Decision making in health and medicine: Integrating evidence and values. Cambridge; New York: Cambridge University Press.
IPCC (Intergovernmental Panel on Climate Change). 2007. Climate change 2007: Synthesis report. Contribution of Working Groups I, II, and III to the Fourth Assessment Report of the Intergovernmental Panel on Climate Change. http://www.ipcc.ch/publications_and_data/publications_ipcc_fourth_assessment_report_synthesis_report.htm (accessed November 12, 2012).
Johnson, B. B., and P. Slovic. 1995. Presenting uncertainty in health risk assessment: Initial studies of its effects on risk perception and trust. Risk Analysis 15(4):485–494.
Keeney, R. L. 1996. Value-focused thinking: A path to creative decision making. Cambridge, MA: Harvard University Press.
Keeney, R. L., and H. Raiffa. 1976. Decisions with multiple objectives: Preferences and value tradeoffs. New York: Wiley.
Koppenjan, J., and E.-H. Klijn. 2004. Managing uncertainties in networks: A network approach to problem solving and decision making. London: Routledge.
Kroll-Smith, J. S., and S. R. Couch. 1991. As if exposure to toxins were not enough: The social and cultural system as a secondary stressor. Environmental Health Perspectives 95:61–66.
Lundgren, R. E., and A. H. McMakin. 2004. Risk communication: A handbook for communicating environmental, safety, and health risks. Columbus: Battelle Press.
Mingers, J., and J. Rosenhead. 2004. Problem structuring methods in action. European Journal of Operational Research 152(3):530–554.
Morgan, M. G., and M. Henrion. 1990. Uncertainty: A guide to dealing with uncertainty in quantitative risk and policy analysis. New York: Cambridge University Press.
Nordhaus, W. D., and D. Popp. 1997. What is the value of scientific knowledge? An application to global warming using the PRICE model. Energy Journal 18(1):1–45.
NRC (National Research Council). 1983. Risk assessment in the federal government: Managing the process. Washington, DC: National Academy Press.
———. 1989. Improving risk communication. Washington, DC: National Academy Press.
———. 1996. Understanding risk: Informing decisions in a democratic society. Washington, DC: National Academy Press.
———. 2008. Public participation in environmental assessment and decision making. Washington, DC: The National Academies Press.
———. 2009. Science and decisions: Advancing risk assessment. Washington, DC: The National Academies Press.
Presidential/Congressional Commission on Risk Assessment and Risk Management. 1997. Risk assessment and risk management in regulatory decision-making. Final report. Volume 2. Washington, DC: Presidential/Congressional Commission on Risk Assessment and Risk Management.
Rabl, A., and B. van der Zwaan. 2009. Cost–benefit analysis of climate change dynamics: Uncertainties and the value of information. Climatic Change 96(3):313–333.
Raiffa, H. 1968. Decision analysis: Introductory lectures on choices under uncertainty. Reading, MA: Addison-Wesley.
Rich, R. C., M. Edelstein, W. K. Hallman, and A. H. Wandersman. 1995. Citizen participation and empowerment: The case of local environmental hazards. American Journal of Community Psychology 23(5):657–676.
Rosenhead, J. 1996. What’s the problem? An introduction to problem structuring methods. Interfaces 26(6):117–131.
Spetzler, C. S. 2007. Building decision competency in organizations. In Advances in decision analysis: From foundations to applications, edited by W. Edwards, R. F. Miles, and D. von Winterfeldt. New York: Cambridge University Press. Pp. 451–468.
Stinnett, A. A., and J. Mullahy. 1998. Net health benefits: A new framework for the analysis of uncertainty in cost-effectiveness analysis. Medical Decision Making 18(2 Suppl.):S68–S80.
van den Bos, K. 2001. Uncertainty management: The influence of uncertainty salience on reactions to perceived procedural fairness. Journal of Personality and Social Psychology 80(6):931–941.
von Winterfeldt, D. 2007. Defining a decision analytic structure. In Advances in decision analysis: From foundations to applications, edited by W. Edwards, R. Miles, and D. von Winterfeldt. New York: Cambridge University Press. Pp. 81–103.
von Winterfeldt, D., and B. Fasolo. 2009. Structuring decision problems: A case study and reflections for practitioners. European Journal of Operational Research 199(3):857–866.
Yohe, G. 1996. Exercises in hedging against extreme consequences of global change and the expected value of information. Global Environmental Change 6(2):87–101.
Yokota, F., and K. M. Thompson. 2004a. Value of information analysis in environmental health risk management decisions: Past, present, and future. Risk Analysis 24(3):635–650.
———. 2004b. Value of information literature analysis: A review of applications in health risk management. Medical Decision Making 24(3):287–298.
Yokota, F., G. Gray, J. K. Hammitt, and K. M. Thompson. 2004. Tiered chemical testing: A value of information approach. Risk Analysis 24(6):1625–1639.