UNDERSTANDING RISK: INFORMING DECISIONS IN A DEMOCRATIC SOCIETY

4 Analysis

We use the term analysis to refer to ways of building understanding by systematically applying specific theories and methods that have been developed within communities of expertise, such as those of the natural sciences, social sciences, engineering, decision science, logic, mathematics, and law. Risk analysis, an activity that applies analytic techniques to the understanding of risks, has grown rapidly since its beginning in the 1950s. It involves estimating the likelihood of occurrence and possible severity of particular kinds of harm. Analysis can also be used to examine risk problems to characterize their history and analyze possible outcomes of different decisions, strategies, or policies. Risk analysis can be qualitative as well as quantitative; in fact, for some important elements of risk, no valid method of quantification is available.

Analytic techniques are essential for understanding risk, and many useful volumes have been written about them (e.g., Raiffa, 1968; U.S. Nuclear Regulatory Commission, 1975; Lewis et al., 1975; Fischhoff et al., 1981; von Winterfeldt and Edwards, 1986; Crouch and Wilson, 1982; Travis, 1988; Cohrssen and Covello, 1989; Morgan and Henrion, 1990; Rodricks, 1992; Royal Society Study Group, 1992; Suter, 1993; National Research Council, 1994a). For this reason, our treatment of analytic techniques is brief. Chapter 2 has pointed to the need to apply analytic techniques more broadly, so as to expand the aspects of risk that are given careful scientific attention. This chapter discusses the general principles and purposes of analysis and addresses two substantive analytical issues that have received much attention in recent discussions of risk characterization: the appropriate use of analytic techniques to reduce the multidimensionality of risk and the analysis of uncertainty.

Risk analyses usually address such basic questions as: What can go wrong? How likely is it? What are the consequences? How certain is this knowledge? (see Kaplan and Garrick, 1981). Although these questions are most often asked only about risks to human health and safety and the environment, they can in principle be asked about the full range of harms that concern interested and affected parties and public officials. We emphasize that analysis can be used for social questions about risk, including potential economic, social, political, and cultural harms; the design of messages synthesizing the results of analyses; and the design and evaluation of procedures for broadly based deliberation. Analysis therefore may involve more than the tools of the natural sciences and more than quantification.

Methods for quantitative analysis include collection and evaluation of observational or archival data, experimental studies, epidemiological and econometric analysis, survey research, and the development of predictive models of the physical or social phenomena affected by the risk. Methods for qualitative analysis include systematic clinical and field observation, logical inference from historical and comparative studies, inference from legal precedent, ethnographic interviewing, and the application of principles of ethics. Although the bulk of the effort in developing methods of risk analysis has been addressed to quantitative methods, critical aspects of risk frequently require qualitative evaluation.
PURPOSES AND CHALLENGES OF ANALYSIS

Analysis is essential to the risk decision process because it is the best source of reliable, replicable information about hazards and exposures and options for addressing them. Analysis, in quantitative form when appropriate data and methods are available, offers a window on the relative magnitude of hazards and exposures. Relevant analysis, in quantitative or qualitative form, strengthens the knowledge base for deliberations, both about how to deal with hazards and about how to better inform risk decisions. Analysis can clarify issues by identifying the likely results of decisions, the implications of options, and previously unrecognized potential dangers. It can enable all parties to reach agreement on some issues and focus further discussion on areas of disagreement. It can provide a basis for selecting among positions without regard to who favors those positions. And it illuminates the decision options that are available when choices must be made with incomplete information, under uncertainty, and with strong and opposite positions having been declared.
Analysis, like deliberation, needs to be much more extensive in some decision situations than in others. It is almost always possible to consider conducting more detailed analysis so that a risk decision can be better informed. But like additional deliberation, additional analysis requires time and other resources. Judgments about the appropriateness of conducting analysis are very much part of the analytic-deliberative process: the possibility of doing additional useful analysis does not necessarily require that it be carried out.

Without good analysis, deliberative processes can arrive at agreements that are unwise or not feasible. For example, the U.S. government negotiated an agreement in 1989 to clean up the Hanford, Washington, nuclear weapons site by 2018 because "thirty years seemed like a reasonable length of time to complete the cleanup" (Blush and Heitman, 1995:ES-2). But the agreement included milestones, including one for removing tritium from groundwater, that may not be met because no technology yet exists to accomplish the task (U.S. General Accounting Office, 1995). Analysis of the proposed agreements from the standpoint of technical feasibility might have led to a more realistic commitment.

Although analysis is most commonly associated with the task of gathering and interpreting data, it also provides critical input to the other steps leading to risk characterization. It can help define problems. For example, analysis of chemical processes in the atmosphere first defined the problem of stratospheric ozone depletion and predicted that it would occur as a result of anthropogenic releases of chlorofluorocarbons (Rowland and Molina, 1974). It can generate options. One example is the so-called geoengineering approach to responding to the threat of climate change (Committee on Science, Engineering, and Public Policy, 1992).
And it can help summarize information, for example, by finding accurate and effective ways of presenting uncertainty.

Analytic approaches are increasingly being used to summarize knowledge. These include techniques for clear graphic presentation of data that can be of great use for understanding the many factors relevant to a decision. However, good presentation without a correspondingly high quality of substance can mislead decision participants and subvert the role of analysis. Similarly, other new decision support systems, including integrated database management and modeling, provide opportunities for improving the ability to perform, summarize, and communicate analysis. Effective decision support systems can allow analysts to access and evaluate data, in some cases in real time (e.g., for hurricane, flood, or pollutant spill evaluations); test predictive models; evaluate management and decision options; perform uncertainty analyses; and identify data and research needs to improve predictions.

Quantitative models to organize and interpret data are particularly
important to risk characterization. In some fields, such as ecological risk characterization, analyses are sometimes based largely on conceptual models. Models provide a framework that defines the relationships that are valuable to study and specifies how measured quantities are to be interpreted in relation to the real world. Models simplify the world and can therefore provide clear responses to policy questions. But they also present analysts with a tradeoff between the needs for simplicity and for verisimilitude. Incorporating more real-world components and processes can lead to more realistic representations, but complex models can require analysts to make many estimates, and may exceed analysts' ability to understand how the model operates and therefore to obtain meaningful insights. Simpler models provide clearer and possibly better analysis, but may omit or misrepresent some critical processes or components; there are justifications for different approaches to making the tradeoff (see, e.g., Weaver, 1948; Simon, 1982; Beck, 1987; Jeffreys and Berger, 1992; Morgan and Henrion, 1990:Chap. 11). One method seeks a flexible, hierarchical, and step-wise approach to complexity. Initial model formulations are simple, attempting to frame, scope, and bound possible risks, thereby helping to identify whether and how more sophisticated analysis should be pursued. More detailed models and analyses are then developed, allowing for comparisons across levels of complexity and conceptual representation.

Models and other decision support systems also may help meet the challenge of integrating analysis with deliberation by enabling a wide range of interested parties to participate in a more sophisticated and better informed way in the analytic-deliberative process.
When the underlying model and data inputs have been developed in a scientifically sound and an open and inclusive manner that inspires trust and support among participants, they can serve as a basis and focal point for joint investigation and evaluation of alternatives by all the parties to a decision. If the data and models are not understandable by participants, there is a potential for specialists to use them to manipulate the understanding of nonexperts, and for them to be perceived as manipulative.

STANDARDS FOR GOOD ANALYSIS

Good quantitative analysis has several characteristic features:

· It is consistent with state-of-the-art scientific knowledge.
· Any assumptions used are clearly explained, used consistently, and tested for reasonableness.
· The analysis is checked for accuracy (e.g., of calculations).
· Unnecessary assumptions are removed before the final analysis is
reported, after checking to ensure that the removed assumptions do not affect the results.
· Any models used for calculation are well defined and, ideally, validated by testing against experimental results and observational data.
· Data sources are identified in such a way that the data can be obtained by anyone interested in checking them.
· Calculations are presented in such a form that they can be checked by others interested in verifying the results.
· Uncertainties are indicated, including those in data, models, parameters, and calculations.
· Results are discussed clearly, indicating what conclusions they can support.

Although all these standards are reasonable, often they are not met in practice. Analysts may uncritically select assumptions that are unreasonable. They may choose, but not explain, key assumptions that substantially determine the outcome. They may even be unaware of assumptions that are implicit in the models they use. They may adopt models that are easy to use but have inherent weaknesses. They may neglect model validation because of time pressures. They may use data without checking the source and quality. They may not mention uncertainties because they are difficult to estimate, undermine the certitude with which the results can be presented, or even invalidate the analysis. They may neglect balance in an effort to strengthen their conclusions.

Good qualitative analysis has many of the same features as good quantitative analysis, but it faces greater burdens. Because it tends to have less well-established procedures, qualitative analysis tends to be more difficult to validate, more subject to opinion, and more easily discredited by skeptics. However, some of the issues most important to interested and affected parties, such as issues of informed consent and some equity issues, are treatable only by qualitative analysis.
It is a challenge for researchers as well as analysts to develop reasonable standards for qualitative analysis.

For both quantitative and qualitative risk analysis, technical adequacy is a necessary but not sufficient characteristic: analysis must also be relevant to the given risk decision. First, the questions to be addressed must be appropriate for the available analytic techniques and must be ones for which information exists. An analyst often can be most helpful by identifying questions that cannot be answered with available information unless reframed. Second, the analysis should detail the limits of current knowledge, identify which factors have been included and excluded, and summarize the uncertainties associated with its results. Third, analysis should respond to the needs and expectations of the interested and affected parties. Fourth, analysis should address the issues that need to be resolved for the decision. Finally, analysis should be independently reviewed as to its assumptions, calculations, logic, results, and interpretations. This point is particularly important and often neglected. A review of what conclusions can be drawn is critical, since it is the conclusions that form the basis of a risk decision.

ANALYSIS TO REDUCE THE COMPLEXITY OF RISK

A great variety of analytic techniques exist for reducing the complexity of risk. We do not comment on specific ones, but focus instead on how such techniques can be appropriately integrated into the process that results in a risk characterization. We focus especially on the class of techniques, including those of benefit-cost analysis and multiattribute utility analysis, that aims to reduce risk to a single dimension as an aid to priority setting and decision making.

Chapter 2 emphasizes the multidimensional nature of risk and its importance for understanding and coping with risks. This complexity raises several difficult questions for risk analysis, among them the following:

· Which of the many dimensions of a particular risk are relevant to the decision at hand? For which should efforts be made to conduct quantitative analysis? For which should analysis be qualitative? Which dimensions do not need to be analyzed?
· Are there reliable and valid techniques for estimating the various nonhealth outcomes of concern, such as ecological effects, social effects, and effects on future generations?
· Which dimensions of a risk are important, and to whom? How important? How does one know?
· Is it appropriate to aggregate different dimensions of risk into a single overall measure of the magnitude of the risk? Are there reliable and valid methods that can be used for such aggregation?
· If there are no adequate methods for aggregating the dimensions of the risk, what methods should be used to set priorities for action among different hazards and risks?

Risk analysts are aware of these issues and have attempted to develop analytical techniques to address them. There are specialized techniques for analyzing particular dimensions of risk, such as ecological risks (e.g., Harwell et al., 1990; Bartell, Garner, and O'Neill, 1992; Kopp and Smith, 1993; Suter, 1993), certain social and economic effects (e.g., Finsterbusch and Wolf, 1981; Finsterbusch, Llewellyn, and Wolf, 1984; Greenberg and
Hughes, 1993), distributional equity (e.g., Zeckhauser, 1975; Anderson, 1988; Leigh, 1989; Ellis, 1993), and intergenerational equity (e.g., Viscusi and Moore, 1989; Cropper, Aydede, and Portney, 1994). There are also techniques for addressing several dimensions of risk at once to try to simplify the understanding of risk by combining many dimensions into one. Some of these techniques convert deaths, illnesses, and nonhealth outcomes into monetary units for use in cost-benefit analysis (for a review covering several methods used in economics, see Cropper and Oates, 1992). Some aim to arrive at a nonmonetary, single-dimensional summary, expressed, for example, as an overall indicator of health risk or quality of life, as a basis for making comparisons and setting priorities (e.g., Olsen, Melber, and Merwin, 1981). Others, such as the techniques of multiattribute utility analysis, allow for different ways of reducing the dimensionality of risk depending on value priorities specified by the users (e.g., Keeney and Raiffa, 1976; Edwards and Newman, 1982; see Appendix A for one example of an application). And there are techniques for making quantitative comparisons between risks that vary in their uncertainty profiles (e.g., Finkel, 1990).

Such analytic techniques have been developed to illuminate and try to bring rationality to difficult choices between alternatives whose risks (and benefits) differ qualitatively as well as quantitatively. They respond to the need of decision makers for better ways to take the various dimensions of a choice into account and for a rational and defensible basis for making decisions. Government agencies may also use the techniques to routinize their decision processes and to meet legal tests regarding arbitrariness and capriciousness.
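The arithmetic at the core of an additive multiattribute aggregation can be sketched in a few lines. Everything in this sketch is hypothetical: the risk dimensions, the scores (assumed already scaled so that 1 is worst), and the two sets of value weights; real applications involve carefully elicited utility functions and weights. The point of the sketch is that the ranking of options can reverse when the value weights change, which is one way value choices become embedded in such techniques.

```python
# Sketch of additive multiattribute aggregation of risk dimensions.
# All dimension names, scores, and weights are hypothetical.

def aggregate(scores, weights):
    """Weighted additive aggregation; scores and weights keyed by dimension.
    Scores are assumed already scaled to [0, 1], where 1 is worst."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9
    return sum(weights[k] * scores[k] for k in scores)

# Two hypothetical policy options, each scored on three risk dimensions.
option_a = {"health": 0.2, "ecological": 0.8, "equity": 0.3}
option_b = {"health": 0.5, "ecological": 0.2, "equity": 0.4}

# Two different value judgments about the relative weight of each dimension.
health_centered = {"health": 0.7, "ecological": 0.2, "equity": 0.1}
ecology_centered = {"health": 0.2, "ecological": 0.7, "equity": 0.1}

for weights in (health_centered, ecology_centered):
    a, b = aggregate(option_a, weights), aggregate(option_b, weights)
    print(weights, "->", "A riskier" if a > b else "B riskier")
```

Under the health-centered weights, option B emerges as the riskier choice; under the ecology-centered weights, option A does. Nothing in the underlying data changed; only the value judgment did.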
There are two chief strengths of such analytical techniques: they require analysts to pay careful attention to several dimensions of risk and, in the course of deciding on how to aggregate across dimensions, the techniques may elicit careful deliberation about the relationships and tradeoffs among the dimensions. Because of these strengths, such techniques can be valuable aids in understanding risk. They can make tradeoffs clearer and show what decisions would follow from accepting particular value choices.

The techniques also have significant dangers and pitfalls associated with their goal of simplifying an inherently multidimensional problem and with their use not only to inform, but also to help make decisions. Techniques that aim to simplify risk necessarily embed value choices, some of them highly contentious. Among others, they embed a choice either to treat risks to all individuals as equal or to treat some kinds of people, such as children or people who are involuntarily exposed, as more worth protecting than others. They embed choices about whether to discount future risks and, if so, by how much. They embed choices about how to weigh
risks to natural habitats against risks to economic activity, risks to human health against principles of informed consent, and so forth. In addition, they involve making a judgment that all the dimensions of risk that are relevant to the decision at hand have been considered. The values associated with each of these judgments are built into the analysis, but some of the judgments made in any given instance may not be widely accepted in the society. Thus, there are likely to be people who do not accept the judgments and value choices embedded in any particular analysis.

Because of these dangers and pitfalls, we express caution about the use of analytic techniques to simplify risk. These techniques can be helpful, but they should be handled with care and should not be used to dominate decision making. Similar concerns have been expressed by many others (e.g., Lave, 1981; Dietz, 1987, 1994; National Research Council, 1989; Jasanoff, 1993; Fischhoff, 1994, 1995). Recently, a broad group of economists reviewed the use of benefit-cost analysis in environmental, health, and safety regulation and reached similar conclusions (Arrow et al., 1996:3,7,10):

Benefit-cost analysis is neither necessary nor sufficient for designing sensible public policy. If properly done, it can be very helpful to agencies in the decisionmaking process.... There may be factors other than benefits and costs that agencies will want to weigh in decisions, such as equity within and across generations.... Care should be taken to assure that quantitative factors do not dominate important qualitative factors in decisionmaking.

Our caution derives not from the fact that these techniques require their practitioners to exercise judgment; judgment is involved in all techniques that simplify complex realities in the service of decision making.
The danger lies in using judgments that are implicit in analytic techniques but are made without broad-based deliberation, as substitutes for that deliberation. It lies in acting as if values are not embedded in the analyses or as if some particular analytic technique can be assumed in advance to yield the best or most trustworthy understanding of a risk situation. Government agencies may be strongly tempted to use analytic techniques as substitutes for informed and appropriately broad-based deliberation in weighing conflicting values because of their need for routine and legally defensible decision procedures. They should resist this temptation.

Analytic techniques for simplifying risk may aid the analytic-deliberative process or interfere with it. Research does not offer a basis for definitive guidance as to how to make these techniques helpful. Our experience and our reading of the case material suggest that the key is that the deliberative process should help shape the analysis, determining which particular techniques are used and how their results are interpreted. Especially when the decision at hand is highly controversial and when strong values and interests may come into conflict, it is important that the spectrum of scientists, public officials, and interested and affected parties come to agreement in advance on which techniques of simplification, if any, will be used and what they will be used for, and that they have the opportunity to examine the way the techniques are being used, to question the analysts, and to demand that the analysis be varied in ways that they believe will illuminate their deliberations. In short, there should be appropriately broad-based deliberation and iteration concerning the use of these techniques, just as with other risk analytic techniques. Without such feedbacks, it is more likely that the interests that appear to lose on the basis of the analysis will criticize the analytic technique as biased, thus defeating the hope that analysis will yield rational, defensible, and legitimate decisions.

Some people may object that nonexperts are incapable of making competent decisions about complex analytical techniques that they do not understand. But the fact that they may not understand the techniques is precisely the reason that the analysis must be responsive to the information needs of the interested and affected parties, as determined by the deliberative process. So long as decision participants understand which value assumptions underlie an analysis, the analysis can serve the decision. To the extent that the value assumptions become opaque, as can occur when analysis uses unnecessarily sophisticated mathematical techniques or when value assumptions are hidden in the details of a model, the analysis begins to take over the decision. Participants who do not know how value choices are affecting the analytic outputs are likely to become suspicious, especially if there is a history of distrust among the parties.
Such a situation may cause more difficulties than it avoids.

We conclude that analytic techniques for simplifying risk should be treated like other analytic techniques used to inform risk decisions. That is, decisions about using them, refining them, and interpreting their results should be made as part of an appropriately broad-based analytic-deliberative process involving not only analytic experts, but also the public officials and interested and affected parties whose decisions the techniques are intended to inform.

These conclusions have implications for a collection of recent legislative proposals and agency guidances that call for using analytic techniques of benefit-cost analysis or risk analysis as the sole or primary basis for making "comparative risk" judgments or for "risk-based decision making" (a recent prominent example is in U.S. Environmental Protection Agency, 1993). These proposals rely on analytic techniques that reduce risk to a single dimension, such as dollars or statistically expected cancer cases, as a way to make public policy decisions. They rest on two presumptions: that an available analytic technique can make such a reduction in a way that is scientifically defensible and can achieve wide social acceptance, and that decisions made by using a one-dimensional scaling of risk will be socially acceptable. Like much else in risk characterization, the appropriateness of these presumptions is situation-specific. There may be situations in which the presumptions are appropriate, but they are not so in the general case. In particular, for the reasons given above, we do not believe they are appropriate for many of the highly controversial choices for which these proposals are being promoted.

We understand the need for rational, defensible procedures for making risk decisions, but we warn against adopting standard procedures that make the values and interests at stake less transparent to decision participants. Adopting such procedures may simply shift the ground of controversy from the values at stake to the arcane details of benefit-cost analysis or some other complex analytic technique. Such a shift would not, in our judgment, improve understanding of risk. At worst, it might further erode trust in already suspect government agencies.

We believe that techniques for simplifying risk may have great value for improving risk characterization and decision making if they are used carefully, in the context of an analytic-deliberative process. We warn strongly, however, against adopting them as a routine basis for decision making in the absence of evidence that they can improve present procedures. It would be worthwhile to experiment with the use of these techniques in particular areas of risk decision making where they seem likely to be helpful and to carefully evaluate the effects of their use on understanding and on the decision-making process.
It would also be worthwhile to experiment further with deliberative techniques for priority setting, in which an appropriately broad-based process considers information from analyses of the various dimensions of a risk and information from the application of analytic techniques that seek to simplify risk.

THE ANALYSIS OF UNCERTAINTY

Much attention has recently been given to quantitative, analytic procedures for describing uncertainty in risk characterizations (e.g., Finkel, 1990; Morgan and Henrion, 1990; National Research Council, 1994a; Browner, 1995). We discuss this topic in some detail because it illustrates the strengths and limitations of analysis and the need to combine it with deliberation. The uncertainty of risk estimates and the interpretation of uncertainty have become a frequent focus of controversy.

Uncertainty commonly surrounds the likelihood, magnitude, distribution, and implications of risks. Uncertainties may be due to random variations and chance outcomes in the physical world, sometimes referred to as aleatory uncertainty, and to lack of knowledge about the world, referred to as epistemic uncertainty. Sometimes, scientists may not know which of two models of a risk-generating process is applicable. Such situations are sometimes referred to as presenting indeterminacy. When uncertainty is present but unrecognized, it is simply referred to as ignorance. This last case is the most worrisome, as it can result in mischaracterization of risk that systematically underestimates uncertainty, with potentially serious implications for the decision process.

When uncertainty is recognizable and quantifiable, the language of probability can be used to describe it. Objective or frequency-based probability measures can describe aleatory uncertainties associated with randomness, and subjective probability measures (based on expert opinion) can describe epistemic uncertainties associated with the lack of knowledge. Sometimes, however, uncertainty is recognized but cannot be measured, quantified, or expressed in statistical terms. For instance, the economic impact of global climate change may be greatly affected by the future forms and structures of economic organization in different parts of the world, yet uncertainty about them 100, 50, or even 20 years from now is great, and extremely challenging to quantify. Similar arguments hold for many assessments of risks far into the future, such as those for radioactive waste repositories where risks are computed over design periods of 1,000 or 10,000 years. The uncertainty, especially regarding human intrusion into a repository over a 10,000-year time span, is such that "it is not possible to make scientifically supportable predictions of the probability" of such an intrusion (National Research Council, 1995:11).
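The interplay of these two kinds of uncertainty can be made concrete with a small nested ("two-level") Monte Carlo sketch, a standard device in the uncertainty-analysis literature. All numbers here are hypothetical: the outer loop samples an uncertain annual event probability (epistemic), and the inner loop simulates, for each sampled probability, whether the event actually occurs over a 30-year horizon (aleatory).

```python
import random

# Nested Monte Carlo separating epistemic from aleatory uncertainty.
# The probability range and the 30-year horizon are hypothetical.
random.seed(0)
HORIZON = 30             # years considered
OUTER, INNER = 200, 500  # epistemic and aleatory sample sizes

results = []
for _ in range(OUTER):
    # Epistemic: the annual probability itself is uncertain; here that
    # uncertainty is represented as uniform between 0.005 and 0.02.
    p = random.uniform(0.005, 0.02)
    # Aleatory: given p, occurrence in any particular year is random.
    hits = sum(
        any(random.random() < p for _ in range(HORIZON))
        for _ in range(INNER)
    )
    results.append(hits / INNER)  # chance of at least one event, given p

results.sort()
print(f"chance of >=1 event in {HORIZON} years, "
      f"5th-95th pct: {results[10]:.2f} to {results[-10]:.2f}")
```

Even a sketch this simple reports a range rather than a single number: the spread across the outer loop reflects lack of knowledge, while the inner loop captures randomness that would remain even if the probability were known exactly.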
Three hypothetical descriptions of risk can illustrate the prevalence and importance of the different types of uncertainty in risk characterization. Consider these three risks: a 1-in-100 chance of a river overflowing its levee in a given year with a given impact on life and property; a 1-in-10,000 chance of a volcano erupting near the proposed waste repository at Yucca Mountain in the next 10,000 years, resulting in the release of a given quantity of radioactive material; and a 1-in-1,000,000 chance of an individual contracting a fatal cancer over his or her lifetime due to a chemical exposure. Even if each of these probabilities of occurrence and impact were known with certainty, the precise realizations of the risks (e.g., when, where, to whom, and how severe the actual harm) would still be random and thus inherently uncertain. An understanding of this inherent, aleatory uncertainty is fundamental to risk characterization.

Furthermore, in each of these (and most other) cases, the probabilities of occurrence and impact are not known with certainty; they are usually highly uncertain. In the case of the river levee, the probability of occurrence may have been estimated on the basis of recent or historical
streamflow records, but those records may be of limited duration or completeness and thus may not accurately represent the longer historical record. This possibility creates epistemic uncertainty. In addition, the underlying statistical model for floods could be suspect, especially if the statistical properties of water flow in the river are nonstationary, for example, because of land-use changes in the river basin or long-term climate change. Assessment of the probability of a volcanic eruption at Yucca Mountain depends both on information about nearby volcanic eruptions over the past several million years and on assumptions about the geological processes that create such eruptions (Nuclear Waste Technical Review Board, 1995). These assessments and assumptions are similarly subject to epistemic uncertainty.

In the case of the 1-in-1,000,000 lifetime cancer risk associated with a chemical exposure, such an estimate is often based primarily on indirect evidence and scientific models for exposure, dose, and toxicity. Such models are subject to uncertainty and errors in both their conceptual formulation and the values they estimate for a range of variables affecting how the chemical is transported and transformed in the environment and how the proportion of it that reaches human beings operates in the body. Since the estimated probabilities of cancer are usually well below prevailing incidence rates, the risk estimates are generally not subject to validation or refinement based on epidemiological studies. Thus, barring marked advances in understanding of chemical fate and transport in the environment and of carcinogenesis in humans, full resolution of these uncertainties is unlikely in the near future. Of course, research into individual components of the exposure-dose-toxicity process can help resolve portions of this uncertainty.
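The kind of model uncertainty just described for the 1-in-1,000,000 cancer estimate is often explored by propagating uncertain factors through the exposure-dose-toxicity chain. The sketch below is a deliberately minimal illustration, not an actual assessment: it assumes a purely multiplicative risk model and represents the uncertainty in each factor by a lognormal distribution with hypothetical medians and spreads.

```python
import math
import random

# Propagating epistemic uncertainty through a hypothetical
# multiplicative risk model: risk = lifetime dose x cancer potency.
random.seed(2)

def sample_risk():
    # Medians (1e-4 mg/kg-day dose; 1e-2 potency per mg/kg-day) and
    # spreads (sigma of the underlying normal) are hypothetical.
    dose = random.lognormvariate(math.log(1e-4), 0.8)
    potency = random.lognormvariate(math.log(1e-2), 1.2)
    return dose * potency  # lifetime excess cancer risk

risks = sorted(sample_risk() for _ in range(10_000))
print(f"median estimate : {risks[5_000]:.1e}")
print(f"95th percentile : {risks[9_500]:.1e}")
```

With these assumed spreads, a point estimate near 1-in-1,000,000 sits inside a distribution spanning roughly an order of magnitude in either direction; presenting that distribution, rather than the point alone, is what the uncertainty analyses discussed in this section aim to do.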
Significant advances have been made in recent years in the development of analytical methods for evaluating, characterizing, and presenting uncertainty and for analyzing its components, and well-documented guidance for conducting an uncertainty analysis is available (e.g., Raiffa, 1968; Cox and Baybutt, 1981; Kahneman, Slovic, and Tversky, 1982; Howard and Matheson, 1984; Beck, 1987; Iman and Helton, 1988; Clemen, 1990; Finkel, 1990; Morgan and Henrion, 1990; National Research Council, 1994a). We do not repeat this technical guidance or recommend specific approaches for uncertainty analysis. Rather, we focus on the role of uncertainty in risk characterization and the role that uncertainty analysis can play as part of an effective iterative process for assessing, deliberating, and understanding risks. In describing this role, we note the critical importance of social, cultural, and institutional factors in determining how uncertainties are considered, addressed, or ignored in the tasks that support risk characterization.
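One standard technique treated in the guidance cited above is Monte Carlo propagation of parameter uncertainty through a risk model. The sketch below is a minimal, hypothetical illustration: the linear risk model and both lognormal distributions are invented stand-ins for uncertain exposure and dose-response inputs, not data from any assessment.

```python
import random

random.seed(1)

def sample_risk():
    """One draw of lifetime risk = dose x potency under assumed uncertainty."""
    dose = random.lognormvariate(-2.0, 1.0)     # mg/kg-day (assumed)
    potency = random.lognormvariate(-9.0, 1.5)  # risk per mg/kg-day (assumed)
    return dose * potency

draws = sorted(sample_risk() for _ in range(20_000))
median = draws[len(draws) // 2]
p95 = draws[int(0.95 * len(draws))]

# A single point estimate would hide the order-of-magnitude spread.
print(f"median lifetime risk: {median:.2e}")
print(f"95th percentile:      {p95:.2e}")
print(f"95th/median ratio:    {p95 / median:.0f}x")
```

Even this toy model shows why presenting a distribution rather than a point estimate matters: the 95th percentile here sits more than an order of magnitude above the median.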
Uncertainties that Matter

Perhaps the most important need is to identify and focus on uncertainties that matter to understanding risk situations and making decisions about them. To accomplish this task, the general approach of decision analysis is helpful. Analysts identify the full set of options for addressing the risk, including options that may extend beyond an initial or limited set of technical fixes or regulatory responses. They then assess the potential impact of each option on the risk problem, using the appropriate natural and social science studies and models. The important uncertainties are those that create important differences in the assessed outcomes and may therefore affect preferences among the available decision options.

Because risk characterization requires providing information about the full set of factors of concern to the interested parties, it must address uncertainty not only about the physical and biological impacts of the risk, but also about the social and political factors inherent to the risk. If social or equity factors matter significantly to the decision, then they deserve at least as careful attention in an uncertainty analysis as do the technical factors, such as chemical transport properties and dose-response parameters.

Another important source of uncertainty lies in the choice of ways to estimate risks and make decisions. The choice of a deliberative process may affect decisions and the ultimate risks in an indeterminate way. It is difficult to predict public reactions to the release of data that are alarming, but of questionable validity: Will it increase or decrease self-protective action? Will it complicate problem resolution or make it easier? Such questions also reveal indeterminacy. When the decision process itself adds uncertainty to risk estimation, efforts to understand and study these process factors and the uncertainties they bring are important to advancing risk characterization.
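The screening idea above, that an uncertainty "matters" only if it can change which option is preferred, can be sketched with a toy levee decision. All dollar figures, the risk-reduction factor, and the probability range are invented for illustration.

```python
# Hypothetical two-option decision: reinforce a levee or keep the status quo.
FLOOD_DAMAGE = 50_000_000  # cost if the levee fails ($, assumed)
UPGRADE_COST = 2_000_000   # cost of reinforcing the levee ($, assumed)
RISK_REDUCTION = 0.9       # assumed fractional cut in failure probability

def expected_cost(option, p_fail):
    if option == "status quo":
        return p_fail * FLOOD_DAMAGE
    return UPGRADE_COST + p_fail * (1 - RISK_REDUCTION) * FLOOD_DAMAGE

def preferred(p_fail):
    """Option with the lower expected cost at a given failure probability."""
    return min(("status quo", "upgrade"), key=lambda o: expected_cost(o, p_fail))

# Sweep the uncertain failure probability over its plausible range.
# If the preferred option flips within the range, the uncertainty matters.
for p in (0.005, 0.01, 0.05, 0.10):
    print(f"p_fail={p:.3f}: prefer {preferred(p)}")
```

With these numbers the choice flips between roughly p = 0.01 and p = 0.05, so the uncertainty in the failure probability is one that matters; had one option dominated across the whole range, resolving that uncertainty would not change the decision.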
Purposes

The analysis of uncertainty should elucidate the current state of knowledge and prospects for improving it. As noted by North (1995:278), "Perhaps the most important aspect is not the probability number, but the evidence and reasoning it summarizes." As part of an open, iterative, and broadly based analytic-deliberative process, uncertainty analysis can inform all the parties of what is known, what is not known, and the weight of evidence for what is only partially understood. Describing the uncertainty in the current state of information does not in and of itself represent or imply an advancement in that state; it does, however, help clarify what
can be known and perhaps help identify directions for future research and data collection efforts.

As part of the analysis of uncertainty, explicit efforts should be made to identify the activities and resource allocations most likely to yield significant reductions in the uncertainties that matter. Again, these uncertainties may involve the technical-physical components of the risk problem, the social-legal-ethical dimensions, or elements of the evolving processes of risk analysis and decision making. New information will not always reduce uncertainty; it may sometimes provide the knowledge and insight necessary to reveal that the problem is more complex and uncertain than previously recognized. But this too is enlightening, providing an improved understanding of the state of the knowledge pertinent to the risk problem. Such new information should be encouraged, even if it threatens to make the risk problem less tractable. The goal is to provide a comprehensive summary of available and relevant knowledge as the basis for a decision.

Uncertainty analysis also involves assessing the potential for uncertainty to be reduced, which may have important implications for the choice among decision alternatives. Formal value-of-information analysis provides a set of useful techniques for assessing these implications. These techniques involve estimating how risks would change with new information, such as additional experimental results, before that information exists. For example, the artificial sweetener saccharin was considered to pose a cancer risk to humans on the basis of observations of bladder cancer in rats.
Additional research during the past 20 years has yielded results suggesting that the physiological conditions under which exposure to sodium saccharin causes bladder tumors in rats may not apply to humans (Cohen and Ellwein, 1995a,b), thus calling into question previous risk estimates for humans. Suppose similar studies could be carried out on other chemicals regarded as carcinogens on the basis of animal tests. Even though definitive results are difficult to obtain, such research can be worthwhile when regulatory costs are high. The potential improvement in regulatory decisions, in terms of costs avoided and lives saved, from the study results might have very high value compared with the cost of such studies (North et al., 1994; North, 1995).

Value-of-information methods address whether potential reductions in uncertainty would make a difference in the decision; they suggest priorities among reducible uncertainties on the basis of how much difference the expected reduction might make. They have been useful in helping to identify the value of research and data collection for a number of environmental and related risk issues (e.g., Raiffa, 1968; Howard, Matheson, and North, 1972; Finkel and Evans, 1987; Reichard and Evans, 1989; Clemen,
ANALYSIS 111 1990; Freeze et al. 1990; lames and Freeze, 1993; Taylor et al. 1993; lames and Gorelick, 1994; Dakins et al.1994; North, 1995~. Value-of-information analysis can be of considerable use in the analytic-deliberative process. We emphasize, however, that determining whether a reduced uncertainty would make a difference in a decision often requires deliberation as well as analysis. Different participants in the decision process may not agree on how to interpret new information or on the appropriate criteria for making or revising risk decisions. Limits Considerable research highlights the difficulties that experts and non- experts alike have in making scientific judgments related to risk and prob- ability estimation (e.g., Kahneman and Tversky, 1972, 1973; Lichtenstein and Fischhoff, 1977; Kahneman et al. 1982; Freudenburg, 1988; Morgan and Henrion, 1990; Clarke, 1993; Tversky and Kochler,1994~. These diffi- culties are minimized when the judgment is easy, when there is a clear criterion of accurate judgment, and when those making the judgment have frequent feedback that gives them empirical knowledge about how accurate their judgments are (Fischhoff, 1989~. Some risk-related judg- ments have these qualities judgments about the frequency of highway accident fatalities may be an example but many of the most controver- sial risk judgments do not. Indeed, the biases, imprecision, and overcon- fidence often associated with expert evaluations of risk provide much of the impetus for conducting an uncertainty analysis. If point estimates of risk are likely to contain significant errors, then explicit evaluation of uncertainty is needed to ensure consideration of the possible sources, magnitude, and implications of these errors. However, just as scientific judgments concerning point estimates are often tenuous and susceptible to overconfidence, so too are characterizations of the uncertainty in these estimates. 
Formal uncertainty analysis should not be conducted or presented as a final, full, and all-enlightening explication of the risk problem. This is especially true when expertise in risk and uncertainty analysis is unevenly distributed among different parties in an adversarial setting, and formal analysis serves as a tool (whether intentionally or not) to limit participation in and control of the debate. Rather, uncertainty analysis should be recognized as an often helpful technique that, for some problems, can provide insights in support of risk characterization.

A number of findings about the psychology of judgment under uncertainty have implications for the ability of experts both to develop risk estimates and to describe their associated uncertainty (Kahneman, Slovic, and Tversky, 1982). Important among these are the following:
· Availability: People (including experts) tend to assign greater probability to events to which they are frequently exposed, e.g., in the news media, scientific literature, or discussion among friends or colleagues, and that are thus easy for them to imagine or recall through mental examples. The "availability" of an event to memory or imagination may not be correlated with the actual probability of the event occurring (Lichtenstein et al., 1978). Indeed, mention in the news media or the scientific literature may occur because the event is rare and unusual. Availability may be one reason that people greatly overestimate the frequency of homicide relative to suicide or the risk of death from accidents relative to the risk of death from diseases (Lichtenstein et al., 1978).

· Anchoring and adjustment: People's estimates of uncertain values are influenced by an initial reference value, which may be based on only speculative or illustrative information presented as part of an initial problem formulation, from which they make adjustments on the basis of additional information. Moreover, the adjustment is often insufficient, so that the overall probability assessment is unduly weighted toward the initial anchor value. For example, strong anchoring effects were obtained by Lichtenstein et al. (1978), who had two groups of respondents estimate the frequency of death in the United States from various causes. One group was given the death rate from accidental electrocution (1,000 per year) as a standard of comparison. The second group was given the death rate from motor-vehicle accidents (50,000 per year) as a standard. The second group gave uniformly higher estimates than the first group for all other hazards.

· Representativeness: People judge an event by reference to others that resemble it, even if the resemblance carries little or no relevant information.
Information that is available or provided on the occurrence of one supposedly representative event can cause analysts to ignore or undervalue large amounts of relevant information. Thus, representativeness has been identified as the cause of many shortcomings or biases in "statistical thinking," such as failure to appreciate the difference in reliability between small and large samples of data and failure to make one's predictions of future events sufficiently dependent on the overall population mean rather than a few events presumed to be typical.

· Belief in the "law of small numbers" and disqualification: Many scientists believe small samples drawn from a population to be more representative of the population than is justified on the basis of standard statistical sampling theory. Accordingly, a little evidence can unduly influence the probability assessment. However, people also tend to "disqualify," that is, discount or neglect, information that contradicts strongly held convictions.

· Overconfidence: As a result of these heuristics, many experts overestimate the probability that their answers to technical questions are correct, including probability estimates for risk problems, especially when the questions or problems are difficult and complex.

While these cognitive tendencies are now widely recognized, and techniques have been developed to attempt to address them as part of expert evaluation and elicitation methods (see Spetzler and Stael von Holstein, 1975; Wallsten and Budescu, 1983; Morgan and Henrion, 1990), they provide an important caution. (For statistical models that can be used to account for errors or misrepresentation in probability elicitation and assessment, see Chaloner, 1996; Dickey, 1980; Genest and Schervish, 1985; Kadane et al., 1980; Wolpert, 1989.) A healthy dose of skepticism and humility is appropriate in interpreting any summary of information on risk and uncertainty.

When conducting uncertainty analysis, other cautions and reality checks are in order. First, results of analysis can be very sensitive to assigned probabilities and uncertainties, especially when the estimates involve rare, low-probability events. Freudenburg (1988) demonstrates this for the case of a hypothetical low-probability event that usually presents a risk of 1 in 1 million.¹ The ability to deal with ignorance and surprise, that is, unforeseen or unforeseeable circumstances, is inherently limited in an uncertainty analysis. Unfortunately, experience shows that it is often these unknown circumstances and surprise events that shake risk analyses and topple expectations, rather than the factors (important though they might be) that have been recognized and incorporated in formal analyses. Examples include the surprising combinations of improbable events that led to the 1979 accident at the Three Mile Island nuclear power plant and an earlier accident at the nuclear power plant at Browns Ferry, Alabama.
Uncertainty analysis should also avoid the temptation to view the evaluation and simulation results that some techniques of uncertainty analysis generate as the equivalent of field and laboratory studies and data. As noted by Morgan et al. (1984:214-215):

. . . analytical techniques [for uncertainty analysis] . . . are not a substitute for scientific research. They do, however, produce very technical-looking results and it is usually faster and cheaper to go ask a group of experts what they think than it is to sponsor the research that is needed to learn the true answers. In agencies pressed for quick decisions, operating on short time constants, and staffed by many people who do not have technical backgrounds, there is a risk that these techniques will inadvertently become a substitute for science.

Although careful, well-focused, and appropriately modest applications of uncertainty analysis should be helpful for many problems, there are situations in which there is simply no need for formal methods of this type. This may be the case in simple, repetitive, and highly institutionalized settings where the administrative need for consistency and standardized, "bright-line" decision rules may outweigh the need to characterize the uncertainty of the consequences of a particular decision (though an occasional review to assess the ongoing performance and uncertainty of the overall decision-making process is still in order). Also, formal uncertainty analysis may not help if the uncertainty in the fundamental understanding of the basic processes that drive the risk, or of whether the risk is even present at all, is so large that a quantitative estimate can only lead to obfuscation. An example is the possibility that global emissions of greenhouse gases could lead to a drastic change of state, such as shutting off the North Atlantic Ocean circulation pattern (the Gulf Stream), leading to a drastically colder climate in Northern Europe. Both the probability of occurrence of such an event and the range of possible consequences should it occur are extremely difficult to characterize. In such cases, identification of important issues, and perhaps some selected analysis of scenarios (without assigning probabilities to these scenarios), is the best that can be accomplished.

¹Freudenburg's calculation is as follows: Assume that 10 percent of the time the event has a probability of 1 in 1 billion, that 10 percent of the time the probability is 1 in 1 thousand, and that the remaining 80 percent of the time it is 1 in 1 million. The overall risk is (0.1 x 10⁻⁹ + 0.8 x 10⁻⁶ + 0.1 x 10⁻³), which equals .0001008001, or slightly more than 1 in 10 thousand, a much larger number than the most likely value.
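Freudenburg's footnote calculation above can be reproduced directly; the weights and probabilities come from the text.

```python
# Freudenburg (1988) calculation from the footnote above: the expected
# probability of an event whose probability is itself uncertain.
scenarios = [
    (0.1, 1e-9),  # 10% chance the true probability is 1 in 1 billion
    (0.8, 1e-6),  # 80% chance it is 1 in 1 million (the most likely value)
    (0.1, 1e-3),  # 10% chance it is 1 in 1 thousand
]
overall = sum(weight * p for weight, p in scenarios)
print(overall)           # ~0.0001008, just over 1 in 10,000
print(overall / 1e-6)    # ~100 times the most likely value
```

The point of the example is that the high-probability tail dominates the expectation: the overall risk is roughly a hundred times the value that is correct 80 percent of the time.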
Social Context

Various social, cultural, and institutional factors affect how people recognize and use information on uncertainty. Understanding depends not only on the inherent features of a risk, or even the experience and expertise of the analyst attempting to characterize it, but also on the social context of the risk analysis and the associated deliberative process (e.g., Brown, 1989; Jasanoff, 1987a, 1987b, 1991; MacKenzie, 1990; Michael, 1992; Shapin, 1994; Thompson, Ellis, and Wildavsky, 1990; Wynne, 1980, 1987, 1995). These factors affect the way information about uncertainty is created and used in evaluating risks and the degree to which analysts acknowledge uncertainty.

Cultural and social factors affect whether or not uncertainty is openly recognized in risk characterizations. In many legal settings, for instance, the proceedings are expected to produce a sharp boundary between truth
and belief through "fact-finding." Scientific and social institutions that must maintain trust and authority as the interpreters of scientific truth, and that must support a clear legal finding, may purposefully ignore or push aside information on uncertainty. Suppression of uncertainty can also operate through the group processes of consensus building, for example, during the deliberations of scientific advisory panels and expert bodies, even when there is no legal mandate for a single outcome or recommendation.

When the stakes in a decision are high, accuracy or inaccuracy in science may be accentuated by participants for their own purposes. For example, in the early 1980s a debate over acceptable levels of polychlorinated biphenyls in the ground around leaking transformers (for example, on electric power poles) highlighted existing uncertainty about the health risks. Environmentalists argued that cleanup should be to the level of detection (about 5 parts per million [ppm] at that time), while several industry groups argued that a 50 ppm cleanup level should be considered safe. Because of the uncertainty in available health studies, both positions received scientific support, but neither could prevail. Eventually both sides concluded that achieving an acceptable cleanup policy would yield more benefits than an unending argument about health effects. They reached a compromise that included a 25 ppm cleanup standard and jointly persuaded the U.S. Environmental Protection Agency and Congress to implement their compromise; the uncertainty ceased to matter to the parties (Bannerman, 1987; Warren, 1987).
The perception of uncertainty tends to vary with closeness to the problem: those very close to or far from a problem often acknowledge the greatest uncertainty, while those with some partial knowledge tend to consider their understanding to be more definitive, suggesting a "trough of uncertainty" (MacKenzie, 1990) or, perhaps, that a little knowledge can be dangerous to understanding. Perceptions of uncertainty can also be greatly influenced by the cultural and social context of the perceivers' experiences and their roles in relation to the risk problem. Judgments of the uncertainty of scientific information often reflect the trust and reliability placed in the institutions that have generated the information. For most people, an investment of time and energy to understand scientific information and its uncertainty is only worthwhile when they may be affected or when such information is relevant to decisions over which they have the power and agency to act and make a difference. In some cases, interested parties may even seek technical ignorance when such behavior is socially beneficial or appropriate, such as when knowledge can impart responsibility or liability for a risk or when pursuit of such knowledge can signal mistrust in actors or social arrangements upon which they depend for support or protection. Scientific theory and approaches assume that more information and less uncertainty is always preferred, but this may not be the case in some cultural and social situations.

Summary and Implications

Uncertainty is a critical dimension in the characterization of risk. Participants in decisions need to consider not only its magnitude, but also its sources and character: whether it is due to inherent randomness or to lack of knowledge, and whether it is recognized and quantifiable, recognized but indeterminate, or perhaps unrecognized. Uncertainty is best examined in the context of a decision, focusing on the uncertainties that matter most to the ongoing deliberation and decision processes. These uncertainties may involve the physical and technical aspects of the risk, the social and economic dimensions of the risk, or political or behavioral factors that influence the evolution of the risk and associated uncertainty. By focusing on these factors in a decision-analytic context, uncertainty analysis can enlighten decision participants, help counter the cognitive biases that affect expert judgment on risk, and help set priorities for further information gathering efforts.

Uncertainty analysis should be conducted with care and in conjunction with deliberation. Although uncertainty analysis can be a useful tool for more informative characterization of risk, it has limitations. It cannot address the truly unexpected: the risks that were never considered in a risk analysis but that arise with unknown frequency in real events. It can at times be misleading, and in certain cases, may have no appropriate role at all. Moreover, cognitive biases can affect judgments about uncertainty as well as about risk.
Those who conduct and use uncertainty analysis should remain aware that both the analysis and people's interpretations of it can be strongly affected by the social, cultural, and institutional context of the decision setting and by the formal or perceived roles of the various participants, which can exert pressure toward perceiving more or less uncertainty, or different kinds of uncertainty, than would otherwise be recognized.

CONCLUSIONS

Analytic techniques can be used for several aspects of risk characterization. The most familiar of these uses is to estimate the likelihood of particular adverse outcomes. In addition, analytic techniques are often used to reduce inherently multidimensional risks to a single dimension so as to facilitate decision making, and to characterize the uncertainties surrounding estimates of adverse outcomes. Much insight can be gained from applying
analytical techniques to these purposes, and there are strong practical reasons for decision makers to seek standardized, replicable, and defensible analytic procedures. However, there are important pitfalls associated with overreliance on analysis. Analysis conducted to simplify the multidimensionality of risk or to make sense of uncertainty can be misleading or inappropriate, can create more confusion than it removes, and can even exacerbate the conflicts it may have been undertaken to reduce. Because of the power of formal analytical techniques to shape understanding, decisions about using them for these purposes and about interpreting their results should not be left to analysts alone, but should be made as part of an appropriately broad-based analytic-deliberative process. Used in this manner, analysis can advance both scientific understanding and the goals of effective risk decision making.