The National Academies | 500 Fifth St. N.W. | Washington, D.C. 20001
Copyright © National Academy of Sciences. All rights reserved.




Dealing With Uncertainty About Risk in Risk Management

CHRIS G. WHIPPLE

Science tells us what we can know, but what we can know is little, and if we forget how much we cannot know we become insensitive to many things of great importance. Theology, on the other hand, induces a dogmatic belief that we have knowledge where in fact we have ignorance, and by doing so generates a kind of impertinent insolence towards the universe. Uncertainty, in the presence of vivid hopes and fears, is painful, but must be endured if we wish to live without the support of comforting fairy tales.

Bertrand Russell, A History of Western Philosophy, 1945

Until the last 15 years or so, efforts to improve health and safety were directed primarily at risks of relatively certain magnitude. The social harm from accidents and diseases such as polio was all too easy to measure. Risks were managed by learning from mistakes; this is still an essential part of good risk management. But trial-and-error management is ill suited for many risks of current concern, for example, risks with long latency periods or catastrophic potential. We now seek better ways to manage risks prospectively, methods that avoid the human costs of a trial-and-error approach.

Where experience is not a guide, risk management is more difficult. We have been struggling with several such cases for the past decade: nuclear power, chemical carcinogens, and more recently, biotechnologies. One approach to uncertainty about such risks has been to try to reduce it through research. Substantial resources have been expended to understand these risks, and risk management has been improved by such studies. Despite this

effort, however, and particularly when direct human evidence is not available, large uncertainties about risk remain. Research may eventually resolve many questions that now trouble us, and in some cases postponing a decision to await research results may avoid uncertainty. But many risks are likely to remain uncertain indefinitely.

Estimating the magnitude of risks that cannot be measured directly frequently requires the use of assumptions that cannot be tested empirically. Not only are such risks uncertain, but often the uncertainty cannot be characterized by a probability distribution. Although such distributions are useful for describing some uncertainties, they are often not feasible in risk assessment. Sometimes there is no reasonable method even to assign weights to the plausibility of alternative assumptions. Methods have been developed to elicit subjective descriptions of uncertainty; these, however, raise the question of whose estimates to accept.

Recognition of these uncertainties has at times led to the view that risk assessment is a dubious enterprise, too uncertain to be relied upon for risk-management decisions. But low-level risks are inherently uncertain regardless of the approach taken to their study. This uncertainty is simply more apparent under some approaches to social risk management than others. Given the discomfort that uncertainty causes, it may be tempting to overstate what risk assessment can tell us. The limits to science are imprecise, as are the distinctions between that which is known and that which can reasonably be assumed. For these reasons, a technically accurate description of uncertainties is now considered an essential part of risk assessment.

Risk assessors use assumptions to bridge gaps in knowledge. Often there are several alternative assumptions, each scientifically plausible and with no reasonable basis for choosing among them.
For example, an assessor must decide which dose-response model to use in extrapolating from high- to low-dose risk. In such situations, recent practice endorses conservatism in risk estimation as protective of public health. The argument presented in this paper is that conservatism, defined as the systematic selection of assumptions leading to estimates of high risk, is not protective of human health in most situations.

One way to deal with uncertainty is to categorize the smallest risks (often the most uncertain risks) as de minimis risks. De minimis risks are those judged to be too small to be of social concern, or too small to justify the use of risk-management resources for control (see Weinberg, in this volume). Properly applied, a de minimis risk concept can help set priorities for bringing regulatory attention to risk in a socially beneficial way. Although the de minimis approach ignores risks below some low limit, it too is in the long-standing tradition of risk-management methods that are intended to err on the side of safety in matters of uncertainty.

RISK VERSUS UNCERTAINTY

Risk, as it is generally understood by health and safety risk analysts, measures the probability and severity of loss or injury. Uncertainty, on the other hand, refers to a lack of definite knowledge, a lack of sureness; doubt is its closest synonym. At times, these terms are confused.

Risk and uncertainty are related in that both preclude knowledge of future states and both may be described by probabilities. It is important, however, to distinguish whether a lack of predictability arises from insufficient knowledge (uncertainty) or from a well-understood probabilistic process (risk). The risk associated with a bet on a fair coin toss is known precisely; the risk has no uncertainty, although the outcome of the toss is uncertain. Conversely, the outcome of the administration of an experimental drug is also uncertain, but in such a case the inability to predict may be due more to a lack of information than to what also may be an inherently probabilistic response to the drug.

The predictability of the result of a large number of trials helps to clarify the distinction between risk and uncertainty. For a fair coin toss, we can predict that about half of the results will be heads. For an experimental drug given to a large population, the number of people adversely affected may not be predictable except within a broad range.

In the case of an experimental drug, the estimated probability that an average individual will experience an adverse effect (or equivalently, the number of people in an exposed population experiencing an adverse effect) might be described by use of a probability distribution. A probability distribution applied to a probability is called a second-story probability. Such a distribution describes the likelihood that the probability of an adverse effect is a particular value.
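A second-story probability can be sketched numerically. In the fragment below, every number is hypothetical: a handful of candidate values for the true per-person risk of a drug, each with a subjective weight expressing its plausibility.

```python
# Illustrative sketch of a "second-story" probability: a probability
# distribution over the unknown probability of an adverse drug effect.
# All numbers are hypothetical.

# Candidate values for the true per-person risk, with subjective weights
# expressing how plausible each value is (weights sum to 1).
candidate_risks = [0.001, 0.01, 0.05]
weights = [0.5, 0.3, 0.2]

# The second-story description: each (risk, weight) pair says
# "with probability `weight`, the true risk is `risk`".
second_story = list(zip(candidate_risks, weights))

# The two levels can be collapsed into a single number, the weighted
# average: the probability of a probability is a probability.
collapsed = sum(r * w for r, w in second_story)
print(round(collapsed, 4))  # 0.0135
```

The pair representation carries more information than the collapsed number, which is the point of the distinction drawn below between individual and regulatory uses.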
Decision analysts and theorists of subjective probability frequently note that the second-story probability representation is unnecessarily complex; such measures can be mathematically collapsed into a single probability. That is, the probability of a probability is a probability. For individual decision making, it may be immaterial what combination of probabilistic processes and information gaps gives rise to an estimate of the likelihood of some outcome; it is sufficient to describe the likelihood of an outcome by a probability. However, in the case of social risk management by a regulatory agency, it is often useful to distinguish between risk and uncertainty.

Risk Assessment Policy

The recent report Risk Assessment in the Federal Government: Managing the Process (National Research Council, 1983) endorsed the concept that scientific questions about the degree of risk posed by a specified exposure or activity should be separated, to the extent feasible, from the policy questions

about what risk-management steps should be taken. The report clearly describes how science and policy cannot be entirely separated and makes the point that many seemingly scientific issues, such as the assumptions made in a risk assessment, have direct relevance to management decisions. As seen by the committee that wrote the report,

The goal of risk assessment is to describe, as accurately as possible, the possible health consequences of changes in human exposure to a hazardous substance; the need for accuracy implies that the best available scientific knowledge, supplemented as necessary by assumptions that are consistent with science, will be applied [National Research Council, 1983].

The difficulty arises when there is no scientific basis for selection among alternative assumptions. The study committee did not offer a general recommendation for choosing assumptions when this occurs. However, it did note that in such cases it may be appropriate to select the most conservative assumptions (that is, those leading to the highest estimate of risk).

The committee chose carcinogenic risks and their assessment to illustrate many points in the report primarily because the estimation of these risks has become more standardized than it has for other risks. Assumptions generally thought to be conservative are routinely used by agencies in evaluating potential carcinogens. For example, conservative risk-assessment assumptions are used by EPA's Carcinogen Assessment Group to estimate a plausible upper bound for risk; the plausible lower bound is taken to be zero risk except where direct human evidence indicates otherwise. These upper-bound risk estimates are based on data from the most sensitive sex, strain, and species of test animal, and for the tumor type (often including benign tumors) and site that maximize the estimated potency.
Extrapolation of animal results to humans is calculated from the ratio of surface areas, an approach more conservative than scaling by weight, and extrapolation from high- to low-dose response is based on a dose-response model that exhibits linearity at low doses. The selection of the sensitive sex, strain, and species is at times justified on the grounds that humans are genetically diverse, differ widely in health status, and are exposed to many other potentially harmful agents (Anderson, 1983), but these assumptions are generally thought to be conservative in their application to human cancer risk.

Is Conservatism Protective?

Does reliance on assumptions producing upper-bound risk estimates protect health? The question is analytically tractable. Not surprisingly, its answer depends on what assumptions are made. For some seemingly reasonable analytical assumptions conservatism is protective; for others it is not.
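One of the conservative choices described above, scaling animal doses to humans by body surface area rather than by body weight, can be quantified in a small sketch. The species weights are illustrative round numbers, not values from any particular assessment.

```python
# Sketch of why surface-area scaling is more conservative than
# body-weight scaling when extrapolating animal doses to humans.
# Body weights are illustrative assumed values.

w_rat, w_human = 0.35, 70.0  # kg

# Body-weight scaling: an equitoxic dose in mg/kg is taken to be the
# same for both species, so the potency adjustment factor is 1.
bw_factor = 1.0

# Surface-area scaling: surface area varies roughly as weight**(2/3),
# so an equitoxic mg/kg dose for the heavier species is smaller by
# (w_rat / w_human) ** (1/3); equivalently, the estimated human
# potency is larger by (w_human / w_rat) ** (1/3).
sa_factor = (w_human / w_rat) ** (1.0 / 3.0)

# Humans are assumed several times more sensitive per kilogram than
# under weight scaling, hence the higher (more conservative) estimate.
print(round(sa_factor, 2))  # 5.85
```

The factor grows with the weight ratio, which is why the choice of scaling rule matters most when extrapolating from small rodents.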

Certainly the perception of many risk analysts is that conservative risk-assessment assumptions are protective. High-risk estimates are associated with stringent standards. An analyst's own sense of responsibility encourages conservatism. Although the social costs of false alarms are acknowledged, to give an incorrect assurance of safety is believed to be far worse. The relative social cost of risk underestimation is taken to outweigh that of overestimation.

An analytical case for conservatism in risk assessment is made by Talbot Page (1978), who argues that the appropriate response to uncertain environmental risks is to balance the social costs of false negatives (substances or activities incorrectly thought to be safe) with the costs of false positives (those incorrectly believed to be hazardous). His analysis indicates that the use of this expectation rule is clearly preferable to approaches aimed exclusively at avoiding either type of risk misclassification (that is, false positives or false negatives). Page observed, "Application of this approach requires four pieces of information: the cost of a false negative; the cost of a false positive; and the probability of each." Given the difficulty in ascertaining the probabilities of false positives and negatives, he argues that

when the potential adverse effects of an environmental risk are many times greater than the potential benefits, a proper standard of proof of danger under the expected cost minimization criterion may be that there is only "at least a reasonable doubt" that the adverse effect will occur, rather than requiring a greater probability, such as "more likely than not," that the effect will occur. Simple rules of thumb embodied in legal and regulatory institutions may come closer to expected cost minimization than elaborate attempts at quantification [Page, 1978].
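Page's expectation rule can be illustrated with a toy calculation. All four pieces of information here are hypothetical numbers chosen only to show the mechanics of the comparison.

```python
# Sketch of Page's expected-cost comparison (hypothetical numbers).
# Regulating a safe substance incurs the false-positive cost; failing
# to regulate a hazardous one incurs the false-negative cost.

cost_false_positive = 1.0   # forgone benefits of a safe substance
cost_false_negative = 50.0  # harm from an unregulated hazard
p_hazardous = 0.1           # subjective probability of hazard

# Expected cost of each possible action:
expected_cost_regulate = (1 - p_hazardous) * cost_false_positive
expected_cost_ignore = p_hazardous * cost_false_negative

# Expected-cost minimization says regulate even though the substance
# is "more likely than not" safe, because the error costs are lopsided.
print(expected_cost_regulate < expected_cost_ignore)  # True
```

With catastrophic potential harms and modest benefits, even a small probability of hazard tips the comparison, which is the quantitative content of the "at least a reasonable doubt" standard.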
The interesting feature of Page's analysis is his lack of aversion to uncertainty; uncertain risks are judged to the extent they can be estimated and characterized. Page proposes a rather stringent rule, that a substance or activity be considered hazardous if there is "at least a reasonable doubt" of safety. This rule derives in part from his analysis that for most environmental risks, the relative social costs of a false negative (leading to a failure to regulate a hazardous substance) greatly exceed the costs of regulating a safe substance. Among the common characteristics of environmental risk, Page lists modest benefits and catastrophic costs. For substances like food-color additives or fluorocarbon propellants, where benefits are easy to forgo or where safer substitutes exist, the "at least a reasonable doubt" rule is appropriate from a cost-benefit viewpoint.

But to find that analytical conservatism is generally protective requires three assumptions: (1) that the disparity in social costs between false negatives and false positives is great, (2) that risk-management decisions are

insensitive to resource constraints and do not incur significant opportunity costs, and (3) that activities or agents identified as hazardous (whether true positives or false positives) can be eliminated without the creation of significant new risks. The potential protectiveness of conservatism also depends on whether risk managers compensate for conservatism in standard setting.

The Social Costs of Error

In his analysis Page described dichotomous risk decisions and classifications as follows: substances were either carcinogenic or not, and when they were misclassified the resultant errors were either false positives or false negatives. This representation is a useful way to show how it is socially desirable to balance the costs of errors in managing uncertain risks, and this was Page's objective.

Actual problems are generally not so black or white. They often involve a substance's degree of carcinogenic potency and the establishment of exposure limits. Under this view, risks and risk-management alternatives are continuously variable rather than discrete. It is actually easier to make the case for controlling risk under this continuous perspective, because it is generally harder to justify the ban of a hazardous substance on cost-benefit grounds than it is to justify a marginal reduction. This follows from the common assumption that health benefits are constant per unit of reduced exposure but that as use decreases to zero, progressively more valuable social benefits are forgone.

If potency and exposure are variable, the harm from risk assessment errors is far less than if they are discrete. A shift in analytic assumptions, for example, to the average carcinogenic potency exhibited in several species rather than potency in the most sensitive species, could result in a less stringent standard.
But this seems unlikely to lead to the public health disaster or excessive individual risk that one associates with the failure to recognize and control a potent carcinogen. However, this argument may not apply to environmental risks such as those from biotechnologies or climate changes.

Consider basing regulatory standards for exposure to hazards on assessed risk, using analytic criteria that appropriately reflect social cost. For exposures at the standard, the marginal costs of regulatory action should exactly balance the marginal benefits of reduced public health risk. For standards that are slightly displaced from the point at which marginal costs equal marginal benefits, perhaps because of small errors in assessing risk, the social costs of overexposure or underexposure will largely be offset by reduced or increased costs of risk control. These costs due to small inaccuracies in the estimation of false positive and false negative errors are roughly

symmetrical; for large inaccuracies, the costs of unnecessary regulatory stringency or public health risk will vary. The social costs from errors in risk estimation would be minimized if mean-value estimates of risk were used. Mean-value risk estimates reflect the weighted average of all possible risk values. The conservative practice of using upper-confidence-bound risk estimates leads to the first-order effect of overinvestment in risk control, but also leads to a lower human health risk.

Resource Constraints and Risk Management

Are national health and safety expenditures limited in the aggregate, or are they variable, depending on the outcome of many independent risk-management decisions? If risk-reduction expenditures are not limited in the aggregate but are determined on a case-by-case basis, then it is appropriate to consider whether conservatism is protective by considering specific cases. However, if the fraction of GNP allocated for risk reduction is politically constrained, or if some other factor constrains risk management in the aggregate, then the collective effect of risk-management decisions is the appropriate basis for evaluating whether conservatism serves a useful purpose.

Because risk analysts and agency standard setters generally focus on one risk at a time, the single-risk focus is a natural frame of reference. From the perspective of a single risk-management decision, analytical conservatism is protective, but at a price. A conservative risk estimate produces lower risk exposures. Here, the potential costs of large errors appear to be asymmetrical to the regulator. He believes that risk-reduction costs are bounded, and that the uncertain consequences of risk exposures may be much greater than these costs. An additional factor encouraging conservatism is how a regulatory agency's decisions might be judged in hindsight.
An overcontrolled risk will probably drop from sight once a decision is implemented and control investments made, despite continuing social costs. But an undercontrolled risk, possibly discovered through the identification of victims, is far more disturbing for a regulatory agency.

If risk reductions are limited by resource scarcity, however, the logical regulatory objective is to allocate the scarce resource in a way that maximizes social benefits. Opportunity costs, the value of benefits forgone from possible alternative uses of the scarce resource, become important under these circumstances. Money or regulatory attention spent on one risk is not available for another, so it is important not to waste resources on trivial risks. Here, conservatism is counterproductive, and risks are increased if

resources are shifted from significant risks to small, exaggerated risks. Under this fixed-allocation or zero-sum case, risk reductions are maximized when the cheapest and easiest risk reductions are given highest priority. Here, conservative estimates shift resources to uncertain risks, increasing expected health consequences.

Which perspective on regulatory resources is correct? Both have their merits. Regulatory agencies may be limited in the actions they can take by the availability of scientific or administrative resources within their own staffs. But the risk-management responsibility assigned to the agencies by Congress is fragmented and suggests nothing in the way of an overall ceiling on risk spending. The major cost of control is borne by producers, not regulatory agencies, so agency budgets are not a direct constraint. But while regulatory expenditures appear to be variable and flexible, dependent on the perceived appropriate action in each case, there may be a political feedback from the regulated parties that limits the amount of money an agency can require a producer to spend. A subtler consideration is that, to the extent that the public finds uncertain risks discomforting, greater expenditures for risk control may be politically feasible if funds are directed to deal with uncertain (and unpopular) risks.

Risk Transfers

Often a regulatory action that reduces one risk will increase another (Whipple, 1985). This is especially true when the particular benefit obtained is considered essential but the method for achieving the benefit carries risks. The important issue here is the recognition that the appropriate measure for analysis of a risk-reducing action is the net risk reduction. In that event, uneven conservatism in risk assessment can have a perverse effect by leading to the substitution of a large risk for a small one.
The cyclamate ban, leading to greater use of saccharin, may be one such instance. (Risks from both substances are significantly uncertain.) Electricity production is also a good example, because utilities are obligated to provide service. A restriction on coal use can lead to greater oil use. If regulatory considerations make nuclear power unattractive, then perhaps a utility will choose coal instead; the net change in public risk would need to be evaluated.

In some cases, for example, those involving carcinogens, it may be possible to compare risks that have common conservative assumptions and arrive at a reasonable relative ranking. But for dissimilar technologies, for example, coal and nuclear electricity, the comparison of conservative risk estimates does not include conservative assumptions common to both estimates. In these cases, conservatism is less useful and less protective than are mean estimates of risk.
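The way conservative bounds can distort a comparison between dissimilar technologies can be sketched with invented numbers. Both the risk values and their plausibility weights below are hypothetical, chosen only so that one technology is well understood and the other highly uncertain.

```python
# Sketch contrasting mean risk estimates with conservative upper-bound
# estimates for two hypothetical technologies. All numbers invented.

# Plausible annual risk values with subjective weights (sum to 1).
coal = [(2e-5, 0.5), (4e-5, 0.5)]     # relatively well understood
nuclear = [(1e-6, 0.9), (1e-4, 0.1)]  # uncertain: wide spread

def mean_risk(dist):
    """Weighted average over the plausible risk values."""
    return sum(r * w for r, w in dist)

def upper_bound(dist):
    """Conservative choice: the largest plausible value."""
    return max(r for r, _ in dist)

# On mean estimates, the uncertain technology looks lower-risk...
print(mean_risk(nuclear) < mean_risk(coal))      # True
# ...but comparing conservative bounds reverses the ranking, because
# the more uncertain technology receives the larger upper bound.
print(upper_bound(nuclear) > upper_bound(coal))  # True
```

The reversal is driven entirely by the width of the uncertainty, not by any difference in expected harm, which is the sense in which conservatism penalizes uncertain risks in cross-technology comparisons.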

Do Standard Setters Compensate for Conservative Risk Analysis?

Regulatory decision makers may consider the details of the evidence supporting a risk estimate and compensate for perceived biases in analysis. If this is the case and appropriate adjustments are made, then standards will be the same no matter what risk-assessment assumptions are made. In this case, conservative analysis would not lead to either more stringent or less stringent standards than would mean, or best, estimates of risk. It is likely that conservatively estimated risks are discounted in some cases but not in others, and it is unlikely that such adjustments can be made appropriately and consistently.

In the previous discussion of resource constraints, it was assumed that conservative estimates lead to stringent criteria. But it is apparent that conservatism in risk management need not be achieved through conservative risk-assessment assumptions. For example, more stringent criteria for allowable risk, and less conservative assumptions for estimating risk, would yield current levels of protection. If greater use were made of this flexibility to vary risk criteria in response to conservatism in risk assessment, an attractive approach would be to select risk-assessment assumptions based on their discriminatory power. Relative risk estimates based on overly conservative assumptions may not distinguish important differences between risks. For example, an increase in benign liver tumors and a corresponding decrease in leukemias and mammary-gland fibroadenomas have been observed in response to test chemicals in the Fischer 344 rat (Haseman, 1983). Under present assessment methods, a carcinogen that increases benign tumors at one site but reduces malignant tumors at other sites might have the same assessed risk as one that increases the overall burden of malignant tumors.
CONSERVATISM IN RISK ASSESSMENT: COMMENTS

Even if efforts to be less conservative in risk assessment are accepted, there will be cases where no method for choosing between alternative assumptions is available. The best that risk analysis can provide when this happens is a collection of estimates based on a range of plausible models. Granger Morgan and his colleagues (1984) have taken this approach to describe the estimated health effects from sulfur air pollution.

If less conservative assumptions were adopted for carcinogens, understanding the human-health implications of alternative animal bioassays would take on added importance. There would be apparent value in conducting a wide variety of animal tests with known human carcinogens as a means

of calibrating these experiments. A second consideration, suggested by animal test results (Haseman, 1983), is whether certain carcinogens redistribute the tumor burden whereas others increase the incidence of tumors. If this turns out to be the case, it may be beneficial to discriminate between the two types of effect.

Conservative assumptions about risk are thought to provide protection against uncertainty in risk, although sometimes at an added cost. Much impetus for analytical conservatism derives from the belief that this practice protects health. This is the perspective when risks are viewed singly. But conservatism may not protect if reduced exposure to uncertain risks is achieved at the expense of increased exposure to known risks. Considering the many ways in which a conservative analysis can fail to protect, intentional use of conservative risk estimates is not beneficial to public health. In addition to misallocating scarce resources, conservatism can lead to unwise risk transfers and encourage risk regulators to compensate for perceived conservatism. When this happens, risk regulation becomes less predictable and more arbitrary.

DE MINIMIS RISK*

The term de minimis is used in law to describe trivial issues not deserving of a court's time and attention. When applied to health and safety risks and their regulation, the term refers to a risk that avoids regulatory attention by virtue of its small size. This concept has several potential regulatory applications. A de minimis rationale can be used either to determine the regulatory standard or to decide that no standard is required. In the latter case, whole classes of small risks may be excluded from regulatory consideration. In addition, de minimis may be the basis for an enforcement decision, as when a policeman decides not to cite a driver for exceeding the speed limit by one mile per hour.
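Mechanically, a de minimis screen is a simple threshold test. The cutoff and the candidate risks below are hypothetical numbers used only to show the partition.

```python
# Sketch of a de minimis screen: risks below a policy threshold are
# set aside so regulatory attention goes to the larger risks.
# The threshold and the candidate list are hypothetical.

DE_MINIMIS = 1e-6  # annual individual risk treated as negligible

candidate_risks = {
    "substance A": 3e-4,
    "substance B": 5e-7,
    "substance C": 2e-5,
}

# Partition candidates at the threshold.
regulate = {k: v for k, v in candidate_risks.items() if v >= DE_MINIMIS}
de_minimis = {k: v for k, v in candidate_risks.items() if v < DE_MINIMIS}

print(sorted(regulate))    # ['substance A', 'substance C']
print(sorted(de_minimis))  # ['substance B']
```

The policy questions discussed below, where to set the cutoff and what it legitimizes, are of course not answered by the arithmetic.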
The impetus to establish a consistent de minimis approach to risk regulation has increased in recent years for several reasons. First, technologies for identifying risks have improved in several ways. Improvements in analytical chemistry permit the detection of hazardous substances at the part-per-billion or even part-per-trillion level; only a decade ago such exposures would have been ignored simply because they would have been undetectable. (Radiation is an exception to this rule, since it has been detectable at low levels for decades. This helps to explain why many de minimis proposals have arisen from the radiation protection area.)

* This section is derived from C. Whipple, "Application of the De Minimis Concept in Risk Management." Paper presented at a joint session of the American Nuclear Society and Health Physics Society, New Orleans, June 6, 1984.

Second, our view of the nature of low-level risks has shifted somewhat over the past decade. An initial concern that we face rare but potent carcinogens seems to have given way to the view that carcinogens are fairly commonplace and significantly varied in their potency. Recent studies that reveal widespread exposures to natural carcinogens in food (Ames, 1983a) have troubling implications for a risk-management policy based on elimination of carcinogens (Ames, 1983b; Epstein et al., 1984), and strengthen the argument for adopting a carcinogen-management policy that bases regulatory action on both carcinogenic potency and exposure.

One effect of the increasing number of candidate substances for regulation, and of the consequent need to set priorities for regulatory effort, is that case-by-case decision making is seen as too cumbersome. The de minimis approach appears to provide a means for simplifying the regulatory process by providing an alternative to setting standards for substances considered to pose the lowest risks. Such an alternative is particularly important to an agency that has a statutory mandate to regulate exposure to a substance or class of substances but lacks resources to deal with low-risk, low-priority substances.

In addition to these regulatory incentives for using a de minimis rule, industry is likely to support this approach since it defines a threshold for regulatory involvement. The de minimis rule could produce greater predictability in regulation and provide industry with a risk target for avoiding regulation.

The de minimis approach may also provide a policy solution to questions that lie beyond the reach of scientific resolution. This would reduce the pressures on regulatory agencies to produce scientific judgments about low-level risks where information is limited or unavailable.
A major consideration with the application of a de minimis policy to risk management is whether the risks borne by the public or an occupational group would differ from those borne in the absence of a de minimis policy. Clearly, many small risks that would be formally excluded from regulatory concern under a de minimis approach are unlikely to be regulated under any approach. Exposures to these risks are not the issue, since in such cases the de minimis policy would make no difference. However, a de minimis risk policy could be interpreted as formally legitimizing risks that are now permitted on pragmatic grounds only.

Although the risks permitted under a de minimis rule would be quantitatively small by definition, the public reaction to such risks may be influenced more strongly by the qualitative characteristics of the risk (Fischhoff et al., 1978). Many of the agents for which the de minimis approach is being considered pose both uncertain and carcinogenic risks; these characteristics appear to enhance the degree of public concern about risk.

De Minimis Risk and Conflicting Social Objectives

The objectives of risk management involve such fundamental conflicts that regulatory solutions that satisfy all interested parties are unlikely. One such objective springs from the expressed desire for safety; that is, the desire to eliminate risks to the extent possible. Countering this social objective is the desire for efficiency in risk management. The desire for efficiency rests on the argument that resources are scarce. Calabresi and Bobbitt (1978) note, however, that "commonly . . ., scarcity is not the result of any absolute lack of a resource but rather of the decision by society that it is not prepared to forgo other goods and benefits in a number sufficient to remove the scarcity."

In health and safety risk regulation, the conflicting objectives of maximum protection and careful use of scarce resources are considered in varying degrees. In many cases, economic efficiency is explicitly stated as one of several regulatory goals, and the analytical examination of regulatory costs and benefits is customary or even obligatory. In such cases the de minimis approach is likely to formalize the practice of ignoring small risks. Where the regulatory mandate is more strongly focused on protection, however, the costs of achieving safety are secondary considerations or are not legal considerations at all, at least in principle. In practice, cost considerations do usually influence all regulatory decisions. In these circumstances, and because of the historical legal acceptance of de minimis (Davis, 1981), such an approach may be particularly useful in avoiding the regulation of trivial risks that a literal reading of the law would require. Under a well-designed de minimis system, the social objective of achieving a high degree of safety could be met while recognizing a need to ignore small risks.
In this way de minimis offers the possibility of bringing practical considerations into decisions involving very small or uncertain risks, without resorting to the risk-cost trade-offs that many find offensive and that certain laws prohibit. In short, de minimis may permit us to avoid facing the difficult conflicting objectives contained within our risk-value system. One practical value of a de minimis policy may result from the pressures it creates to reevaluate inconsistencies in the attention applied to various risks. A de minimis risk policy is philosophically consistent with a view expressed by Lord Rothschild (1978): "There is no point in getting into a panic about the risks of life until you have compared the risks which worry you with those that don't but perhaps should."

Another objective in modern risk regulation is the separation (to the extent that is practical) of questions of science from questions of policy (National Research Council, 1983). This objective arises from several motivations, notably to permit public participation in the formulation of risk policy without requiring scientific expertise, and to distance scientific debate from questions of regulatory action. In practice, such separation is difficult to maintain (see Bayer, in this volume), as illustrated by the tendency to use conservative assumptions in risk assessment when scientific uncertainties are large. This tendency reflects a societal consensus that conservatism is protective in risk matters. Attempting to separate scientific questions from policy questions is particularly difficult for low-level risks, because assessment uncertainties are great for these risks. For many exposures there is no direct evidence that risk exists; the evidence is only that risk exists at much higher exposures. Out of prudence we assume that low-level risks do exist and that thresholds do not. But this raises further difficulties, as John Gibbons (1983) notes:

A zero-threshold situation leaves the policymaker in a great quandary. As long as there is some threshold level below which there are no ill effects, social equity can be preserved. But if dose and effect have a zero-zero intercept, then the policymaker must talk about determining acceptable risk, which is far more difficult to deal with than no risk.

In other words, risk management has become much harder because we no longer believe in thresholds, at least scientifically. Here the de minimis approach can offer policy thresholds in lieu of scientific thresholds.

Individual Versus Societal Definition of De Minimis Risk

Certainly a society can manage risk to the population as a whole by limiting individual risks. This is, in fact, the approach taken by the Nuclear Regulatory Commission (1983) in its proposed safety goals for nuclear power plants. Individual risk limits are appropriate in cases where individuals face relatively high risks.
But when individual risks are neither high nor inequitably distributed and the need for management arises because a large number of people face a low-to-moderate risk, then individual risk approaches can lead to a misallocation of resources. To take a hypothetical example, a 10^-6/yr risk of death to 1,000 people produces 10^-3 expected fatalities per year, which is equivalent to an expectation of one fatality per thousand years. This same risk of 10^-6/yr applied to the entire U.S. population of 230 million produces an expectation of 230 fatalities per year. If judgment of whether either situation represents de minimis risk is based solely on the degree of individual risk involved, the regulatory response (or nonresponse) to the two situations is likely to be comparable. Yet common sense tells us that greater effort and expenditure are justified to save 230 lives than 0.001 life.
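The arithmetic of the hypothetical example above can be checked with a short calculation. This is only an illustrative sketch (the function name is ours), using the figures from the text: a 10^-6/yr individual risk, a group of 1,000, and a U.S. population of 230 million.

```python
def expected_fatalities_per_year(individual_risk_per_year: float, population: int) -> float:
    """Expected annual fatalities = individual annual risk x number of people exposed."""
    return individual_risk_per_year * population

# A 10^-6/yr risk to 1,000 people: about 0.001 expected fatalities per year,
# i.e., an expectation of one fatality per thousand years.
small_group = expected_fatalities_per_year(1e-6, 1_000)

# The same individual risk applied to 230 million people: about 230 per year.
whole_population = expected_fatalities_per_year(1e-6, 230_000_000)

print(small_group, whole_population)
```

The point of the comparison survives the calculation: identical individual risks can imply societal burdens differing by five orders of magnitude, depending on the size of the exposed population.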

Multiple Sources of Risk

The definition of a de minimis risk should reflect the possibility that multiple de minimis exposures could result in a large aggregate risk. For instance, one could conceivably be exposed to the same hazardous material in drinking water, in the air, and in a variety of foods, as with exposure to a pesticide in a rural area. As a practical matter, one or two pathways are likely to dominate exposures, and it is unlikely that total exposures could be greater than several times the exposure received through the most significant pathway. Given the uncertainties in the estimates of low-level risks, this difference seems trivial. A far more troubling question is posed by the sheer number of risk agents. It would hardly be comforting to learn that although no single chemical in drinking water poses a cancer risk greater than 10^-7/yr, there are hundreds of such chemicals. The problem this issue poses for the use of de minimis as a regulatory threshold is the degree to which risks are examined and managed singly rather than in the aggregate.

The prevalent chemical-specific approach to risk analysis seems to support a de minimis concept on an agent-by-agent basis. Other approaches to risk analysis support a de minimis definition for an aggregation of agents and provide confidence that the sum of de minimis risks any individual faces is limited in the aggregate. Of course there are many ways to aggregate risks: one could consider all effluents from a single facility, or those found only in the air, in drinking water, or in food. Clearly the approach taken under a de minimis philosophy to avoid excessive accumulations of risk can take many forms, depending on the specific context. The Food and Drug Administration treats this issue through stringent criteria that consider the potential for accumulation of risk.
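The "hundreds of chemicals" concern above can be made concrete with a rough calculation. This is a sketch only: the per-agent risk of 10^-7/yr is the figure from the text, but the count of 300 agents is an assumed number chosen purely for illustration.

```python
# Each agent alone sits below a per-agent de minimis level of 10^-7/yr,
# yet summed over many agents the aggregate risk is far from negligible.
per_agent_risk = 1e-7   # annual cancer risk per chemical (figure from the text)
n_agents = 300          # assumed: "hundreds" of such chemicals in drinking water

aggregate_risk = per_agent_risk * n_agents  # ~3e-05/yr, 300x the per-agent level
print(f"aggregate annual risk: {aggregate_risk:.1e}")
```

This is why the choice between agent-by-agent and aggregated de minimis definitions matters: a per-agent comparison alone never sees the sum.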
Proposed de minimis risk levels for radiation are typically higher than the levels considered by FDA for food additives, reflecting the fact that there are far fewer sources of radiation than of food additives.

Applying the De Minimis Concept

Many issues must be resolved to develop a workable de minimis policy. Contextual issues such as multiple exposures, the population at risk, and the degree of confidence in a risk estimate suggest that a basic de minimis risk philosophy will have to be flexible to specific considerations. Proposed de minimis approaches have dealt with aspects of the problem, notably with the comparative logic that justifies selection of a de minimis level in a specific context. More effort is needed, however, if the de minimis approach to risk management is to be widely adopted. In particular, the aggregation of multiple risks has not received sufficient attention.

Public acceptance may be the critical constraint on adoption of a de minimis approach to risk management. Thus, it may be helpful to think of de minimis risks as those that are of too low a priority to regulate rather than as acceptable low risks. This distinction, based on priority rather than acceptability of risks, naturally encourages a comparative-risk viewpoint and avoids the difficult question, acceptable to whom? A final point that could promote the acceptability of a de minimis approach is the recognition that, in general, de minimis levels apply not to known risks but rather to risks of unknown magnitude, which are thought to be estimated conservatively.

CONCLUSIONS

The issues addressed in this paper reflect our social aversion to uncertainty about health and safety risks. We often try to deal with such uncertainty by avoiding it, as with the use of conservative analytic assumptions. Such assumptions are likely to exaggerate risk and are consistent with the adage "better safe than sorry"; they reflect the risk assessor's perspective that "crying wolf" is preferable to falsely providing assurance of safety. But this approach, sensible for a single risk, appears counterproductive when adopted as a general rule. Finally, application of such principles as de minimis risk may provide socially acceptable methods for making regulatory decisions about uncertain risks only when such risks are likely to be small.

REFERENCES

Anderson, E. L. 1983. U.S. Environmental Protection Agency, Carcinogen Assessment Group. Quantitative approaches in use to assess cancer risk. Risk Analysis 3 (December):277-295.
Ames, B. N. 1983a. Dietary carcinogens and anticarcinogens: Oxygen radicals and degenerative diseases. Science 221:1256-1264.
Ames, B. N. 1983b. Letter. Science 224:668-670, 757-760.
Calabresi, G., and P. Bobbitt. 1978. Tragic Choices.
New York: W. W. Norton.
Davis, J. P. 1981. The Feasibility of Establishing a "De Minimis" Level of Radiation Dose and a Regulatory Cut-off Policy for Nuclear Regulation. Report GP-R-33040. Columbia, Maryland: General Physics Corporation.
Epstein, S. S., and J. B. Swartz. 1984. Letter. Science 224:660-666.
Fischhoff, B., P. Slovic, S. Lichtenstein, S. Read, and B. Combs. 1978. How safe is safe enough? A psychometric survey of attitudes toward technological risks and benefits. Policy Sciences 8:127-152.
Gibbons, J. H. 1983. In S. Panem, ed., Public Policy, Science and Environmental Risk, proceedings of a workshop at the Brookings Institution, February 28, 1983. Washington, D.C.: Brookings Institution.

Haseman, J. K. 1983. Patterns of tumor incidence in two-year cancer bioassay feeding studies in Fischer 344 rats. Fundamental and Applied Toxicology 3:1-9.
Morgan, M. G., S. C. Morris, M. Henrion, D. A. L. Amaral, and W. R. Rish. 1984. Technical uncertainty in quantitative policy analysis: A sulfur air pollution example. Risk Analysis 4 (September):201-216.
National Research Council. 1983. Risk Assessment in the Federal Government: Managing the Process. Committee on the Institutional Means for Assessment of Risk to Public Health. Washington, D.C.: National Academy Press.
Page, T. 1978. A generic view of toxic chemicals and similar risks. Ecology Law Quarterly 7(2):207-244.
Rothschild, N. 1978. Antidote to panic. Nature 276:555.
U.S. Nuclear Regulatory Commission. 1983. Safety Goals for Nuclear Power Plant Operation. NUREG-0880, Revision 1 for Comment, May. Office of Policy Evaluation. Washington, D.C.
Whipple, C. 1985. Redistributing risk. Regulation 9(3)(May/June):37-44.