Science and Its Limits: The Regulator's Dilemma

ALVIN M. WEINBERG

In his essay "Risk, Science, and Democracy," William D. Ruckelshaus has expressed very clearly what might be called the regulator's dilemma:

During the past 15 years, there has been a shift in public emphasis from visible and demonstrable problems, such as smog from automobiles and raw sewage, to potential and largely invisible problems, such as the effects of low concentrations of toxic pollutants on human health. This shift is notable for two reasons. First, it has changed the way in which science is applied to practical questions of public health protection and environmental regulation. Second, it has raised difficult questions as to how to manage chronic risks within the context of free and democratic institutions. [Ruckelshaus, 1985; see also Bayer, Bond, and Whipple, in this volume.]

When the concerns were obvious, like smog in Los Angeles, science could and did give unequivocal answers. For example, smog comes from liquid hydrocarbons, and the answer to smog lay in controlling emissions of these substances. The regulator's course was rather straightforward because the science upon which the regulator based his judgment was operating well within its power. But when the concern was subtle (How much cancer is caused by 10 percent of background radiation?), science was being asked a question that lay beyond its power to answer; the question was trans-scientific. Yet the regulator, by law, was expected to regulate, even though science could hardly help in the process. This is the regulator's dilemma.

A slightly different version of this paper appears in Issues in Science and Technology, vol. 2, no. 1 (Fall 1985):59-72.
Though this essay is subtitled "The Regulator's Dilemma," many of the same issues arise in the adjudication of disputes over who is to blame, and who is to be compensated, for damages allegedly caused by rare events. The regulator's dilemma is faced also by the toxic tort judge; indeed, the regulator's dilemma could equally be called the "toxic tort dilemma." If my car injures a pedestrian, I am liable to be sued, but at issue is not whether I have injured the pedestrian; instead, the question is whether I am at fault for running into him. If the lead from my car's exhaust is alleged to cause bodily harm, the issue is not whether my car emitted lead but whether the lead actually caused the alleged harm. The two situations are quite different: in the first, the relation between cause and injury is not at issue; in the second, it is the issue.

This paper, therefore, is an attempt to delineate more precisely those limits to science that give rise to the regulator's dilemma; I shall speculate on how these intrinsic limits to science seem to have catalyzed a profound attack on science by some sociologists and public interest activists; and I shall offer a few ideas that might help harried regulators finesse these trans-scientific limits of science.

SCIENCE AND RARE EVENTS

Science deals with regularities in our experience; art deals with singularities. It is no wonder that science tends to lose its predictive or even explanatory power when the phenomena it deals with are singular, unreproducible, and one of a kind, that is, rare rather than regular, reproducible, and recurring. Though science can often analyze a rare event after the fact (say, the Cretaceous-Tertiary extinction), it has great difficulty predicting when such an uncommon event will occur. Let us distinguish between two sorts of rare events: "accidents" and "low-level physical insults."
Accidents are large-scale malfunctions whose etiology is not in doubt but whose a priori likelihood is very small. The occurrences at Three Mile Island in 1979 and at Bhopal, India, in 1984 are examples of accidents. The precursors to these events and the way in which the accidents unfolded are well understood. Estimates of the likelihood of the particular sequence of malfunctions are on less solid ground. As the number of individual accidents increases, prediction of their probability becomes more and more reliable. We can predict very well how many automobile fatalities will occur in 1986; we can hardly claim the same degree of reliability in predicting the number of serious reactor accidents in 1986.

Low-level insults are rare in a sense different from "rare" as applied to accidents. We know that about 100 rems of radiation will double the mutation
rate in a large population of exposed mice. How many mutations will occur in a population of mice exposed to 100 millirems of radiation? Here the mutations, if induced at all by such low levels of exposure, are so rare that to demonstrate unequivocally an effect with 95 percent confidence would require the examination of many millions of mice. Though in principle this is not impossible, in practice it is. Moreover, even if we could perform so heroic a mouse experiment, the extrapolation of such findings to humans would still be fraught with uncertainty. Thus, the effects of very low level insult in human beings are rare events whose frequency again is beyond the ability of science to predict with accuracy.

When dealing with events of this sort, science resorts to the language of probability; that is, instead of saying that this accident will happen on that date or that a particular person exposed to a low-level insult will suffer a particular fate, it tries to assign probabilities for such occurrences. Of course, where the number of instances is very large or where the underlying mechanisms are fully understood, the probabilities themselves are perfectly reliable. In quantum mechanics there is no uncertainty as to the probability distributions. But in the class of phenomena being discussed here, even though the likelihood of an event's happening or of a disease's being caused by a specific exposure is given as a probability, the probability distribution itself is very uncertain. One can think of a somewhat fuzzy demarcation between what I have called science and trans-science: the domain of science covers phenomena that are deterministic, or the probability of whose occurrence can itself be stated precisely; trans-science covers the domain of events whose probability of occurrence is itself highly uncertain.
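The distinction can be made concrete with a small numerical sketch. All numbers below are hypothetical, chosen only for illustration: when the per-person probability of harm is known precisely, the predicted number of cases is sharp; when that probability is itself uncertain by roughly a factor of 10, the prediction spreads over orders of magnitude.

```python
import random

random.seed(0)

N = 1_000_000        # exposed population (hypothetical)
p_known = 1e-5       # per-person probability of harm (hypothetical)

def expected_cases(p):
    """Expected number of cases among N people at per-person risk p."""
    return N * p

# Case 1: the probability is known precisely -> a single, sharp prediction.
sharp = expected_cases(p_known)

# Case 2: the probability itself is uncertain, lognormally distributed
# with a geometric standard deviation of about a factor of 10.
draws = [expected_cases(p_known * 10 ** random.gauss(0, 1))
         for _ in range(10_000)]
draws.sort()
low, high = draws[250], draws[-251]   # central ~95% band of the draws

print(f"known p:      expect about {sharp:.0f} cases")
print(f"uncertain p:  95% band runs roughly from {low:.1f} to {high:.0f} cases")
```

With a precisely known probability the answer is a single number; with a trans-scientific probability, the honest answer is a band spanning several orders of magnitude.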
"Scientific" Approaches to Rare Events

Despite the difficulties, science has devised mechanisms for estimating, however imperfectly, the probability of rare events. For accidents, the technique is probabilistic risk assessment (PRA); for low-level insults, a variety of empirical and theoretical approaches have been used.

Though probabilistic risk assessment had been used in the aerospace industry for a long time, it first sprang into public prominence with Norman C. Rasmussen's Reactor Safety Study in 1975 (U.S. Nuclear Regulatory Commission, 1975). Probabilistic risk assessment seeks to identify all sequences of subsystem failures that may lead to a failure of the overall system; it then tries to estimate the consequences of each system failure so identified. The output of a PRA is a probability distribution, P(C); that is, the probability, P, per reactor-year (RY), of a consequence having magnitude C. Consequences include both material damage and health effects. The
probability of accidents having large consequences is usually less than the probability of accidents having small consequences.

A probabilistic risk assessment for a reactor requires two separate estimates: first, an estimate of the probability of each accident sequence and, second, an estimate of the consequences, particularly the damage to human health caused by the uncontrolled effluents released in the accident. An accident sequence is a series of equipment malfunctions or human miscalculations: a pump that fails to start, a valve that does not close, an operator confusing an "on" with an "off" signal. For many of these individual events, we have statistical data; for example, enough valves have operated for enough years so that at least in principle we can make pretty good estimates of the probability of failure. But uncertainties still remain, since we can never be certain that we have identified every relevant sequence.

Proof of the adequacy of PRA must therefore await the accumulation of operating experience. For example, the median probability of a core melt in a light-water reactor (LWR), according to the original Rasmussen report, was 5 x 10^-5/RY; the core melt at Three Mile Island's number 2 reactor (TMI-2) occurred after only 700 light-water reactor-years. However, TMI-2 differed from the reactors treated by Rasmussen and, in retrospect, one could rationalize most of the discrepancy between the Rasmussen estimate and the seemingly premature occurrence at TMI-2 (Rasmussen, 1981). Since TMI-2, the world's LWRs have accumulated some 1,500 years of reactor operation without a core melt. This performance places an upper limit on the a priori estimate of the core-melt probability. Thus, if this probability were as high as 10^-3/RY (as had been suggested by D.
Okrent, 1981), then the likelihood of surviving 1,500 reactor-years would not be more than 22 percent; or, we can say with 78 percent confidence that the core-melt probability is not as high as 1 in 1,000 reactor-years. With 500 LWRs on line in the world, should we survive until 2000 without another core melt, we could then say with 95 percent confidence that the core-melt probability is not higher than 1 in 3,000 reactor-years. In the absence of such experience, one is left with rather subjective judgments.

Although the Lewis critique (U.S. Nuclear Regulatory Commission, 1978) of Rasmussen's study asserted that it could not place a bound on the uncertainty of PRA, Rasmussen has argued that his estimate of core-melt probability might be in error by about a factor of 10; that is, the probability may be as high as 1 in 2,000 reactor-years or as low as 1 in 200,000 reactor-years. As we see, we can, after 1,500 reactor-years of operation without a core melt, say with about 50 percent confidence that Rasmussen's upper limit (1 in 2,000 reactor-years) is not too optimistic. And if we survive to 2000 without a core melt, the confidence level with which we can make this
assertion rises to 95 percent. Our confidence in probabilistic risk analysis can eventually be tested against actual, observable experience. But until this experience has been accumulated, we must concede that any probability we predict must be highly uncertain. To this degree our science is incapable of dealing with rare accidents, but time, so to speak, annihilates uncertainty in estimates of accident probability.

Unfortunately, time does not annihilate uncertainties over consequences as unequivocally as it does frequency of accidents. A large reactor or chemical plant accident can cause both immediate, acute health effects and delayed, chronic effects. If the exposure either to radiation or to methyl isocyanate (MIC) is high enough, the effect on health is quite certain. For example, a single exposure of about 400 rems will cause about half of those exposed to die. On the other hand, in a large accident there will also be many who are exposed to smaller doses, indeed to doses so low that the dose-response is indeterminable. At Bhopal, 200,000 people who were exposed to MIC recovered. We cannot say positively whether or not they will suffer some chronic disability.

The worst accident envisaged in the Rasmussen study, with a probability of 10^-9/RY, would lead to an estimated 3,300 early fatalities, 45,000 early illnesses, and 1,500 per year delayed cancers among 10 million exposed people. Almost all of the estimated delayed cancers are attributed to exposures of less than 1,000 millirems per year, a level at which it is very difficult to estimate the risk of inducing cancer. Similarly, the critique by the American Physical Society (1975) of the Rasmussen study attributed an additional 10,000 deaths over 30 years among 10 million people exposed to cesium-137 from a large accident.
The average exposure in this case was 250 millirems per year, again, a level at which our estimates of dose-response are extremely uncertain.

Has the nuclear community, particularly its regulators, figuratively shot itself in the foot by trying to estimate the number of delayed casualties resulting from these low-level exposures? In retrospect, the Rasmussen study would have been on more solid ground had it confined its estimates only to those health effects that resulted from exposures at higher levels, where science makes reliable estimates. For the lower exposures the consequences could have been stated simply as the number of man-rems of exposure of individuals whose total exposure did not exceed, say, 5,000 millirems, without trying to convert this number into numbers of latent cancers. Thus, health consequences would be reported in two categories: (1) for highly exposed individuals, the number of health effects; (2) for slightly exposed individuals, the total man-rems, or even the distribution of exposures accrued by the large number of individuals so exposed. Perhaps some scheme such as this could be adopted in reporting the results of future
probabilistic risk assessments; it at least has the virtue of being more faithful to the state of scientific knowledge than does the present convention.

Low-Level Exposure

In both examples of accidents (Bhopal and TMI-2) cited above, many people are exposed to low-level insult. The uncertainties inherent in estimating the effects of such low-level exposure are heaped on top of uncertainties in estimating the probability of the accident that might lead to the exposure in the first place.

While science has exerted great effort to ascertain the shape of the dose-response curve at low doses, very little, if anything, can be said with certainty about the low dose-response. Thus, to quote the 1980 report The Effects on Populations of Exposure to Low Levels of Ionizing Radiation (known as the BEIR-III report) of the National Research Council's Committee on the Biological Effects of Ionizing Radiations, "The Committee does not know whether dose rates of gamma or x rays of about 100 mrads/yr are detrimental to man.... It is unlikely that carcinogenic and teratogenic effects of doses of low-LET [linear energy transfer] radiation administered at this dose rate will be demonstrable in the foreseeable future" (National Research Council, 1980, p. 3). This prompted Philip Handler, then president of the National Academy of Sciences, to comment in his letter transmitting the report to the Environmental Protection Agency, "It is not unusual for scientists to disagree . . . (and) . . . the sparser and less reliable the data base, the more opportunity for disagreement.... This report has been delayed . . . to permit time . . . to display all of the valid opinions rather than distribute a report that might create the false impression of a clear consensus where none exists" (National Research Council, 1980, p. iii). This forthright admission that science can say little about low-level insults is admirable.
It represents an improvement over the unjustified assertion in the BEIR-I report of 1972 that 170 millirems per year over 30 years, if imposed on the entire U.S. population, would cause between 3,000 and 15,000 cancer deaths per year (National Research Council, 1972). I do not quarrel with the estimated upper limit, which amounts to 1 cancer per 2,500 man-rems; however, I regard the lower limit's being different from zero as unjustified and as having caused great harm. The proper statement should have been: at 170 millirems per year, we estimate that the upper limit for the number of cancers would be 15,000 per year; and the lower limit might be zero.

Since the appearance of the BEIR reports, two other developments have added to the burden of those who must judge the carcinogenic hazard of low-level insults: (1) natural carcinogens and (2) ambiguous carcinogens.
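The upper limit accepted above is easy to check arithmetically. The sketch below redoes the calculation; the U.S. population figure (about 220 million in the early 1970s) is my assumption, not given in the text.

```python
# Checking the BEIR upper-limit arithmetic: 170 mrem/yr across the
# whole U.S. population, at 1 cancer per 2,500 man-rems (upper limit).
# The population figure (~220 million, early 1970s) is an assumption.

population = 220e6            # persons (assumed)
dose_per_person = 0.170       # rems per year (170 millirems)
risk_per_man_rem = 1 / 2500   # upper limit: 1 cancer per 2,500 man-rems

collective_dose = population * dose_per_person     # man-rems per year
upper_limit = collective_dose * risk_per_man_rem   # cancers per year

print(f"collective dose: {collective_dose:.3g} man-rems/yr")
print(f"upper-limit cancers: about {upper_limit:,.0f} per year")
```

The result is close to 15,000 cancers per year, consistent with the upper bound quoted in the text; the defensible lower bound, as argued above, is zero.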
Natural Carcinogens

Is cancer "environmental" in the sense of being caused by technology's effluents, or is cancer a natural consequence of aging? In the past few years we have seen a remarkable shift in viewpoint: whereas 15 years ago most cancer experts would have accepted a primarily environmental etiology for cancer, today the view that natural carcinogens are far more important than man-made ones has gained many converts. In his famous Science article illustrated by Robert Indiana's modern painting Eat-Die, Bruce N. Ames (1983) marshaled powerful evidence that many of our most common foods contain carcinogens. Indeed, John R. Totter (1980), supported by the late Philip Handler, has offered epidemiological evidence for the oxygen radical theory of carcinogenesis: that we grow older and eventually get cancer because we metabolize oxygen, and oxygen radicals can play havoc with our DNA. As such views of the etiology of cancer acquire scientific support, the trans-scientific question of how much cancer is caused by a tiny chemical or physical insult likely will be recognized as irrelevant. One does not swat gnats in the face of a stampeding elephant.

Ambiguous Carcinogens

To further complicate the cancer picture, certain agents, such as dioxin, various dyes, and even moderate levels of radiation, seem to diminish the incidence of some cancers at the same time that they increase the incidence of others; the lifespan of animals treated with such substances on average exceeds that of untreated animals (Weinberg and Storer, 1985). A most striking example, given by Haseman (1983), is that of yellow dye #14: given to leukemia-prone female F344 rats, the dye completely suppresses leukemia, which is always fatal, but causes liver tumors, most of which are benign.
These two findings, or, perhaps, points of view, illustrate an underlying point: with regard to low-level insult to human beings, we can say very little about the cancer dose-response curve. Saying that so many cancers will be caused by so much low-level exposure to so many people, a practice that terrifies many people, goes far beyond what science actually can say.

How Science Reacts to Intrinsic Uncertainty

Does the scientific community accept the notion that there are intrinsic limits to what it can say about rare events? That as events become rarer, the uncertainty in the probability of occurrence of a rare event is bound to grow? Perhaps a better way of framing the question is this: To what use can we put
the tools of scientific investigation of rare events (say, probabilistic risk assessment and large-scale animal experimentation as a surrogate for epidemiological inquiry) if we concede that we can never get definitive answers?

An uncertainty as high as a factor of 10 is often useful in probabilistic risk assessment, especially if one uses the PRA for comparing risks. For example, the 1,500 reactor-years already experienced since the TMI-2 accident suggest that a reactor core-melt probability is likely to be less than 10^-3/yr and may well be as low as the PRA predicts, less than 10^-4/yr. This is to be compared with dam failures, whose probability, based on many hundreds of thousands of dam-years (and where time has annihilated uncertainty), is around 10^-4/yr. Even with this uncertainty, we can judge roughly how safe reactors are compared to dams.

When one compares the relative intrinsic safety of two very similar devices, e.g., two water-moderated reactors, probabilistic risk assessment is on much more solid ground. Here one is not asking for absolute estimates of risk, but rather for estimates of relative safety. If the reactors, A and B, differ in only a few details, say that reactor A has two auxiliary feedwater (AFW) trains whereas B has only one, the ratio of core-melt probabilities should be much more reliable than their absolute values, since the ratio requires an estimate of failure of a single subsystem, in this case, the extra AFW on reactor A. Not only can one say with reasonable assurance how much safer reactor A is than reactor B, but one can, as a result of the detailed analysis, identify the subsystems that contribute most to the estimated failure rate. Even if PRA is inaccurate, it is very useful in unearthing deficiencies: one can hardly deny that a reactor in which deficiencies revealed by PRA have been corrected is safer than one in which they have not been corrected, even if one is unwilling to say how much safer.
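The zero-failure confidence statements used in this discussion follow from a simple constant-hazard (Poisson) model: if the true core-melt probability per reactor-year were p, the chance of seeing no core melt in T reactor-years is exp(-p x T). A minimal sketch, assuming that model and the chapter's round numbers (the roughly 9,000 total reactor-years by 2000 is my reconstruction from 500 LWRs running 15 years, plus the 1,500 already accumulated):

```python
import math

def confidence_p_below(p_bound, reactor_years):
    """Confidence that the true failure probability per reactor-year is
    below p_bound, given zero failures in `reactor_years` of operation
    (Poisson model: P(no failure) = exp(-p * T))."""
    return 1.0 - math.exp(-p_bound * reactor_years)

# 1,500 reactor-years since TMI-2 with no core melt:
c_1500 = confidence_p_below(1e-3, 1500)

# Roughly 9,000 reactor-years by 2000 (assumed), still no core melt:
c_9000 = confidence_p_below(1 / 3000, 9000)

print(f"p < 1/1,000 per RY with {c_1500:.0%} confidence")
print(f"p < 1/3,000 per RY with {c_9000:.0%} confidence")
```

These reproduce the 78 percent and 95 percent confidence figures quoted earlier, and they make vivid how slowly operating experience "annihilates" uncertainty about very rare failures.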
Somewhat the same considerations apply to low-level insult. An agent that does not shorten lifespan at higher dose will not shorten lifespan at lower dose. An agent that is a very powerful carcinogen at high dose is more likely to be a carcinogen at low dose than is an agent that is a less powerful high-dose carcinogen. Thus, animal experiments surely are useful in deciding which agents to worry about and which not to worry about. And of course the Ames test has made at least some preliminary screening of carcinogens more feasible. The difficulty today seems to be not so much identifying agents that at high dose may be carcinogens as it is prohibiting exposures far below levels at which no effect can be, or ever will be, demonstrated. The regulator and the concerned citizen are inclined to go so far as to approve the Delaney Clause [21 U.S.C. 348(c)], which forbids in interstate commerce any carcinogenic agent in food, without ever saying anything about allowable levels or relative risks of, say, cancer induction by nitrosamines and digestive disorders caused by meat untreated with nitrites!
The Delaney Clause is the worst example of how disregard for an intrinsic limit of science can lead to bad policy by overenthusiastic politicians. Physicist Harvey Brooks has often pointed out that one can never prove the impossibility of an event that is not forbidden by a law of nature. Most will agree that a perpetuum mobile is impossible because it violates the laws of thermodynamics. That one molecule of a polychlorinated biphenyl (PCB) may cause a cancer in humans is a proposition that violates no law of nature; hence, many, even within the scientific community, seem willing to believe that this possibility is something to worry about! It was this error that led to the Delaney Clause.

THE ATTACK ON SCIENCE FROM THE SOCIOLOGY OF KNOWLEDGE

When is an event so rare that the prediction of its occurrence forever lies outside the domain of science, that is, within the domain of trans-science? Clearly we cannot say. Perhaps as science progresses, this boundary between science and trans-science recedes toward events of lower frequency. But at any stage the boundary is fuzzy, and much scientific controversy revolves around deciding where that boundary lies. One need only read the violent exchange between Edward P. Radford and Harald H. Rossi (National Research Council, 1980) over the risk of cancer from low levels of radiation to recognize that, where the facts are obscure, argument, even ad hominem argument, blossoms. Indeed, Alice Whittemore (1983), in an article entitled "Facts and Values in Risk Analysis for Environmental Toxicants," has pointed out that at this "rare event" boundary between science and trans-science, facts and values are always intermingled.
A scientist who believes that nuclear energy is evil because it inevitably leads to proliferation of nuclear weapons (which is a common basis for opposition to nuclear energy) is likely to form judgments about the data on induction of leukemia from low-level exposures at Nagasaki that are different from the judgments of a scientist whose whole career has been devoted to making nuclear power work. Cognitive dissonance is all but unavoidable when the data are ambiguous and the social and political stakes are high.

No one would dispute that judgments of scientific truth are much affected by the scientist's value system when the issues are at or close to the boundary between science and trans-science. On the other hand, as the matter under dispute moves away from that border into the domain of science, most would claim that the scientist's extrascientific values intrude less and less. Soviet scientists and American scientists may disagree on the effectiveness of a ballistic missile defense, but they agree on the cross section of uranium-235 or the lifetime of the pi-meson.

This all seems obvious, even trite. Yet in the past decade or so, a school of
sociology of knowledge has sprung up in the United Kingdom, claiming that "scientific views are determined by social (external) conditions, rather than by the internal logic of scientific tradition and inherent characteristics of the phenomenal world" (Ben-David, 1978), or that "all knowledge and knowledge claims are to be treated as being socially constructed: genesis, acceptance, and rejection of knowledge [are] sought in the domain of the Social World rather than . . . the Natural World" (Pinch and Bijker, 1984).

The attack here is not on science at the border, in particular, the prediction of the frequency of rare events. At least the more extreme of the sociologists of knowledge claim that the traditional ways of establishing scientific truth, by appealing to nature in a disciplined manner, are not how science really works, even in situations very far from the border between science and trans-science. Scientists are seen as competitors for prestige, for pay, and for power, and it is the interplay between these conflicting aspirations, not the working of some underlying scientific ethic, that defines scientific "truth." To be sure, these attitudes toward science are not widely held by practicing scientists at the center of scientific activity; however, they are taken seriously by many political activists who, though not in the mainstream of science, nevertheless exert important influence on other institutions (the press, the media, the courts) which ultimately influence public attitudes toward science and its technologies.

If one takes such a caricature of science seriously, how can one trust an expert? If scientific truth, even at the core of science, is decided by negotiation between individuals in conflict because they hold different nonscientific beliefs, how can one say that this scientist's opinion is preferred to that one's?
And if the matter at issue moves across the science/trans-science boundary, where all we can say with certainty is that uncertainties are very large, how much less able are we to distinguish between the expert and the charlatan, between the scientist who tries to adhere to the usual norms of scientific behavior and the scientist who suppresses facts that conflict with his or her political, social, or moral preconceptions?

It will not do to define a new branch of science, "regulatory science," in which the norms of scientific proof are less demanding than are the norms in ordinary science. A far more honest and straightforward way of dealing with the intrinsic inability of science to predict the occurrence of rare events is to concede this limitation and not to ask of science or scientists more than they are capable of providing. Regulators, instead of asking science for answers to unanswerable questions, ought to be content with less far-reaching answers; where uncertainty bands can be established, they should regulate on the basis of uncertainty; where uncertainty bands are so wide as to be meaningless, they need to recast questions so that regulation does not depend on answers to the unanswerable. And, since these same limits apply
to litigation, the legal system ought to recognize, much more explicitly than it has heretofore, that science and scientists often have little to say, probably much less than some scientific activists would admit.

The bona fides of scientific adversaries often is at the heart of litigation over personal injury alleged to be caused by subtle, low-level exposures. Each side presents witnesses whose scientific credentials are regarded as impeccable by the side the witnesses are supporting. Since the issues themselves tend to be trans-scientific, one can hardly decide the validity of the "scientific" assertions of either side's witnesses. Under the circumstances, one is probably justified in regarding a scientific witness no differently from any other witness: his or her credibility is judged by past record, behavior, and general demeanor, as well as by self-consistency of testimony. Such, at least, was the way in which Judge Patrick Kelly settled the Johnston v. United States case (U.S. District Court, District of Kansas, Wichita, filed Nov. 15, 1984, #81-1060), by impugning, on grounds no different from those one would invoke in an ordinary lawsuit, the competence if not the integrity of one side's scientific witnesses.

FINESSING UNCERTAINTY

Various approaches for finessing uncertainty can be identified. Two of these, the technological fix and invoking the principle of de minimis, are described briefly below without claim that these are the most important, let alone the only, approaches.

Technological Fix

Science cannot predict exactly the probability of a serious accident in a light-water reactor, or the likelihood that a radioactive waste canister in a depository will dissolve and release radioactivity to the environment.
Can one design reactors or waste cans for which the probability of such occurrences is zero, or at least which depend, for the prevention of such mishaps, on immutable laws of nature that can never fail, rather than on the incompletely reliable intervention of electromechanical devices? Surprisingly, this approach to nuclear safety has come into prominence only in the past five years. K. Hannerz (1983) in Sweden and H. Reutler and G. H. Lohnert (1983) in Germany have proposed reactor systems (an intrinsically safe light-water reactor and the modular high-temperature gas-cooled reactor, respectively) whose safety does not depend on active interventions but on passive, inherent characteristics. Though one cannot say that the probability of mischance has been reduced to zero, there is little doubt that the probabilities are several, perhaps three, orders of magnitude lower than the probabilities of mischance for existing reactors. To the extent that such reactors
embody the principle of inherent safety, their adoption would avoid much of the controversy over reactor safety, the Price-Anderson Act, repetition of the Three Mile Island accident, and so forth. In short, such a technical fix enables one largely to ignore the uncertainties in any prediction of core-melt probabilities.

The idea of incorporating inherent or passive safety in the design of chemical plants had been proposed, unbeknownst to the nuclear community, by Trevor A. Kletz (1984) of the Loughborough University of Technology in England in 1974, shortly after the disaster at the Flixborough cyclohexane plant, which killed 28 people. One of the main consequences of the Bhopal disaster may well be the incorporation of inherent safety into new chemical plants, which is, again, a way of finessing uncertainty in predicting failure probabilities.

The De Minimis Principle

A perfect technical fix, such as a totally safe reactor or a crash-proof car, is usually not available, at least at affordable cost. Some low levels of exposure to materials that are toxic at high levels are inevitable, even though we can never accurately establish the risk of such exposures. One way of dealing with this situation is to invoke the principle of de minimis. This principle, as Howard Adler and I showed in 1978, argues that for insults that occur naturally and to which the biosphere has always been exposed, and presumably to which it has adapted, one should not worry about any additional man-made exposure as long as the man-made exposure is small compared to the natural exposure (Adler and Weinberg, 1978). The basic idea here is that the natural level of a ubiquitous exposure (like cosmic radiation), if it is deleterious, cannot have been very deleterious, since in spite of its ubiquity the race has survived. Moreover, we concede that we do not know and can never know what the residual effect of natural exposure really is.
An additional exposure that is small compared to the natural background ought to be acceptable; at the very least, its deleterious effect, if any, can never be determined. Adler suggested that for radiation, whose natural background is well known, one might choose a de minimis level as the standard deviation of the natural background, which is about 20 percent of the mean background, that is, about 20 millirems per year. This value has been used as the Environmental Protection Agency's standard for exposure to the entire radiochemical fuel cycle.

We know more about the natural incidence and biological effects of radiation than we do for any other agent. It would be natural, therefore, to use the standard established for radiation as a standard for other agents. This
approach has been used by Westermark (1980) of Sweden, who has suggested that for naturally occurring carcinogens such as arsenic, chromium, and beryllium, one might choose a de minimis level to be, say, 10 percent of the natural background.

Clearly, a de minimis level will always be somewhat arbitrary. Nevertheless, it seems that unless such a level is established, we shall forever be involved in fruitless arguments, the only beneficiary of which will be the toxic tort lawyers. Could the principle of de minimis be applied in litigation in much the same way it might be applied to regulation; that is, if the exposure is below de minimis, then the blame is intrinsically unprovable and cannot be litigated? The legal de minimis level might be set higher than the regulatory de minimis; for example, the legal de minimis for radiation might be the background (since the BEIR-III report concedes that there is no way of knowing whether or not such levels are deleterious). The regulatory de minimis could justifiably be lower, simply on grounds of erring on the side of safety.

One approach might be to concede that there is some level of exposure that is "beyond demonstrable effect" (BDE). This defines a "trans-scientific" threshold. A de minimis level might then be established at some fraction, say one-tenth, of this BDE level. For example, if we take the previously quoted value of 100 millirems per year of low-LET (linear energy transfer) radiation as the BDE level for somatic effects, then a de minimis for low LET might be set at 10 millirems per year. Of course, such a procedure would evoke much controversy as to what the BDE level is or whether 10 is an ample safety factor. This example demonstrates, however, that at least in the case of low-level radiation, a scientific committee was able to agree on a BDE level. The safety factor of 10 cannot be adjudicated on scientific grounds.
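The arithmetic of deriving a de minimis level from a BDE level is trivial, but making it explicit shows exactly where the judgment enters; this sketch assumes the values quoted in the text, and the safety factor of 10 is a convention rather than a scientific result.

```python
def de_minimis_from_bde(bde_level, safety_factor=10):
    """Set a de minimis level at some fraction of the 'beyond demonstrable
    effect' (BDE) level. The safety factor of 10 is conventional, not
    scientifically adjudicable."""
    if safety_factor <= 0:
        raise ValueError("safety factor must be positive")
    return bde_level / safety_factor

# 100 millirems per year of low-LET radiation as the BDE level for
# somatic effects yields a de minimis level of 10 millirems per year.
print(de_minimis_from_bde(100))  # 10.0
```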
The most one can say is that tradition often supports a safety factor of 10; for example, the old standard for public exposure (500 millirems per year) was set at one-tenth of the tolerance level for workers (5,000 millirems per year).

Can a principle of de minimis be applied to accidents? The idea is that accidents that are sufficiently rare might be regarded somehow in the same category as acts of God, and compensated accordingly. We already recognize that natural disasters should be compensated by the society as a whole. One can argue that an accident whose occurrence requires an exceedingly unlikely sequence of untoward events might also be regarded as an act of God. Thus, the Price-Anderson Act (42 U.S.C. 2210) might be modified so that, quite explicitly, accidents whose consequences exceeded a certain level, and whose probability as estimated by PRA would be less than, say, 10^-9 per year, would be treated as acts of God. Compensation in excess of the amount stipulated in the revised act would be the responsibility of Congress. The cutoff for compensation, or for probabilities, would be negotiable, and perhaps would be revised every 10 years or so. One not entirely fanciful suggestion might be to set any probability of the order of 10^-7 to 10^-8 per year as a de minimis cutoff, this being the frequency at which the earth may have been visited by the cometary asteroids that may have caused the geologic extinctions.

CONCLUSIONS

The reader must be aware that, as in most such questions, identifying and characterizing the problem is easier than solving it. That the dilemma of the regulator and of the toxic tort judge is rooted in science's inability to predict rare events cannot be denied. How to get the regulator and the toxic tort judge off the horns of the dilemma is far from easy, and my two suggestions are offered tentatively and with diffidence.

Equally obvious is the intrinsic social dimension of the issue. In an open, litigious democracy such as ours, any regulation, any judicial decision can be appealed, and if the courts offer no redress, in principle Congress can; but these mechanisms are ponderous. The result seems to me to be a gradual slowing of our technological-social engine as it becomes more and more enmeshed in fruitless argument over irresolvable questions.

Western society was debilitated once before by such fruitless tilting with windmills. That was, of course, the devastating campaign against witches of the fourteenth to the early seventeenth centuries. As William Clark (1981) has put it so vividly, in this period society took for granted that death, disease, and crop failure could be caused by witches. To avoid such catastrophes one had to burn the witches responsible, and some million innocent witches were burned as a result. Finally, in 1610, the Spanish inquisitor Alonzo Salazar y Frias realized there was no demonstrated connection between catastrophe and witches. Though he did not prohibit their burning, he did prohibit the use of torture to extract confessions.
The burning of witches, and witch-hunting generally, declined precipitously. This story seems to capture the essence of our dilemma: the connection between low-level insult and bodily harm is probably as difficult to prove as is the connection between witches and failed crops. That our society nevertheless has allowed this issue to emerge as a serious social concern is an aberration, which in the modern context is hardly less fatuous than were the witch hunts of the Middle Ages. That dark phase in Western society died out only after several centuries. Let us hope our open, democratic society can regain its sense of proportion far sooner and can get on with managing the many real problems before us instead of wasting our energies on essentially insoluble and, by comparison, intrinsically unimportant problems.
REFERENCES

Adler, H. I., and A. M. Weinberg. 1978. An approach to setting radiation standards. Health Physics 34:719-720.

American Physical Society. 1975. Report to the American Physical Society by the Study Group on Light Water Reactor Safety. Reviews of Modern Physics 47, Supplement 1.

Ames, B. N. 1983. Dietary carcinogens and anticarcinogens: Oxygen radicals and degenerative diseases. Science 221:1256-1264.

Ben-David, J. 1978. Emergence of national traditions in the sociology of science: The United States and Great Britain. Sociological Inquiry 48(3-4):197-218.

Clark, W. C. 1981. Witches, Floods, and Wonder Drugs: Historical Perspectives on Risk Management. RR-81-003. Laxenburg, Austria: International Institute for Applied Systems Analysis.

Hannerz, K. 1983. Towards Intrinsically Safe Light Water Reactors. ORAU/IEA-83-2(M) Rev. Oak Ridge, Tenn.: Oak Ridge Associated Universities, Institute for Energy Analysis.

Haseman, J. K. 1983. Patterns of tumor incidence in two-year cancer bioassay feeding studies in Fischer 344 rats. Fundamental and Applied Toxicology 3:1-9.

Kletz, T. A. 1984. Cheaper, Safer Plants, or Wealth and Safety at Work: Notes on Inherently Safer and Simpler Plants. Rugby, England: Institution of Chemical Engineers.

National Research Council. 1972. The Effects on Populations of Exposure to Low Levels of Ionizing Radiation. Advisory Committee on the Biological Effects of Ionizing Radiations. Washington, D.C.: National Academy of Sciences.

National Research Council. 1980. The Effects on Populations of Exposure to Low Levels of Ionizing Radiation: 1980. Committee on the Biological Effects of Ionizing Radiations. Washington, D.C.: National Academy Press.

Okrent, D. 1981. Nuclear Reactor Safety: On the History of the Regulatory Process. Madison: University of Wisconsin Press.

Pinch, T. J., and W. E. Bijker. 1984. The social construction of facts and artefacts: Or how the sociology of science and the sociology of technology might benefit each other. Social Studies of Science 14:399-441.

Rasmussen, N. 1981. In Annals of the New York Academy of Sciences 365 (April 24):20-36.

Reutler, H., and G. H. Lohnert. 1983. The modular high temperature reactor. Nuclear Technology 62:22-30.

Ruckelshaus, W. D. 1985. Risk, science, and democracy. Issues in Science and Technology 1(3):19-38.

Totter, J. R. 1980. Spontaneous cancer and its possible relationship to oxygen metabolism. Proceedings of the National Academy of Sciences 77(4):1763-1767.

U.S. Nuclear Regulatory Commission. 1975. Reactor Safety Study: An Assessment of Accident Risks in U.S. Commercial Nuclear Power Plants. WASH-1400, NUREG-75/014. Washington, D.C.

U.S. Nuclear Regulatory Commission. 1978. Risk Assessment Review Group Report to the U.S. Nuclear Regulatory Commission. NUREG/CR-0400. Washington, D.C.

Weinberg, A. M., and J. B. Storer. 1985. On "ambiguous" carcinogens and their regulation. Risk Analysis 5(2):151-155.

Westermark, T. 1980. Persistent Genotoxic Wastes: An Attempt at a Risk Assessment. Stockholm, Sweden: Royal Institute of Technology.

Whittemore, A. 1983. Facts and values in risk analysis for environmental toxicants. Risk Analysis 3(1):23-33.