Appendix C

Risk: A Guide to Controversy

BARUCH FISCHHOFF

FOREWORD BY THE COMMITTEE

This appendix was written by Baruch Fischhoff to assist in the deliberations of the National Research Council's Committee on Risk Perception and Communication. It describes in some detail the complications involved in controversies over managing risks in which risk perception and risk communication play significant roles. It addresses these issues from the perspective of many years of research in psychology and other disciplines. The text of the committee's report addresses many of the same issues, and, not surprisingly, many of the same themes, although the focus of the report is more general. The committee did not debate all points made in the guide. Even though this appendix represents the views of only one member, the committee decided to include it because we believe the guide to be a valuable introduction to an extremely complicated literature.

PREFACE

This guide is intended to be used as a practical aid in applying general principles to understanding specific risk management controversies and their associated communications. It might be thought of as a user's guide to risk. Its form is that of a "diagnostic guide," showing participants and observers how to characterize risk controversies along five essential dimensions, such as "What are the (psychological) obstacles to laypeople's understanding of risks?" and "What are the limits to scientific estimates of riskiness?" Its style is intended to be nontechnical, thereby making the scientific literature on risk accessible to a general audience. It is hoped that the guide will help make risk controversies more comprehensible and help citizens and professional risk managers play more effective roles in them.

The guide was written for the committee by one of its members. Its substantive contents were considered by the committee in the course of its work, either in the form of published articles and books circulated to other committee members or in the form of issues deliberated at its meetings. As a document, the guide complements the conclusions of the committee's report.

CONTENTS

I INTRODUCTION
Usage
Some Cautions

II THE SCIENCE
What Are the Bounds of the Problem?
What Is the Hard Science Related to the Problem?
Adherence to Essential Rules of Science
How Does Judgment Affect the Risk Estimation Process?
Summary

III SCIENCE AND POLICY
Separating Facts and Values
Measuring Risk
Measuring Benefits
Summary

IV THE NATURE OF THE CONTROVERSY
The Distinction Between "Actual" and "Perceived" Risks Is Misconceived
Laypeople and Experts Are Speaking Different Languages
Laypeople and Experts Are Solving Different Problems
Debates Over Substance May Disguise Battles Over Form, and Vice Versa
Laypeople and Experts Disagree About What Is Feasible
Laypeople and Experts See the Facts Differently
Summary

V STRATEGIES FOR RISK COMMUNICATION
Concepts of Risk Communication
Some Simple Strategies
Conceptualizing Communication Programs
Evaluating Communication Programs
Summary

VI PSYCHOLOGICAL PRINCIPLES IN COMMUNICATION DESIGN
People Simplify
Once People's Minds Are Made Up, It Is Difficult to Change Them
People Remember What They See
People Cannot Readily Detect Omissions in the Evidence They Receive
People May Disagree More About What Risk Is Than About How Large It Is
People Have Difficulty Detecting Inconsistencies in Risk Disputes
Summary

VII CONCLUSION
Individual Learning
Societal Learning

BIBLIOGRAPHY

I INTRODUCTION

Risk management is a complex business. So are the controversies that it spawns. And so are the roles that risk communication must perform. In the face of such complexity, it is tempting to look for simplifying assumptions. Made explicit, these assumptions might be expressed as broad statements of the form, "What people really want is . . ."; "All that laypeople can understand is . . ."; or "Industry's communicators fail whenever they . . . ." Like other simplifications in life, such assumptions provide some short-term relief at the price of creating long-term complications. Overlooking complexities eventually leads to inexplicable events and ineffective actions.

On one level this guide might be used like a baseball scorecard detailing the players' identities and performance statistics (perhaps along with any unique features of the stadium, season, and rivalry). Like a ballgame, a risk controversy should be less confusing to spectators who know something about the players and their likely behavior under various circumstances. Thus, experts might respect the public more if they were better able to predict its behavior, even if they would prefer that the public behave otherwise. Similarly, understanding the basics of risk analysis might make disputes among technical experts seem less capricious to the lay public.

More ambitiously, such a guide might be used to facilitate effective action by the parties in risk controversies, like the Baseball Abstract (James, 1988) in the hands of a skilled manager. For example, the guide discusses how to determine what the public needs to know in particular risky situations. Being able to identify those needs may allow better focused risk communication, thereby using the public's limited time wisely and letting it know that the communicators really care about the problems that the public faces. Similarly, understanding the ethical values embedded in the definitions of ostensibly technical terms (e.g., risk, benefit, voluntary) can allow members of the public to ask more penetrating questions about whose interests a risk analysis serves. Realizing that different actors use a term like "risk" differently should allow communicators to remove that barrier to mutual understanding.

USAGE

The guide's audience includes all participants in and observers of risk management episodes involving communications. Its intent is to help government officials preparing to address citizens' groups, industry representatives hoping to site a hazardous facility without undue controversy, local activists trying to decide what information they need and whether existing communications meet those needs, and academics wondering how central their expertise is to a particular episode.

The premise of the guide is that risk communication cannot be understood in isolation. Rather, it is one component of complex social processes involving complex individuals. As a result, this fuller context needs to be understood before risk communication can be effectively transmitted or received. That context includes the following elements and questions:

The Science. What is the scientific basis of the controversy? What kinds of risks and benefits are at stake? How well are they understood? How controversial is the underlying science? Where does judgment enter the risk estimation process? How well is it to be trusted?

Science and Policy. In what ways does the nature of the science preempt the policymaking process (e.g., in the definition of key terms, like "risk" and "benefit"; in the norms of designing and reporting studies)? To what extent can issues of fact and of value be separated?

The Nature of the Controversy. Why is there a perceived need for risk communication? Does the controversy reflect just a disagreement about the magnitude of risks? Is controversy over risk a surrogate for controversy over other issues?

Strategies for Risk Communication. What are the goals of risk communication? How can communications be evaluated? What burden of responsibility do communicators bear for evaluating their communications, both before and after dissemination? What are the alternatives for designing risk communication programs? What are the strengths and weaknesses of different approaches? How can complementary approaches be combined? What nonscientific information is essential (e.g., the mandates of regulatory agencies, the reward schemes of scientists)?

Psychological Principles in Communication Design. What are the behavioral obstacles to effective risk communication? What kinds of scientific results do laypeople have difficulty understanding? How does emotion affect their interpretation of reported results? What presentations exacerbate (and ameliorate) these problems? How does personal experience with risks affect people's understanding?

SOME CAUTIONS

A diagnostic guide attempts to help users characterize a situation. To do so, it must define a range of possible situations, only one of which can be experienced at a particular time. As a result, the attempt to make one guide fit a large universe of risk management situations means that readers will initially have to read about many potential situations in order to locate the real situation that interests them. With practice, users should gain fluency with a diagnostic approach, making it easier to characterize specific situations. It is hoped that the full guide will be interesting enough to make the full picture seem worth knowing.

At no time, however, will diagnosis be simple or human behavior be completely predictable. All that this, or any other, diagnostic guide can hope to do is ensure that significant elements of a social-political-psychological process are not overlooked. For a more detailed treatment, one must look to the underlying research literature for methods and results. To that end, the guide provides numerous references to that literature, as well as some discussion of its strengths and limitations.

To the extent that a guide is useful for designing and interpreting a communication process, it may also be useful for manipulating that process. In this regard, the material it presents is no different from any other scientific knowledge. This possibility imposes a responsibility to make research equally available to all parties. Therefore, even though this guide may suggest ways to bias the process, it should also make it easier to detect and defuse such attempts.

II THE SCIENCE

By definition, all risk controversies concern the risks associated with some hazard. However, as argued in the text of the report and in this diagnostic guide, few controversies are only about the size of those risks. Indeed, in many cases, the risks prove to be a side issue, upon which are hung disagreements about the size and distribution of benefits or about the allocation of political power in a society. In all cases, though, some understanding of the science of risk is needed, if only to establish that a rough understanding of the magnitude of the risk is all that one needs for effective participation in the risk debate. Following the text, the term "hazard" is used to describe any activity or technology that produces a risk. This usage should not obscure the fact that hazards often produce benefits as well as risks.

Understanding the science associated with a hazard requires a series of essential steps. The first is identifying the scope of the problem under consideration, in the sense of identifying the set of factors that determine the magnitude of the risks and benefits produced by an activity or technology. The second step is identifying the set of widely accepted scientific "facts" that can be applied to the problem; even when laypeople cannot understand the science underlying these facts, they may at least be able to ensure that such accepted wisdom is not contradicted or ignored in the debate over a risk. The third step in understanding the science of risk is knowing how it depends on the educated intuitions of scientists, rather than on accepted hard facts; although these may be the judgments of trained experts, they still need to be recognized as matters of conjecture that are both more likely to be overturned than published (and replicated) results and more vulnerable to the vagaries of psychological processes.

WHAT ARE THE BOUNDS OF THE PROBLEM?

The science learned in school offers relatively tidy problems. The typical exercise in, say, physics gives all the facts needed for its solution and nothing but those facts. The difficulty of such problems for students comes in assembling those facts in a way that provides the right answer. (In more advanced classes, one may have to bring some general facts to bear as well.)

The same assembly problem arises when analyzing the risks and benefits of a hazard. Scientists must discover how its pieces fit together. They must also figure out what the pieces are. For example, what factors can influence the reliability of a nuclear power plant? Or, whose interests must be considered when assessing the benefits of its operation? Or, which alternative ways of generating electricity are realistic possibilities? The scientists responsible for any piece of a risk problem must face a set of such issues before beginning their work. Laypeople trying to follow a risk debate must understand how various groups of scientists have defined their pieces of the problem. And, as mentioned in the report, even the most accomplished of scientists are laypeople when it comes to any aspects of a risk debate outside the range of their trained expertise.

The difficulties of determining the scope of a risk debate emerge quite clearly when one considers the situation of a reporter assigned to cover a risk story. The difficult part of getting most environmental stories is that no one person has the entire story to give. Such stories typically involve diverse kinds of expertise, so that a thorough journalist might have to interview specialists in toxicology, epidemiology, economics, groundwater movement, meteorology, and emergency evacuation, not to mention a variety of local, state, and federal officials concerned with public health, civil defense, education, and transportation.

Even if a reporter consults with all the relevant experts, there is no assurance of complete coverage. For some aspects of some hazards, no one may be responsible. For example, no evacuation plans may exist for residential areas that are packed "hopelessly" close to an industrial facility. No one may be capable of resolving the jurisdictional conflicts when a train with military cargo derails near a reservoir just outside a major population center. There may be no scientific expertise anywhere for measuring the long-term neurological risks of a new chemical.

Even when there is a central address for questions, those occupying it may not be empowered to take firm action (e.g., banning or exonerating a chemical) or to provide clear-cut answers to personal questions (e.g., "What should I do?" or "What should I tell my children?"). Often those who have the relevant information refuse to divulge it because it might reveal proprietary secrets or turn public opinion against their cause.

Having to piece together a story from multiple sources, even recalcitrant ones, is hardly new to journalists. What is new about many environmental stories is that no one knows what all of the pieces are or realizes the limits of their own understanding.

Experts tend to exaggerate the centrality of their roles. Toxicologists may assume that everyone needs to know what they found when feeding rats a potential carcinogen or when testing groundwater near a landfill, even though additional information is always needed to make use of those results (e.g., physiological differences among species, routes of human exposure, compensating benefits of the exposure).

Another source of confusion is the failure of experts to remind laypeople of the acknowledged limits of the experts' craft. For example, cost-benefit analysts seldom remind readers that the calculations consider only total costs and benefits and, hence, ignore questions of who pays the costs and who receives the benefits (Bentkover et al., 1985; Smith and Desvousges, 1986). Finally, environmental management is an evolving field that is only beginning to establish comprehensive training programs and methods, making it hard for anyone to know what the full picture is and how their work fits into it.

An enterprising journalist with a modicum of technical knowledge should be able to get specialists to tell their stories in fairly plain English and to cope with moderate evasiveness or manipulation. However, what is the journalist to do when the experts do not know what they do not know? One obvious solution is to talk to several experts with maximally diverse backgrounds. Yet, sometimes such a perfect mix is hard to find. Available experts can all have common limitations of perspective.

Another solution is to use a checklist of issues that need to be covered in any comprehensive environmental story. Scientists themselves use such lists to ensure that their own work is properly performed, documented, and reported. Such a protocol does not create knowledge for the expert any more than it would provide an education to the journalist. It does, however, help users exploit all they know, and acknowledge what they leave out. Some protocols that can be used in looking at risk analyses are the causal model, the fault tree, a materials and energy flow diagram, and a risk analysis checklist.

FIGURE II.1 The causal chain of hazard evolution. The top line indicates seven stages of hazard development, from the earliest (left) to the final stage (right). These stages are expressed generically in the top of each box and in terms of a sample motor vehicle accident in the bottom. The stages are linked by causal pathways denoted by triangles. Six control stages are linked to pathways between hazard states by vertical arrows. Each is described generically as well as by specific control actions. Thus control stage 2 would read: "You can modify technology choice by substituting public transit for automobile use and thus block the further evolution of the motor vehicle accident sequence arising out of automobile use." The time dimension refers to the ordering of a specific hazard sequence; it does not necessarily indicate the time scale of managerial action. Thus, from a managerial point of view, the occurrence of certain hazard consequences may trigger control actions that affect events earlier in the hazard sequence. SOURCE: Figure, Bick et al., 1979; caption, Fischhoff, Lichtenstein, et al., 1981.

The Causal Model

The causal model of hazard creation is a way to organize the full set of factors leading to and from an environmental mishap, both when getting the story and when telling it. The example in Figure II.1 is an automobile accident, traced from the need for transportation to the secondary consequence of the collision. Between each stage, there is some opportunity for an intervention to reduce the risk of an accident. By organizing information about the hazard in a chronological sequence, this scheme helps ensure that nothing is left out, such as the deep-seated causes of the mishap (to the left) and its long-range consequences (to the right).

Applied to an "irregular event" at a nuclear power station, for example, this protocol would work to remind a reporter of such (left-handed) causes as the need for energy and the need to protect the large capital investment in that industry and such (right-handed) consequences as the costs of retooling other plants designed like the affected plant or the need to burn more fossil fuels if the plant is taken off line (without compensating reductions in energy consumption).
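For readers who find a concrete representation helpful, the bookkeeping that the causal model imposes can be sketched as a small data structure. The Python fragment below is purely illustrative (the class, method, and field names are assumptions of this sketch, not part of Bick et al.'s scheme); it stores the ordered hazard states of Figure II.1 together with the control action that can block each pathway between adjacent states.

```python
from dataclasses import dataclass, field

@dataclass
class HazardChain:
    """A hazard as an ordered chain of states, earliest cause first."""
    stages: list                                   # the seven hazard states
    controls: dict = field(default_factory=dict)   # pathway index -> control action

    def describe_control(self, i: int) -> str:
        """Render control stage i in the style of the Figure II.1 caption."""
        return (f"You can {self.controls[i]} and thus block the hazard's "
                f"evolution between '{self.stages[i]}' and '{self.stages[i + 1]}'.")

# The motor vehicle example from Figure II.1.
crash = HazardChain(
    stages=[
        "human needs (food)",
        "human wants (shopping)",
        "choice of technology (use automobile)",
        "initiating event (lose control)",
        "outcome (head-on collision)",
        "consequences (head injuries)",
        "higher-order consequences (death)",
    ],
    controls={
        0: "modify the want by changing life style",
        1: "modify technology choice by substituting public transit",
        2: "block the initiating event with warning signs",
        3: "block the outcome with median dividers",
        4: "block the consequences with occupant restraint",
        5: "block higher-order consequences with emergency medical aid",
    },
)

print(crash.describe_control(1))  # the caption's "control stage 2" example
```

Walking a story's facts through such a chain, from deep-seated causes to higher-order consequences, is exactly the completeness check the protocol is meant to enforce.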

The Fault Tree

A variant on this procedure is the fault tree (Figure II.2), which lays out the sequence of events that must occur for a particular accident to happen (Green and Bourne, 1972; U.S. Nuclear Regulatory Commission, 1983). Actual fault trees, which can be vastly more involved than this example, are commonly used to organize the thinking and to coordinate the work of those designing complex technologies such as nuclear power facilities and chemical plants. At times, they are also used to estimate the overall riskiness of such facilities. However, the numbers produced are typically quite imprecise (U.S. Nuclear Regulatory Commission, 1978).

In effect, fault trees break open the right-handed parts of a causal model for detailed treatment. They can help a reporter to

FIGURE II.2 Fault tree indicating the possible ways that radioactivity could be released from deposited wastes after the closure of a repository. SOURCE: Slovic and Fischhoff, 1983.
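Since the text notes that fault trees are sometimes used to estimate the overall riskiness of a facility, a minimal sketch of that arithmetic may help. The Python fragment below is illustrative only: the gate structure loosely follows Figure II.2, every probability is a hypothetical placeholder rather than a value from any repository analysis, and the basic events are assumed independent.

```python
from functools import reduce

def p_and(probs):
    """AND gate: all contributing events must occur (independence assumed)."""
    return reduce(lambda acc, p: acc * p, probs, 1.0)

def p_or(probs):
    """OR gate: any one pathway suffices (independence assumed)."""
    return 1.0 - reduce(lambda acc, p: acc * (1.0 - p), probs, 1.0)

# Hypothetical annual probabilities for the release pathways of Figure II.2.
pathways = {
    "impact of large meteorite or nuclear weapon": 1e-9,
    # Groundwater transport requires both a breach (faulty sealing OR
    # accidental drilling) AND transport to the biosphere.
    "transportation by groundwater": p_and([p_or([1e-4, 1e-3]), 1e-2]),
    "volcanic activity": 1e-7,
    "erosion (uplift, glacial, stream)": 1e-6,
}

# Top event: release of radioactive wastes to the biosphere by any pathway.
top = p_or(pathways.values())
print(f"P(release to biosphere per year) ~ {top:.1e}")
```

The sketch also hints at why the resulting numbers are so imprecise: the top-event estimate inherits whatever error attaches to the rare-event probabilities at the leaves, along with the independence assumption built into each gate.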

BIBLIOGRAPHY

Alfidi, J. 1971. Informed consent: A study of patient reaction. Journal of the American Medical Association 216:1325-1329.
Appelbaum, P. S., C. W. Lidz, and A. Meisel. 1987. Informed Consent: Legal Theory and Clinical Practice. New York: Oxford University Press.
Appelbaum, R. P. 1977. The future is made, not predicted: Technocratic planners vs. public interests. Society (May/June):49-53.
Applied Management Sciences. 1978. Survey of consumer perceptions of patient package inserts for oral contraceptives. NTIS No. PB-248-740. Washington, D.C.: Applied Management Sciences.
Armstrong, J. S. 1975. Tom Swift and his electric regression analysis machine: 1973. Psychological Reports 36:806.
Atkinson, R. C., R. J. Herrnstein, G. Lindzey, and R. D. Luce. 1988. Stevens' Handbook of Experimental Psychology. New York: Wiley Interscience.
Bar-Hillel, M. 1973. On the subjective probability of compound events. Organizational Behavior and Human Performance 9:396-406.
Bar-Hillel, M. 1980. The base rate fallacy in probability judgments. Acta Psychologica 44:211-233.
Barber, W. C. 1979. Controversy plagues setting of environmental standards. Chemical and Engineering News 57(17):34-37.
Barraclough, G. 1972. Mandarins and Nazis. New York Review of Books 19(6):37-42.
Bazelon, D. L. 1979. Risk and responsibility. Science 205(4403):277-280.
Bentkover, J. D., V. T. Covello, and J. Mumpower, eds. 1985. Benefits Assessment: The State of the Art. Dordrecht, Holland: D. Reidel.
Berkson, J., T. B. Magath, and M. Hurn. 1939-1940. The error of estimate of the blood cell count as made with the hemocytometer. American Journal of Physiology 128:309-323.
Beyth-Marom, R. 1982. How probable is probable? Journal of Forecasting 1:257-269.
Bick, T., C. Hohenemser, and R. W. Kates. 1979. Target: Highway risks. Environment 21(2):7-15, 29-38.
Bickerstaffe, J., and D. Peace. 1980. Can there be a consensus on nuclear power? Social Studies of Science 10:309-344.
Bradburn, N. M., and S. Sudman. 1979. Improving Interview Method and Questionnaire Design. San Francisco: Jossey-Bass.
Brokensha, D. W., D. M. Warren, and O. Werner. 1980. Indigenous Knowledge: Systems and Development. Lanham, Md.: University Press of America.
Brookshire, D. S., B. C. Ives, and W. D. Schulze. 1976. The valuation of aesthetic preferences. Journal of Environmental Economics and Management 3:325-346.
Brown, R. 1965. Social Psychology. Glencoe, Ill.: Free Press.
Burton, I., R. W. Kates, and G. F. White. 1978. The Environment as Hazard. New York: Oxford University Press.
Callen, E. 1976. The science court. Science 193:950-951.
Campbell, D. T. 1975. Degrees of freedom and the case study. Comparative Political Studies 8:178-193.

Campbell, D. T., and A. Erlebacher. 1970. How regression artifacts in quasi-experimental evaluations can mistakenly make compensatory education look harmful. In Compensatory Education: A National Debate, Vol. 3, Disadvantaged Child, J. Hellmuth, ed. New York: Brunner/Mazel.
Campen, J. 1985. Benefit-Cost and Beyond. Cambridge, Mass.: Ballinger.
Carterette, E. C., and M. P. Friedman. 1974. Handbook of Perception, Vol. 2. New York: Academic Press.
Chapman, L. J., and J. P. Chapman. 1969. Illusory correlation as an obstacle to the use of valid psychodiagnostic signs. Journal of Abnormal Psychology 74:271-280.
Chemical and Engineering News. 1980. A look at human error. 58(18):82.
Cohen, B., and I. Lee. 1979. A catalog of risks. Health Physics 36:707-722.
Cohen, J. 1962. The statistical power of abnormal-social psychological research: A review. Journal of Abnormal and Social Psychology 65(3):145-153.
Commoner, B. 1979. The Politics of Energy. New York: Knopf.
Conn, W. D., ed. 1983. Energy and Material Resources. Boulder, Colo.: Westview.
Cotgrove, A. 1982. Catastrophe or Cornucopia? The Environment, Politics and the Future. New York: John Wiley & Sons.
Covello, V. T., P. M. Sandman, and P. Slovic. 1988. Risk Communication, Risk Statistics, and Risk Comparisons: A Manual for Plant Managers. Washington, D.C.: Chemical Manufacturers Association.
Covello, V., D. von Winterfeldt, and P. Slovic. 1986. Risk communication: A review of the literature. Risk Abstracts 3(4):171-182.
Crask, M. R., and W. D. Perreault, Jr. 1977. Validation of discriminant analysis in marketing research. Journal of Marketing Research 14:60-68.
Crouch, E. A. C., and R. Wilson. 1982. Risk/Benefit Analysis. Cambridge, Mass.: Ballinger.
Cummings, R. G., D. S. Brookshire, and W. D. Schulze, eds. 1986. Valuing Environmental Goods: An Assessment of the Contingent Valuation Method. Totowa, N.J.: Rowman & Allanheld.
Davidshofer, I. O. 1976. Risk-taking and vocational choice: Reevaluation. Journal of Counseling Psychology 23:151-154.
Davis, J. 1969. Group Performance. Reading, Mass.: Addison-Wesley.
Dietz, T. M., and R. W. Rycroft. 1987. The Risk Professionals. Washington, D.C.: Russell Sage Foundation.
Doern, G. B. 1978. Science and technology in the nuclear regulatory process: The case of Canadian uranium miners. Canadian Public Administration 21:51-82.
Dreman, D. 1979. Contrarian Investment Strategy. New York: Random House.
Driver, B., G. Peterson, and R. Gregory, eds. 1988. Evaluative Amenity Resources. New York: Venture.
Dunlap, T. R. 1978. Science as a guide in regulating technology: The case of DDT in the United States. Social Studies of Science 8:265-285.
Eiser, J. R., ed. 1982. Social Psychology and Behavioral Medicine. New York: John Wiley & Sons.
Elliot, G. R., and C. Eisdorfer. 1982. Stress and Human Health. New York: Springer-Verlag.

Fairley, W. B. 1977. Evaluating the "small" probability of a catastrophic accident from the marine transportation of liquefied natural gas. In Statistics and Public Policy, W. B. Fairley and F. Mosteller, eds. Reading, Mass.: Addison-Wesley.
Feller, W. 1968. An Introduction to Probability Theory and Its Applications, 3d ed., Vol. 1. New York: John Wiley & Sons.
Fineberg, H. V. 1988. Education to prevent AIDS: Prospects and obstacles. Science 239(4840):592-596.
Fischer, D. H. 1970. Historians' Fallacies. New York: Harper & Row.
Fischhoff, B. 1980. For those condemned to study the past: Reflections on historical judgment. In New Directions for Methodology of Behavior Science: Fallible Judgment in Behavioral Research, R. A. Shweder and D. W. Fiske, eds. San Francisco: Jossey-Bass.
Fischhoff, B. 1981. Informed consent for transient nuclear workers. In Equity Issues in Nuclear Waste Management, R. Kasperson and R. W. Kates, eds. Cambridge, Mass.: Oelgeschlager, Gunn and Hain.
Fischhoff, B. 1983. "Acceptable risk": The case of nuclear power. Journal of Policy Analysis and Management 2(4):559-575.
Fischhoff, B. 1984. Setting standards: A systematic approach to managing public health and safety risks. Management Science 30:823-843.
Fischhoff, B. 1985a. Managing risk perceptions. Issues in Science and Technology 2(1):83-96.
Fischhoff, B. 1985b. Protocols for environmental reporting: What to ask the experts. The Journalist (Winter):11-15.
Fischhoff, B. 1985c. Risk analysis demystified. NCAP News (Winter):30-33.
Fischhoff, B. 1987. Treating the public with risk communications: A public health perspective. Science, Technology, and Human Values 12:3-19.
Fischhoff, B. 1988. Judgment and decision making. In The Psychology of Human Thought, R. J. Sternberg and E. E. Smith, eds. New York: Cambridge University Press.
Fischhoff, B., and L. A. Cox, Jr. 1985. Conceptual framework for regulatory benefits assessment. In Benefits Assessment: The State of the Art, J. D. Bentkover, V. T. Covello, and J. Mumpower, eds. Dordrecht, Holland: D. Reidel.
Fischhoff, B., and L. Furby. 1988. Measuring values: A conceptual framework for interpretive transactions with special reference to contingent valuation of visibility. Journal of Risk and Uncertainty 1:147-184.
Fischhoff, B., and D. MacGregor. 1983. Judged lethality: How much people seem to know depends upon how they are asked. Risk Analysis 3:229-236.
Fischhoff, B., and O. Svenson. 1987. Perceived risks of radionuclides: Understanding public understanding. In Radionuclides in the Food Chain, G. Schmidt, ed. New York: Praeger.
Fischhoff, B., L. Furby, and R. Gregory. 1987. Evaluating voluntary risks of injury. Accident Analysis and Prevention 19(1):51-62.
Fischhoff, B., S. Lichtenstein, P. Slovic, S. L. Derby, and R. L. Keeney. 1981. Acceptable Risk. New York: Cambridge University Press.
Fischhoff, B., P. Slovic, and S. Lichtenstein. 1978. Fault trees: Sensitivity of assessed failure probabilities to problem representation. Journal of Experimental Psychology: Human Perception and Performance 4:330-344.

Fischhoff, B., P. Slovic, and S. Lichtenstein. 1980. Knowing what you want: Measuring labile values. In Cognitive Processes in Choice and Decision Behavior, T. Wallsten, ed. Hillsdale, N.J.: Erlbaum.
Fischhoff, B., P. Slovic, and S. Lichtenstein. 1981. Lay foibles and expert fables in judgments about risk. In Progress in Resource Management and Environmental Planning, T. O'Riordan and R. K. Turner, eds. New York: John Wiley & Sons.
Fischhoff, B., P. Slovic, S. Lichtenstein, S. Read, and B. Combs. 1978. How safe is safe enough? A psychometric study of attitudes towards technological risks and benefits. Policy Sciences 9:127-152.
Fischhoff, B., S. R. Watson, and C. Hope. 1984. Defining risk. Policy Sciences 17:123-129.
Fiske, S., and S. Taylor. 1984. Social Cognition. Reading, Mass.: Addison-Wesley.
Frankel, C. 1974. The rights of nature. In When Values Conflict, C. Schelling, J. Voss, and L. Tribe, eds. Cambridge, Mass.: Ballinger.
Friedman, S. M. 1981. Blueprint for breakdown: Three Mile Island and the media before the accident. Journal of Communication 31:116-129.
Furby, L., and B. Fischhoff. In press. Rape self-defense strategies: A review of their effectiveness. Victimology.
Gamble, D. J. 1978. The Berger Inquiry: An impact assessment process. Science 199(4332):946-951.
Gilovich, T., R. Vallone, and A. Tversky. 1985. The hot hand in basketball: On the misperception of random sequences. Cognitive Psychology 17:295-314.
Gotchy, R. L. 1983. Health risks from the nuclear fuel cycle. In Health Risks of Energy Technologies, C. C. Travis and E. L. Etnier, eds. Boulder, Colo.: Westview.
Green, A. E., and A. J. Bourne. 1972. Reliability Technology. New York: Wiley Interscience.
Hackney, J. D., and W. S. Linn. 1984. Human toxicology and risk assessment. In Handbook on Risk Assessment. Washington, D.C.: National Science Foundation.
Hammond, K. R., and L. Adelman. 1976. Science, values and human judgment. Science 194:389-396.
Hance, B. J., C. Chess, and P. M. Sandman. 1988. Improving Dialogue with Communities: A Risk Communication Manual for Government. Trenton: Division of Science and Research Risk Communication Unit, New Jersey Department of Environmental Protection.
Handler, P. 1980. Public doubts about science. Science 208(4448):1093.
Hanley, J. 1980. The silence of scientists. Chemical and Engineering News 58(12):5.
Harris, L. 1980. Risk in a complex society. Public opinion survey conducted for Marsh and McLennan Companies, Inc.
Harriss, R., and C. Hohenemser. 1978. Mercury: Measuring and managing risk. Environment 20(9).
Hasher, L., and R. T. Zacks. 1984. Automatic and effortful processes in memory. Journal of Experimental Psychology: General 108:356-388.
Henrion, M., and B. Fischhoff. 1986. Assessing uncertainty in physical constants. American Journal of Physics 54(9):791-798.
Henshel, R. L. 1975. Effects of disciplinary prestige on predictive accuracy: Distortions from feedback loops. Futures 7:92-196.

Herbert, J. H., L. Swanson, and P. Reddy. 1979. A risky business. Environment 21(6):28-33.
Hershey, J. C., and P. J. H. Schoemaker. 1980. Risk taking and problem context in the domain of losses: An expected utility analysis. Journal of Risk and Insurance 47:111-132.
Hirokawa, R. Y., and M. S. Poole. 1986. Communication and Group Decision Making. Beverly Hills, Calif.: Sage.
Hohenemser, K. H. 1975. The failsafe risk. Environment 17(1):6-10.
Holden, C. 1980. Love Canal residents under stress. Science 208:1242-1244.
Hovland, C. I., I. L. Janis, and H. H. Kelley. 1953. Communication and Persuasion: Psychological Studies of Opinion Change. New Haven, Conn.: Yale University Press.
Hynes, M., and E. Vanmarcke. 1976. Reliability of embankment performance prediction. In Proceedings of the ASCE Engineering Mechanics Division Specialty Conference. Waterloo, Ontario, Canada: University of Waterloo Press.
Ingram, M. J., D. J. Underhill, and T. M. L. Wigley. 1978. Historical climatology. Nature 276:329-334.
Inhaber, H. 1979. Risk with energy from conventional and nonconventional sources. Science 203(4382):718-723.
Institute of Medicine. 1986. Confronting AIDS: Directions for Public Health, Health Care, and Research. Washington, D.C.: National Academy Press.
James, W. 1988. Baseball Abstract. New York: Ballantine.
Janis, I. L., ed. 1982. Counseling on Personal Decisions. New Haven, Conn.: Yale University Press.
Jennergren, L. P., and R. L. Keeney. 1982. Risk assessment. In Handbook of Applied Systems Analysis. Laxenburg, Austria: International Institute of Applied Systems Analysis.
Johnson, B. B., and V. T. Covello, eds. 1987. The Social and Cultural Construction of Risk: Essays on Risk Selection and Perception. Dordrecht, Holland: D. Reidel.
Joksimovich, V. 1984. Models in risk assessment for hazard characterization. In Handbook of Risk Assessment. Washington, D.C.: National Science Foundation.
Joubert, P., and L. Lasagna. 1975. Commentary: Patient package inserts. Clinical Pharmacology and Therapeutics 18(5):507-513.
Kadlec, R. 1984. Field and laboratory event investigation for hazard characterization. In Handbook of Risk Assessment. Washington, D.C.: National Science Foundation.
Kahneman, D., and A. Tversky. 1972. Subjective probability: A judgment of representativeness. Cognitive Psychology 3:430-454.
Kasperson, R. 1986. Six propositions on public participation and their relevance for risk communication. Risk Analysis 6(3):275-281.
Keeney, R. L. 1980. Siting Energy Facilities. New York: Academic Press.
Keeney, R. L., and H. Raiffa. 1976. Decisions with Multiple Objectives: Preferences and Value Tradeoffs. New York: John Wiley & Sons.
Kolata, G. B. 1980. Love Canal: False alarm caused by botched study. Science 208(4449):1239-1242.
Koriat, A., S. Lichtenstein, and B. Fischhoff. 1980. Reasons for confidence. Journal of Experimental Psychology: Human Learning and Memory 6:107-118.

Krohn, W., and P. Weingart. 1987. Commentary: Nuclear power as a social experiment: European political fallout from the Chernobyl meltdown. Science, Technology, and Human Values 12(2):52-58.
Kunce, J. T., D. W. Cook, and D. E. Miller. 1975. Random variables and correlational overkill. Educational and Psychological Measurement 35:529-534.
Kunreuther, H., R. Ginsberg, L. Miller, P. Sagi, P. Slovic, B. Borkan, and N. Katz. 1978. Disaster Insurance Protection. New York: John Wiley & Sons.
Lachman, R., J. T. Lachman, and E. C. Butterfield. 1979. Cognitive Psychology and Information Processing. Hillsdale, N.J.: Erlbaum.
Lakatos, I. 1970. Falsification and scientific research programmes. In Criticism and the Growth of Scientific Knowledge, I. Lakatos and A. Musgrave, eds. New York: Cambridge University Press.
Lanir, Z. 1982. Strategic Surprises. Tel Aviv, Israel: Hakibbutz Hameuchad.
Lave, L. B. 1978. Ambiguity and inconsistency in attitudes toward risk: A simple model. Pp. 108-114 in Proceedings of the Society for General Systems Research Annual Meeting. Louisville, Ky.: Society for General Systems Research.
Lawless, E. W. 1977. Technology and Social Shock. New Brunswick, N.J.: Rutgers University Press.
Lazarsfeld, P. 1949. The American soldier: An expository review. Public Opinion Quarterly 13:377-404.
Levine, M. 1974. Scientific method and the adversary model: Some preliminary thoughts. American Psychologist 29:661-716.
Lichtenstein, S., and B. Fischhoff. 1980. Training for calibration. Organizational Behavior and Human Performance 26:149-171.
Lichtenstein, S., B. Fischhoff, and L. D. Phillips. 1982. Calibration of probabilities: The state of the art. In Judgment Under Uncertainty: Heuristics and Biases, D. Kahneman, P. Slovic, and A. Tversky, eds. New York: Cambridge University Press.
Lichtenstein, S., P. Slovic, B. Fischhoff, M. Layman, and B. Combs. 1978. Judged frequency of lethal events. Journal of Experimental Psychology: Human Learning and Memory 4:551-578.
Lindman, H. G., and W. Edwards. 1961. Supplementary report: Unlearning the gambler's fallacy. Journal of Experimental Psychology 62:630.
Linville, P., B. Fischhoff, and G. Fischer. 1988. Judgments of AIDS Risks. Pittsburgh, Pa.: Carnegie-Mellon University, Department of Social and Decision Sciences.
MacLean, D. 1987. Understanding the nuclear power controversy. In Scientific Controversies: Case Studies in the Resolution and Closure of Disputes in Science and Technology, H. T. Engelhardt, Jr., and A. L. Caplan, eds. New York: Cambridge University Press.
Markovic, M. 1970. Social determinism and freedom. In Mind, Science and History, H. E. Keifer and M. K. Munitz, eds. Albany: State University of New York Press.
Martin, E. 1980. Surveys as Social Indicators: Problems in Monitoring Trends. Chapel Hill: Institute for Research in Social Science, University of North Carolina.
Mazur, A. 1973. Disputes between experts. Minerva 11:243-262.

Mazur, A. 1981. The Dynamics of Technical Controversy. Washington, D.C.: Communications Press.
Mazur, A., A. A. Marino, and R. O. Becker. 1979. Separating factual disputes from value disputes in controversies over technology. Technology in Society 1:229-237.
McGrath, P. E. 1974. Radioactive Waste Management: Potentials and Hazards From a Risk Point of View. Report EUR FNR-1204 (KFK 1992). Karlsruhe, West Germany: US-EURATOM Fast Reactor Program.
McNeil, B. J., R. Weichselbaum, and S. G. Pauker. 1978. The fallacy of the 5-year survival rate in lung cancer. New England Journal of Medicine 299:1397-1401.
Morgan, M. 1986. Conflict and confusion: What rape prevention experts are telling women. Sexual Coercion and Assault 1(5):160-168.
Murphy, A. H., and B. G. Brown. 1983. Forecast terminology: Composition and interpretation of public weather forecasts. Bulletin of the American Meteorological Society 64:13-22.
Murphy, A. H., and R. L. Winkler. 1984. Probability of precipitation forecasts. Journal of the American Statistical Association 79:391-400.
National Research Council. 1976. Surveying Crime. Washington, D.C.: National Academy Press.
National Research Council. 1982. Survey Measure of Subjective Phenomena. Washington, D.C.: National Academy Press.
National Research Council. 1983a. Priority Mechanisms for Toxic Chemicals. Washington, D.C.: National Academy Press.
National Research Council. 1983b. Risk Assessment in the Federal Government: Managing the Process. Washington, D.C.: National Academy Press.
Nelkin, D. 1977. Technological Decisions and Democracy. Beverly Hills, Calif.: Sage.
Nelkin, D., ed. 1984. Controversy: Politics of Technical Decisions. Beverly Hills, Calif.: Sage.
Neyman, J. 1979. Probability models in medicine and biology: Avenues for their validation for humans in real life. Berkeley: University of California, Statistical Laboratory.
Nisbett, R. E., and L. Ross. 1980. Human Inference: Strategies and Shortcomings of Social Judgment. Englewood Cliffs, N.J.: Prentice-Hall.
Northwest Coalition for Alternatives to Pesticides. 1985. Position Document: Risk Analysis. NCAP News (Winter):33.
Office of Science and Technology Policy. 1984. Chemical carcinogens: Review of the science and its associated principles. Federal Register 49(100):21594-21661.
O'Flaherty, E. J. 1984. Pharmacokinetic methods in risk assessment. In Handbook of Risk Assessment. Washington, D.C.: National Science Foundation.
O'Leary, M. K., W. D. Coplin, H. B. Shapiro, and D. Dean. 1974. The quest for relevance. International Studies Quarterly 18:211-237.
Ostberg, G., H. Hoffstedt, G. Holm, B. Klingernstierna, B. Rydnert, V. Samsonowitz, and L. Sjoberg. 1977. Inconceivable Events in Handling Material in Heavy Mechanical Engineering Industry. Stockholm, Sweden: National Defense Research Institute.
Otway, H. J., and D. von Winterfeldt. 1982. Beyond acceptable risk: On the social acceptability of technologies. Policy Sciences 14:247-256.

Page, T. 1978. A generic view of toxic chemicals and similar risks. Ecology Law Quarterly 7:207-243.
Page, T. 1981. A framework for unreasonable risk in the Toxic Substances Control Act. In Carcinogenic Risk Assessment, R. Nicholson, ed. New York: New York Academy of Sciences.
Parducci, A. 1974. Contextual effects: A range-frequency analysis. In Handbook of Perception, Vol. 2, E. C. Carterette and M. P. Friedman, eds. New York: Academic Press.
Payne, S. L. 1952. The Art of Asking Questions. Princeton, N.J.: Princeton University Press.
Pearce, D. W. 1979. Social cost-benefit analysis and nuclear futures. In Energy Risk Management, G. T. Goodman and W. D. Rowe, eds. New York: Academic Press.
Peterson, C. R., and L. R. Beach. 1967. Man as an intuitive statistician. Psychological Bulletin 69(1):29-46.
Peto, R. 1980. Distorting the epidemiology of cancer. Nature 284:297-300.
Pew, R. D., C. Miller, and C. E. Feeher. 1982. Evaluation of Proposed Control Room Improvements Through Analysis of Critical Operator Decisions. Palo Alto, Calif.: Electric Power Research Institute.
Pinder, G. F. 1984. Groundwater contaminant transport modeling. Environmental Science and Technology 18(4):108A-114A.
Poulton, E. C. 1968. The new psychophysics: Six models of magnitude estimation. Psychological Bulletin 69:1-19.
Poulton, E. C. 1977. Quantitative subjective assessments are almost always biased, sometimes completely misleading. British Journal of Psychology 68:409-421.
President's Commission on the Accident at Three Mile Island. 1979. Report of the President's Commission on the Accident at Three Mile Island. Washington, D.C.: U.S. Government Printing Office.
Rayner, S., and R. Cantor. 1987. How fair is safe enough?: The cultural approach to societal technology choice. Risk Analysis 7(1):3-9.
Reissland, J., and V. Harries. 1979. A scale for measuring risks. New Scientist 83:809-811.
Rodricks, J. V., and R. G. Tardiff. 1984. Animal research methods for dose-response assessment. In Handbook of Risk Assessment. Washington, D.C.: National Science Foundation.
Rokeach, M. 1973. The Nature of Human Values. New York: The Free Press.
Roling, G. T., L. W. Pressgrove, E. B. Keefe, and S. B. Raffin. 1977. An appraisal of patients' reactions to "informed consent" for peroral endoscopy. Gastrointestinal Endoscopy 24(2):69-70.
Rosencranz, A., and G. S. Wetstone. 1980. Acid precipitation: National and international responses. Environment 22(5):6-20, 40-41.
Rosenthal, R., and R. L. Rosnow. 1969. Artifact in Behavioral Research. New York: Academic Press.
Rothman, S., and S. R. Lichter. 1987. Elite ideology and risk perception in nuclear energy policy. American Political Science Review 81(2):383-404.
Rothschild, N. M. 1978. Rothschild: An antidote to panic. Nature 276:555.
Rubin, D., and D. Sachs, eds. 1973. Mass Media and the Public. New York: Praeger.
Schnaiberg, A. 1980. The Environment: From Surplus to Scarcity. New York: Oxford University Press.

Schneider, S. H., and L. E. Mesirow. 1976. The Genesis Strategy. New York: Plenum.
Schneiderman, M. A. 1980. The uncertain risks we run: Hazardous material. In Societal Risk Assessment: How Safe Is Safe Enough?, R. C. Schwing and W. A. Albers, Jr., eds. New York: Plenum.
Schudson, M. 1978. Discovering the News. New York: Basic Books.
Schwarz, E. D. 1978. The use of a checklist in obtaining informed consent for treatment with medication. Hospital and Community Psychiatry 29:97-100.
Seligman, M. E. P. 1975. Helplessness. San Francisco: Freeman, Cooper.
Shaklee, H., B. Fischhoff, and L. Furby. 1988. The psychology of contraceptive surprises: Cumulative risk and contraceptive failure. Eugene, Oreg.: Eugene Research Institute.
Sharlin, H. I. 1987. Macro-risks, micro-risks, and the media: The EDB case. In The Social and Cultural Construction of Risk, B. B. Johnson and V. T. Covello, eds. Dordrecht, Holland: D. Reidel.
Sheridan, T. B. 1980. Human error in nuclear power plants. Technology Review 82(4):23-33.
Shroyer, T. 1970. Toward a critical theory for advanced industrial society. In Recent Sociology, Vol. 2, Patterns of Communicative Behavior, H. P. Drietzel, ed. London: Macmillan.
Sioshansi, F. P. 1983. Subjective evaluation using expert judgment: An application. IEEE Transactions on Systems, Man and Cybernetics 13(3):391-397.
Sjoberg, L. 1979. Strength of belief and risk. Policy Sciences 11:539-573.
Slovic, P. 1962. Convergent validation of risk-taking measures. Journal of Abnormal and Social Psychology 65:68-71.
Slovic, P. 1986. Informing and educating the public about risk. Risk Analysis 6(4):403-415.
Slovic, P., and B. Fischhoff. 1977. On the psychology of experimental surprises. Journal of Experimental Psychology: Human Perception and Performance 3:544-551.
Slovic, P., and B. Fischhoff. 1983. How safe is safe enough? Determinants of perceived and acceptable risk. In Too Hot to Handle? Social and Policy Issues in the Management of Radioactive Wastes, C. Walker, L. Gould, and E. Woodhouse, eds. New Haven, Conn.: Yale University Press.
Slovic, P., B. Fischhoff, and S. Lichtenstein. 1978. Accident probabilities and seatbelt usage: A psychological perspective. Accident Analysis and Prevention 17:10-19.
Slovic, P., B. Fischhoff, and S. Lichtenstein. 1979. Rating the risks. Environment 21:14-20, 30, 36-39.
Slovic, P., B. Fischhoff, and S. Lichtenstein. 1980. Facts versus fears: Understanding perceived risk. In Societal Risk Assessment: How Safe Is Safe Enough?, R. Schwing and W. A. Albers, Jr., eds. New York: Plenum.
Slovic, P., B. Fischhoff, and S. Lichtenstein. 1984. Modeling the societal impact of fatal accidents. Management Science 30:464-474.
Slovic, P., B. Fischhoff, S. Lichtenstein, B. Corrigan, and B. Combs. 1977. Preference for insuring against probable small losses: Implications for the theory and practice of insurance. Journal of Risk and Insurance 44:237-258.
Smith, V. K., and W. H. Desvousges. 1986. Measuring Water Quality Benefits. Boston: Kluwer.
Stallen, P. J. 1980. Risk of science or science of risk? In Society, Technology and Risk Assessment, J. Conrad, ed. London: Academic Press.

Starr, C. 1969. Social benefit versus technological risk. Science 165:1232-1238.
Svenson, O. 1981. Are we all less risky and more skillful than our fellow drivers? Acta Psychologica 47:143-148.
Svenson, O., and B. Fischhoff. 1985. Levels of environmental decisions. Journal of Environmental Psychology 5:55-67.
Thompson, M. 1980. Aesthetics of risk: Culture or context. In Societal Risk Assessment, R. C. Schwing and W. A. Albers, Jr., eds. New York: Plenum.
Tockman, M. S., and A. M. Lilienfeld. 1984. Epidemiological methods in risk assessment. In Handbook of Risk Assessment. Washington, D.C.: National Science Foundation.
Travis, C. C. 1984. Modeling methods for exposure assessment. In Handbook of Risk Assessment. Washington, D.C.: National Science Foundation.
Tribe, L. H. 1972. Policy science: Analysis or ideology? Philosophy and Public Affairs 2:66-110.
Tukey, J. W. 1977. Some thoughts on clinical trials, especially problems of multiplicity. Science 198:679-690.
Tulving, E. 1972. Episodic and semantic memory. In Organization of Memory, E. Tulving and W. Donaldson, eds. New York: Academic Press.
Turner, C. F. 1980. Surveys of subjective phenomena. In The Measurement of Subjective Phenomena, D. Johnston, ed. Washington, D.C.: U.S. Government Printing Office.
Turner, C. F., and E. Martin, eds. 1985. Surveying Subjective Phenomena, Vols. 1 and 2. New York: Russell Sage Foundation.
Tversky, A., and D. Kahneman. 1971. The belief in the law of small numbers. Psychological Bulletin 76:105-110.
Tversky, A., and D. Kahneman. 1973. Availability: A heuristic for judging frequency and probability. Cognitive Psychology 5:207-232.
Tversky, A., and D. Kahneman. 1974. Judgment under uncertainty: Heuristics and biases. Science 185:1124-1131.
Tversky, A., and D. Kahneman. 1981. The framing of decisions and the psychology of choice. Science 211(4481):453-458.
U.S. Committee on Government Operations. 1978. Teton Dam Disaster. Washington, D.C.: Government Printing Office.
U.S. Government. 1975. Hearings, 94th Cong., 1st Sess. Browns Ferry Nuclear Plant Fire, September 16, 1975. Washington, D.C.: U.S. Government Printing Office.
U.S. Nuclear Regulatory Commission. 1975. Reactor safety study: An assessment of accident risks in U.S. commercial nuclear power plants. WASH-1400 (NUREG-75/014). Washington, D.C.: U.S. Nuclear Regulatory Commission.
U.S. Nuclear Regulatory Commission. 1978. Risk Assessment Review Group to the U.S. Nuclear Regulatory Commission. NUREG/CR-0400. Washington, D.C.: U.S. Nuclear Regulatory Commission.
U.S. Nuclear Regulatory Commission. 1982. Safety Goals for Nuclear Power Plants: A Discussion Paper. NUREG-0880. Washington, D.C.: U.S. Nuclear Regulatory Commission.
U.S. Nuclear Regulatory Commission. 1983. PRA Procedures Guide. NUREG/CR-2300. Washington, D.C.: U.S. Nuclear Regulatory Commission.
Vlek, C. A. J., and P. J. Stallen. 1980. Rational and personal aspects of risk. Acta Psychologica 45:273-300.

Vlek, C. A. J., and P. J. Stallen. 1981. Judging risks and benefits in the small and in the large. Organizational Behavior and Human Performance 28:235-271.
von Winterfeldt, D., R. S. John, and K. Borcherding. 1981. Cognitive components of risk ratings. Risk Analysis 1(4):277-287.
Weaver, S. 1979. The passionate risk debate. The Oregon Journal, April 24.
Weinberg, A. M. 1979. Salvaging the atomic age. The Wilson Quarterly (Summer):88-112.
Weinstein, N. D. 1980a. Seeking reassuring or threatening information about environmental cancer. Journal of Behavioral Medicine 2:125-139.
Weinstein, N. D. 1980b. Unrealistic optimism about future life events. Journal of Personality and Social Psychology 39:806-820.
Weinstein, N. D., ed. 1987. Taking Care. New York: Cambridge University Press.
White, G., ed. 1974. Natural Hazards: Local, National and Global. New York: Oxford University Press.
Wilson, R. 1979. Analyzing the daily risks of life. Technology Review 81(4):40-46.
Wilson, V. L. 1980. Estimating changes in accident statistics due to reporting requirement changes. Journal of Safety Research 12(1):36-42.
Wohlstetter, R. 1962. Pearl Harbor: Warning and Decision. Stanford, Calif.: Stanford University Press.
Woodworth, R. S., and H. Schlosberg. 1954. Experimental Psychology. New York: Henry Holt.
Wortman, P. M. 1975. Evaluation research: A psychological perspective. American Psychologist 30:562-575.
Wynne, B. 1980. Technology, risk and participation. In Society, Technology and Risk Assessment, J. Conrad, ed. London: Academic Press.
Wynne, B. 1983. Institutional mythologies and dual societies in the management of risk. In The Risk Analysis Controversy, H. C. Kunreuther and E. V. Ley, eds. New York: Springer-Verlag.
Zeisel, H. 1980. Lawmaking and public opinion research: The President and Patrick Caddell. American Bar Foundation Research Journal 1:133-139.
Zentner, R. D. 1979. Hazards in the chemical industry. Chemical and Engineering News 57(45):25-27, 30-34.