

The National Academies | 500 Fifth St. N.W. | Washington, D.C. 20001
Copyright © National Academy of Sciences. All rights reserved.




Appendix A

The Possibility of Distributed Decision Making

BARUCH FISCHHOFF AND STEPHEN JOHNSON

Modern command-and-control systems and foreign affairs operations represent special cases of a more general phenomenon: having the information and authority for decision making distributed over several individuals or groups. Distributed decision-making systems can be found in such diverse settings as voluntary organizations, multinational corporations, diplomatic corps, government agencies, and married couples managing a household. Viewing any distributed decision-making system in this broader context helps to clarify its special, and not-so-special, properties. It also shows the relevance of research and experience that have accumulated elsewhere. As an organizing device, we develop a general task analysis of distributed decision-making systems, detailing the performance issues that accrue with each level of complication, as one goes from the simplest situation (involving a single individual intuitively pondering a static situation with complete information) to the most complex (with heterogeneous, multiperson systems facing dynamic, uncertain, and hostile environments that threaten the communication links and actors in their system). Drawing from the experience of different systems and from research in areas such as behavioral decision theory, psychology, cognitive science, sociology, and organizational development, the analysis suggests both problems and possible solutions. It also derives some general conclusions regarding the design and management of such systems, as well as the asymptotic limits to their performance and the implications of those limits for an organization and overall design strategy.

Partial support for this research was provided by the Office of Naval Research, under Contract No. N00014-85-C-0041 to Perceptronics, Inc., "Behavioral Aspects of Distributed Decision Making."

A SHORT HISTORY OF DECISION AIDING

It is common knowledge that decision making is often hard. One of the clearest indications of this difficulty is the proliferation of decision aids, be they consultants, analyses, or computerized support systems (Humphreys, Svenson, and Vari, 1983; Stokey and Zeckhauser, 1978; Wheeler and Janis, 1980; von Winterfeldt and Edwards, 1986; Yates, 1989). Equally clear, but perhaps more subtle, evidence is the variety of devices used by people to avoid analytic decision making; these include procrastination, endless pursuit of better information, reliance on habit or tradition, and even the deferral to aids when there is no particular reason to think that they can do better (Corbin, 1980). A common symptom of this reluctance to make decisions is the attempt to convert decision making, which reduces to a gamble surrounded by uncertainty regarding what one will get and how one will like it, to problem solving, which holds out the hope of finding the one right solution (Montgomery, 1983).

Somewhat less clear is just why decision making is so hard. The diversity of coping mechanisms suggests a diversity of diagnoses. The disappointing quality of the help offered by decision aids suggests that these diagnoses are at least somewhat off target. The battlefield of decision aiding is strewn with good ideas that did not quite pan out, after raising hopes and attracting attention. Among the aids that remain, some persist on the strength of the confidence inspired by their proponents and some persist on the strength of the need for help, even if the efficacy of that help cannot be established.

In retrospect, it seems as though most of the techniques that have fallen by the wayside never really had a chance. There was seldom anything sustaining them beyond their proponents' enthusiasm and sporadic ability to give good advice in specific cases.
The techniques drew on no systematic theoretical base and subjected themselves to no rigorous testing.

For the past 20 to 30 years, behavioral decision theory has attempted to develop decision aids with a somewhat better chance of survival (Edwards, 1954, 1961; Einhorn and Hogarth, 1981; Pitz and Sachs, 1984; Slovic, Fischhoff, and Lichtenstein, 1977; Rappoport and Wallsten, 1972). Its hopes are pinned on a mixture of prescriptive and descriptive research. The former asks how people should make decisions, while the latter asks how they actually do make decisions. In combination, these two research programs attempt to build from people's strengths while compensating for their weaknesses. The premise of the field is that significant decisions should seldom be entrusted entirely either to unaided intuition or to automated procedures. Finding the optimal division of labor requires an understanding of where people are and where they should be. The quest for that understanding has produced enough surprises to establish that

it requires an integrated program of theoretical and empirical research. Common sense is not a good guide to knowing what makes a good decision or why it is hard to identify one.

Initially, behavioral decision theory took its marching orders from standard American economics, which assumes that people always know what they want and choose the optimal course of action for getting it. Taken literally, these strong assumptions leave a narrow role for descriptive research: finding out what it is that people want by observing their decisions and working backward to identify the objectives that were optimized. These assumptions leave no role at all for prescriptive research, because people can already fend quite well for themselves. As a result, the economic perspective is not very helpful for the erstwhile decision aider if its assumptions are true. However, the perceived need for decision aiding indicates that the assumptions are not true. People seem to have a lot of trouble with decision making.

The first, somewhat timorous, response of researchers to this discrepancy between the ideal and the reality was to document it. It proved not hard to show that people's actual performance is suboptimal (Edwards, 1954, 1961; Einhorn and Hogarth, 1981; Pitz and Sachs, 1984; Slovic, Fischhoff, and Lichtenstein, 1977; Rappoport and Wallsten, 1972). Knowing the size of the problem, at least under certain circumstances, is helpful in a number of ways: it can show how much to worry, where to be ready for surprises, where help is most needed, and how much to invest in that help. However, size estimates are not very informative about how to make matters better. Realizing this limitation, researchers then turned their attention from what people are not doing (making optimal decisions) to what they are doing and why it is not working.
Aside from their theoretical interest, such psychological perspectives offer several points of leverage for erstwhile decision aiders. One is that they allow one to predict where the problems will be greatest by describing how people respond to different situations. A second is that they help decision aiders talk to decision makers by showing how the latter think about their tasks. A third is that they show the processes that must be changed if people are to perform more effectively. Although it would be nice to make people over as model decision makers, the reality is that they have to be moved in gradual steps from where they are now.

As behavioral decision theory grew, two of the first organizations to see its potential as the foundation for new decision-aiding methods were the Advanced Research Projects Agency and the Office of Naval Research. Their joint program in decision analysis promoted the development of methods that, first, created models of the specific problems faced by individual decision makers and, then, relied on the formal procedures of

decision theory to identify the best course of action in each. These methods were descriptive in the sense of trying to capture the subjective reality faced by the decision maker and prescriptive in the sense of providing advice on what to do.

Although it might have been tempting to take the (potentially flashy) technique and run with it, the program managers required regular interactions among their contractors, including psychologists, economists, decision theorists, operations researchers, computer scientists, consulting decision analysts, and even some practicing decision makers. The hope was to keep the technique from outrunning its scientific foundations. At any point in time, decision analysts should use the best techniques available. However, their decision aid will join its predecessors if they cannot eventually answer questions such as, How do you know that people can describe their decision problems to you? What evidence is there that this improves decision making, beyond your clients' reports that it makes them feel good? (Fischhoff, 1980).

Like other good-looking products, decision analysis has taken on a life of its own, with college courses, computer programs, and consulting firms. Its relative success and longevity may owe something to the initial attention paid to its behavioral foundations. That research probably helped both by sharpening the technique and by giving it an academic patina that enhanced its marketability. Moreover, there is still a flow of basic research looking at questions such as, Can people assess the extent of their own knowledge? Can people tell when something important is missing from the description of a decision problem?
Can people describe quantitatively the relative importance of different objectives (e.g., speed versus accuracy)?1 The better work in the field, both basic and applied, carries strong caveats regarding the quality of the help that it is capable of providing and the degree of residual uncertainty surrounding even the most heavily aided decisions. Such warnings are essential, because it is hard for the buyer to beware. People have enough experience to evaluate quality in toothpaste and politicians. However, it is hard to evaluate advice, especially when the source is unfamiliar and the nature of the difficulty is unclear. Without a sharp conception of why decision making is hard, one is hard put to evaluate attempts to make it better.

1 All three of these questions refer to essential skills for effective use of decision analysis. The empirical evidence suggests that the answer to each is, "No, not really." However, there is some chance of improving people's performance by properly structuring their tasks (Fischhoff, Svenson, and Slovic, 1987; Goldberg, 1968; Kahneman, Slovic, and Tversky, 1982; Slovic, Lichtenstein, and Fischhoff, 1988).

WHY IS INDIVIDUAL DECISION MAKING SO HARD?

According to most prescriptive schemes, good decision making involves the following steps:

a. Identify all possible courses of action (including, perhaps, inaction).
b. Evaluate the attractiveness (or aversiveness) of the consequences that may arise if each course of action is adopted.
c. Assess the likelihood of each consequence actually happening (should each action be taken).
d. Integrate all these considerations, using a defensible (i.e., rational) decision rule to select the best (i.e., optimal) action.

The empirical research has shown difficulties at each of these steps, as described below.

Option Generation

When they think of action options, people often neglect seemingly obvious candidates. Moreover, they seem relatively insensitive to the number or importance of the omitted alternatives (Fischhoff, Slovic, and Lichtenstein, 1978; Gettys, Pliske, Manning, and Casey, 1987; Pitz, Sachs, and Heerboth, 1980). Options that would otherwise command attention are out of mind when they are out of sight, leaving people with the impression that they have analyzed problems more thoroughly than is actually the case. Those options that are noted are often defined quite vaguely, making it difficult to evaluate them properly, communicate them to others, follow them if they are adopted, or tell when circumstances have changed enough to justify rethinking the decision.2 Imprecision also makes it difficult to evaluate decisions in the light of subsequent experience, insofar as it is hard to reconstruct exactly what one was trying to do and why. That reconstruction is further complicated by hindsight bias, the tendency to exaggerate in hindsight what one knew in foresight (Fischhoff, 1975, 1982).
The feeling that one knew all along what was going to happen leads one to be unduly harsh on past decisions (if it was obvious what was going to happen, then failure to select the best option must mean incompetence) and to be unduly optimistic about future decisions (by encouraging the feeling that things are generally well understood, even if they are not working out so well).

2 For discussion of such imprecision in carefully prepared formal analyses of government actions, see Fischhoff (1984) and Fischhoff and Cox (1985).
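The four prescriptive steps listed above can be sketched as a small script. This is a minimal illustration, not a method from the chapter: the options, consequences, and probabilities are invented, and the decision rule shown is the standard expectation rule (step d) of weighting each consequence's attractiveness by its likelihood.

```python
# Sketch of the four prescriptive steps. All options and numbers
# below are invented for illustration only.

def expected_value(option):
    """Step d: weight each consequence's value by its probability and sum."""
    return sum(p * v for p, v in option["consequences"])

options = [
    # Step a: identify courses of action (including inaction).
    # Steps b and c: each consequence gets a (probability, value) pair.
    {"name": "act",  "consequences": [(0.6, 10.0), (0.4, -5.0)]},
    {"name": "wait", "consequences": [(1.0, 1.0)]},
]

best = max(options, key=expected_value)
print(best["name"], expected_value(best))  # → act 4.0
```

The empirical findings reviewed in this section concern how people depart from each of these steps in practice, not how to compute them.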

Value Assessment

Evaluating the potential consequences might seem to be the easy part of decision making, insofar as people should know what they want and like. Although this is doubtless true for familiar and simple consequences, many interesting decisions present novel outcomes in unusual juxtapositions. For example, two potential consequences that may arise when deciding whether to dye one's graying hair are reconciling oneself to aging and increasing the risk of cancer 10 to 20 years hence. Who knows what either event is really like, particularly with the precision needed to make trade-offs between the two? In such cases, one must go back to some set of basic values (e.g., those concerned with pain, prestige, vanity), decide which are pertinent, and determine what role to assign them. As a result, evaluation becomes an inferential problem (Rokeach, 1973).

The evidence suggests that people have trouble making such inferences (Fischhoff, Slovic, and Lichtenstein, 1980; Hogarth, 1982; National Research Council, 1981; Tversky and Kahneman, 1981). They may fail to identify all relevant values, to recognize the conflicts among them, or to reconcile those conflicts that they do recognize. As a result, the values that they express are often highly (and unwittingly) sensitive to the exact way in which evaluation questions are posed, whether by survey researchers, decision aids, politicians, merchants, or themselves. Formally equivalent versions of the same question can evoke quite different considerations and hence lead to quite different decisions.
To take just three examples: (a) the relative attractiveness of two gambles may depend on whether people are asked how attractive each is or how much they would pay to play (Grether and Plott, 1979; Slovic and Lichtenstein, 1983); (b) an insurance policy may become much less attractive when its premium is described as a sure loss (Fischhoff et al., 1980; Hershey, Kunreuther, and Schoemaker, 1982); (c) a risky venture may seem much more attractive when described in terms of the lives that will be saved by it, rather than in terms of the lives that will be lost (Kahneman and Tversky, 1979; Tversky and Kahneman, 1981).

People can view most consequences in a number of different lights. How richly they do view them depends on how sensitive the evaluation process is. Questions have to be asked in some way, and how they are asked may induce random error (by confusing people), systematic errors (by emphasizing some perspectives and neglecting others), or unduly extreme judgments (by failing to evoke underlying conflicts). People appear to be ill equipped to recognize the ways in which they are manipulated by evaluation questions, in part because the idea of uncertain values is counterintuitive, in part because the manipulations prey (perhaps unwittingly) on their own lack of insight. Even consideration of their own past decisions does not provide a stable point of reference, because people have difficulty introspecting

about the factors that motivated their actions (i.e., why they did things) (Ericsson and Simon, 1980; Nisbett and Wilson, 1977). Thus, uncertainty about values can be as serious a problem as uncertainty about facts (March, 1978).

Uncertainty Assessment

Although people are typically ready to recognize uncertainty about what will happen, they are not always well prepared to deal with that uncertainty (by assessing the likelihood of future events). How people do (or do not) make judgments under conditions of uncertainty has been a major topic of research for the past 15 years (Kahneman, Slovic, and Tversky, 1982). A rough summary of its conclusions would be that people are quite good at tracking repetitive aspects of their environment, but not very good at combining those observations into inferences about what they have not seen (Edwards, 1954, 1961; Einhorn and Hogarth, 1981; Pitz and Sachs, 1984; Slovic, Fischhoff, and Lichtenstein, 1977; Rappoport and Wallsten, 1972; Kahneman, Slovic, and Tversky, 1982; Brehmer, 1980; Peterson and Beach, 1967). Thus, they might be able to tell how frequently they have seen or heard about a particular cause of death, but not how unrepresentative their experience has been, leading them to overestimate risks to which they have been overexposed (Tversky and Kahneman, 1973). They can tell what usually happens in a particular situation and recognize how a specific instance is special, yet not be able to integrate those two (uncertain) facts, most often focusing on the specific information and ignoring experience (Bar-Hillel, 1980). They can tell how similar a specific instance is to a prototypical case, yet not how important similarity is for making predictions, usually relying on it too much (Bar-Hillel, 1984; Kahneman and Tversky, 1972).
They can tell how many times they have seen an effect follow a potential cause, yet not infer what that says about causality, often perceiving correlations when none really exists (Beyth-Marom, 1982a, 1982b; Einhorn and Hogarth, 1978; Shaklee and Mims, 1982).

In addition to these difficulties in integrating information, people's intuitive predictions are also afflicted by a number of systematic biases in how they gather and interpret information. These include overconfidence in the extent of their own knowledge (Fischhoff, 1982; Lichtenstein, Fischhoff, and Phillips, 1982; Wallsten and Budescu, 1983), underestimation of the time needed to complete projects (Armstrong, 1985; Kidd, 1970; Tihansky, 1976), unfair dismissal of information that threatens favored beliefs (Nisbett and Ross, 1980), exaggeration of personal immunity to various threats (Svenson, 1981; Weinstein, 1980), insensitivity to the speed with

which exponential processes accelerate (Wagenaar and Sagaria, 1976), and oversimplification of others' behavior (Mischel, 1968; Ross, 1977).

Option Choice

Decision theory is quite uncompromising regarding the sort of rule that people should use to integrate all of these values and probabilities in the quest of a best alternative. Unless some consequences are essential, it should be an expectation rule, whereby an option is evaluated according to the attractiveness of its consequences, weighted by their likelihood of being obtained (Schoemaker, 1983). Since it has become acceptable to question the descriptive validity of this rule, voluminous research has looked at how well it predicts behavior (Feather, 1982). A rough summary of this work would be that: (a) it often predicts behavior quite well, if one knows how people evaluate the likelihood and attractiveness of consequences; (b) with enough ingenuity, one can usually find some set of beliefs (regarding the consequences) for which the rule would dictate choosing the option that was selected, meaning that it is hard to prove that the rule was not used; (c) expectation rules can often predict the outcome of decision-making processes even when they do not at all reflect the thought processes involved, so that predicting behavior is not sufficient for understanding or aiding it (Fischhoff, 1982).

More process-oriented methods revealed a more complicated situation. People seldom acknowledge using anything as computationally demanding as an expectation rule or feel comfortable using it when it is proposed to them (Lichtenstein, Slovic, and Zink, 1969). To the extent that they do compute, they often seem to use quite different rules (Kahneman and Tversky, 1979; Tversky and Kahneman, 1981; Beach and Mitchell, 1978; Payne, 1982). Indeed, they even seem unimpressed by the assumptions used to justify the expectation rule (Slovic and Tversky, 1974).
To the extent that they do not compute, they use a variety of simple rules whose dictates may be roughly similar to those of the expectation rule or may be very different (Beach and Mitchell, 1978; Payne, 1982; Janis and Mann, 1977; Tversky, 1969). Many of these can be summarized as an attempt to avoid making hard choices by finding some way to view the decision as an easy choice (e.g., by eliminating consequences on which the seemingly best option rates poorly) (Montgomery, 1983).

Cognitive Assets and Biases

This (partial) litany of the problems described by empirical researchers paints quite a dismal picture of people's ability to make novel (or analytical) decisions, so much so that the investigators doing this work have

been accused of being problem mongers (Berkeley and Humphreys, 1982; Jungermann, 1984; von Winterfeldt and Edwards, 1986). Of course, if one hopes to help people (in any arena), then the problems are what matter, for they provide a point of entry. In addition to meaning well, investigators in this area have also had a basically respectful attitude toward the objects of their studies. It is not people, but their performance, that is shown in a negative light. Indeed, in the history of the social sciences, the interest in judgmental biases came as part of a cognitive backlash to psychoanalysis, with its dark interpretation of human foibles. The cognitive perspective showed how biases could emerge from honest, unemotional thought processes.

Typically, these mini-theories show people processing information in reasonable ways that often work well but can lead to predictable trouble. A simple example would be relying on habit or tradition as a guide to decision making. That might be an efficient way of making relatively good decisions, but it would lead one astray if conditions had changed or if those past decisions reflected values that were no longer applicable. A slightly more sophisticated example is reliance on the "availability heuristic" for estimating the likelihood of events for which adequate statistical information is missing. This is a rule of thumb by which events are judged likely if it is easy to imagine them happening or remember them having occurred in the past. Although it is generally true that more likely events are more available, use of the rule might lead to exaggerating the likelihood of events that have been overreported in the media or are the topic of personal worry (Tversky and Kahneman, 1973).

Reliance on these simple rules seems to come from two sources. One is people's limited mental computation capacity; they have to simplify things in order to get on with life (Miller, 1956; Simon, 1957).
The second is their lack of training in decision making, leading them to come up with rules that make sense but have not benefited from rigorous scrutiny (Beyth-Marom, Dekel, Gombo, and Shaked, 1985). Moreover, people's day-to-day experience does not provide them with the conditions (e.g., prompt, unambiguous feedback) needed to acquire judgment and decision making as learned skills. Experience does often allow people to learn the solutions to specific repeated problems through trial and error. However, things get difficult when one has to get it right the first time.

WHAT CAN BE DONE ABOUT IT?

The down side of this information-processing approach is the belief that many problems are inherent in the way that people think about making decisions. The up side is that it shows specific things that might be done to get people to think more effectively.

Just looking at the list of problems suggests some procedures that might be readily incorporated in automated (online) decision aids (as well as their low-tech human counterparts). To counter the tendency to neglect significant options or consequences, an aid could provide checklists with generic possibilities (Beach, Townes, Campbell, and Keating, 1976; Hammer, 1980; Janis, 1982). To reduce the tendency for overconfidence, an aid could force users to list reasons why they might be wrong before assessing the likelihood that they are right (Koriat, Lichtenstein, and Fischhoff, 1980). To discourage hindsight bias, an aid can preserve the decision makers' history and rationale (showing how things once looked) (Slovic and Fischhoff, 1977). To avoid incomplete value elicitation, an aid could force users to consider alternative perspectives and reconcile the differences among them. At least these seem like plausible procedures; whether they work is an empirical question. For each intervention, one can think of reasons why it might not work, at least if done crudely (e.g., long checklists might reduce the attention paid to individual options, leading to broad but superficial analysis).

Modeling Languages

One, or the, obvious advantage of computerized aids is their ability to handle large amounts of information rapidly. The price paid for rapid information handling is the need to specify a model for the computer's work. This model could be as simple as a list of key words for categorizing and retrieving information or as complex as a full-blown decision analysis (Behn and Vaupel, 1983; Brown, Kahr, and Peterson, 1974; Keeney and Raiffa, 1976; Raiffa, 1968) or risk analysis (McCormick, 1981; U.S. Nuclear Regulatory Commission, 1983; Wilson and Crouch, 1982) within which all information is incorporated. However user friendly an aid might be, using a model means achieving a degree of abstraction that is uncommon for many people.
For example, even at the simplest level, it may be hard to reduce a substantive domain to a set of key words. Moreover, any model is written in something like a foreign language, with a somewhat strange syntax and vocabulary. Successful usage means being able to translate what one knows into terms that the modeling language (and the aid) can understand. Any lack of fluency on the part of the user or any restrictions on the language's ability to capture certain realities reflects a communication disorder limiting the aid's usefulness.

For example, probabilistic risk analyses provide a valuable tool for figuring out how complex technical systems, such as nuclear power or chemical plants, operate and how they will respond to modifications. They do this by representing the system by the formal connections among its parts (e.g., showing how failures in one sector will affect performance in others).
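The kind of computation such a model supports can be sketched in miniature. This is a deliberately simplified illustration, not the methodology of any analysis cited here: two invented redundant pumps back each other up, followed by a valve with no backup, and overall failure probability is composed from (assumed independent) component failure probabilities.

```python
# Minimal reliability sketch of "formal connections among parts."
# Component failure probabilities are invented; independence is assumed.

def parallel(*p_fail):
    """Redundant parts: the subsystem fails only if all of them fail."""
    out = 1.0
    for p in p_fail:
        out *= p
    return out

def series(*p_fail):
    """Chained parts: the system fails if any one of them fails."""
    p_all_ok = 1.0
    for p in p_fail:
        p_all_ok *= (1.0 - p)
    return 1.0 - p_all_ok

# Pump A and pump B are redundant; the valve is a single point of failure.
p_system = series(parallel(0.05, 0.05), 0.01)
print(round(p_system, 6))
```

Sensitivity analysis, as described above, amounts to varying one parameter (say, the valve's failure probability) and recomputing `p_system` to see how strongly the overall figure depends on it.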

Both judgment and statistics are used to estimate the model's parameters. In this way, it is possible to pool the knowledge of many experts, expose that knowledge to external review, compute the overall performance of the system, and see how sensitive that performance is to variations (or uncertainties) in those parameters. (These are just the sort of features that one might desire in an aid designed to track and project the operation of a military command.) Yet current modeling languages require the experts to summarize their knowledge in quantitative and sometimes unfamiliar terms, and they are ill suited to represent human behavior (such as that of the system's operators) (Fischhoff, 1988). As a result, the model is not reality. Moreover, it may differ in ways that the user understands poorly, just as the speaker of a foreign language may be insensitive to its nuances. At some point, the user may lose touch with the model without realizing it. The seriousness of this threat with particular aids is an empirical question that is just beginning to receive attention (National Research Council, 19~).

Skilled Judgment

Whether or not one relies on an aid, a strong element of judgment is essential to all decision making. With unaided decision making, judgment is all. With an aid, it is the basis for creating the model, estimating its parameters, and interpreting its results. Improving the judgments needed for analysis has been the topic of intensive research, with moderately consistent (although incomplete) results, some of them perhaps surprising (Fischhoff, 1982). A number of simple solutions have proven rather ineffective. It does not seem to help very much to exhort people to work harder, to raise the stakes hinging on their performance, to tell them about the problems that other people (like them) have with such tasks, or to provide theoretical knowledge of statistics or decision theory.
Similarly, it does not seem reasonable to hope that the problems will go away with time or when the decisions are really important. Judgment is a skill that must be learned. Those who do not get training or who do not enjoy a naturally instructive environment (e.g., one that provides prompt, unambiguous feedback and rewards people for wisdom rather than, say, for exuding confidence) will have difficulty going beyond the hard data at their disposal. Although training courses in judgment per se are rare, many organized professions hope to inculcate good judgment as part of their apprenticeship program. This learning is expected to come about as a by-product of having one's behavior shaped by masters of the craft (be they architects, coaches, officers, or graduate advisers). What is learned is often hard to express in words and hence must be attributed to judgment (Polanyi, 1962). What is unclear is whether that learning extends to new decisions, for which the profession has not acquired trial-and-error experience to shape its practices.
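One concrete form of the prompt, unambiguous feedback just described is calibration scoring for probability judgments: grouping a judge's stated probabilities into bins and comparing each bin's stated confidence with the observed hit rate. The sketch below is illustrative only; the judgments are invented, and overconfidence of the kind discussed earlier would show up as hit rates falling below stated confidence.

```python
# Calibration sketch: bin probability judgments and compare stated
# confidence with observed hit rates. The data are invented.

def calibration(judgments):
    """judgments: list of (stated_probability, outcome_was_true) pairs.
    Returns {binned probability: observed proportion true}."""
    bins = {}
    for p, hit in judgments:
        key = round(p, 1)                    # bins of width 0.1
        bins.setdefault(key, []).append(hit)
    return {k: sum(v) / len(v) for k, v in sorted(bins.items())}

data = [(0.9, True), (0.9, False), (0.9, True), (0.9, False),
        (0.6, True), (0.6, True), (0.6, False)]
# Here, items called "90% sure" were right only half the time:
print(calibration(data))
```

A naturally instructive environment would return such a table to the judge promptly, so that stated confidence can be adjusted toward observed accuracy.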

in lieu of detailed specific studies. In reality, these two efforts are highly intertwined, with the general principles suggesting what behavioral dimensions merit detailed investigation and the empirical studies substantiating (or altering) those beliefs. Were a more comprehensive analysis in place, a logical extension would be to consider the interaction between two distributed decision-making systems, each characterized in the same general terms. Such an analysis might show how the imperfections of each might be exploited by the other as well as how they might lead to mutually undesirable circumstances. For example, an analysis of the National Command Authorities of the United States and the Soviet Union might show the kinds of challenges that each is least likely to handle effectively. That kind of diagnosis might serve as the basis for unilateral recommendations (or bilateral agreements) to the effect, "Don't test us in this way unless you really mean it. We're not equipped to respond flexibly."

Design Guidelines

Although still in its formative stages, the analysis to date suggests a number of general conclusions that might emerge from a more comprehensive analysis of distributed decision-making systems. One is that the design of the system needs to bear in mind the reality of the individuals at each node in it. If there is a tendency to let the design process be dominated by issues associated with the most recent complication, then it must be resisted. If the designers are unfamiliar with the world of the operators, then they must learn about it. For example, one should not become obsessed with the intricacies of displaying vast quantities of information when the real problem is not knowing what policy to apply. Given the difficulty of individual decision making, one must resist the temptation to move on to other, seemingly more tractable problems.
A second general conclusion is that many group problems may be seen as variants of individual problems or even as reflections of those problems not having been resolved. For example, a common crisis in the simplest individual decision-making situations is determining what the individual wants from them. The group analog is determining what specific policies to apply or how to interpret general policies in those circumstances. As another example, individuals' inability to deal coherently with uncertainty may underlie their (unrealistic) demands for certainty in communications from others. A third conclusion is that many problems that are attributed to the imposition of novel technologies can be found in quite low-tech situations. Two people living in the same household can have difficulty communicating; allowing them to use only phone or telex may make matters better or worse. The speed of modern systems can induce enormous time pressures,

yet many decisions cannot be made comfortably even with unlimited time. Telecommunications systems can generate information overload, yet the fundamental management problem remains the simple one of determining what is relevant. In such cases, the technology is best seen as giving the final form to problems that would have existed in any case and as providing a possible vehicle for either creating solutions or putting solutions out of reach. A fourth conclusion is that it pays to accentuate the negative when evaluating the designs of distributed decision-making systems, and to accentuate the positive when adapting people to those systems. That is, the design of systems is typically a top-down process beginning with a set of objectives and normative constraints. The idealization that emerges is something for people to strive for but not necessarily something that they can achieve. Looking at how the system keeps people from doing their jobs provides more realistic expectations of overall system performance as well as focuses attention on where people need help. The point of departure for that help must be their current thought processes and capabilities, so that they can be brought along from where they are toward where one would like them to be. People can change, but only under carefully structured conditions and not that fast. When they are pushed too hard, then they risk losing touch with their own reality.

Design Ideologies

A fifth conclusion is that the design of distributed decision-making systems requires detailed empirical work. A condition for doing that work is resisting simplistic design philosophies. There is a variety of such principles, each having the kind of superficial appeal that is capable of generating strong organizational momentum, while frustrating efforts at more sensitive design.
One such family of simple principles concentrates on dealing with a system's mistakes, by claiming to avoid them entirely in prospect (as expressed in "zero defects" or "quality is free" slogans), to adapt to them promptly in process (as expressed in "muddling through"), or to respond to them in hindsight ("learning from experience"). A second family concentrates on being ready for all contingencies, by instituting either rigid flexibility or rigid inflexibility, leaving all options open or planning for all contingencies. A third family emphasizes controlling the human element in systems, either by selecting the right people or by creating the right people (through proper training and incentives). A fourth family of principles proposes avoiding the human element either when it is convenient (because viable alternatives exist), when it is desirable (because humans have known flaws), or in all possible circumstances whether or not human fallibility has been demonstrated (in hopes of increasing system predictability).

Rigid subscription to any of these principles gives the designers (and operators) of a system an impossible task. For example, the instruction "to avoid all errors" implies that time and price are unimportant. When this is not the case, the designers are left adrift, forced to make trade-offs without explicit guidance. When fault-free design is impossible, then the principle discourages treatment of those faults that do remain. Many fail-safe systems work only because the people in them have learned, by trial and error, to diagnose and respond to problems that are not supposed to happen. Because the existence of such unofficial intelligence has no place in the official design of the system, it may have to be hidden, may be unable to get needed resources (e.g., for record keeping or realistic exercises), and may be destroyed by any uncontrollable change in the system (which invalidates operators' understanding of those intricacies of its operation that do not appear in any plans or training manuals). From this perspective, when perfection is impossible, it may be advisable to abandon near-perfection as a goal as well, so as to ensure that there are enough problems for people to learn to cope with them. In addition, when perfection is still (but) an aspiration, steps toward it should be very large before they justify disrupting accustomed (unwritten) relationships. That is, technological instability is a threat to system operation. Additional threats of this philosophy include unwillingness to face those intractable problems that do remain and setting the operators up to take the rap when their use of the system proves impossible. Similar analyses exist for the limitations of each of the other simple rules. In response, proponents might say that the rules are not meant to be taken literally and that compromises are a necessary part of all design.
Yet the categorical nature of such principles is an important part of their appeal and, as stated, they provide no guidance or legitimation for compromises. Moreover, they often tend to embody a deep misunderstanding of the role of people in person-machine systems, reflecting, in one way or another, a belief in the possibility of engineering the human side of the operation in the way that one might hope to engineer the mechanical or electronic side.

Human Factors

As the long list of human factors failures in technical systems suggests, the attempts to implement this belief are often needlessly clumsy (National Research Council, 1983; Perrow, 1984; Rasmussen and Rouse, 1981). The extensive body of human factors research is either unknown or is invoked at such a late stage in the design process that it can amount to little more than the development of warning labels and training programs for coping with inhuman systems. It is so easy to speculate about human behavior (and provide supporting anecdotal evidence) that systematic empirical research

hardly seems needed. Common concomitants of insensitive design are situations in which the designers (or those who manage them) have radically different personal experiences from the operators, themselves work in organizations that do not function very well interpersonally, or are frustrated in trying to understand why some group of others (e.g., the public) does not like them. However, even when the engineering of people is sensitive, its ambitions are often misconceived. The complexity of systems places some limits on their perfectibility, making it hard to understand the intricacies of a design. As a result, one can neither anticipate all problems nor confidently treat those one can anticipate, without the fear that corrections made in one domain will create new problems in another.8 Part of the genius of people is their ability to see (and hence respond to) situations in unique (and hence unpredictable) ways. Although this creativity can be seen in even the most structured psychomotor tasks, it is central and inescapable in any interesting distributed decision-making system (Fischhoff, Lanir, and Johnson, in press). Once people have to do any real thinking, the system becomes complex (and hence unperfectable). In such cases, the task of engineering is to help the operators understand the system, rather than to manage them as part of it. A common sign of insensitivity in this regard is use of the term operator error to describe problems arising from the interaction of operator and system. A sign of sensitivity is incorporating operators in the design process. A rule of thumb is that human problems seldom have purely technical solutions, while technical solutions typically create human problems (Reason, in press).

THE POSSIBILITY OF DISTRIBUTED DECISION MAKING

Pursuing this line of inquiry can point to specific problems arising in distributed decision-making systems and focus technical efforts on solving them.
Those solutions might include displays for uncertain information, protocols for communication in complex systems, training programs for making do with unfriendly systems, contingency plans for coping with predictable system failures, and terminology for coordinating diverse units. Deriving such solutions is technically difficult, but part of a known craft.

8The nuclear industry's attempts to deal with the human factors problems identified at Three Mile Island provide a number of clear examples. To take but two: (a) increasing the number of potentially dangerous situations in which it is necessary to shut down a reactor has increased the frequency with which reactors are in transitory states in which they are less well controlled and in which their components are subject to greater stress (thereby reducing their life expectancy by some poorly understood amount); (b) increasing the number of human factors-related regulations has complicated operators' job at the plant and created lucrative opportunities for operators to work as consultants to industry (thereby reducing the qualified labor force at the plants).

Investigators know how to describe such problems, devise possible remedies, and subject those remedies to empirical test. When the opportunities to develop solutions are limited, these kinds of perspectives can help characterize existing systems and improvise balanced responses to them. However, although these solutions might make systems better, they cannot make them whole. The pursuit of them may even pose a threat to systems design if it distracts attention from the broader question of how systems are created and conceptualized. In both design and operation, healthy systems enjoy a creative tension between various conflicting pressures. One is between a top-down perspective (working down toward reality from an idealization of how the system should operate) and a bottom-up perspective (working up from reality toward some modest improvement of the current presenting symptoms). Another is between bureaucratization and innovation (or inflexibility and flexibility). Yet others are between planning and reacting, between a stress on routine and crisis operations, between risk acceptance and risk aversion, between human and technology orientation. A common thread in these contrasts is the system's attitude toward uncertainty: Does it accept that as a fact of life or does it live in the future, oriented toward the day when everything is predictable or controllable? Achieving a balance between these perspectives requires both the insight needed to be candid about the limitations of one's system and the leadership needed to withstand whichever pressures dominate at the moment. When a (dynamic) balance is reached, the system can use its personnel most effectively and develop realistic strategies. When it is not reached, the organization is in a state of crisis, vulnerable to events or to hostile actions that exploit its imbalances.
The crisis is particularly great when the need for balance is not recognized or cannot be admitted (within the current organizational culture), and when an experiential gulf separates management and operators. In this light, one can tell a great deal about how a system functions by looking at its managers' philosophy. If that is oversimplified or overconfident, then the system will be too, despite any superficial complexity. The goal of a task analysis then becomes to expose the precise ways in which this vulnerability expresses itself.

REFERENCES

Armstrong, J.S.
1985 Long-range Forecasting. Second edition. New York: Wiley.
Bailey, R.W.
1982 Human Performance in Engineering. Englewood Cliffs, NJ: Prentice-Hall.
Bar-Hillel, M.
1980 The base-rate fallacy in probability judgments. Acta Psychologica 44:211-233.

1984 Representativeness and fallacies of probability judgment. Acta Psychologica 55:91-107.
Baumol, W.J.
1959 Business Behavior, Value and Growth. New York: Macmillan.
Beach, L.R., and Mitchell, T.R.
1978 A contingency model for the selection of decision strategies. Academy of Management Review 3:439-449.
Beach, L.R., Townes, B.D., Campbell, F.L., and Keating, G.W.
1976 Developing and testing a decision aid for birth planning decisions. Organizational Behavior and Human Performance 15:99-116.
Behn, R.D., and Vaupel, J.W.
1983 Quick Analysis for Busy Decision Makers. New York: Basic Books.
Berkeley, D., and Humphreys, P.C.
1982 Structuring decision problems and the "bias heuristic." Acta Psychologica 50:201-252.
Beyth-Marom, R.
1982a How probable is probable? Numerical translation of verbal probability expressions. Journal of Forecasting 1:257-269.
1982b Perception of correlation reexamined. Memory and Cognition 10:511-519.
Beyth-Marom, R., Dekel, S., Gombo, R., and Shaked, M.
1985 An Elementary Approach to Thinking Under Uncertainty. Hillsdale, NJ: Erlbaum.
Brehmer, B.
1980 Effect of cue validity on learning of complex rules in probabilistic inference tasks. Acta Psychologica 44:201-210.
Brown, R.V., Kahr, A.S., and Peterson, C.
1974 Decision Analysis for the Manager. New York: Holt, Rinehart and Winston.
Bunn, M., and Tsipis, K.
1983 The uncertainties of a preemptive nuclear attack. Scientific American 249(5):38-47.
Corbin, R.
1980 On decisions that might not get made. In T. Wallsten, ed., Cognitive Processes in Choice and Decision Behavior. Hillsdale, NJ: Erlbaum.
Coser, L.A.
1954 The Social Functions of Conflict. Glencoe, IL: The Free Press.
Davis, J.H.
1982 Group Performance. Reading, MA: Addison-Wesley.
Dawes, R.M.
1979 The robust beauty of improper linear models in decision making. American Psychologist 34:571-582.
Edwards, W.
1954 The theory of decision making. Psychological Bulletin 51:380-417.
1961 Behavioral decision theory. Annual Review of Psychology 12:473-498.
Einhorn, H.J., and Hogarth, R.M.
1978 Confidence in judgment: Persistence of the illusion of validity. Psychological Review 85:395-416.
1981 Behavioral decision theory: Processes of judgment and choice. Annual Review of Psychology 32:53-88.
Ericsson, A., and Simon, H.
1980 Verbal reports as data. Psychological Review 87:215-251.

Feather, N., ed.
1982 Expectancy, Incentive and Action. Hillsdale, NJ: Erlbaum.
Fischhoff, B.
1975 Hindsight ≠ foresight: The effect of outcome knowledge on judgment under uncertainty. Journal of Experimental Psychology: Human Perception and Performance 1:288-299.
1980 Clinical decision analysis. Operations Research 28:28-43.
1982 Debiasing. In D. Kahneman, P. Slovic, and A. Tversky, eds., Judgment Under Uncertainty: Heuristics and Biases. New York: Cambridge University Press.
1984 Setting standards: A systematic approach to managing public health and safety risks. Management Science 30:834-843.
1987 Judgment and decision making. In R. Sternberg and E.E. Smith, eds., The Psychology of Thinking. New York: Cambridge University Press.
1988 Eliciting expert judgment. IEEE Transactions on Systems, Man, and Cybernetics 13:448-461.
Fischhoff, B., and Beyth-Marom, R.
1983 Hypothesis evaluation from a Bayesian perspective. Psychological Review 90:239-260.
Fischhoff, B., and Cox, L.A., Jr.
1985 Conceptual framework for benefit assessment. In J.D. Bentkover, V.T. Covello, and J. Mumpower, eds., Benefits Assessment: The State of the Art. Dordrecht, The Netherlands: D. Reidel.
Fischhoff, B., Lanir, Z., and Johnson, S.
in press Risky lessons: A framework for analyzing attempts to learn in organizations. Organization Science.
Fischhoff, B., Slovic, P., and Lichtenstein, S.
1978 Fault trees: Sensitivity of estimated failure probabilities to problem representation. Journal of Experimental Psychology: Human Perception and Performance 4:330-344.
1980 Knowing what you want: Measuring labile values. In T. Wallsten, ed., Cognitive Processes in Choice and Decision Behavior. Hillsdale, NJ: Erlbaum.
Fischhoff, B., Svenson, O., and Slovic, P.
1987 Active responses to environmental hazards. In D. Stokols and I. Altman, eds., Handbook of Environmental Psychology. New York: Wiley.
Fischhoff, B., Watson, S., and Hope, C.
1984 Defining risk. Policy Sciences 17:123-139.
Fiske, S., and Taylor, S.E.
1984 Social Cognition. Reading, MA: Addison-Wesley.
Gettys, C.F., Pliske, R.M., Manning, C., and Casey, J.T.
1987 An evaluation of human act generation performance. Organizational Behavior and Human Decision Processes 39:23-51.
Goldberg, L.R.
1968 Simple models or simple processes? Some research on clinical judgment. American Psychologist 23:483-496.
Grether, D.M., and Plott, C.R.
1979 Economic theory of choice and the preference reversal phenomenon. American Economic Review 69:623-638.
Hammer, W.
1980 Product Safety and Management Engineering. Englewood Cliffs, NJ: Prentice-Hall.

Hechter, M., Cooper, L., and Nadel, L., eds.
in press Values. Stanford, CA: Stanford University Press.
Hershey, J.C., Kunreuther, H.C., and Schoemaker, P.J.H.
1982 Sources of bias in assessment procedures for utility functions. Management Science 28:936-954.
Hogarth, R.M.
1982 Beyond discrete biases: Functional and dysfunctional aspects of judgmental heuristics. Psychological Bulletin 90:197-217.
Humphreys, P., Svenson, O., and Vari, A., eds.
1983 Analyzing and Aiding Decision Processes. Amsterdam: North-Holland.
Janis, I.L.
1972 Victims of Groupthink. Boston: Houghton Mifflin.
1982 Counseling on Personal Decisions. New Haven: Yale University Press.
Janis, I.L., and Mann, L.
1977 Decision Making. New York: Free Press.
Jungermann, H.
1984 The two camps on rationality. In R.W. Scholz, ed., Decision Making Under Uncertainty. Amsterdam: Elsevier.
Kahneman, D., and Tversky, A.
1972 Subjective probability: A judgment of representativeness. Cognitive Psychology 3:430-454.
1979 Prospect theory. Econometrica 47:263-292.
Kahneman, D., Slovic, P., and Tversky, A., eds.
1982 Judgment Under Uncertainty: Heuristics and Biases. New York: Cambridge University Press.
Keeney, R.L., and Raiffa, H.
1976 Decisions With Multiple Objectives: Preferences and Value Tradeoffs. New York: Wiley.
Kidd, J.B.
1970 The utilization of subjective probabilities in production planning. Acta Psychologica 34:338-347.
Koriat, A., Lichtenstein, S., and Fischhoff, B.
1980 Reasons for confidence. Journal of Experimental Psychology: Human Learning and Memory 6:107-118.
Lanir, Z.
1982 Strategic Surprises. Ramat Aviv: Tel Aviv University Press.
Lichtenstein, S., and Fischhoff, B.
1980 Training for calibration. Organizational Behavior and Human Performance 26:149-171.
Lichtenstein, S., Fischhoff, B., and Phillips, L.D.
1982 Calibration of probabilities: The state of the art to 1980. In D. Kahneman, P. Slovic, and A. Tversky, eds., Judgment Under Uncertainty: Heuristics and Biases. New York: Cambridge University Press.
Lichtenstein, S., Slovic, P., and Zink, D.
1969 Effect of instruction in expected value on optimality of gambling decisions. Journal of Experimental Psychology 79:236-240.
March, J.G.
1978 Bounded rationality, ambiguity, and the engineering of choice. The Bell Journal of Economics 9:587-608.
McCormick, N.J.
1981 Reliability and Risk Analysis. New York: Academic Press.

Meehl, P.E.
1954 Clinical vs. Statistical Prediction: A Theoretical Analysis and a Review of the Evidence. Minneapolis: University of Minnesota Press.
Miller, G.A.
1956 The magical number seven, plus or minus two: Some limits on our capacity for processing information. Psychological Review 63:81-97.
Mischel, W.
1968 Personality and Assessment. New York: Wiley.
Montgomery, H.
1983 Decision rules and the search for a dominance structure: Towards a process model of decision making. In P. Humphreys, O. Svenson, and A. Vari, eds., Analyzing and Aiding Decision Processes. Amsterdam: North-Holland.
Murphy, A.H., and Winkler, R.L.
1984 Probability of precipitation forecasts. Journal of the American Statistical Association 79:391-400.
Myers, D.G., and Lamm, H.
1976 The group polarization phenomenon. Psychological Bulletin 83(4):602-627.
National Interagency Incident Management System
1982 The What, Why, and How of NIIMS. Washington, DC: U.S. Department of Agriculture.
National Research Council
1981 Surveys of Subjective Phenomena. Committee on National Statistics. Washington, DC: National Academy Press.
1983 Research Needs in Human Factors. Committee on Human Factors. Washington, DC: National Academy Press.
Nisbett, R.E., and Ross, L.
1980 Human Inference: Strategies and Shortcomings of Social Judgment. Englewood Cliffs, NJ: Prentice-Hall.
Nisbett, R.E., and Wilson, T.D.
1977 Telling more than we can know: Verbal reports on mental processes. Psychological Review 84:231-259.
Payne, J.W.
1982 Contingent decision behavior. Psychological Bulletin 92:382-402.
Perrow, C.
1984 Normal Accidents. New York: Basic Books.
Peterson, C.R., and Beach, L.R.
1967 Man as an intuitive statistician. Psychological Bulletin 68:29-46.
Peterson, C.R., ed.
1973 Special issue: Cascaded inference. Organizational Behavior and Human Performance 10:310-432.
Pitz, G.F., and Sachs, N.J.
1984 Behavioral decision theory. Annual Review of Psychology 35.
Pitz, G.F., Sachs, N.J., and Heerboth, J.
1980 Procedures for eliciting choices in the analysis of individual decisions. Organizational Behavior and Human Performance 26:396-408.
Polanyi, M.
1962 Personal Knowledge. London: Routledge and Kegan Paul.
Raiffa, H.
1968 Decision Analysis. Reading, MA: Addison-Wesley.
Rapoport, A., and Wallsten, T.S.
1972 Individual decision behavior. Annual Review of Psychology 23:131-175.

Rasmussen, J., and Rouse, W.B., eds.
1981 Human Detection and Diagnosis of System Failures. New York: Plenum.
Reason, J.
in press Human Error. New York: Cambridge University Press.
Rokeach, M.
1973 The Nature of Human Values. New York: The Free Press.
Ross, L.
1977 The intuitive psychologist and his shortcomings: Distortions in the attribution process. Pp. 173-220 in L. Berkowitz, ed., Advances in Experimental Social Psychology (Vol. 10). New York: Academic Press.
Samet, M.G.
1975 Quantitative interpretation of two qualitative scales used to rate military intelligence. Human Factors 17:192-202.
Schoemaker, P.J.H.
1983 The expected utility model: Its variants, purposes, evidence and limitations. Journal of Economic Literature 20:529-563.
Shaklee, H., and Mims, M.
1982 Sources of error in judging event covariations: Effects of memory demands. Journal of Experimental Psychology: Learning, Memory, and Cognition 8:208-224.
Shaklee, H., and Tucker, D.
1980 A rule analysis of judgments of covariation between events. Memory and Cognition 8:459-467.
Simon, H.
1957 Models of Man: Social and Rational. New York: Wiley.
Slovic, P.
1972 Psychological study of human judgment: Implications for investment decision making. Journal of Finance 27:779-799.
Slovic, P., and Fischhoff, B.
1977 On the psychology of experimental surprises. Journal of Experimental Psychology: Human Perception and Performance 3:544-551.
Slovic, P., Fischhoff, B., and Lichtenstein, S.
1977 Behavioral decision theory. Annual Review of Psychology 28:1-39.
Slovic, P., Lichtenstein, S., and Fischhoff, B.
1988 Decision making. In R.C. Atkinson, R.J. Herrnstein, G. Lindzey, and R.D. Luce, eds., Stevens' Handbook of Experimental Psychology (second edition). New York: Wiley.
Slovic, P., and Tversky, A.
1974 Who accepts Savage's axiom? Behavioral Science 19:368-373.
Stokey, E., and Zeckhauser, R.
1978 A Primer for Policy Analysis. New York: Norton.
Svenson, O.
1981 Are we all less risky and more skillful than our fellow drivers? Acta Psychologica 47:143-148.
Tihansky, D.
1976 Confidence assessment of military airframe cost predictions. Operations Research 24:26-43.

Tversky, A.
1969 Intransitivity of preferences. Psychological Review 76:31-48.
Tversky, A., and Kahneman, D.
1973 Availability: A heuristic for judging frequency and probability. Cognitive Psychology 5:207-232.
1981 The framing of decisions and the psychology of choice. Science 211:453-458.
U.S. Nuclear Regulatory Commission
1983 PRA Procedures Guide (NUREG/CR-2300). Washington, DC: The Commission.
von Winterfeldt, D., and Edwards, W.
1982 Costs and payoffs in perceptual research. Psychological Bulletin 93:609-622.
1986 Decision Analysis and Behavioral Research. New York: Cambridge University Press.
Wagenaar, W., and Sagaria, S.
1976 Misperception of exponential growth. Perception and Psychophysics.
Wallsten, T., and Budescu, D.
1983 Encoding subjective probabilities: A psychological and psychometric review. Management Science 29:151-173.
Weinstein, N.D.
1980 Unrealistic optimism about future life events. Journal of Personality and Social Psychology 39:806-820.
Wheeler, D.D., and Janis, I.L.
1980 A Practical Guide for Making Decisions. New York: The Free Press.
Wilson, R., and Crouch, E.
1982 Risk/Benefit Analysis. Cambridge, MA: Ballinger.
Yates, J.F.
1989 Judgment and Decision Making. Chichester, England: Wiley.