Copyright © National Academy of Sciences. All rights reserved.




DECISION MAKING--AIDED AND UNAIDED

Baruch Fischhoff

INTRODUCTION

Decision making is part of most human activities, including the design, operation, and monitoring of space station missions. Decision making arises whenever people must choose between alternative courses of action. It includes both global decisions, such as choosing a station's basic configuration, and local decisions, such as choosing the best way to overcome a minor problem in executing an onboard experiment.

Decision making becomes interesting and difficult when the choice is non-trivial, either because decision makers are unsure what outcomes the different courses of action will bring or because they are unsure what outcomes they want (e.g., what tradeoff to make between cost and reliability). Much of science and engineering is devoted to facilitating such decision making, where possible even eliminating the need for it. A sign of good engineering management is that there be no uncertainty about the objectives of a project. A sign of advanced science is that there are proven solutions to many problems, showing how to choose actions whose outcomes are certain to achieve the chosen objectives. Where the science is less advanced, the hope is to routinize at least part of the decision-making process. For example, the techniques of cost-benefit analysis may make it possible to predict the economic consequences of a proposed mission with great confidence, even if those techniques cannot predict the mission's risks to lives and property or show how those risks should be weighed against its economic costs and benefits (Bentkover et al., 1985; Fischhoff et al., 1981). Or, current engineering knowledge may allow automation of at least those decisions where electronic sensors or human operators can be trusted to provide accurate initial conditions.
Indeed, space travel would be impossible without extensive computer-controlled decision making for problems involving great computational complexity or time pressure (e.g., during launch). An overriding goal of space science (and other applied sciences) is to expand both the range of problems having known solutions and the technological capability for deriving and activating those solutions without human intervention. In this pursuit, it is aided by concurrent efforts in other fields. Among them is cognitive science (broadly defined), whose practitioners are attempting to diversify the kinds of problems that can be represented and solved by computer.

Yet, however far these developments progress, there will always be some decisions that are left entirely to human judgment and some elements of judgment in even the most automated decisions. For example, there is no formula for unambiguously determining which basic design configuration will prove best in all anticipated circumstances (much less unanticipated ones). Analogously, there is no proven way to select the best personnel for all possible tasks. When problems arise, during either planning or operation, judgment is typically needed to recognize that something is wrong and to diagnose what that something is. When alarms go off, judgment is needed to decide whether to trust them or the system that they mistrust. When no alarms go off, supervisory judgment is needed to decide whether things are, in fact, all right. However thorough training may be, each operator must continually worry about whether others have understood their (possibly ambiguous) situations correctly, and followed the appropriate instructions. When solutions are programmed, operators must wonder how good the programming is. When solutions are created, engineers must guess at how materials (and people) will perform in novel circumstances. Although these guesses can be aided and disciplined by scientific theories and engineering models, there is always some element of judgment in choosing and adapting those models, compounding the uncertainty due to gaps in the underlying science. Any change in one part of a system creates uncertainties regarding its effects on other components. In all of these cases, wherever knowledge ends, judgment begins, even if it is the judgment of highly trained and motivated individuals (Fischhoff, 1987; McCormick, 1981; Perrow, 1984).
Understanding how good these judgments are is essential to knowing how much confidence to place in them and in the systems that depend on them. Understanding how those judgments are produced is essential to improving them, whether through training or judgmental aids. Such understanding is the goal of a loosely bounded interdisciplinary field known as behavioral decision theory. The "behavioral" is meant to distinguish it from the study of decision making in mainstream American economics, which rests on the metatheoretical assumption that people always optimize when they make decisions, in the sense of identifying the best possible course of action. Although plausible in some circumstances and essential for the invocation of economics' sophisticated mathematical tools, the assumption of optimization severely constrains the kinds of behavior that can be observed. It also leaves economics with the limited (if difficult) goal of discerning what desires people have succeeded in optimizing in their decisions. Behavioral decision theory is concerned with the conditions conducive to optimizing, the kinds of behavior that come in its stead, and the steps that can be taken to improve people's performance (Fischhoff et al., 1981; Kahneman et al., 1982; National Research Council, 1986; Schoemaker, 1983; von Winterfeldt and Edwards, 1986). Research in this tradition draws on a variety of fields, including psychology, operations research, management science, philosophy, political science, and (some) economics. As it has relatively little institutional structure, it might be best thought of as the conjunction of investigators with several shared assumptions. One is the concurrent pursuit of basic and applied knowledge, believing that they are mutually beneficial. A second is the willingness to take results from any field, if they seem useful. A third is interest in using the latest technology to advance and exploit the research.

These are also the assumptions underlying this chapter, which attempts to identify the most promising and important research directions for aiding space station development. Because of the space station's role as a pioneer of advanced technology, such research, like the station itself, would have implications for a wide range of other applications.

The results of research in behavioral decision theory have shown a mixture of strengths and weaknesses in people's attempts to make decisions in complex and uncertain environments. These intuitive psychological processes pose constraints on the decision-making tasks that can be imposed on people and, hence, on the quality of the performance that can be expected from them. These processes also offer opportunities for decision aiding, by suggesting the kinds of help that people need and can accept. The following section provides a brief overview of this literature and points of access to it, couched in quite general terms. The next section considers some of the special features of decision making in space station design and operation. The following three sections discuss the intellectual skills demanded by those features and the kinds of research and development needed to design and augment them. These features are the needs: (a) to create an explicit model of the space station's operation, to be shared by those involved with it, as a basis for coordinating their distributed decision making; (b) to deal with imperfect systems, capable of responding in unpredictable ways; and (c) to manage novel situations.
A concluding section discusses institutional issues in managing (and exploiting) such research, related efforts (or needs) in other domains, and the philosophy of science underlying this analysis.

SPACE STATION DECISIONS AND THEIR FACILITATION

Most prescriptive schemes for deliberative decision making (Behn and Vaupel, 1982; Raiffa, 1968; von Winterfeldt and Edwards, 1986), showing how it should be done, call for performing something like the following four steps:

a. Identify all possible courses of action (including, perhaps, inaction).

b. Evaluate the attractiveness (or aversiveness) of the consequences that might arise if each course of action is adopted.

c. Assess the likelihood of each consequence occurring (should each action be taken).

d. Integrate all these considerations, using a defensible (i.e., rational) decision rule to select the best (i.e., optimal) action.

From this perspective, decisions are evaluated according to how well they take advantage of what was known at the time that they were made, vis-a-vis achieving the decision maker's objectives. They are not evaluated according to the desirability of the consequences that followed. Some decisions involve only undesirable options, while the uncertainty surrounding other decisions means that bad things will happen to some good choices.

The following is a partial list of decisions that might arise in the course of designing and operating a space station. Each offers a set of action alternatives. Each involves a set of consequences whose relative importance must be weighed. Each is surrounded by various uncertainties whose resolution would facilitate identifying the optimal course of action:

Deciding whether to override an automated system (or deciding what its current state actually is, given a set of indicators);

Deciding in advance how to respond to a potential emergency;

Deciding where to look for some vital information in a computerized database;

Deciding whether to proceed with an extravehicular operation when some noncritical, but desirable safety function is inoperative;

Deciding whether to replace a crew member having a transient medical problem (either when formulating general operational rules or when applying them at the time of a launch);

Deciding where to put critical pieces of equipment;

Deciding how to prioritize the projects of different clients, both in planning and in executing missions;

Deciding where to look first for the sources of apparent problems;

Deciding which ground crew actions deserve an extra double check;

Deciding whether the flight crew is up to an additional period in orbit;

Deciding what to do next in a novel manipulation task;

Deciding on the range of possible values for a parameter needed by a risk analysis of system reliability;

Deciding just how much safety will be increased by a design change, relying on a risk analysis to project its system-wide ramifications;

Deciding what to report to outsiders (e.g., journalists, politicians, providers of commercial payloads) about complex technical situations that they are ill-prepared to understand.

These decisions vary in many ways: who is making them, how much time is available to make them, what possibilities there are for recovering from mistakes, how great are the consequences of success and failure, what computational algorithms exist for deciding what to do, how bounded is the set of alternative actions, and where do the greatest uncertainties lie, in evaluating the importance of the consequences or in evaluating the possibilities for achieving them. What these decisions have in common is that some element of unaided human judgment is needed before an action is consummated, even if it is only the decision to allow an automated process to continue unmolested. Judgment is needed, in part, because there is some element of uniqueness in each decision, so that it cannot be resolved simply by the identification of a procedural rule (or set of rules) that has proven itself in similar situations. The search for such rules might be considered an exercise in problem solving. By contrast, decision making involves the intellectual integration of diverse considerations, applying a general-purpose integrative rule intended to deal with novel situations and "get it right the first time." In "interesting" cases, decision making is complicated by uncertain facts (Wise, 1986), so that one cannot be assured of the outcome (and of which choice is superior), and by conflicting consequences, so that no choice is superior in all respects (and some tradeoffs must be made).
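The four-step scheme above amounts, in its simplest form, to an expected-utility calculation. The following sketch illustrates it for one decision from the list (whether to override an automated system); the options, consequences, utilities, and probabilities are all invented for illustration, not drawn from any actual NASA analysis.

```python
# Illustrative sketch of the four-step prescriptive scheme.
# All names and numbers below are hypothetical.

# Step a: identify courses of action
actions = ["trust the automated system", "override manually"]

# Step b: evaluate consequence attractiveness (utility; higher is better)
utility = {"nominal operation": 1.0,
           "recoverable fault": -2.0,
           "mission abort": -10.0}

# Step c: assess the likelihood of each consequence under each action
p = {
    "trust the automated system":
        {"nominal operation": 0.90, "recoverable fault": 0.07, "mission abort": 0.03},
    "override manually":
        {"nominal operation": 0.80, "recoverable fault": 0.18, "mission abort": 0.02},
}

# Step d: integrate with an expectation (expected-utility) rule
def expected_utility(action):
    return sum(prob * utility[c] for c, prob in p[action].items())

best = max(actions, key=expected_utility)
for a in actions:
    print(f"{a}: EU = {expected_utility(a):+.2f}")
print("choose:", best)
```

Under these made-up numbers the rule favors trusting the system; shifting any probability or utility can reverse the choice, which is why the likelihood assessments of step (c) carry so much weight.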
As mentioned, the hope of behavioral decision theory is to discern basic psychological processes likely to recur wherever a particular kind of judgment is required. One hopes, for example, that people use their minds in somewhat similar ways when determining the probability that they know where a piece of information is located in a database and when determining the probability that they can tell when an anomalous meter reading represents a false alarm. If so, then similar treatments might facilitate performance in both settings (Fischhoff and MacGregor, 1986; Murphy and Winkler, 1984).

The need to make decisions in the face of incomplete knowledge is part of the human condition. It becomes a human factors problem (the topic of this volume) either when the decisions involve the design and operation of machines (broadly defined) or when machines are intended to aid decisions. Decisions about machines might be aided by collecting historical data regarding their performance, by having them provide diagnostic information about their current trustworthiness, by providing operators with training in how to evaluate trustworthiness (and how to convert those evaluations into action), and by showing how to apply general organizational philosophies (e.g., safety first) to specific operating situations. Decision aiding by machines might be improved by enhancing the display of information that operators understand most poorly, by formatting these displays in ways compatible with users' natural ways of thinking, by clarifying the rationale for the machine's recommendations (e.g., its assumed tradeoffs, its decision rule, its treatment of uncertainty), and by describing the definitiveness of its recommendations. A better understanding of how people intuitively make decisions would facilitate attaining these objectives, as well as developing training procedures to help people make judgments and decisions wherever they arise. Just thinking about decision making as a general phenomenon might increase the motivation and opportunities for acquiring these skills.

DESCRIPTIONS OF DECISION MAKING

One way of reading the empirical literature on intuitive processes of judgment and decision making is as a litany of problems. At each of the four stages of decision making given above, investigators have identified seemingly robust and deleterious biases: when people generate action options, they often neglect alternatives that should be obvious and, moreover, are insensitive to the magnitude of their neglect. As a result, options that should command attention are out of mind when they are out of sight, leaving people with the impression that they have analyzed problems more thoroughly than is actually the case (Fischhoff et al., 1978; Pitz et al., 1980). Those options that are noted are often defined quite vaguely, making it difficult to evaluate them precisely, communicate them to others, follow them if they are adopted, or tell when circumstances have changed enough to justify rethinking the decision (Bentkover et al., 1985; Fischhoff et al., 1984; Furby and Fischhoff, 1987; Samet, 1975).
Imprecision also makes it difficult to evaluate decisions in the light of subsequent experience, insofar as it is hard to reconstruct exactly what one was trying to do and why. That reconstruction is further complicated by hindsight bias, the tendency to exaggerate in hindsight what one knew in foresight (Fischhoff, 1975). The feeling that one knew all along what was going to happen can lead one to be unduly harsh on past decisions (if it was relatively obvious what was going to happen, then failure to select the best option must mean incompetence) and to be unduly optimistic about future decisions (by encouraging the feeling that things are generally well understood, even if they are not working out so well).

Even though evaluating the relative importance of potential consequences might seem to be the easiest of the four stages of decision making, a growing literature suggests that people are often uncertain about their own values. As a result, the values that they express can be unstable and unduly sensitive to seemingly irrelevant features of how evaluation questions are posed. For example, (a) the relative attractiveness of two gambles may depend on whether people are asked how attractive each is or how much they would pay to play it (Grether and Plott, 1979; Slovic and Lichtenstein, 1983); (b) an insurance policy may become much less attractive when its "premium" is described as a "sure loss" (Hershey et al., 1982); (c) a risky venture may seem much more attractive when described in terms of the lives that will be saved by it, rather than in terms of the lives that will be lost (Kahneman and Tversky, 1979; Tversky and Kahneman, 1981). Thus, uncertainty about values can pose as serious a problem to effective decision making as can uncertainty about facts.

Although people are often willing to acknowledge uncertainty about what will happen, they are not always well equipped to deal with it, in the sense of assessing the likelihood of future events (in the third stage of decision making). A rough summary of the voluminous literature on this topic is that people are quite good at tracking repetitive aspects of their environment, but not as good at combining those observations with inferences about what they have not seen (Hasher and Zacks, 1984; Kahneman et al., 1982; Peterson and Beach, 1967). Thus, they might be able to tell how frequently they have seen or heard about deaths from a particular cause, but not be able to assess how representative their experience has been, leading them to overestimate risks to which they have been overexposed (Combs and Slovic, 1979; Tversky and Kahneman, 1973). They can tell what usually happens in a particular situation and recognize how a specific instance is special, yet have difficulty integrating these two (uncertain) facts--with the most common bias being to focus on the specific information and ignore experience (or "base rates") (Bar-Hillel, 1980).
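The pull of specific information against base rates can be made concrete with Bayes' rule. In this hypothetical sketch, an alarm that is "right 95 percent of the time" still signals a true fault only rarely, because faults themselves are rare; all numbers are invented for illustration.

```python
# Hypothetical numbers: a fault alarm with 95% sensitivity and a 5%
# false-alarm rate, for a fault whose base rate is 1 in 1,000
# operating periods.
p_fault = 0.001                      # base rate of the fault
p_alarm_given_fault = 0.95           # sensitivity
p_alarm_given_no_fault = 0.05        # false-alarm rate

# Bayes' rule: P(fault | alarm)
p_alarm = (p_alarm_given_fault * p_fault
           + p_alarm_given_no_fault * (1 - p_fault))
p_fault_given_alarm = p_alarm_given_fault * p_fault / p_alarm

print(f"P(fault | alarm) = {p_fault_given_alarm:.3f}")
```

Focusing on the 95 percent figure while ignoring the one-in-a-thousand base rate suggests an alarm is almost certainly genuine; the posterior probability is actually below 2 percent.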
They can tell how similar a specific instance is to a prototypical case, yet not how important similarity is for making predictions--usually relying on it too much (Bar-Hillel, 1984; Kahneman and Tversky, 1972). They can tell how many times they have seen an effect follow a potential cause, yet not infer what that says about causality--often perceiving relations where none exist (Beyth-Marom, 1982; Einhorn and Hogarth, 1978; Shaklee and Tucker, 1980). They have a rough feeling for when they know more and when they know less, but not enough sensitivity to avoid a commonly observed tendency toward overconfidence (Fischhoff, 1982; Wallsten and Budescu, 1983).

According to decision theory, the final stage of decision making should involve implementation of an expectation rule, whereby an option is evaluated according to the attractiveness of its possible consequences, weighted by their probability of occurrence. Since it has become acceptable to question the descriptive validity of this rule, much research has looked at how well it predicts behavior (Dawes, 1979; Feather, 1982; Fischhoff et al., 1981; Inn et al., 1982; National Research Council, 1986; Schoemaker, 1983). A rough summary of this work would be that: (a) the expectation rule often predicts people's choices fairly well--if one knows how they evaluate the probability and attractiveness of consequences; (b) with enough ingenuity, one can usually find some set of beliefs (about the consequences) for which the rule would dictate choosing the option that was selected, meaning that it is hard to prove that the rule was not used; (c) expectation rules can often predict the outcome of decision-making processes even when they do not at all reflect the thought processes involved--so that predicting behavior is not sufficient for understanding or aiding it; (d) those processes seem to rely on rules with quite different logics, many of which appear to be attempts to avoid making hard choices by finding some way to view the decision as an easy choice--for example, by disregarding consequences on which the otherwise-best option rates poorly (Janis and Mann, 1977; Montgomery, 1983; Payne, 1982; Simon, 1957).

The significance of these results from experimental studies depends upon how well they represent behavior outside the lab, how much insight they provide into improving decision making, and how adversely the problems that they reveal affect the optimality of decisions. As might be expected, there is no simple answer to any of these questions. Life poses a variety of decisions, some of which are sensitive to even modest imprecision in their formulation or in the estimation of their parameters, some of which yield an optimal choice with almost any sensible procedure, and some of which can tolerate occasional inaccuracies, but not recurrent problems, such as persistently exaggerating how much one knows (Henrion, 1980; Krzysztofowicz, 1983; McCormick, 1981; von Winterfeldt and Edwards, 1982). Placing decisions within a group or organizational context may ameliorate or exacerbate problems, depending on how carefully members scrutinize one another's decisions, how independent are the perspectives that they bring to that scrutiny, and whether that social context has an incentive structure that rewards effective decision making (as opposed to rewarding those who posture or routinely affirm common misconceptions) (Davis, 1982; Lanir, 1982; Myers and Lamm, 1976).

The robustness of laboratory results is an empirical question.
Where evidence is available, it generally suggests that these judgmental problems are more than experimental artifacts that can be removed by such "routine" measures as encouraging people to work harder, raising the stakes contingent on their performance, clarifying instructions, varying the subject matter of the tasks used in experiments, or using better educated subjects. There are many fewer studies than one would like regarding the judgmental performance of experts working in their own areas of expertise. What studies there are suggest some reason for concern, indicating that experts think like everyone else, unless they have had the conditions needed to acquire judgment as a learned skill (e.g., prompt, unambiguous feedback) (Fischhoff, 1982; Henrion and Fischhoff, 1986; Murphy and Winkler, 1984).

The evidentiary record is also incomplete with regard to the practical usefulness of this research. The identification of common problems points to places where human judgment should be supplanted or aided. The acceptance of decision aids (and aides) has, however, been somewhat limited (Brown, 1970; Fischhoff, 1980; Henrion and Morgan, 1985; von Winterfeldt and Edwards, 1986). One inherent obstacle is presenting users with advice derived by inferential processes different than their natural ones, leaving uncertainty about how far that advice is to be trusted and whose problem it really is solving. Developing (and testing) decision aids that took seriously the empirical results of behavioral decision theory would be a useful research project. With regard to situations where decision aids are unavailable, there is some evidence that judgment can be improved by training procedures that recognize the strengths and weaknesses of people's intuitive thought processes (Kahneman et al., 1982; Nisbett et al., 1983). Here, too, further research is needed.

THE PSYCHOLOGICAL REALITY OF SPACE STATION DECISIONS

The recurrent demand for similar intellectual skills in diverse decisions means that any research into decision-making processes could, in principle, provide some benefit to the space station program. However, there are some conditions that are particularly important in the space station environment and, indeed, might rarely occur in less complex and technologically saturated ones. The challenges posed by such conditions would seem to be suitable and important foci for NASA-supported research. Three such conditions are described in the remainder of this section. Each subsequent section considers research issues pertinent to one of these conditions. In each case, significant progress appears possible, but would appear to demand the sort of sustained programmatic effort that NASA has historically been capable of mustering.

Condition 1: The need to create a widely shared model of the space station and its support systems. The technical knowledge needed to manage the space program is widely distributed over diverse locations on earth and in space, in different centers on earth, and across different people within each earth and space center. As a result, there are prodigious technical problems involved in ensuring compatibility, consistency, and concurrency among the computerized databases upon which these scattered individuals rely.
Even if these problems of information transmission can be resolved, there is still no guarantee that the diverse individuals at the different nodes in the system will be aware of the information available to them, nor comprehend its meaning for their tasks, nor be alert to all changes that might affect their work. Even with a static database, there may be problems of understanding when the individuals have very different kinds of expertise, such that their contributions to the database cannot be readily understood (or evaluated) by one another.

The management of such systems requires the creation of some sort of system-wide model within which individuals can pool their knowledge and from which they can draw needed information. That model may be a loosely organized database, with perhaps a routing system for bringing certain information to the attention of the right people (attempting to strike a balance between telling them too much and too little). Or, it may be an explicitly coordinated model, such as those used in design processes guided by procedures like probabilistic risk analysis (McCormick, 1981; U.S. Nuclear Regulatory Commission, 1983). These models assimilate new information into an integrated picture of the physical system, possibly allowing computational predictions of system performance, which can be redone whenever the state of the system (or the theoretical understanding of its operation) changes. Shared models with such computational abilities can be used to simulate the system, for the sake of predicting the effects of design changes, training operators for emergencies, and troubleshooting (by seeing what changes in the system could have produced the observed aberrations). Such models are useful, if not essential, for achieving NASA's goal of allowing "crews to intervene at extremely low levels of every subsystem to repair failures and take advantage of discoveries" (NASA, 1986). Less ambitious models include spreadsheets, status displays, even simple engineering drawings, pooling information from varied human and machine sources (although, ultimately, even machine-sourced information represents some humans' decisions regarding what information should and can be summarized, transmitted, and displayed).

All such models are based around a somewhat artificial modeling "language" which is capable of representing certain aspects of complex systems. Using them effectively requires "fluency" in the modeling languages and an understanding of their limits. Thus, for example, decision analysis (Behn and Vaupel, 1982; Raiffa, 1968; von Winterfeldt and Edwards, 1986) can offer insight into most decision-making problems, if decision makers can describe their situations in terms of options, consequences, tradeoffs, and probabilities and if they can recognize how the problem described in the model differs from their actual problem.
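A shared model with such computational abilities can be caricatured in a few lines. The components, reliability figures, and independence assumption below are all invented for illustration; the independence assumption itself exemplifies the kind of modeling limit that users of such a "language" must keep in mind.

```python
# A toy shared model: component reliabilities pooled from different
# experts, with system-level predictions redone whenever any entry
# changes. All names and numbers are hypothetical.
reliabilities = {"power": 0.999, "thermal": 0.995, "comms": 0.990}

def system_reliability(model):
    # Treating components as independent and in series is itself a
    # modeling assumption -- one of the model's "limits."
    r = 1.0
    for value in model.values():
        r *= value
    return r

print(f"baseline prediction: {system_reliability(reliabilities):.4f}")

# Redo the prediction when the state of the system changes:
reliabilities["thermal"] = 0.980     # e.g., a degraded radiator
print(f"after change:        {system_reliability(reliabilities):.4f}")
```

Even this trivial model supports the uses described above: rerunning it under hypothetical changes is a (degenerate) simulation of design changes, and asking which single entry could have produced an observed drop in performance is a (degenerate) form of troubleshooting.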
Probabilistic risk analyses can aid regulators and designers in understanding the reliability of nuclear power plants by pooling the knowledge of diverse groups of engineers and operators--as long as everyone remembers that such models cannot capture phenomena such as the "intellectual common mode failure" that arises when operators misunderstand an emergency situation in the same way. The creation, sharing, interpretation, and maintenance of such models are vital to those organizations that rely on them.

The unique features of such models in the context of NASA's missions are their size and complexity, their diversity (in terms of the kinds of expertise that must be pooled), and their formality. That formality arises not only from the technical nature of much of the information but also from the need for efficient telecommunications among NASA's distributed centers. Formality complicates the cognitive task of communication, by eliminating the informal cues that people rely upon to understand one another and one another's work. It may, however, simplify the cognitive study of such communication by rendering a high portion of significant behavior readily observable. It may also simplify the cognitive engineering of more effective model building and sharing, insofar as better methods can be permanently and routinely incorporated in the appropriate protocols. Research that might produce such methods is discussed below.

Condition 2: The need to make decisions with imperfect systems. Decisions involving uncertainty are gambles. Although it is an uncomfortable admission where human lives are at stake, many critical decisions in space activities are gambles. The uncertainties in them come from the limits of scientific knowledge regarding exactly how various elements of a mission will perform, from the limits of engineering knowledge regarding how different system elements will interact, from the limits in the technical capacity for modeling complex systems, and from the unpredictability of human operators (who are capable of fouling and saving situations in novel ways). Indeed, despite NASA's deep commitment to planning and training, the nature of its mission demands that some level of uncertainty be maintained. It is expected to extend the limits of what people and machines can do. Performance at those limits cannot be tested fully in theoretical analyses and simulation exercises.

In order to gamble well, one needs both the best possible predictions regarding a system's performance and a clear appraisal of the limits of those predictions. Such an assessment of residual uncertainty is needed in order to guide the collection of additional information, in order to guide preparation for surprises, and, most important of all, to guide the decision as to whether a mission is safe enough to proceed (considering NASA's overall safety philosophy). Using information wisely requires an understanding of just how good it is.

Because gambling is so distasteful, there is constant activity to collect (and produce) additional knowledge, either to perfect the system or to clarify its imperfections. As a result, the state of knowledge and the state of the system will be in constant flux, even without the continuing changes of state associated with its ongoing operations (e.g., testing, training, wear). Somehow, this new information must be collated and disseminated, so that those concerned with the system know what is happening and know how much one another knows. In this way, dealing with uncertainty is related to dealing with a shared model.
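The appraisal of residual uncertainty can be illustrated with a small Monte Carlo sketch. Rather than a single point estimate, engineers may only be able to bound a parameter--here, a subsystem's per-demand failure probability--and propagating that band shows the spread of mission-level risk that a point estimate would hide. All numbers are hypothetical.

```python
import random

# Hypothetical sketch: the subsystem's failure probability is known only
# to lie between 1e-4 and 1e-3 per demand; assume three independent
# demands on the subsystem during one mission.
random.seed(0)

def mission_risk(p_subsystem):
    # probability of at least one failure across three demands
    return 1 - (1 - p_subsystem) ** 3

samples = sorted(mission_risk(random.uniform(1e-4, 1e-3))
                 for _ in range(100_000))
print(f"median risk:     {samples[len(samples) // 2]:.2e}")
print(f"95th percentile: {samples[int(0.95 * len(samples))]:.2e}")
```

The spread between the median and the upper percentile is the "residual uncertainty" the text describes: the quantity that should guide further data collection, preparation for surprises, and the judgment of whether a mission is safe enough to proceed.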
For operators, this residual uncertainty creates the constant possibility of having to override the system, in order to rescue it from some unanticipated circumstance or response. That override might involve anything from a mild course correction to a fundamental intervention signalling deep distrust of a system that seems on the verge of disaster. As the physical stakes riding on the decision increase, so do the social stakes (in the sense of the responsibility being taken for system operation and the implicit challenge to system designers). Thus, operators, as well as designers and managers, must be able to assess the system's trustworthiness and to translate that assessment into an appropriate decision. The variety of individuals with knowledge that could, conceivably, prompt override decisions means that coping with uncertainty is an intellectual skill that needs to be cultivated and facilitated throughout the organization. It also means that the system's overall management philosophy must recognize and direct that skill. For example, a general instruction to "avoid all errors" implies that time and price are unimportant. Where this is not the case, personnel are left adrift, forced to make tradeoffs without explicit guidance. Such an official belief in the possibility of fault-free design may also

of hypothetical experiences (even if those have yet to be experienced in reality). The decisions will be made by the contingency planners, leaving the operators to decide that some contingency has arisen and to decide which one it is. Then, the correct plan is accessed and executed.

Contingency planning requires a number of intellectual skills, each of which could benefit from study directed at ways to augment it. At the planning stage, these skills include the ability to imagine contingencies at all, the ability to elaborate their details sufficiently, the ability to generate alternative responses for evaluation, the ability to evaluate those responses critically in the hypothetical mode, and the ability to communicate the resultant decisions to operators. At the execution stage, these skills include the ability for operators to diagnose their crisis situations in ways that allow them to access the correct plan. Failures at either of these stages may result in ineffective decisions or in operators wondering about the appropriateness of the decisions that they are required to implement.

These problems are analogous to those facing effective emergency training in simulators. One worries, for example, that those who develop simulator exercises, teach the textbook responses, and evaluate operators' performance share some deep misconceptions about the system's operation--so that some critical contingencies are never considered. One also worries that spotting contingencies in the simulator might be quite different from spotting them in reality, where the system may have a different operating history or different social setting, or where operators are not as primed to expect problems (which typically come at enormously high rates in simulators).
Understanding how people perform the component tasks in contingency planning might help decrease the number of non-routine decisions that have to be made (by making contingency planning more effective) and help assess the need for making non-routine decisions (by assessing the limits of contingency planning). Such understanding might also help reduce the threats posed by undue reliance on contingency planning. One such threat is taking too seriously designers' idealizations of the system. Such models often provide a convenient basis for generating problems and exercises. They may even be used to run automated simulators. However, it is in the nature of models that they capture but a piece of reality, often without a clear (and communicated) understanding of just what that piece excludes. In some cases, a model is actually made to do double duty, being used by designers to discover limitations of the system (leading to design changes) and by trainers as though it represented a stable, viable operating system.

More generally, one needs to worry about how routine system operations affect operators' ability to deal with non-routine situations. Inadvertently inculcating undue faith in a basic design that typically functions well would be one kind of interference, as would acting as though contingency planning had routinized the treatment of novel situations. Institutional threats might include failing to train for handling non-routine situations or failing to

reward those who naturally have the skills for doing so (assuming that such skills could be discerned). The previous section suggested the possibility that the continuous introduction of design improvements or the polishing of synthetic data displays might disrupt operators' ability to "read" the system's state and to diagnose novel situations. A general theoretical perspective for such research would be to consider the particular informational ecology in which judgment is acquired as a learned skill. Whenever that ecology changes, then there is some need to refine or alter judgmental skills, and some threat of negative transfer.

A variant on this threat is deskilling, whereby useful intellectual skills are allowed to wither or are neutralized by design features or procedures. For example, as automation increases, operators will increasingly be faced with near-perfect systems, which fail so seldom that there is little opportunity to learn how they fail. Keeping operators' skills alive so that they can cope with non-routine decisions may require some reduction in automation and perfection. The result of deautomation might be an increased rate of error overall, but a reduced rate of catastrophic ones (a result that would be hard to prove given the low rate of occurrence for catastrophes). Research on these issues would seem hard and important.

Whenever there is some significant chance that contingency planning will not do, some capability is needed for making decisions in real time, starting from a raw analysis of the situation (perhaps after going part of the way with an inappropriate contingency plan). Training (and rewarding) the relevant intellectual skills (i.e., basic decision-making abilities) would seem extremely important. Much more needs to be known about how it can be done.
For example, operators need to be able to generate good options regarding what might be happening and what might be done about it. Studies of creativity, in vogue some years ago, ostensibly examined this question. However, they used rather simple tasks and rather simple criteria for evaluating options (typically, the more the better).

One potential aid to testing those options that are generated would be on-line, real-time system simulators. These could help operators diagnose the situation that they see by simulating the situations that would arise from various possible initiating conditions. They could also allow simulating the effects of various interventions. Getting such systems to work suggests some interesting computing and interface design problems.

A somewhat different kind of aid would be base-rate information describing typical performance of the system (or ones like it) under particular conditions. That information might describe, for example, what kinds of manipulations (in general) give one the best chance of being able to recover if they do not seem to be working, what manipulations provide the most diagnostic information about their failings, and what are the best sources of information about current system status. Such statistical information might prove a useful complement to causal information about the system's intended operation. Its collection would represent an institutional commitment to learning from experience systematically.
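Base-rate information of this kind could, in principle, be accumulated mechanically from operating records. The sketch below is not from the chapter; the log format, field names, and example manipulations are invented for illustration. It tallies how often each kind of manipulation was followed by a successful recovery:

```python
from collections import defaultdict

# Hypothetical log of past interventions: (manipulation, recovered?).
# In practice such records would come from accumulated operational experience.
log = [
    ("power_cycle", True), ("power_cycle", False), ("power_cycle", True),
    ("manual_override", True), ("manual_override", True),
    ("reboot_sensor", False),
]

def recovery_base_rates(records):
    """Fraction of attempts of each manipulation that were followed by recovery."""
    tally = defaultdict(lambda: [0, 0])  # manipulation -> [successes, attempts]
    for manipulation, recovered in records:
        tally[manipulation][1] += 1
        if recovered:
            tally[manipulation][0] += 1
    return {m: successes / attempts for m, (successes, attempts) in tally.items()}

rates = recovery_base_rates(log)
```

Such a tally is the simplest form of the statistical complement to causal knowledge that the text envisions; a fielded version would condition on system state, report sample sizes alongside rates, and weight recent experience more heavily.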

It is often assumed that the choice of actions follows directly from diagnosing of the situation and anticipating of the effects of possible interventions. However, all decisions are contingent on objectives. Most organizations have complex objectives, some admitted and some implicit. Decision making can be paralyzed if the implications of those general values cannot be extracted for particular situations. It can be disastrous if the interpretations are inappropriate. Here, too, a mixture of analytical and behavioral work may help to improve that application and anticipate misapplications.

CONCLUSIONS

Research Management

The topics described here were selected for their implications for the design and operation of equipment such as would be found in the space station and its support systems. They are, however, described in terms of the general psychological processes that they involve. As a result, they could be pursued both as part of the development work for specific NASA systems and as basic research issues examined in laboratory settings intended to represent low-fidelity simulations of the actual NASA environments. Similarly, NASA could contribute to concurrent research prompted by other systems that place similar intellectual demands on designers and operators. Such connections would help to ensure the transfer of technology from NASA to the general community concerned with automation.

Insofar as this research deals with problems relevant to other technologically saturated environments, it should be able to learn from developments there. One relevant trend is the increasing scrutiny that is being given to the quality of expert judgment in technical systems. Some of that interest comes from within, out of concern for improving the engineering design process. Other interest comes from outside, out of the efforts of critics who wish to raise the standard of accountability for technological problems.
In the face of that criticism, expert judgment proves to be a particularly vulnerable target. Although there is frequently great faith within a profession in the caliber of its judgments, there is not that much of a research base on which to base a defense (Feyerabend, 1975; Morgan et al., 1981; Nelkin, 1984). Such judgments have become matters of legal and even political interest.

A second relevant trend is the introduction of computers into industrial settings. The creation of equipment has always carried an implicit demand that it be comprehensible to its operators. However, it was relatively easy for designers to allow a system to speak for itself as long as operators came into direct contact with it. Computerization changes the game by requiring explicit summary and display of information (Hollnagel et al., 1986). That, in turn, requires some theory of the system and of the operator, in order to know what to show and how to shape the interface. That "theory" might be created in an ad hoc fashion by the system's designers. Or, there

might be some attempt to involve designers with some expertise in the behavior of operators, or even representatives of the operators themselves (even in places where they do not have the high status of, say, pilots). A prejudice of this article, and other pieces written from a human factors perspective, is that concern over operability should be raised from the very inception of a project's development. Only in that way is it possible to shape the entire design with operability as a primary concern, rather than as a tack-on, designed to rescue a design that has been driven by other concerns. As a result, raising these issues is particularly suited for a long-term development project, such as that concerning this working group and volume.

Philosophy

A fundamental assumption of this chapter is that much of life can be construed as involving decisions (i.e., the deliberate choice among alternatives, often with uncertain information and conflicting goals). A corollary assumption is that the basic cognitive (or intellectual) skills involved in decision making have wide importance--if they can be understood and facilitated. These are hard issues to study. However, even if they cannot be resolved in short order, system performance might be improved simply by drawing attention to them. A task analysis of where such skills arise can increase sensitivity to them, grant legitimacy to operators' complaints regarding problems that they are experiencing, and encourage a folklore of design principles that might serve as the basis for subsequent research.

The decision-making perspective described here is strongly cognitive, in part, because the decision theory from which it is drawn offers a widely applicable perspective and a well-defined set of concepts. As a result, there is a relatively high chance of results rooted in this perspective being generally applicable.
Moreover, there may be some value to a general habit of characterizing decision-making situations as such. Within this context, there is still place to ask about issues such as the effects of stress, tension, conflict, fatigue, or space sickness on these higher-order cognitive processes (Wheeler and Janis, 1980).

This perspective sees people as active in shaping their environment and their decision problems. It could be contrasted with an operations research-type perspective in which people are reduced to system components and behavioral research is reduced to estimating some performance parameters. Focusing on what people do, rather than on the discrepancy between their performance and some ideal, increases the chances of identifying interventions that will help them to use their own minds more effectively.

ACKNOWLEDGMENTS

Support for preparation of this report came from National Science Foundation Grant SES-8213452 to Perceptronics, Inc. Any opinions, findings, and conclusions or recommendations expressed in this publication are those of the author and do not necessarily reflect the views of the National Science Foundation. The Foundation's support is gratefully acknowledged. The thoughtful comments of Lita Furby, Ken Neumann, Azad Madni, Ola Svenson, and members of the Symposium working group were also greatly appreciated. Correspondence may be addressed to the author at Department of Engineering and Public Policy, Carnegie-Mellon University, Pittsburgh, PA 15213.

NOTES

1. The chapters in this volume by Buchanan, Davis, Howell, Mitchell, and Newell provide other points of access to this literature.
2. The relationship between problem solving and decision making bears more discussion than is possible here; see National Research Council, 1986, for additional information.
3. In this particular case, there seems to be such generality, unless experience provides the sort of feedback needed to acquire probability assessment as a learned skill.
4. Fischhoff (in press) is an attempt to provide access to this literature, expressed in the context of the judgmental component of risk analyses for hazardous technologies.
5. Furby and Fischhoff (1986) discuss related issues in a very different context.

REFERENCES

Bar-Hillel, M.
1980 The base-rate fallacy in probability judgments. Acta Psychologica 44:211-233.
1984 Representativeness and fallacies of probability judgment. Acta Psychologica 55:91-107.

Behn, R. D., and Vaupel, J. W.
1982 Quick Analysis for Busy Decision Makers. New York: Basic Books.

Bentkover, J. D., Covello, V. T., and Mumpower, J., eds.
1985 Benefits Assessment: The State of the Art. Dordrecht, The Netherlands: D. Reidel.

Beyth-Marom, R.
1982 Perception of correlation reexamined. Memory and Cognition 10:511-519.

Brown, R. V.
1970 Why executives do not use decision analysis. Harvard Business Review 48:78-89.

Combs, B., and Slovic, P.
1979 Causes of death. Journalism Quarterly 56:837-843.

Davis, J. H.
1982 Group Performance. Reading, MA: Addison-Wesley.

Dawes, R. M.
1979 The robust beauty of improper linear models. American Psychologist 34:571-582.

Einhorn, H. J., and Hogarth, R. M.
1978 Confidence in judgment: Persistence in the illusion of validity. Psychological Review 85:395-416.

Evans, J. St. B. T.
1982 The Psychology of Deductive Reasoning. London: Routledge & Kegan Paul.

Feather, N. T.
1982 Expectancy, Incentive, and Action. Hillsdale, NJ: Erlbaum.

Feyerabend, P.
1975 Against Method. London: Verso.

Fischhoff, B.
1975 Hindsight ≠ foresight. Journal of Experimental Psychology: Human Perception and Performance 1:288-299.
1980 Clinical decision analysis. Operations Research 28:28-43.
1987 Judgmental aspects of risk analysis. In Handbook of Risk Analysis. Prepared for the Office of Management and Budget by the National Science Foundation. (In press.)

Fischhoff, B., and Beyth-Marom, R.
1983 Hypothesis evaluation from a Bayesian perspective. Psychological Review 90:239-260.

Fischhoff, B., Lichtenstein, S., Slovic, P., Derby, S., and Keeney, R.
1981 Acceptable Risk. New York: Cambridge University Press.

Fischhoff, B., and MacGregor, D.
1986 Calibrating databases. Journal of the American Society for Information Science 37:222-233.

Fischhoff, B., Slovic, P., and Lichtenstein, S.
1978 Fault trees: Sensitivity of estimated failure probabilities to problem representation. Journal of Experimental Psychology: Human Perception and Performance 4:330-344.

Fischhoff, B., Watson, S., and Hope, C.
1984 Defining risk. Policy Sciences 17:123-139.

Furby, L., and Fischhoff, B.
1987 Rape self-defense strategies: A review of their effectiveness. Victimology. (In press.)

Gardenier, J.
1976 Toward a science of marine safety. Proceedings of a Conference on Marine Safety.

Grether, D. M., and Plott, C.
1979 Economic theory of choice and the preference reversal phenomenon. American Economic Review 69:623-638.

Hasher, L., and Zacks, R. T.
1984 Automatic processing of frequency information. American Psychologist 39:1372-1388.

Henrion, M.
1980 Sensitivity of Decisions to Miscalibration. Unpublished Ph.D. dissertation. Carnegie-Mellon University.

Henrion, M., and Fischhoff, B.
1986 Uncertainty assessment in the estimation of physical constants. American Journal of Physics 54:791-798.

Henrion, M., and Morgan, M. G.
1985 A computer aid for risk and other policy analysis. Risk Analysis 5:195-207.

Hershey, J. C., Kunreuther, H. C., and Schoemaker, P. J. H.
1982 Sources of bias in assessment procedures for utility functions. Management Science 28:936-954.

Hollnagel, E., Mancini, G., and Woods, D., eds.
1986 Intelligent Decision Support in Process Environments. Heidelberg: Springer-Verlag.

Hynes, M., and Vanmarcke, E.
1976 Reliability of embankment performance prediction. In Proceedings of the ASCE Engineering Mechanics Division Specialty Conference. Waterloo, Ontario, Canada: Waterloo University Press.

Janis, I. L., and Mann, L.
1977 Decision Making. New York: Free Press.

Kahneman, D., Slovic, P., and Tversky, A., eds.
1982 Judgment Under Uncertainty: Heuristics and Biases. New York: Cambridge University Press.

Kahneman, D., and Tversky, A.
1979 Prospect theory. Econometrica 47:263-292.

Krzysztofowicz, R.
1983 Why should a forecaster and a decision maker use Bayes theorem? Water Resources Research 19:327-336.

Lanir, Z.
1982 Strategic Surprises. Ramat Aviv: Tel Aviv University Press.

McCormick, N. J.
1981 Reliability and Risk Analysis. New York: Academic Press.

Metcalf, J., III.
1986 Decision making and the Grenada rescue operation. Pp. 277-297 in J. G. March and R. Weissinger-Baylon, eds., Ambiguity and Command. Marshfield, MA: Pitman.

Montgomery, H.
1983 Decision rules and the search for a dominance structure. In P. Humphreys, O. Svenson, and A. Vari, eds., Analyzing and Aiding Decision Processes. Amsterdam: North Holland.

Morgan, M. G., Henrion, M., and Morris, S. C.
1981 Expert Judgments for Policy Analysis. Brookhaven, NY: Brookhaven National Laboratory.

Murphy, A. H., and Winkler, R. L.
1984 Probability of precipitation forecasts. Journal of the American Statistical Association 79:489-500.

Myers, D. G., and Lamm, H.
1976 The group polarization phenomenon. Psychological Bulletin 83:602-627.

National Research Council
1986 Report of the Research Briefing Panel on Decision Making and Problem Solving. Washington, D.C.: National Academy Press.

NASA
1986 Briefing material. Johnson Space Center, Houston. May.

Nelkin, D., ed.
1984 Controversy: Politics of Technical Decisions. Beverly Hills, CA: Sage.

Nisbett, R. E., Krantz, D. H., Jepson, C., and Kunda, Z.
1983 The use of statistical heuristics in everyday inductive reasoning. Psychological Review 90:339-363.

Nisbett, R. E., and Ross, L.
1980 Human Inference: Strategies and Shortcomings of Social Judgment. Englewood Cliffs, NJ: Prentice-Hall.

Payne, J.
1982 Contingent decision behavior. Psychological Bulletin 92:382-402.

Perrow, C.
1984 Normal Accidents. New York: Basic Books.

Peterson, C., and Beach, L.
1967 Man as an intuitive statistician. Psychological Bulletin 68:29-46.

Pew, R. W., Miller, D. C., and Feeher, C. E.
1982 Evaluation of Proposed Control Room Improvements Through Analysis of Critical Operator Decisions. EPRI NP-1982. Palo Alto, CA: Electric Power Research Institute.

Pitz, G. F., Sachs, N. J., and Heerboth, J.
1980 Procedures for eliciting choices in the analysis of individual decisions. Organizational Behavior and Human Performance 26:396-408.

Raiffa, H.
1968 Decision Analysis. Reading, MA: Addison-Wesley.

Rasmussen, J., and Rouse, W. B., eds.
1981 Human Detection and Diagnosis of System Failures. New York: Plenum.

Samet, M. G.
1975 Quantitative interpretation of two qualitative scales used to rate military intelligence. Human Factors 17:192-202.

Schoemaker, P. J.
1982 The expected utility model: Its variants, purposes, evidence, and limitations. Journal of Economic Literature 20:529-563.

Shaklee, H., and Tucker, D.
1980 A rule analysis of judgments of covariation between events. Memory and Cognition 8:459-467.

Simon, H.
1957 Models of Man: Social and Rational. New York: Wiley.

Slovic, P., and Lichtenstein, S.
1983 Preference reversals: A broader perspective. American Economic Review 73:596-605.

Tversky, A., and Kahneman, D.
1973 Availability: A heuristic for judging frequency and probability. Cognitive Psychology 5:207-232.
1981 The framing of decisions and the psychology of choice. Science 211:453-458.

U.S. Nuclear Regulatory Commission
1983 PRA Procedures Guide. NUREG/CR-2300. Washington, D.C.: U.S. Nuclear Regulatory Commission.

von Winterfeldt, D., and Edwards, W.
1982 Costs and payoffs in perceptual research. Psychological Bulletin 93:609-622.
1986 Decision Analysis and Behavioral Research. New York: Cambridge University Press.

Wallsten, T., and Budescu, D.
1983 Encoding subjective probabilities: A psychological and psychometric review. Management Science 29:151-173.

Wheeler, D. D., and Janis, I. L.
1980 A Practical Guide for Making Decisions. New York: Basic Books.

Wise, B. P.
1986 An Experimental Comparison of Uncertain Inference Systems. Unpublished Ph.D. dissertation. Carnegie-Mellon University.