
Human Factors in Automated and Robotic Space Systems: Proceedings of a Symposium (1987)

Chapter: Discussion: Issues in Design and Uncertainty

DISCUSSION: ISSUES IN DESIGN FOR UNCERTAINTY

William C. Howell

Reviewing the presentations of Drs. Davis and Fischhoff, one would be hard pressed to find critical omissions in the slate of issues set forth regarding human participation in the space station's judgment/decision/problem-solving requirements. The problem facing the R&D team, like that facing the future operators of the system itself, is deciding which of the plethora of options to address first--and to what depth--in the absence of complete knowledge. Agendas will have to be set, priorities established among research objectives (all of which seem worthy), and decisions made on when understanding has reached a sufficient (albeit far from ideal) level to move on to either development or the next agenda item. The present discussion, therefore, will focus on some of these programmatic considerations.

It would, of course, be presumptuous for anyone to prejudge the relative merit of research programs yet to be proposed for a moving target such as the evolving space station concept. Nonetheless, current knowledge is sufficient to begin the process so long as it is with the clear understanding that frequent stock-taking and consequent reorientation will undoubtedly be required as research findings accumulate, design decisions are made, and the entire system takes shape. Research never proceeds in as orderly a fashion as we anticipate in our plans and proposals because Mother Nature doesn't read them. One never knows when she will choose to reveal some important secret that will divert the whole process!

And finally, the discussion of priorities should in no way be construed as a call for serial research. The philosophy endorsed here is consistent with a theme that runs through the entire symposium: parallel research efforts must be carried out at various levels of specificity on a representative sample of the total problem areas if the program is to evolve--and continue to develop--in the most efficacious manner. The pressure to focus too narrowly on the most well-defined or immediate problems is all too prevalent in undertakings of this magnitude having the level of public visibility that the space station enjoys. Many of the problems sure to arise "downstream" are in areas where the present knowledge base is at best primitive. Attention must be given now to expanding those knowledge bases if we are to avoid costly delays in development and/or costly design mistakes as the total system takes shape.

Model Building

Both presentations emphasize the importance of developing a conceptual model or set of models of the space station. Together, Davis and Fischhoff sketch out the essential features of such modeling and the kinds of research questions that must be addressed in order to make it useful. I shall not repeat their observations, except to note one point of contrast and to explain why I believe model building deserves a top priority.

First the contrast. Davis makes a distinction between aspects of the total system about which there is and is not sufficient information to construct models. Where it is deemed feasible, chiefly in the physical domain, the trick is to make the models--and the systems they represent--"resourceful" and comprehensible. Where it is not, the issue becomes one of finding alternatives to modeling. Fischhoff, on the other hand, seems to have in mind a more comprehensive kind of modeling effort: one that encompasses a variety of domains and levels of understanding. Here the emphasis is on structuring what we know, however incompletely, and providing a framework upon which to build new understanding. Whichever concept one prefers, and I lean toward the latter, the research issues are largely the same. Both call for exploring new ways to capture and express properties of the system that will promote understanding across disciplines; both recognize that to do so requires a better grasp of human cognitive functions than we now have.

There are, in my view, at least four basic reasons to emphasize a broad modeling effort (Meister, 1985). First, the process of model building is the most expeditious way to organize our knowledge and ignorance, not only at the outset, but as the knowledge base grows and the system evolves. Assumptions, facts, parameter estimates, areas of uncertainty, etc. can be clearly articulated; gaps that need to be filled, or estimates that need to be refined, can be identified. More than anything, a conceptual model can ensure that even the most pragmatic research has a better chance of contributing to the total effort. Taken literally, for example, the issues raised by Davis and Fischhoff cover virtually the entire domain of cognitive and social psychology. Were nature to take its course in these various research areas (or even were NASA support to accelerate the overall progress), the odds of learning precisely what needs to be known at critical junctures in the space station's development are quite low. I shall have more to say on this point later. For present purposes, the argument is simply that model building is a useful technique for keeping the research efforts at all levels of generality properly focused. One can study confidence in judgment, or interpersonal tension, or hypothesis generation, or human problem solving tendencies, or what experts know and do, or any of the other general issues identified by the presenters in ways that are more or less likely to generalize to the space station situation. I see no inherent reason why an experiment designed to advance fundamental knowledge in one of these areas cannot be conducted in a space-station context as easily as in terms of weather forecasting, battle planning, livestock judging, or business management. A model is useful for specifying that context.

A second reason that model building merits the highest priority lies in its contribution to the ultimate development of tasks and procedures. The ways in which this contribution would manifest itself are well described in the two presentations. In essence it boils down to making reasoned design decisions from a system-wide perspective rather than from some parochial or purely traditional point of view--be that an engineering, computer science, cognitive, biomedical, or even a humanistic perspective. It forces early attention to such critical matters as developing a common language and frame of reference within which the various specialists can function interactively. If there is one unique requirement for the successful achievement of this project's goal, it is that barriers to the exchange of information and intelligence among units--human-human, human-machine, machine-machine--be minimized. Systems of the past have generally had to attack such barriers after the fact because of the initial dominance of one or another technical specialty. And they have done so with only limited success. Here the opportunity exists to "design in" features that can minimize barriers. Model development encourages this kind of thinking from the very outset--provided, of course, it is not entrusted to only one technical specialty!

A third argument for the priority of model building is its obvious importance for training, and possibly even personnel selection. True, a model is not a simulation. Nevertheless, simulations at some level of fidelity must ultimately be constructed just as they have been for training on all the earlier projects in the space program. To the extent that the model organizes what is known and unknown at a particular stage, it permits development of simulations that have a greater likelihood of providing training that will transfer positively to the operational tasks. The kinds of uncertainties and unanticipated contingencies the human is apt to encounter in the space station are more likely to arise in a simulator based on a comprehensive modeling effort than they would be in a simulator designed to maximize purely technical fidelity. In the absence of a good conceptual model, the criterion of technical fidelity is almost certain to dominate. To use an extreme example, suppose the modeling effort identified a social phenomenon whose course of development extends over a period of months and whose appearance dramatically alters the way certain kinds of decisions are handled. Naturally, this would argue for incorporating a several-month duration requirement into the simulation even if the technical skills could be mastered in weeks. Without this social-process knowledge, the emphasis would almost certainly be on the face validity of the hardware and software components. In other words, comprehensive model development would increase the likelihood that any simulation would capture salient aspects of the operational tasks--even some that cannot be completely anticipated and "programmed in." Similarly, it would provide a better sampling of the overall task domain and hence a more content-valid basis for setting personnel selection requirements.

In citing the virtues of model development for simulation and training, we should never lose sight of Fischhoff's warning against the possibility of overemphasizing the known to the exclusion of the unknown. Training that develops in operators a dependence on routines for handling anticipatable contingencies can be counterproductive when truly novel ones arise. However, thoughtful construction of a model can help obviate this problem by ensuring that the unknown is properly recognized. The real danger lies not in the attempt to build the most complete conceptual models we can, but in the temptation to build simulators that operate only within the domains where our knowledge is most complete.

Finally, model development encourages--indeed forces--the kind of interaction among specialists in the design phase that will have to occur among operational specialists if the program is to be a success. To mount a truly comprehensive modeling effort will demand creation of a shared language and knowledge base; the exercise will serve, in essence, as a case study in multidisciplinary coordination as well as the source of a design product.

In a sense, all the other proposed research directions are subsumed under the objective of model development (or at least are directly related to it). As Davis points out, constructing an appropriately "robust" and "transparent" model requires judicious selection of which properties to include and ignore, and at what level of abstraction. How well that can be done is heavily dependent on our understanding of human cognitive processes in relation to the physical properties of the system. And it is largely to this end that the research suggested by Davis, Fischhoff, and indeed this entire conference is aimed. Nevertheless, one can distinguish more narrowly defined issues, and some of these appear more promising or tractable at this point than others. Several that strike me as particularly deserving of a high priority are establishment of institutional values, manual override and standby capabilities, and transfer of training issues.

Establishing Institutional Values

Fischhoff explains that a critical issue facing decision makers in the operational system will be that of representing the organization's values in dealing with non-routine situations. One cannot anticipate all the circumstances that might arise that would require human judgment, but it is possible to define the value parameters along which those judgments would have to be made and the extent to which institutional, crew, or individual value systems would take precedence.

Most decisions incorporate value and expectation considerations in one form or another (Huber, 1980; Keeney and Raiffa, 1976). There are a lot of ways to help objectify or improve the expectation element, but values are inherently subjective. This is why there are political systems, judicial systems, wars, and advertising agencies. Unless we can articulate the value system under which the decision maker is to operate--or at least the general process by which s/he is to assign values--s/he faces an impossible task. It is somewhat akin to that facing the medical community in its allocation of scarce and costly life-saving resources (such as organ transplants) to a much larger and multifaceted population of worthy recipients. Whose interests take precedence, and how are the value considerations to be weighed?
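
To make concrete what such a value framework might look like, consider a minimal sketch of the additive multiattribute form discussed by Keeney and Raiffa (1976). The attributes, weights, and candidate actions below are invented for illustration, not drawn from any NASA document; eliciting and sanctioning such numbers in advance is precisely the institutional problem at issue.

```python
# A hypothetical additive multiattribute value model in the spirit of
# Keeney and Raiffa (1976). Attributes, weights, and candidate actions
# are invented for illustration only.

# Institutional weights: how heavily each value dimension counts.
WEIGHTS = {"crew_safety": 0.5, "mission_return": 0.3, "asset_cost": 0.2}

# Single-attribute utilities, scaled 0 (worst) to 1 (best), for each
# candidate response to some non-routine situation.
ACTIONS = {
    "abort_experiment":   {"crew_safety": 1.0, "mission_return": 0.2, "asset_cost": 0.7},
    "continue_monitored": {"crew_safety": 0.6, "mission_return": 0.8, "asset_cost": 0.9},
    "manual_override":    {"crew_safety": 0.8, "mission_return": 0.5, "asset_cost": 0.4},
}

def utility(action):
    """Additive aggregation: U(a) = sum over attributes of w_i * u_i(a)."""
    return sum(w * ACTIONS[action][attr] for attr, w in WEIGHTS.items())

# Rank the options under the institutional value system.
for name in sorted(ACTIONS, key=utility, reverse=True):
    print(f"{name}: U = {utility(name):.2f}")
```

The point of the sketch is not the arithmetic but that the weights are a policy decision: shift enough weight onto crew_safety and the ranking reverses, which is exactly why the value system must be sanctioned before a crisis rather than improvised during one.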

This issue is not an easy one to address, in part because it gets to the heart of the most sensitive, controversial, and politically charged aspects of any important decision domain. We do not like to make explicit the level of acceptable risk in air safety, nuclear power, or military confrontation (e.g., how many lives we are willing to sacrifice for some larger good). However, there is some implicit value system operating in any such decision, and research over the past decade has produced methodologies for helping to pin it down (Howard, 1975; Huber, 1980; Keeney and Raiffa, 1976; Slovic et al., 1980). Extension of these techniques, and perhaps development of others, to provide a common value framework for crews and individuals to carry with them into space is essential if decision-making is to be of acceptable quality. Indeed, without such a framework the concept of decision quality has no meaning. The options are to face the issue squarely and develop a value framework in advance, or to leave it intentionally vague and ad hoc, thereby offsetting whatever progress is made toward improving decision quality through enhancement of expectation judgments.

Understanding Override and Stand-by Capabilities

Clearly an important set of research issues centers around the idea that human judgment represents the last line of defense against the unanticipated. The ultimate decision that some automated subsystem is malfunctioning, or that some low probability or unclassifiable situation has arisen, and the skill to move quickly from a relatively passive to an active mode in response to it, are critical elements of the human's role.

Both presentations address override and standby skill issues, albeit in slightly different ways. For Davis, they fall within the category of "making the best of the situation," or what to do when we have no model. He speculates on alternative strategies, and suggests that we need to explore them, but is obviously more concerned with "making the best situation"--increasing the robustness and transparency of the system and its models. For Fischhoff, these issues epitomize a central dilemma in the whole development process--the tradeoff between using everything we know for aiding and contingency planning purposes, and preparing people to deal with the truly unknown. He argues that designing the system to maximize decision accuracy may not really be optimal when one considers the potential costs in human judgment facility. (Here, incidentally, is another instance where the problem of establishing a unified value system becomes critical.)

What strikes me as particularly urgent about research on these issues is that we know just enough to worry, but not enough to say how they should be handled. For example, we know about overconfidence bias and can easily imagine its implications for crisis decision-making, but we are far from understanding all the task and individual-difference parameters that govern its seriousness (Hammond et al., 1980; Howell and Kerkar, 1982). And we know even less about constructs such as creativity in either the individual or group context. Were we able to identify and measure such individual traits, we might include these measures in a personnel selection battery. And understanding group processes might suggest ways to offset deviant individual tendencies. Unfortunately, our present knowledge of group decision making does not allow us to predict with much certainty how group judgments will compare with individual ones (Huber, 1980; Reitz, 1977; Howell and Dipboye, 1986).
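
Even posing that comparison requires a choice about how individual judgments are to be combined. The following hedged illustration, with invented numbers, shows two common pooling rules disagreeing; whether either tracks the judgment an interacting group would actually produce is exactly the open empirical question.

```python
# Hypothetical comparison of individual probability judgments with two
# simple mechanical group aggregates; all numbers are invented.
import math

# Four crew members' judged probability that a subsystem has failed.
individual = [0.9, 0.7, 0.85, 0.6]

# Linear opinion pool: the plain arithmetic mean of the judgments.
linear_pool = sum(individual) / len(individual)

# Logarithmic pool: average in log-odds, which lets confident members
# pull the aggregate further than the linear pool does.
log_odds = [math.log(p / (1.0 - p)) for p in individual]
log_pool = 1.0 / (1.0 + math.exp(-sum(log_odds) / len(log_odds)))

print(f"linear pool: {linear_pool:.3f}   log-odds pool: {log_pool:.3f}")
```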

Similarly, it is fairly well established, as Fischhoff notes, that stand-by skills suffer from disuse as the human spends more and more time "outside the loop" in a monitoring capacity. This is particularly true for cognitively complex and dynamic systems. But how does one "stay on top of things" when active involvement becomes increasingly rare as more and more reliance is placed on automating decision functions? Is something as elaborate (and costly) as a totally redundant manual back-up ever justified simply for the purpose of maintaining stand-by capabilities? And even if that were done, would the human be able to maintain a serious involvement knowing the status of his or her role? One need only take a look at NORAD operators doing their "canned" training exercises to appreciate the significance of this point! Would some other form of involvement do as well? For what decision tasks should some form of involvement be maintained? To answer questions such as these, more will need to be learned about stand-by capabilities in critical tasks of the sort that are likely to be automated or aided in the space station. Fischhoff's presentation does an excellent job of identifying the key questions.

Issues concerning the override function should be addressed early in the development process at a fairly basic level since more general knowledge is needed before it will be possible to articulate the most critical applied research questions. Stand-by skill maintenance, on the other hand, seems more appropriately addressed at an applied research level after it becomes clear what sorts of functions the human would be asked to back up.

Training for the Known and the Unknown

Issues of training and transfer are closely related to those of standby skill; in fact, the latter are really a subset of the former. The purpose of training is to establish habitual ways of thinking and acting in certain situations that are likely to improve individual or team performance whenever those situations arise. So long as one has at least some idea of what kinds of situations might develop, there is reason to hope that the right habits might be cultivated. But if one guesses wrong, or the situation domain changes, or the habits that work well for the known situations turn out to be counterproductive for the unknown ones, obvious transfer problems arise. Since the unanticipated is by definition inaccessible for simulation or contingency planning, those charged with training development face the dilemma alluded to earlier.

Too heavy an emphasis on the known or suspected task elements could develop habits that prove disastrous when something totally novel comes along. On the other hand, training that emphasizes the flexibility of response necessary to deal with novel situations could undermine the potential advantages of habitual behavior. Advances have been made toward addressing this dilemma in recent research on fault diagnosis and problem solving (particularly in connection with complex process control systems, e.g., Moray, 1981; Rasmussen and Rouse, 1981). Still, as Fischhoff notes, there are a lot of fundamental questions that remain to be investigated before we can even begin to conceptualize how training ought to be structured in a system as advanced as the space station. Once again, we have here a set of pressing issues on which some headway has already been made and research directions have been identified. For these reasons, I believe it merits a high priority in the overall research scheme.

To this point, my comments have focused exclusively on priority setting within the domain of research issues raised by the two presenters. To summarize, I believe the modeling effort should be an initial and continuing emphasis--a framework within which many parallel streams of research activity can proceed coherently and purposefully. Of those more narrowly defined issues, I consider the matter of establishing institutional values or value assessment techniques as primary, followed closely by the need to clarify the override function, to find ways to maintain intellectual standby skills (or define an optimal level of automation), and to train operators to deal with changing and unanticipatable circumstances.

There are two other programmatic issues that I would like to comment on briefly that were not an explicit part of either paper: individual differences, and the age-old basic vs. applied research controversy.

On Individual Differences

Both presentations suggest quite correctly that our designs must be geared to typical behavior--of people in general, or potential operators, or "experts". The assumption is that there are commonalities in the way people approach particular decision problems, and our research should be directed toward understanding them. I agree. But I contend there is another perspective that has been all but ignored by decision theorists that might also contribute to the effectiveness of future decision systems.

On virtually any standard laboratory problem, subjects will differ markedly in both the caliber of their performance and the way they approach it. True, the majority--often the overwhelming majority--will display a particular bias, heuristic, or preference on cue. But even in the most robust demonstrations of conservatism, or overconfidence, or representativeness, or non-transitivity, there will be some subjects who don't fall into the conceptual trap. What we don't know, in any broader sense, is whether these aberrations represent stable trait differences, and if so, what their structure might be and how they might be measured. There has been some work on risk aversion (Atkinson, 1983; Lopes, in press), information-processing tendencies (Schroder et al., 1967), and decision-making "styles" (Howell and Dipboye, 1986), but very little compared to the vast literatures on typical behavior.

I suspect, though I can't really prove it, that individuals differ consistently in their inclination to attend to, process, and integrate new information into their current judgments. Were this the case, it might be useful to have some means of indexing such tendencies. Speaking more generally, I believe research aimed at exploring the consistent differences in the way people approach decision problems is just as valid as--though considerably more cumbersome than--that concerned with similarities. It should be encouraged.
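
What such an index might look like can be sketched, with the caveat that the task parameters and subject response below are invented and the measure itself is hypothetical, not a validated instrument. In the classic opinion-revision paradigm, a subject's posterior can be scored against the Bayesian norm:

```python
# A hypothetical "opinion revision" index: score a subject's posterior
# against the Bayesian norm in a two-hypothesis chip-sampling task.
# Task parameters and the subject's response are invented.

def bayes_posterior(prior, p_datum_h, p_datum_not_h):
    """P(H | datum) by Bayes' rule for a binary hypothesis."""
    numerator = prior * p_datum_h
    return numerator / (numerator + (1.0 - prior) * p_datum_not_h)

prior = 0.5                      # two urns, equally likely a priori
p_red_a, p_red_b = 0.7, 0.3      # P(red chip | urn A), P(red chip | urn B)

normative = bayes_posterior(prior, p_red_a, p_red_b)   # 0.70 after one red chip
stated = 0.58                    # the subject's revised opinion

# Index of 1.0 = fully Bayesian revision; below 1.0 = conservatism.
index = (stated - prior) / (normative - prior)
print(f"normative {normative:.2f}, stated {stated:.2f}, index {index:.2f}")
```

Whether an index of this sort is stable enough across tasks to count as a trait is, of course, the open question raised above.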

On Basic and Applied Research Strategies

At various places in the foregoing discussion I have suggested that certain issues might be attacked at a more basic or more applied level given the state of our current knowledge and the demands of the design problem in that area. I should like to conclude my discussion with some elaboration on this general strategic issue.

If there is one limitation on our understanding of judgment/decision processes, in my opinion, it is that of context specificity. Work on judgmental heuristics, diagnosis and opinion revision, choice anomalies, group decision making, individual differences in judgment or decision, etc. each has developed using its own collection of preferred research tasks, strategies, and literatures (Hammond et al., 1980; Schroder et al., 1967). Consequently, it is not always possible to judge how far a particular principle will generalize or whether some human tendency is likely to pose a serious threat to performance in a particular system. Nevertheless, as the two presentations have clearly demonstrated, these basic literatures provide a rich source of hypotheses and leads for consideration in an evolving program such as the space station. The judgmental heuristics and rating biases cited by Fischhoff, for example, are indeed robust phenomena, principles to be reckoned with in shaping the space station environment. However, despite their ubiquity, such modes of cognition are more prominent in some contexts and under some conditions than others--a point emphasized by Hammond in his "cognitive continuum theory" (Schum, 1985); and the seriousness of the consequent "biases" depends to some extent on one's definition of optimality (Hammond, 1981; Hogarth, 1981; Schroder et al., 1967; Phillips, 1984; Von Winterfeldt and Edwards, 1986).

Consider the overconfidence bias. One indication of this well-established cognitive phenomenon is that decision makers would be likely to act in haste and believe unduly in the correctness of their action, a clearly dysfunctional tendency. Or is it? A common complaint in the literature on organizational management is that managers are all too often reluctant to act when they should (Peters and Waterman, 1982). Perhaps overconfidence may serve to offset an equally dysfunctional bias toward inaction in this setting. Similarly, decisions must often be made under considerable uncertainty, and this will clearly be no less true of space station than of business or military decisions. However, once a decision is made, albeit on the basis of what objectively is only a 51% chance of success, is there not a certain practical utility in actually believing the odds are better than that? If, as often happens, the decision is not easily reversed, what is to be gained by second guessing or "waffling", and is there not a potential for benefit through the inspiration of confidence in others? In some cases that alone can increase the "true" odds! The point is, overconfidence, like other human cognitive tendencies, may have functional as well as dysfunctional implications when viewed in a particular context (Hammond, 1981); and even then, its magnitude may be partly a function of that context. Thus the more clearly we can envision the context, the more likely we will be to generate the right research questions, and what that research adds to our basic understanding of overconfidence or other such phenomena will be no less valid than that done in other contexts. All judgment and decision research is done in some context; generalization accrues via convergence of evidence over a variety of contexts.
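
Overconfidence in this literature is typically operationalized as a calibration statistic: the gap between stated confidence and realized accuracy. A minimal sketch, with invented data, shows the computation:

```python
# Minimal calibration check for overconfidence, using invented data.
# Each pair is (stated confidence that a judgment is correct, outcome).
judgments = [(0.9, 1), (0.9, 0), (0.8, 1), (0.8, 0), (0.7, 1),
             (0.7, 1), (0.6, 0), (0.6, 1), (0.9, 1), (0.8, 0)]

mean_confidence = sum(c for c, _ in judgments) / len(judgments)
hit_rate = sum(o for _, o in judgments) / len(judgments)

# A positive gap is overconfidence: stated certainty exceeds accuracy.
print(f"mean confidence {mean_confidence:.2f} vs. hit rate {hit_rate:.2f} "
      f"(gap {mean_confidence - hit_rate:+.2f})")

# Brier score: mean squared error of the probability judgments; a
# well-calibrated judge minimizes it for a given level of knowledge.
brier = sum((c - o) ** 2 for c, o in judgments) / len(judgments)
print(f"Brier score: {brier:.3f}")
```

Note that nothing in the statistic itself says the gap is dysfunctional; as argued above, whether a positive gap costs or pays depends on the context in which the decision is embedded.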

My basic point is this. The space station offers a very legitimate--indeed, an unusually rich--real-world context within which to explore a variety of "basic" and "applied" research questions concurrently. Properly coordinated, the combined effort holds considerable promise for advancing our understanding of fundamental judgment/decision processes in part because of the shared context. Three considerations would, I believe, promote such coordination.

First, as noted earlier, some effort should be made to encourage basic researchers to consider salient features of the space station situation in the design of their laboratory tasks and experiments. While it could be argued that putting any constraint at all on such work violates the spirit of "basic research," I believe some concessions can be made in the interest of increasing the external validity of findings without compromising the search for basic knowledge.

Secondly, research of a strictly applied nature, addressing specific judgment/decision issues that must be answered in the course of modeling, simulation, and ultimately design efforts, should proceed in parallel with the more basic endeavors. In some cases, the question might involve choice of a parameter value; in others, identification of how subjects approach a simulated space-station task. Necessarily, such research would be less programmatic, more responsive to immediate needs, and more narrowly focused than the fundamental work.

Finally, and most importantly, NASA must do everything possible to ensure that the basic and applied efforts are mutually interactive. As hypotheses and generalizations are identified at the basic level, they should be placed on the agenda of the applied program for test or refinement; as features are built into the evolving system concept, they should become salient considerations for the basic research effort; as questions of a fundamental nature arise in the course of the applied work, they should be incorporated into the basic research agenda.

This all sounds quite obvious and "old hat." Certainly it is the way DoD research programs, for example, are supposed to work (Meister, 1985). I submit, however, that no matter how trite the notion may seem, having closely coupled research efforts at basic and applied levels must be more than just an aspiration if the judgment/decision challenges of the space station project are to be met successfully. It must be planned and built into the very fabric of the program. The fact that the space station must develop by its own research bootstraps, as it were, permits little slippage and wasted effort. Yet the state of our knowledge does not permit neglect of either basic or applied research domains.

There are, of course, a number of ways this coordination of basic and applied work might be achieved, ranging from centralized administrative control to large-scale projects that are targeted to particular sets of issues and encompass both basic and applied endeavors under one roof. I am not prepared to recommend a strategy. Rather, I suggest only that the issue is an important one, and one that deserves special attention at the very outset. How it is managed could spell the difference between enlightened and unenlightened evolution of the whole system regardless of how much resource is allocated to judgment/decision research.

REFERENCES

Atkinson, J. W.
1983 Personality, Motivation, and Action. New York: Praeger.

Hammond, K. R.
1981 Principles of Organization in Intuitive and Analytical Cognition. Report No. 231. University of Colorado, Center for Research on Judgment and Policy.

Hammond, K. R., McClelland, G. H., and Mumpower, J.
1980 Human Judgment and Decision Making: Theories, Methods, and Procedures. New York: Praeger.

Hogarth, R. M.
1981 Beyond discrete biases: functional and dysfunctional aspects of judgmental heuristics. Psychological Bulletin 90:197-217.

Howard, R. A.
1975 Social decision analysis. Proceedings of the IEEE 63:359-371.

Howell, W. C., and Dipboye, R. L.
1986 Essentials of Industrial and Organizational Psychology. Chicago: Dorsey.

Howell, W. C., and Kerkar, S. P.
1982 A test of task influences in uncertainty measurement. Organizational Behavior and Human Performance 30:365-390.

Huber, G. P.
1980 Managerial Decision Making. Glenview, Ill.: Scott, Foresman.

Keeney, R. L., and Raiffa, H.
1976 Decisions with Multiple Objectives: Preferences and Value Tradeoffs. New York: Wiley.

Lopes, L. L.
1987 Between hope and fear: the psychology of risk. Advances in Experimental Social Psychology. (In press.)

Meister, D.
1985 Behavioral Analysis and Measurement Methods. New York: Wiley.

Moray, N.
1981 The role of attention in the detection of errors and diagnosis of failures in man-machine systems. In J. Rasmussen and W. Rouse, eds., Human Detection and Diagnosis of System Failures. New York: Plenum.

Peters, T. J., and Waterman, R. H.
1982 In Search of Excellence. New York: Warner Books.

Phillips, L.
1984 A theoretical perspective on heuristics and biases in probabilistic thinking. In P. Humphreys, O. Svenson, and A. Vari, eds., Analysing and Aiding Decision Processes. Amsterdam: North-Holland.

Rasmussen, J., and Rouse, W., eds.
1981 Human Detection and Diagnosis of System Failures. New York: Plenum.

Reitz, H. J.
1977 Behavior in Organizations. Homewood, Ill.: Irwin.

Schroder, H. M., Driver, M. J., and Streufert, S.
1967 Human Information Processing. New York: Holt, Rinehart and Winston.

Schum, D. A.
1985 Evidence and Inference for the Intelligence Analyst. Draft prepared for the Office of Research and Development, Central Intelligence Agency. Copyright: D. A. Schum, 7416 Timberock Rd., Falls Church, Va. 22043.

Slovic, P., Fischhoff, B., and Lichtenstein, S.
1980 Facts and fears: understanding perceived risk. In R. C. Schwing and W. A. Albers, Jr., eds., Societal Risk Assessment: How Safe Is Safe Enough? New York: Plenum.

Von Winterfeldt, D., and Edwards, W.
1986 Decision Analysis and Behavioral Research. New York: Cambridge University Press.
