The National Academies | 500 Fifth St. N.W. | Washington, D.C. 20001
Copyright © National Academy of Sciences. All rights reserved.
6
Human Performance

It has long been established that human error plays a major role in the malfunctioning of complex technological systems and in accidents associated with their operation (Meister, 1971). In the nuclear industry, estimates of the incidence of human error as a percentage of all system failures range from 20 percent when Licensee Event Reports (reports to the NRC from utilities that describe events that were a threat to plant safety) are analyzed (Potash, 1981) to 65 percent when Loss of System Safety Function Events (reported events in which there was a total or partial loss of a function related to the maintenance of plant safety) are examined (Trager, 1985). Whatever estimates are used, and there have been many, even the lowest one of 20 percent suggests careful attention and some form of remedial action.

Research on human reliability has always occupied a central position in NRC research because it is widely accepted that human actions account for a large proportion of the initial causes of plant faults and accidents. In fact, it was the only human factors research topic that continued to receive support from 1986 to 1987. A principal goal of this research has been to provide estimates of the probability of human error for use in probabilistic risk assessment. The research done to date has been concerned with improving methods of eliciting expert judgments of the probability of various kinds of human errors (e.g., NUREG/CR-1278, 1983b; NUREG/CR-2743, 1983g; NUREG/CR-4016, 1985a) and developing a human error data bank.
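The role such estimates play in probabilistic risk assessment can be illustrated with a minimal sketch. The event names, probabilities, and fault-tree structure below are hypothetical, chosen only to show how a human error probability (HEP) combines with hardware failure probabilities; they are not taken from any NRC study.

```python
# Illustrative sketch: how a human error probability (HEP) enters a
# simple fault-tree quantification. All basic events and numbers
# below are hypothetical.

def and_gate(*probs):
    """Probability that all independent input events occur."""
    p = 1.0
    for q in probs:
        p *= q
    return p

def or_gate(*probs):
    """Probability that at least one independent input event occurs."""
    p = 1.0
    for q in probs:
        p *= (1.0 - q)
    return 1.0 - p

# Hypothetical basic events for loss of a safety function:
p_pump_fails   = 1e-3   # hardware failure on demand
p_valve_fails  = 5e-4   # hardware failure on demand
p_operator_err = 3e-2   # HEP: operator fails to start backup manually

# Backup fails if the valve fails OR the operator errs:
p_backup_fails = or_gate(p_valve_fails, p_operator_err)
# Safety function is lost if the primary pump fails AND backup fails:
p_function_lost = and_gate(p_pump_fails, p_backup_fails)
print(f"P(loss of safety function) = {p_function_lost:.2e}")
```

With an HEP of 3e-2 dominating the backup branch, the human contribution drives the result, which is why the quality of HEP estimates matters so much to the overall assessment.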

While the intent to predict the probability of human error is commendable, current methods do not do this adequately. Most research has relied on subjective estimates by experts and has aimed at improving what are, in effect, sophisticated guesses. Further development of this methodology will result only in additional guesses. The panel believes that research to further improve subjective estimates of human error should not receive a major emphasis in the future. A more fundamental understanding of the nature and causes of human error is needed if the nuclear industry is to make further progress in measuring, predicting, and reducing human error and the human contribution to risk.

Implicit throughout much of this report is the need for knowledge about four aspects of human performance in nuclear power plant systems: (1) to be able to measure human performance in existing systems; (2) to understand and predict the effects of changes in human performance on performance of the overall human-technical system and the effects of changes in human performance that would result from proposed modifications to the technical system; (3) to predict human performance in situations that are by design expected rarely, if ever, to occur and cannot be tested in the operating system; and (4) eventually, to predict human performance in new systems, especially before they are operational, indeed very early in the conceptual design stages.

At present, only limited data and capabilities exist to measure human performance in nuclear power plant systems. The collection of data on human error since Three Mile Island has resulted in increased emphasis on obtaining and employing feedback from operational experience; however, the extent of data collection is still quite limited and, more important, the data are typically not related to any underlying theoretical framework or model of human performance and behavior.
Without such a framework, the ability to interpret data and advance beyond simply counting and categorizing human error is severely restricted. The methodology that has been chosen for examining human performance in systems is human reliability analysis within the framework of quantitative probabilistic risk assessment. Current methods for human reliability analysis are inherently limited in their ability to model human performance, to model the effects of human performance on the overall system, and to model the dynamic interrelations of the human and technical elements of the system. They are particularly limited in what is felt to be the most

critically important area of problem solving (cognitive) behavior: human decision making.

A comprehensive, systematic research program is necessary to characterize human performance in nuclear power plants; to develop adequate measures of performance along with techniques, tools, and processes for measuring performance; and to develop iteratively the models and data base to be able to predict performance within a reasonable band of uncertainty. Human performance denotes not only the performance of individuals, but also that of teams and organizations. This program needs to be an industry-wide effort and does not have to be centralized under the direction of one organization, as long as a single organization or team exists whose job is to integrate the theoretical, analytical, and empirical results from the various sources in the nuclear industry and other applicable fields. The latter would be an appropriate role for the NRC.

Characterizing performance means systematically describing, categorizing, and discriminating between aspects of performance and the behaviorally important aspects of the context of the performance. It also means discriminating levels of quality of performance, identifying good versus better versus best performance. This characterization needs to be made within a theoretically based framework of sufficient rigor and specificity to permit communication with, and possibly mapping to, alternative ways of describing the same performance. The characterization would also permit systematic investigation of variables affecting performance. In other words, one would like to be able to recognize good performance (or better or worse performance) when it occurs, be able to describe it at least well enough for it to be recognizable when it occurs again, and be able to compare measurements of performance taken by different people in different places at different times.
It is insufficient for an experienced supervisor to "know a good operator when he sees one." To make progress in identifying the underlying causes of good and poor performance in nuclear power plants, one needs to be able to characterize performance in terms of the parameters of an underlying theoretical model in such a way that other investigators, with data on other performance in other contexts, will at least be able to say that this performance and context is similar to or different from that one, and compare measurements made in different contexts. Some sort of characterization is typically

derived as part of the development of a task analytic data base. A performance or behavioral taxonomy is developed, either explicitly or implicitly, and usually subjectively, as part of the process of a task analysis. The taxonomy or basis for characterization depends on the purposes of the analysis. Considerable effort has been expended, primarily by INPO and the NRC, to develop extensive data banks of task analyses for different purposes. There is no need to continue to develop task analytic data banks of this nature. Those taxonomies and task analyses may or may not be consistent with a theoretically based model that is useful for advancing knowledge about human behavior in nuclear power plants. Thus, the INPO task analysis base, generated for development of training systems, may define the correct performance of an operator in taking specific actions, within a specific scenario, at a particular type of plant. And, because the knowledge, skills, and abilities thought to be necessary are cataloged for each task, it is possible to make some decisions about what should be included in a training program. However, such an analysis may indicate little about the likelihood that an operator so trained will perform the task correctly under less than ideal conditions. It may not indicate what should be modified (except for the training program) to improve the likelihood of correct performance. It does not indicate what is "better" or "outstanding" performance, only what is minimally acceptable in order to meet the specified success criterion for the task. Such questions require that the characterization of performance match the key parameters and concepts of a model of human performance that can be manipulated in a systematic way to drive an empirical program. It is this theory-driven, empirically based research program that the panel recommends.
To improve our understanding and ability to predict performance, the development and application of more formalized models are necessary. There exists a considerable base of human performance modeling technology on which to draw, in particular for supervisory control tasks typical of nuclear power plants. Most of it has been developed for military and aerospace applications, but certainly the general approaches, and in many cases specific models, are applicable to the nuclear power plant control room context. Some of the first models to go beyond the analytical approach of the Technique for Human Error Rate Prediction (THERP) (NUREG/CR-1278, 1983b) were the network

simulation models of Siegel and Wolf (1969). The NRC supported development of such a simulation model, Maintenance Personnel Performance Simulation (MAPPS) (NUREG/CR-4104, 1985j), to describe the performance of nuclear power plant maintenance personnel. The power and potential of such simulation models, however, extend far beyond the narrow use apparently intended by the NRC for MAPPS (i.e., generating human error probability estimates for probabilistic risk assessment). Further development and application of such models to design, design testing, trade-off studies, etc., should be encouraged. Reviews of such modeling approaches appear in NUREG/CR-4532 (1986b) for the nuclear context and, in a broader context, in two Committee on Human Factors reports currently in preparation (reports of the Working Group on Human Performance Modeling and the Panel on Pilot Performance Modeling in a Computer-Aided Design Facility).

Another framework for supervisory control modeling is the control-theoretic approach (NUREG/CR-2988, 1982a). Problem solving models and the rapidly developing array of models coming from cognitive science (artificial intelligence, expert systems, parallel distributed processing models) need to be brought to bear when applicable. An example specific to the nuclear industry is given in NUREG/CR-4862 (1987a).

Empirical data from available sources, operational experience, laboratory-scale experiments, part-task simulation and mockups, and full-scale experiments in high-fidelity training simulators need to be integrated with the modeling development, thereby iteratively validating models and suggesting modeling needs.

In the longer-term future, models could be employed in advanced computerized design methodologies that would include both technical system models and human models (e.g., Pew et al., 1986).
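The flavor of a network simulation model of the Siegel-and-Wolf type can be conveyed with a small Monte Carlo sketch. This is not the actual MAPPS code; the task names, success probabilities, time distributions, and deadline below are all hypothetical, chosen only to show how repeated sampling of a task network yields a performance estimate.

```python
# Minimal Monte Carlo sketch of a task-network simulation: a
# maintenance procedure is a sequence of tasks, each with a
# hypothetical per-attempt success probability and time
# distribution; failed tasks are retried, and we estimate the
# probability that the crew finishes within a deadline.
import random

# (task name, success probability per attempt, mean minutes, sd minutes)
TASKS = [
    ("isolate valve",  0.98,  5.0, 1.0),
    ("replace seal",   0.95, 20.0, 4.0),
    ("restore lineup", 0.97,  8.0, 2.0),
]

def simulate_once(rng, deadline=60.0, max_attempts=3):
    elapsed = 0.0
    for _name, p_success, mean, sd in TASKS:
        for _attempt in range(max_attempts):
            elapsed += max(0.5, rng.gauss(mean, sd))  # time cost per attempt
            if rng.random() < p_success:
                break
        else:
            return False  # task abandoned after repeated failure
    return elapsed <= deadline

def p_complete(n=20_000, seed=1):
    rng = random.Random(seed)
    ok = sum(simulate_once(rng) for _ in range(n))
    return ok / n

print(f"Estimated P(procedure completed on time) = {p_complete():.3f}")
```

Because the simulation produces a full distribution of outcomes rather than a single error probability, the same machinery can support the design, design testing, and trade-off studies the panel recommends.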
Research issues in this area include human performance measures and measurement tools, human performance modeling, human reliability analysis and its incorporation into PRA, data collection, and analysis. We recommend a research program that involves progression from descriptive to predictive models based on an iterative design-test-design approach between modeling and data collection. Data collection should include feedback directly from operating experience as well as from a broad program of empirical studies ranging from laboratory experiments to controlled studies on high-fidelity simulators or actual plants.

CAUSAL MODELS OF HUMAN ERROR, ESPECIALLY FOR SITUATIONS WITH UNPLANNED ELEMENTS

Rationale and Background

Research on human error has always occupied a central position in NRC human factors research. Its principal goals have been to provide estimates of the probability of human error for different tasks that can be used in conducting probabilistic risk assessments of plants and in identifying the factors that contribute to human error. These are appropriate goals that the panel endorses. However, the panel recommends new directions for future research.

A number of NRC-sponsored projects (e.g., NUREG/CR-1278, 1983b; NUREG/CR-2743, 1983g; and NUREG/CR-4016, 1985a) have been concerned with improving the methods for eliciting the subjectively based estimates of experts on the probability of various types of human error. To be usable, these subjective judgments, even when made by experts using the best available methods of estimation, must be validated by comparison with objective data on the actual probabilities of error. Without such objective data, one cannot know how accurate the judgments of experts are. But if objective data on human error are available, why are subjective estimates needed? The only justification, in the panel's opinion, for the use of expert judgments of human error and for research on methods for improving the judgmental process, is to provide estimates on an interim basis for tasks for which objective error probabilities do not yet exist. We believe that rather than expend limited resources on further studies to improve expert judgment, a high priority should be given to methods to obtain objective estimates.

There is no doubt that a comprehensive bank of objectively based human error rates is desirable. However, one of the drawbacks inherent to an embedded performance measurement system is the fact that the errors it identifies are limited to errors made in the overt behavioral responses of a control room crew.
At present, performance measurement methods are unable to detect cognitive errors. The design of these and similar error-capturing systems would benefit substantially if some means of obtaining information on cognitive errors were to be developed.
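The validation step described above, comparing an expert's subjective estimate against objective operating data, can be sketched in a few lines. The expert estimate and the field counts below are hypothetical; the sketch simply checks whether the estimate falls inside a Wilson score confidence interval for the observed error frequency.

```python
# Hedged sketch with hypothetical numbers: checking an expert's
# human error probability (HEP) estimate against objective data
# using a Wilson score interval for the observed error frequency.
import math

def wilson_interval(errors, opportunities, z=1.96):
    """Approximate 95% confidence interval for a binomial proportion."""
    n = opportunities
    p = errors / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return centre - half, centre + half

expert_hep = 0.005               # expert's estimate for the task (hypothetical)
errors, opportunities = 4, 500   # hypothetical field observations

lo, hi = wilson_interval(errors, opportunities)
consistent = lo <= expert_hep <= hi
print(f"observed 95% CI = ({lo:.4f}, {hi:.4f}); "
      f"expert estimate is {'within' if consistent else 'outside'} the interval")
```

The width of the interval, even with 500 observed opportunities, illustrates why objective error data accumulate too slowly for rare tasks, and hence why interim expert estimates remain necessary.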

Research Recommendations

Research on human reliability has traditionally concentrated on statistical estimates of human error rates. Such work cannot identify when an error is likely to occur, nor propose changes in system design or operation that will prevent errors on a particular occasion. What is required is research on causal models of human error that can have a direct impact on the design and operation of plants. The important behavioral science questions in this area include the following:

- We need to understand the sources of error (especially important are the sources of cognitive failure forms such as fixation and missing side effects, and group problem solving);
- We need to generate causal models of human performance that include models of error mechanisms;
- We need to understand how to avoid the causes of error in order to reduce the likelihood of these failure forms and improve human performance;
- We need to understand how changes in the person-machine system (e.g., new support systems and aids) affect error types.

Our estimates of the probability of human error have not, in general, been validated; and human error rates are well known to vary over very large ranges (NUREG/CR-1278, 1983b) in response to "performance shaping factors." It is far more desirable to understand at what particular moment an error is likely to occur, or in what circumstances and at what time the probability of error increases. If an error is even stochastically predictable in being more likely to occur at some time or in some situation, steps can be taken to monitor an operator's behavior or to provide the operator with extra assistance at that time. Such knowledge will also often suggest preventive redesigns of equipment, procedures, management, selection, or training. To do this it is necessary to understand why errors occur. Human error is a failure of some kind in human information processing.
We have a good understanding of certain aspects of human information processing, particularly of many aspects of signal detection, perception, attention, and motor responses (Boff et al., 1986). We can describe properties of the environment that predispose people to make errors. Several

models for error causation have been proposed for limited aspects of human behavior, and from them it is possible to make a number of predictions as to circumstances in which errors are likely to occur, and also to suggest a means of dramatically reducing the likelihood of errors (Reason and Embrey, 1985; Reason and Mycielska, 1982; Norman, 1981, 1983; Senders et al., 1985; Rasmussen, Duncan, and Leplat, 1987). But there remain a number of issues that are not well understood. These include errors of planning, the tendency to become fixated on part of a problem, the relation of the operator's mental model to errors, and the extent to which errors are really "chance" events. It should be especially noted that many incidents in nuclear power plants have their origin in errors of maintenance. Valves are left in an incorrect position; maintenance personnel work on the wrong unit or wrong train; procedures are not followed; etc. The causes of such errors, if understood, can be dealt with. This is of far greater importance than having a probabilistic estimate of how often human errors may occur.

In addition, research on human performance in unplanned-for situations is absolutely essential in a world that has proven time and again that not all factors can be anticipated in advance. For example, in a recent analysis performed on about 30,000 events that occurred in nuclear power plants and were collected in the Abnormal Occurrence Reporting System, half of them were represented by unique combinations of system, component, and human failures (Mancini, in press).
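The earlier observation that error rates vary over very large ranges in response to "performance shaping factors" can be made concrete with a THERP-style sketch. The nominal error probability and the multipliers below are hypothetical illustrations, not values from the NUREG/CR-1278 tables.

```python
# Illustrative sketch of the THERP-style idea that a nominal human
# error probability (HEP) is scaled by performance shaping factor
# (PSF) multipliers. All numbers here are hypothetical.

NOMINAL_HEP = 0.001  # e.g., omitting a step in a written procedure

# Hypothetical PSF multipliers (values > 1 degrade performance):
PSFS = {
    "high stress":        5.0,
    "poor labeling":      3.0,
    "inexperienced crew": 2.0,
}

def adjusted_hep(nominal, multipliers):
    """Scale a nominal HEP by PSF multipliers, capped at 1.0."""
    hep = nominal
    for factor in multipliers.values():
        hep *= factor
    return min(hep, 1.0)  # a probability cannot exceed 1

print(f"nominal HEP  = {NOMINAL_HEP:.0e}")
print(f"adjusted HEP = {adjusted_hep(NOMINAL_HEP, PSFS):.0e}")
```

A factor-of-30 swing from three plausible context variables shows why a single tabulated rate says little about when, or under what conditions, an error will actually occur.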
Improving the ability of nuclear power plant personnel to handle unplanned-for situations depends on behavioral science research in a number of areas: multiperson and multifacility decision making (distributed problem solving); knowing how to prepare personnel for rare or unanticipated situations; using new computational technology to assist operational personnel, especially in plan monitoring, adaptation, and repair; and knowing how to avoid the problem of "brittle" problem solving (that is, behavior that is efficient provided rules can be followed, but collapses when standard practices do not work).

A better understanding of the nature of human error, where the error-prone points occur, how likely these errors are to occur, and how to eliminate the error-prone points are the most critical research needs for improved nuclear power plant safety. Progress here drives or interacts with virtually all other behavioral science

research issues. After all, it is the performance of the operational personnel that is the front line of safety from the human perspective, and understanding how to measure and improve human performance is the essential guide for how to find and use the possibilities resident in technology.

Laboratory, simulator, and field studies should all be supported and integrated. In addition, the NRC should make better use of the large amount of knowledge available for reducing human error. (One significant source would be a collection of information from the many detailed control room design reviews now available.) Experimental and quasi-experimental studies should concentrate on the investigation of cognitive errors: errors of thought, planning, diagnosis, reasoning, and decision making. The military and NASA have recognized that to have effective man-machine systems in a complex world requires sophisticated human-machine performance modeling capabilities. They have active programs to develop sophisticated analytical models of human performance (e.g., Hartzell, 1986; Pew et al., 1986).

Theoretical and empirical work in several directions should begin immediately, one on control room activity and one on maintenance activities. A third should consider the role of social interaction among the members of a team, the effect of hierarchical and other organizational features, communication, etc. Particular attention should be focused on errors that occur as a result of a large number of small incidents that cascade, rather than one major "single cause" incident.

A major reduction in the role of human error in nuclear power plant operations is the intended goal. It may be valuable to target the research at some of the errors with more important consequences, but the overall plan must be to support our generic understanding of the causes of human error.
This research will also be needed to address enhanced technical support centers, computer-based supervisory centers in the control room, team performance, the structure and organization of emergency procedures, training to handle accident conditions (decision training), severe accident management, and measurement of human reliability. This work should begin immediately and become a permanent central element of human factors research.