Applications

This chapter discusses the application of human performance models (HPMs) to four classes of real-world problem areas. The first two concern human-machine interaction in relatively well-defined operational situations: piloting of aircraft and control room operation of nuclear power plants. The third category concerns maintenance, a type of activity which, although relatively well defined and economically critical, has been somewhat neglected. The fourth is a broad class of human-machine interactions wherein the human operator does not perform the task directly, but instead supervises one or more automatic control systems that execute the direct control. The latter area, which includes autopilots in aircraft, semiautomated nuclear or chemical plant control, and robots in factories, space, or undersea, poses new challenges for human performance modeling.

In the following sections it may seem that certain HPM approaches are constrained to specific application areas (i.e., that procedure/task network and reliability models are specific to the needs of the nuclear power industry or that information processing models are specific to the needs of cockpit designers). This is not the case. The appearance is due to the fact that each methodology was developed initially for a specific area of application. Most of the models discussed in this report are being expanded to other areas, but it is still reasonable to expect to find more instances of a model's use within its area of origin than outside. This does not necessarily imply either that a model, or an approach to modeling, is the only appropriate choice for a particular application or that it is an inappropriate choice for some other application.

HUMAN PERFORMANCE MODELS IN AIRCRAFT OPERATIONS

The rapid development of aircraft during World War II gave rise to increasing problems for aircrew members. By the late 1950s, significant analytical efforts were underway in three human-machine areas that had been especially affected by changes in aircraft design and their missions:

1. flight control problems associated with new flight regimes and modified handling qualities;
2. crew workload problems associated with an expansion of mission requirements and a proliferation of aircraft subsystems with their corresponding displays and controls, and aggravated by generally shortened response times available to the crew; and
3. air-to-surface search and targeting problems associated with new flight regimes, new sensors, and improved surface-to-air defenses.

Each of these areas is treated briefly in the following pages, with reference to summary documents for more details.

Flight Control

Background

The expansion of operational envelopes and mission requirements for flight vehicles that occurred in the past two to three decades, and the resulting increase in task difficulty and pilot workload, have stimulated a strong need for systematic means of analyzing the pilot-vehicle system and predicting closed-loop performance and workload. This, in turn, has led to substantial efforts aimed at developing quantitative engineering models for the human pilot performing closed-loop manual control tasks. As a result of these efforts, there exist an extensive HPM-directed data base, two well-established HPMs for continuous manual control tasks, and a long list of applications of these models in the flight control arena. Applications of these models include display and control system analysis, flight director and stability augmentation system design, analysis of vehicle-handling qualities, analysis of the limits of piloted control, analysis of pilot workload, and determination of flight simulator requirements.
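The flavor of these closed-loop pilot models can be conveyed by the crossover model of McRuer and Krendel, in which pilot and vehicle together behave near the crossover frequency like a gain plus a reaction-time delay acting on an integrator. Below is a minimal discrete-time sketch of compensatory tracking with such a pilot; all parameter values are illustrative, not fitted data:

```python
import math

def simulate_tracking(omega_c=4.0, tau=0.2, dt=0.01, t_end=30.0):
    """Closed-loop compensatory tracking with a crossover-law pilot.

    Pilot: gain omega_c acting on the displayed error, delayed by tau.
    Plant: pure integrator, so pilot plus plant give the crossover form
    omega_c * exp(-tau*s) / s near the crossover frequency.
    Returns (RMS tracking error, RMS of the command) after the transient.
    """
    delay_steps = int(round(tau / dt))
    err_hist = [0.0] * delay_steps      # FIFO buffer implementing the pilot's delay
    x = 0.0                             # plant output (e.g., pitch attitude)
    errors, targets = [], []
    for k in range(int(t_end / dt)):
        t = k * dt
        target = math.sin(0.5 * t)      # command the pilot must follow
        e = target - x
        err_hist.append(e)
        u = omega_c * err_hist.pop(0)   # delayed, gain-scaled correction
        x += dt * u                     # integrator plant
        if t > 10.0:                    # skip the initial transient
            errors.append(e)
            targets.append(target)
    rms = lambda v: math.sqrt(sum(s * s for s in v) / len(v))
    return rms(errors), rms(targets)
```

With these illustrative parameters (crossover frequency 4 rad/s, effective delay 0.2 s) the loop is stable with roughly 45 degrees of phase margin, and the closed loop attenuates tracking error to a small fraction of the command amplitude, which is the kind of prediction used in handling-qualities and display analyses.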
A useful, alternative categorization of these applications, one that emphasizes pilot-vehicle system problems addressable by HPMs for the human controller, is to relate them to flight test, design, and simulator planning problems; this was done by McRuer and Krendel (1974) and, more recently, by Ashkenas (1984). Each of these references provides three tables that illustrate quite succinctly the broad scope of application of human performance modeling to aircraft control-related problems. Ashkenas (1984)

also provides a reference list by application category. These references focus principally on applications of the quasilinear modeling approach. Applications of the optimal control model (OCM) and related monitoring and decision-making models are indicated in Baron and Levison (1977, 1980) and Rouse (1980). A major source of references on the application of these and a variety of other HPMs to control problems is the series of proceedings from the NASA-University Conferences on Manual Control (1967-present).

Glenn and Doane (1981) used the Human Operator Simulator (HOS) to simulate pilot eye-scan behavior during manual, as well as more automated, flight control modes for both straight-in and curved approaches to landings of a NASA Terminal Configured Vehicle (TCV) aircraft. The HOS produced eye-dwell times on various flight display systems that had a high correspondence (r = .91) with empirical results obtained in an independent study of actual pilots who had flown those same approaches. Although this was the initial application of HOS to simulating complex piloting tasks, it provides some evidence that aggregated information processing models can also provide useful predictive data in cases where manual control and automated systems monitoring dominate an operator's tasks.

Current Issues

The evolutions of aircraft, control and display systems, and mission requirements are posing new problems in control: innovative aircraft configurations with different dynamic characteristics and, especially, with highly augmented controls; new types of control, including six-degrees-of-freedom controls; and different paths to fly. These new systems are not wholly understood, to say the least, and there have been persistent difficulties in design, including pilot-induced oscillations, excessive pilot workload, and inadequate pilot-vehicle interfaces.
There is a need both for data and for extension of the predictive capability of pilot models to such tasks. Because of the increasing costs associated with simulation and training of flight control skills, it has become desirable to use models to assist in specifying simulators and in defining or monitoring training programs. In this area, a major limitation is the lack of adequate models for the way in which flight skills are acquired or learned.

The concern most often raised in connection with future modeling and understanding of the pilot in the aircraft control loop is the changed and changing nature of the pilot's tasks, owing to the introduction of substantial amounts of automation. Thus, the roles of flight management and supervisory control (monitoring, decision making, interacting with intermediary computers) are becoming dominant in many pilot-vehicle display applications. As might be expected, the data and models needed

for understanding these roles are not at all up to the standards of those for manual flight control tasks and are clearly in need of further development.

Summary

The changing, not fully understood, nature of flight tasks, the costs associated with aircraft development and production, as well as those of training operational personnel, and the history of unanticipated pilot-vehicle interface problems arising in development all argue for the need for systematic, crew-centered design techniques. These techniques must be capable of addressing the problems of the pilot (crew) in the total system context of mission, vehicle, environment, automation, displays, etc. Although much work remains to be done, the lessons learned in analyzing manual flight control and some of the modeling techniques that have emerged from that endeavor can provide a sound foundation for the development of suitable analytical and experimental methods for the problems of interest. Some evidence for this is given by the Procedure-Oriented Crew (PROCRU) model (and its potential variations and generalizations) discussed in Pew and Baron (1983) and Baron (1984).

Aircrew Workload

Background

Crew workload and the allocation of functions to humans and machines in aircraft have been recognized as significant and related problems at least since the early 1950s (for example, see Fitts, 1951). A more recent survey (Air Force Studies Board Committee, 1982) documents that both problems are still with us.

Prediction of crew workload is a complex and labor-intensive task. One of the first published models developed for this purpose was based on a task network approach (Siegel, Miehle, and Federman, 1962). It calculated the times required for discrete operator actions from an extension of information theory.
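Deriving discrete action times from information theory is in the spirit of the Hick-Hyman law, in which choice reaction time grows with the information content of the decision. A minimal sketch, with coefficients that are purely illustrative rather than taken from the Siegel, Miehle, and Federman model:

```python
import math

def choice_reaction_time(n_alternatives, a=0.2, b=0.15):
    """Hick-Hyman law: reaction time grows linearly with the
    information conveyed by a choice among equally likely alternatives.

    a = base time in seconds, b = seconds per bit; both illustrative.
    """
    h = math.log2(n_alternatives)   # information content, in bits
    return a + b * h

# A four-way choice carries 2 bits, so it takes 2*b longer than a
# simple one-alternative reaction.
rt_simple = choice_reaction_time(1)   # 0.2 s
rt_choice = choice_reaction_time(4)   # 0.5 s
```

The same logic lets a task-network model assign a time to each discrete operator action as a function of how much uncertainty the action resolves, rather than requiring a measured time for every task.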
Many subsequent estimations of workload for discrete tasks, including more recent work by Siegel and Wolf (1969), have reverted to the use of measured or estimated task times or task time distributions. Exceptions include the HOS (Wherry, 1969, 1985), which was designed to calculate task times and to predict and diagnose such workload problems as poor display/control layouts or too many allocated tasks by aggregating the times required for microbehaviors (eye movements, information absorption, etc.); and Boeing's Computer-Aided Function Allocation System (CAFES), which contained Function Allocation Modules (FAM-I and FAM-II) and

Siegel-Wolf type network approach Workload Assessment Modules (WAM and SWAM).

The Vought Workload Simulation Program (WSP) was developed in the early 1970s to aid in the analysis of workload problems in carrier landings by Navy aircraft. It was later expanded to cover all phases of flight. The WSP had separate modules for discrete and continuous control tasks, with a scheme for blending them. As in most models of the time, task sequences, task times, flight path tolerances, cockpit geometry, and system configurations were all developed externally and entered the model as inputs.

The Pilot Simulation Model (PSM) was in active use at McDonnell Douglas from 1975 to 1978. It utilized stored data on task times to generate workload estimates on discrete tasks, with particular attention to the effects of G-load on performance.

Greening (1978) provided a critical review of the then-known crew workload models for aircraft operation, which indicated that three aircraft companies, Vought, McDonnell Douglas, and Boeing, were employing different computer models to estimate crew workload. The models reviewed were, in essence, bookkeeping models. The Greening report showed that significant parts of the aircraft industry were using HPMs to estimate workload. Task time distributions and priorities were inputs to the models; workload emerged from a comparison of task times with available time.

As part of this working group's effort, the three companies that reported using workload models in 1978, plus six other airframe contractors, were contacted to update the status of aircrew workload modeling. Of the three airframe manufacturers who were using workload models some years ago, two (McDonnell Douglas and Boeing) have replaced the models, and the third (Vought) still uses the WSP model when needed but has not exercised it for several years. The primary reason for the shift to newer models is the rapid expansion of computer capability.
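The bookkeeping logic of these early workload models, comparing task times with available time, can be sketched in a few lines; the task set, window, and overload threshold below are invented for illustration:

```python
def timeline_workload(tasks, window):
    """Bookkeeping workload: fraction of an available time window
    consumed by the task times that fall inside it.

    tasks  - list of (start, duration) pairs, in seconds
    window - (t0, t1) interval of available time
    """
    t0, t1 = window
    busy = 0.0
    for start, duration in tasks:
        # overlap of [start, start + duration] with [t0, t1]
        lo, hi = max(start, t0), min(start + duration, t1)
        busy += max(0.0, hi - lo)
    return busy / (t1 - t0)

# Example: three hypothetical cockpit tasks in a 10-second window.
tasks = [(0.0, 3.0), (2.0, 4.0), (7.0, 2.0)]
load = timeline_workload(tasks, (0.0, 10.0))   # 0.9
overloaded = load > 0.8                        # flag segments near capacity
```

A real model would sample task times from distributions and resolve priority conflicts when tasks overlap, but the output is the same kind of time-line occupancy ratio.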
The new models are interactive with the designer and have much more capacious and sophisticated data bases. In the case of McDonnell Douglas, the newer models also involve different approaches to human performance modeling, including the OCM and operator models developed in the simulation language SLAM.

None of the six other manufacturers contacted indicated a use of workload models. It seems that these companies rely wholly on human factors expertise (including manual time line analyses) and manned simulation for uncovering and relieving problems of workload.

During the 1970s, both HOS and CAFES were run on large mainframe computers belonging to the Navy and were, therefore, not generally available to outside users. Similar restrictions applied to the use of Systems Analysis of Integrated Networks of Tasks (SAINT; funded by the U.S. Air

Force). Therefore, many aircrew workload problems were investigated during the 1970s and early 1980s by human factors groups within the military, rather than by airframe manufacturers. For example, HOS and WAM were applied to the development of several emerging Navy aircraft (e.g., LAMPS helicopter, P-3C Update, VPX, and F-18); SAINT has been used to study workload problems in several Air Force aircraft and other systems; and the Army is currently investigating the use of several types of HPMs for studying workload problems in its MANPRINT program.

The brief history presented here indicates that much of the funding for HPM development, as well as the study of workload problems, has been stimulated by the military services. Although not all airframe manufacturers use computerized techniques for studying aircrew workload problems, the U.S. Navy, Air Force, and Army continue to recognize and advocate the utility of HPMs for investigating and solving these problems.

Current Issues

Although task analysis of aircraft missions has provided an acceptable basis for modeling aircrew workload, a number of fundamental definition and measurement issues have been raised over recent years. One of these is that task-based measures are not deemed an acceptable definition of workload by some researchers and users. Some investigators feel that a clean distinction should be made between human operator performance requirements, such as result from task analysis, and human operator mental effort expended (i.e., a trained operator might perform a task with time and cognitive resources left over, whereas a novice may be fully occupied).
They emphasize that the human mental effort expended (not physical calorimetry, which is largely irrelevant) in psychomotor skills or cognitive tasks is important, and if an individual human-centered measure (performance and effort expended) could be found, it might become a more sensitive predictor of human limitation and system failure than either a task-based measure or a system performance measure.

One performance-related measure occasionally used is secondary task performance, which helps assess how well the operator can do an artificially imposed task added to the primary task. However, this measure is often deemed unacceptable by pilots and others because it interferes with the primary task. Many physiological measures of workload have been tried, but all exhibit significant measurement noise and require many seconds or even minutes of data to establish a single workload data point. Probably the most acceptable mental effort measurement technique is the subjective rating scale now employed by Airbus Industries and the U.S. Air Force.

Recent research has sought to determine whether psychomotor busyness, emotional stress, and pure cognition can be measured separately, and

whether the components are additive in determining total subjective mental workload.

Summary

Task analyses have yielded models for pilot workload in terms of percentage of hypothetically available time required by sensing, motor, and cognitive activities. Recent efforts have sought to measure and model mental workload.

Air-to-Surface Search and Targeting

Background

The problem of finding objects on the earth's surface from a moving aircraft has been recognized since the early days of flight. One of the earliest models of the air-to-surface search process was published as part of a study of the especially difficult regime of nap-of-the-earth flight (Ryll, 1962). This and many other early modeling efforts were summarized by Greening (1976).

As new sensors were added to aircraft equipment, the search and targeting activity became more distinct from piloting and was often performed by a separate crew member. A number of models for the use of quasivisual sensors, television (TV) and Forward-Looking Infrared (FLIR), were summarized in a report by General Research Corporation for the Naval Weapons Center (Stathacopoulos and Gilmore, 1976).

The HOS model was used as the basis for an Operator Interface Cost Effectiveness Analysis (OICEA) by Lane et al. (1979) to examine the effect of proposed additions of FLIR-related tasks to an electronic countermeasures (ECM) sensor operator's job in a Navy P-3C aircraft. To provide comparative data, three versions of the aircraft were simulated: the baseline version without FLIR equipment or tasks, the prototype version that had added (but not integrated) FLIR equipment and tasks, and a proposed Update version with more integrated and automated FLIR equipment and tasks.
Comparison of HOS simulation results for the baseline and prototype versions confirmed actual fleet results, which had shown that performance of the normal ECM tasks would be significantly degraded and performance of the FLIR tasks would rarely be successfully completed in the prototype aircraft. However, the study also showed that the Update version would permit all of the FLIR-related tasks to be successfully completed and performance on the ECM tasks to be enhanced.

Current Issues

The multiple-sensor aircraft poses problems of a special sort, especially when used in high-intensity conflict where line-of-sight exposure to the target area may be dangerous, and an active sensor such as radar can be used only briefly and intermittently. The targeting function then becomes one involving difficult trade-offs between the risks associated with search and the need for current target data. Nonimage data (such as flight vectors or coordinates) must be blended with the output of automatic classifiers and with intermittent imagery in the most efficient way. A modeling approach to this problem is being developed at the Naval Weapons Center and elsewhere (Greening, 1986).

Summary

Numerous target acquisition models have been developed and used over the past 25 years. However, the bulk of model development and validation work is becoming obsolete because of changes in tactics, the proliferation of sensors, and advances in sensor technology, including a variety of automatic targeting systems.

Because of the substantial lag in modeling relative to advances in technology and changes in tactics, models have not, in general, had substantial impact on the development of new or improved sensors. Their utility has been greater in tactical planning and related, postdesign activities. The most active air-to-surface sensor modeling areas currently are those directed toward (1) enlarging the scope to include more of the relevant context and (2) keeping up with developments in sensor technology.

HUMAN PERFORMANCE MODELS IN NUCLEAR POWER OPERATIONS

Background

The number of human performance modeling simulations actually applied within the nuclear industry is at present very small. Although considerable theoretical work has been done (e.g., Sheridan, Jenkins, and Kisner, 1982), translation of that work into everyday plant operations has been limited.
Other than instances in which cognitive modeling has been incorporated into operator aids (e.g., Westinghouse's DICON work; U.S. Department of Energy, 1983), the majority of applications have been associated with risk assessment and nuclear power plant safety (U.S. Nuclear Regulatory Commission, 1982). Most of these cases have involved responses to requirements of the Nuclear Regulatory Commission.

In a recent meeting, nuclear experts discussed the capabilities of methodologies currently used for risk assessment. That meeting provided useful insights into the status of human performance modeling as well as the reasons behind that status. It became evident that human performance modeling should not be considered as isolated from other techniques because actual plant usage was the result of many implicit decisions about strengths and weaknesses in available methods. Consequently, this discussion considers HPMs within a framework of the available technology.

There are five main techniques used for the assessment of human-related risks: Technique for Human Error Rate Prediction (THERP), Operator Action Trees (OATS), Maintenance Personnel Performance Simulation (MAPPS), Sociotechnical Approach for Human Reliability (STAHR), and SLIM/MAUD (Success Likelihood Index Methodology/Multiattribute Utility Decomposition). Of the five, only MAPPS utilizes discrete task network simulation as the sole basis for prediction. Why the large body of theoretical models that exists has not been utilized more completely is best seen by a relative comparison of the strengths and weaknesses of other techniques. A brief summary of the methods follows.

The THERP technique (Swain and Guttman, 1980) is probably the oldest and most established human reliability assessment method and was originally developed by Swain (1964) at Sandia National Laboratories for military applications. The method relies on task decomposition into microscopic actions via a highly detailed task analysis. This analysis breaks down operator behavior to a level of individual actions such as reading a graph, reading an instrument, or turning a control knob. Each series of operations is described by a probability tree composed of sequential actions in which the probability of an action at any branch is drawn from tables.
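The core arithmetic along one branch of such a probability tree is simple: the sequence succeeds only if every sequential action succeeds. A minimal sketch for the simplest case, a single branch with no recovery factors or dependence modeling, which full THERP does include; the per-action error probabilities below are invented, not values from the THERP tables:

```python
def sequence_hep(step_heps):
    """THERP-style aggregation along one branch of a probability tree.

    step_heps - per-action human error probabilities, one per sequential
    action (in practice drawn from tables or expert judgment).
    Returns the overall human error probability for the sequence.
    """
    p_success = 1.0
    for hep in step_heps:
        p_success *= (1.0 - hep)   # every action must succeed
    return 1.0 - p_success

# Illustrative branch: read a graph, read an instrument, turn a knob.
overall = sequence_hep([0.003, 0.01, 0.05])   # about 0.062
```

Even this toy version shows why the method is dominated by its task analysis: the result is only as good as the decomposition into actions and the per-action probabilities assigned to them.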
In a few cases these probabilities are based upon objective evaluations, but in most cases subjective expert opinion is used.

The OATS also utilizes probability trees to structure operator actions but has much larger units of analysis, usually plant functions rather than operator tasks (U.S. Nuclear Regulatory Commission, 1984). A probability is placed on each function, based on the time that would be available to perform the function within particular scenarios. As a result, heavy use is made of time/reliability curves relating probability of performance to available time. Times are computed based on the time required to recognize and diagnose a plant condition. Each time is defined as total time available minus the time required to execute an operator action. Currently, OATS uses three types of curves to provide a human reliability value; each is based on the nature of the operator action.

The MAPPS system is a discrete event simulation (Siegel, Bartter, Wolf, and Knee, 1984). In its current form, it addresses only maintenance behavior. It is menu driven and includes a variety of parameters whose

it appears better than existing methods for maintenance analysis, possibly because the other approaches are oriented toward operators.

Analysis of the other methods' strengths suggests the following. First, HPMs may require more parametric data than are usually available in actual industrial settings. Second, labor-intensive techniques can provide subtle decision rationales that may be lost in stochastic methods such as Monte Carlo simulations. Third, expert group techniques provide greater flexibility for considering situation-dependent tasks. Finally, dimensions of plant cooperation and ease of use weigh heavily in applications. Human simulation methods currently do not have an effective interface to normal plant user environments.

A comparison of where the approaches are deficient provides additional insights. Regarding THERP, its strengths are that traceability of final event probabilities to original situations is good. Flexibility is high because it can deal with unusual tasks. It is weak in that it requires what many consider to be an inordinate amount of training, it is extremely resource intensive because it requires task analysis for every task, and it can be very vulnerable to misuse or biasing if not used as prescribed. The latter results from a tendency of users to skip to probability tables and bypass important intermediate steps.

For the OATS approach, traceability is also good because only one variable is involved. Reproducibility (i.e., interrater reliability) is good, and there is a low requirement for training, which is largely due to the somewhat simplistic nature of the method. This approach tends to be inflexible because it can be used only for certain events, and it is very low in completeness because it operates at too general a level of analysis to encompass the full range of probabilistic risk assessment problems.
In the MAPPS simulation, reproducibility, but not traceability, is high because MAPPS uses stochastic branching. Compared to the analysis level of THERP, the resources required are minimal. The model is strong on completeness because the effects of variables have been quantified carefully and are drawn from a systematic analysis of years of research into factors important for maintenance performance. In terms of weaknesses, MAPPS currently deals only with maintenance; it is also weak in its ability to handle unique task factors.

The STAHR approach is strong in the last area. It can be readjusted quickly by changing influence diagrams; it has good traceability because the reasons for using each value are documented, the group procedures reduce individual biases, and extended discussion of plant characteristics and actions permits great specificity of task definition. Weaknesses are similar to THERP in that training is needed to permit groups to work effectively together, and it is both resource and time intensive. In contrast to THERP, the resources are people rather than data.

The SLIM/MAUD approach is high on traceability, flexibility, and specificity of task definition. One of its principal weaknesses is that the structure appears to preclude the evaluation of performance variable interactions.

To draw conclusions, it appears that the greatest gains for the industry may come from using human performance models as a part of a hybrid technique rather than in a stand-alone mode. Because the above approaches do not all operate at the same content level, analysis may best be made through combinations of techniques rather than a single approach.

Current Issues

Within the domain of plant safety, several issues currently appear to be the most important. The first is how analysis can best be applied to cognitive tasks, particularly in such areas as confusion between competing symptoms of plant events. Such questions have been studied by using confusion matrices that have symptoms on one axis and plant events on the other. Additional issues concern identification of those human variables that are really important in plant performance. What constitutes a satisfactory cognitive model of the operator, how the costs of HPMs can be compared to their benefits, how potential users should be acquainted with the technology available, and how human and power plant hardware models can best be integrated are all examples.

Validation is clearly the most important current issue. It manifests itself in three ways: data collection problems, including the acceptability of hardware simulator data and the difficulty of field data collection; the interpretation and reduction of collected data; and the comparison of potential approaches. The most fundamental criterion is how well a model works in the field. To answer that question, better data are needed. Because obtaining data is difficult, the use of human performance modeling techniques is slow, particularly for rare accident events.

A second area concerns issue selection.
The questions involved are whether the selected performance variables are correct ones and how the nuclear industry can be certain they are. A third area concerns the ability of models to deal with events outside the realm of the expected, because rare accident events are central to plant safety.

A fourth area is misdiagnosis behavior and how it can best be addressed. This area may or may not become less important because of recent emphasis on symptom-based (i.e., unknown cause of abnormality) diagnostic procedures instead of event-based (i.e., known cause of abnormality) procedures.

A fifth area is the previously mentioned question about coupling of methodologies. Specifically, can human simulations effectively couple to already existing techniques such as THERP or SLIM/MAUD? Another area is the use of human operator models in design specification, particularly for purposes of increasing human reliability. The final area concerns what can be done to eliminate confusion and increase correct diagnosis probabilities, given the occurrence of a misdiagnosed event.

Summary

This section has examined human performance modeling for the nuclear industry from a particular perspective, namely, human reliability and risk assessment. That perspective was adopted for two reasons. First, it depicts the way in which human performance modeling is actually applied in industry. Second, insights into why models are and are not used were discussed by comparing an existing model (MAPPS) with the limited set of methods currently used for risk assessment.

By considering other approaches, it was possible to place a human performance model into the perspective of an entire technology area. This has often been difficult in many broad-based technology areas such as military applications. As a result, direct comparisons of strengths and weaknesses could be made to highlight not only what role the methods serve, but also to identify more directly what trade-offs had been made among recurrent questions such as ease of use, resource requirements, specificity of analysis, reliability, and traceability. As mentioned at the beginning, the actual use of human-related models in the nuclear industry is extremely limited. The models applied appear to be the result of a practical mix of many of the factors described above. The extent of future model usage will probably hinge more on the result of changes in available data and resource support than on the actual state of human model technology.
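The confusion-matrix analysis mentioned under the current issues, with plant events on one axis and diagnoses on the other, reduces to a small calculation. A sketch in which the event names and counts are entirely hypothetical, standing in for tallies from simulator trials:

```python
def misdiagnosis_probabilities(counts, events):
    """Per-event misdiagnosis probability from a confusion matrix.

    counts - square matrix of counts; row = actual plant event,
             column = the event the operators diagnosed.
    events - event names, in row/column order.
    """
    out = {}
    for i, name in enumerate(events):
        row = counts[i]
        correct = row[i]                      # diagonal = correct diagnoses
        out[name] = 1.0 - correct / sum(row)  # off-diagonal mass = confusion
    return out

# Hypothetical tallies for three events over 20 simulator trials each.
events = ["loss_of_feedwater", "steam_line_break", "sg_tube_rupture"]
counts = [
    [18, 1, 1],
    [2, 16, 2],
    [1, 3, 16],
]
p_misdx = misdiagnosis_probabilities(counts, events)
```

Inspecting which off-diagonal cells carry the weight shows not just how often an event is misdiagnosed but which competing event it is confused with, which is the information a symptom-based procedure is meant to exploit.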
HUMAN PERFORMANCE MODELS IN MAINTENANCE OPERATIONS

Background

Maintenance is different from many of the other tasks discussed in this report. In particular, although time is an important attribute (i.e., the sooner something is repaired, the better), system maintenance is usually a static task because the system state does not change without human input. Of equal importance, maintenance can be a very complex task when unexpected and unfamiliar failures occur. In such situations, problem-solving skills are central and psychomotor skills are of secondary importance.

This section briefly reviews HPMs for predicting maintenance performance. One model, MAPPS, has been discussed in the context of nuclear power operations. For the purposes of this review, maintenance performance is characterized at three levels:

1. action-by-action sequences of observations, tests, and repairs (referred to as SEQUENCES);
2. overall times and errors associated with particular sequences (referred to as TIME/ERRORS); and
3. mean time to repair and probability of error across sequences or equipment systems (referred to as MTTR/PERR).

The maintenance models discussed here produce outputs in one or more of the above levels. Inputs to these models include one or more of the following:

1. representations of the equipment, either physically or functionally;
2. representations of the maintainer in terms of general characteristics (e.g., parameter variations), action selection criteria (e.g., maximum information gain or minimum time), and knowledge and skills (e.g., understanding of equipment); and
3. results of task analyses (i.e., maintenance SEQUENCES).

Based on the above characterizations of outputs and inputs, six representative maintenance models are summarized in Table 3-1. It is interesting to note that the approaches underlying these six models (second column of Table 3-1) represent the full range of modeling approaches discussed in this report. Thus, there is no one-to-one mapping from application domain to appropriate modeling methodologies.

In distinguishing among the models in Table 3-1, Wohl's (1982) model and that of Siegel et al. (1984) emphasize global performance measures such as MTTR. Traditional labor-intensive maintainability analyses have a similar focus (Goldman and Slattery, 1964). In contrast, the models of Hunt and Rouse (1984) and of Towne, Johnson, and Corwin (1982) emphasize fine-grained predictions of SEQUENCES.
The model of Madni, Chu, Purcell, and Brenner (1984) falls on the global side of these fine-grained approaches. Therefore, the choice among the models in Table 3-1 depends on the level of performance to be modeled.

Summary

It would seem feasible to use fine-grained models to produce the SEQUENCES to meet the task analysis requirements of the global models. This approach would reduce the analytic effort required and produce performance predictions at all levels. However, the knowledge-engineering effort

[Table 3-1. Summary of six representative maintenance models (table content not recoverable from the scanned text)]
required to undertake this (relative to both equipment and maintainer) is probably impractical when the modest levels of investment normally made in maintainability analyses, which are sometimes viewed as a necessary evil, are considered.

HUMAN PERFORMANCE MODELS IN SUPERVISORY CONTROL

Background

Supervisory control is an example of an important emerging class of human operator activity in which HPMs are needed but for which proven models do not now exist. Simply stated, supervisory control refers to all the activities of the human supervisor who interacts via a computer with a complex semiautomatic process. It can substitute for direct manual control of vehicles or plants.

The term supervisory control is derived from the close analogy between the characteristics of a supervisor's interaction with subordinate human staff members and interaction with automated subsystems. A supervisor of people gives general directives that are understood and translated into detailed actions by staff members. In turn, staff members aggregate and transform detailed information about process results into summary form for the supervisor. The degree of intelligence of staff members determines the level of involvement of their supervisor in the process. Automated subsystems permit the same type of interaction to occur between a human supervisor and the process (Ferrell and Sheridan, 1967). As indicated in another report of the Committee on Human Factors (Sheridan and Hennessy, 1984), supervisory control behavior is interpreted to apply broadly to vehicle control (aircraft and spacecraft, ships, undersea vehicles), continuous process control (oil, chemicals, power generation), and robots and discrete task machines (manufacturing, space, undersea mining).
In the strictest sense, the term supervisory control indicates that one or more human operators set initial conditions for, intermittently adjust, and receive information from a computer that closes a control loop through external sensors, effectors, and the task environment, as illustrated in Figure 3-1. Typically, supervisory control involves a five-step cycle of the supervisor's activity (Sheridan, 1986), which includes the following functions:

1. planning what to instruct the computer to control automatically, which involves the supervisor in (a) coming to understand the nature of the controlled process, inputs, and other physical constraints, (b) deciding on the tradeoffs between various benefits and costs, and (c) thinking through a strategy for arranging the task;

[Figure 3-1. Block diagram of a supervisory control system, in which the human supervisor interacts through a computer that closes the control loop (diagram not recoverable from the scanned text)]
2. instructing or actually programming plans into the computer to do (or start to do) certain things automatically for normal operation or to stop some actions when they are complete or abnormal;

3. monitoring, that is, (a) allocating attention among many sources of information about what is going on, including direct sensors, computer knowledge bases and expert advisory systems, documents and human experts, or others in order to watch the (usually normal) automatic operation of the system, and (b) estimating the current system state and deciding if it is satisfactory, or if not, to diagnose what has gone wrong;

4. intervening, that is, breaking into the automatic control loop either in a minor way to adjust set points of automatic control or in a major way to stop one task and start a new one, to take emergency actions (fault management) manually, or for maintenance or repair; this involves reprogramming (loop back to step 2); and

5. learning, that is, acquiring from experience what is necessary for better future planning (loop back to step 1) or other supervisory functions.

Each of these functions and subfunctions may be said to involve a separate mental model, though the term as used today is mostly restricted to step 1(a). Each may be augmented by a computerized decision aid of some type, in addition to the computerized automatic control.

Although a variety of models of supervisory control have been proposed, including the PROCRU model discussed earlier, there is little consensus on which way to proceed. One of the major problems in modeling human supervisory control is that formulation of the objective function is an active role of the supervisor; it is not given a priori. There are usually as many objective functions as there are people or occasions where one objective has different strategies. None of these is easily specifiable in other than fuzzy linguistic terms.

A model can be (1) a paper description of a system, i.e., a theory.
Alternatively, it can be (2) a functional model implemented on a computer, which emulates the function of a system, or (3) a mental model, an internal representation of a system held in the mind of an operator, designer, or researcher. Supervisory control is particularly complex because multiple elements of (2) and (3) must be combined into a single physical system which must, in turn, be combined with (1).

From Figure 3-1, it is clear that a model of supervisory control must be a model of the entire system, not just the human. The situation is similar to that found in models of the human operator in manual control systems, where the particular realization of the model depends in each case on the properties of the rest of the system. Humans modify their behavior to compensate for, or complement, other elements of the system; for that reason, all of them must be represented.
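The five-step supervisory cycle described earlier (plan, instruct, monitor, intervene, learn) can be sketched as a loop in which the computer performs the direct control and the human only sets, watches, and trims it. This is a minimal sketch, not any of the models cited: the proportional controller, the monitoring threshold, and the learning rule are all invented placeholders.

```python
class Supervisor:
    """Human supervisor: plans, instructs, monitors, intervenes, learns."""
    def __init__(self, setpoint):
        self.plan = {"setpoint": setpoint}            # 1. planning
        self.experience = []                          # memory for step 5

    def instruct(self, controller):                   # 2. instructing
        controller.setpoint = self.plan["setpoint"]

    def monitor(self, state):                         # 3. monitoring
        # Unsatisfactory if the state drifts too far from the plan
        return abs(state - self.plan["setpoint"]) > 2.0

    def intervene(self, controller, state):           # 4. intervening
        controller.setpoint = self.plan["setpoint"]   # trim the set point
        self.experience.append(state)

    def learn(self):                                  # 5. learning
        # Placeholder rule: bias future plans by how often we intervened
        if self.experience:
            self.plan["setpoint"] -= 0.1 * len(self.experience)

class Controller:
    """Automatic inner loop: closes the control loop without the human."""
    def __init__(self):
        self.setpoint = 0.0
        self.state = 10.0
    def step(self):
        self.state += 0.5 * (self.setpoint - self.state)  # proportional control

sup, ctl = Supervisor(setpoint=5.0), Controller()
sup.instruct(ctl)
for _ in range(20):              # computer controls; human supervises
    ctl.step()
    if sup.monitor(ctl.state):
        sup.intervene(ctl, ctl.state)
sup.learn()
```

The structural point matches the text: the human appears only intermittently, at the planning, instructing, monitoring, intervening, and learning steps, while the computer closes the loop continuously.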

70 QUANTITATIVE MODELING OF HUMAN PERFORMANCE

A supervisory control system is one in which there is little or no overt human activity for considerable periods. The tasks that a human must carry out are initiated by plant states or by operators receiving goals from a higher authority such as management. Each special function requires a model, and a model of the supervisory controller would describe the interaction of human and computer as a function of plant state.

Several models of limited scope may be relevant to supervisory control. Moray (1986) reviews some 10 models of monitoring. There are many models of decision making and several models of fault detection for a variety of different tasks (Rouse, 1983; Moray, 1986; Wohl, 1982). Intervention to trim the system set point could probably be modeled by a conventional expert system. However, none of these models supervisory control itself. If other models of planning, monitoring, and fault management existed, they might be used to predict behavior in supervisory control, provided that the state of displayed information was also known. For example, if the operator had recently looked at variables (Y1, Y2, ..., Yi, ..., Yn) and those values were known, a model might suggest that the operator would recognize that the system was in state Si, and that, by using a production system, one might predict plan or action Pj from Si and a knowledge of the operator's goals.

A unique feature of supervisory control is the passing of control backward and forward between operator and computer (see, for example, Sheridan and Verplank, 1978). There is an ability of the system to make judgments on the basis of its knowledge and to enter into a dialogue with the human operator.
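The production-system idea above can be made concrete with a hedged sketch: infer a discrete state Si from recently observed variables (Y1, ..., Yn), then fire a rule mapping (state, goal) to a predicted plan Pj. The variable names, states, thresholds, and rules below are invented for illustration only.

```python
def classify_state(observed):
    """Infer a discrete system state Si from observed variable values.

    Thresholds and state labels are hypothetical.
    """
    if observed["Y1"] > 100.0:
        return "S1_overpressure"
    if observed["Y2"] < 0.2:
        return "S2_low_flow"
    return "S0_normal"

# Production rules: (state, operator goal) -> predicted plan/action Pj
RULES = {
    ("S1_overpressure", "keep_safe"): "P1_open_relief_valve",
    ("S2_low_flow", "keep_safe"): "P2_start_backup_pump",
    ("S0_normal", "keep_safe"): "P0_continue_monitoring",
}

def predict_plan(observed, goal):
    """Predict the operator's plan from observed variables and a goal."""
    return RULES[(classify_state(observed), goal)]

plan = predict_plan({"Y1": 120.0, "Y2": 0.8}, "keep_safe")
```

Note the precondition stated in the text: the prediction is only possible because the model knows which variables the operator has recently observed and what the operator's goal is.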
There have been a few attempts to provide solutions for the allocation problem, such as those of Sheridan (1970) and Moray, Sanderson, Sluff, Jackson, Kennedy, and Ting (1982), but these are algorithms (comprehensive procedures for obtaining a desired result) rather than models of human performance. The situation is reminiscent of Simon's (1981) attitude toward human behavior:

A man, viewed as a behaving system, is quite simple. The apparent complexity of his behavior over time is largely a reflection of the complexity of the environment in which he finds himself.... We can often predict behavior from knowledge of the system's goals and its outer environment, with only minimal assumptions about the inner environment.

In this regard, Sanderson (1985) notes that

It is obvious that the sort of goals being pursued in basic cognitive research and those being pursued in applied cognitive engineering are very different.... The questions being posed in basic research are often conceptually sweeping and are ideally task-free.... The concern is that ... fundamental principles ... emerge.... In an applied cognitive setting

however, the task takes precedence.... When trying to understand, say, how an expert does a task, a great deal of the researchers' time and effort goes into understanding the task itself. The model of human behavior which emerges has more reference to the task than is normal, or even considered proper, for basic research.

This may be a good point of departure for developing a model of supervisory control. Because the human plays a quantitatively slight but qualitatively important role in such systems, a model of the machine is as important as one of the human. Of the existing models, PROCRU is a good start because the expert system portion of it allows planning, reasoning, and procedure choice to be modeled.

Summary

Supervisory control is an emergent class of systems wherein humans supervise computers and computers perform the direct control. It poses new demands for integrated human performance modeling, inherently demanding component models of high-level activities such as planning, teaching, monitoring, failure detection/intervention, and learning. It also poses a new perspective with respect to dependence on both the task and the initiative of the human operator.