
Quantitative Modeling of Human Performance in Complex, Dynamic Systems (1990)

Chapter 2: Approaches to Human Performance Modeling

Suggested Citation:"2. Approaches to Human Performance Modeling." National Research Council. 1990. Quantitative Modeling of Human Performance in Complex, Dynamic Systems. Washington, DC: The National Academies Press. doi: 10.17226/1490.
×
Page 16
Suggested Citation:"2. Approaches to Human Performance Modeling." National Research Council. 1990. Quantitative Modeling of Human Performance in Complex, Dynamic Systems. Washington, DC: The National Academies Press. doi: 10.17226/1490.
×
Page 17
Suggested Citation:"2. Approaches to Human Performance Modeling." National Research Council. 1990. Quantitative Modeling of Human Performance in Complex, Dynamic Systems. Washington, DC: The National Academies Press. doi: 10.17226/1490.
×
Page 18
Suggested Citation:"2. Approaches to Human Performance Modeling." National Research Council. 1990. Quantitative Modeling of Human Performance in Complex, Dynamic Systems. Washington, DC: The National Academies Press. doi: 10.17226/1490.
×
Page 19
Suggested Citation:"2. Approaches to Human Performance Modeling." National Research Council. 1990. Quantitative Modeling of Human Performance in Complex, Dynamic Systems. Washington, DC: The National Academies Press. doi: 10.17226/1490.
×
Page 20
Suggested Citation:"2. Approaches to Human Performance Modeling." National Research Council. 1990. Quantitative Modeling of Human Performance in Complex, Dynamic Systems. Washington, DC: The National Academies Press. doi: 10.17226/1490.
×
Page 21
Suggested Citation:"2. Approaches to Human Performance Modeling." National Research Council. 1990. Quantitative Modeling of Human Performance in Complex, Dynamic Systems. Washington, DC: The National Academies Press. doi: 10.17226/1490.
×
Page 22
Suggested Citation:"2. Approaches to Human Performance Modeling." National Research Council. 1990. Quantitative Modeling of Human Performance in Complex, Dynamic Systems. Washington, DC: The National Academies Press. doi: 10.17226/1490.
×
Page 23
Suggested Citation:"2. Approaches to Human Performance Modeling." National Research Council. 1990. Quantitative Modeling of Human Performance in Complex, Dynamic Systems. Washington, DC: The National Academies Press. doi: 10.17226/1490.
×
Page 24
Suggested Citation:"2. Approaches to Human Performance Modeling." National Research Council. 1990. Quantitative Modeling of Human Performance in Complex, Dynamic Systems. Washington, DC: The National Academies Press. doi: 10.17226/1490.
×
Page 25
Suggested Citation:"2. Approaches to Human Performance Modeling." National Research Council. 1990. Quantitative Modeling of Human Performance in Complex, Dynamic Systems. Washington, DC: The National Academies Press. doi: 10.17226/1490.
×
Page 26
Suggested Citation:"2. Approaches to Human Performance Modeling." National Research Council. 1990. Quantitative Modeling of Human Performance in Complex, Dynamic Systems. Washington, DC: The National Academies Press. doi: 10.17226/1490.
×
Page 27
Suggested Citation:"2. Approaches to Human Performance Modeling." National Research Council. 1990. Quantitative Modeling of Human Performance in Complex, Dynamic Systems. Washington, DC: The National Academies Press. doi: 10.17226/1490.
×
Page 28
Suggested Citation:"2. Approaches to Human Performance Modeling." National Research Council. 1990. Quantitative Modeling of Human Performance in Complex, Dynamic Systems. Washington, DC: The National Academies Press. doi: 10.17226/1490.
×
Page 29
Suggested Citation:"2. Approaches to Human Performance Modeling." National Research Council. 1990. Quantitative Modeling of Human Performance in Complex, Dynamic Systems. Washington, DC: The National Academies Press. doi: 10.17226/1490.
×
Page 30
Suggested Citation:"2. Approaches to Human Performance Modeling." National Research Council. 1990. Quantitative Modeling of Human Performance in Complex, Dynamic Systems. Washington, DC: The National Academies Press. doi: 10.17226/1490.
×
Page 31
Suggested Citation:"2. Approaches to Human Performance Modeling." National Research Council. 1990. Quantitative Modeling of Human Performance in Complex, Dynamic Systems. Washington, DC: The National Academies Press. doi: 10.17226/1490.
×
Page 32
Suggested Citation:"2. Approaches to Human Performance Modeling." National Research Council. 1990. Quantitative Modeling of Human Performance in Complex, Dynamic Systems. Washington, DC: The National Academies Press. doi: 10.17226/1490.
×
Page 33
Suggested Citation:"2. Approaches to Human Performance Modeling." National Research Council. 1990. Quantitative Modeling of Human Performance in Complex, Dynamic Systems. Washington, DC: The National Academies Press. doi: 10.17226/1490.
×
Page 34
Suggested Citation:"2. Approaches to Human Performance Modeling." National Research Council. 1990. Quantitative Modeling of Human Performance in Complex, Dynamic Systems. Washington, DC: The National Academies Press. doi: 10.17226/1490.
×
Page 35
Suggested Citation:"2. Approaches to Human Performance Modeling." National Research Council. 1990. Quantitative Modeling of Human Performance in Complex, Dynamic Systems. Washington, DC: The National Academies Press. doi: 10.17226/1490.
×
Page 36
Suggested Citation:"2. Approaches to Human Performance Modeling." National Research Council. 1990. Quantitative Modeling of Human Performance in Complex, Dynamic Systems. Washington, DC: The National Academies Press. doi: 10.17226/1490.
×
Page 37
Suggested Citation:"2. Approaches to Human Performance Modeling." National Research Council. 1990. Quantitative Modeling of Human Performance in Complex, Dynamic Systems. Washington, DC: The National Academies Press. doi: 10.17226/1490.
×
Page 38
Suggested Citation:"2. Approaches to Human Performance Modeling." National Research Council. 1990. Quantitative Modeling of Human Performance in Complex, Dynamic Systems. Washington, DC: The National Academies Press. doi: 10.17226/1490.
×
Page 39
Suggested Citation:"2. Approaches to Human Performance Modeling." National Research Council. 1990. Quantitative Modeling of Human Performance in Complex, Dynamic Systems. Washington, DC: The National Academies Press. doi: 10.17226/1490.
×
Page 40
Suggested Citation:"2. Approaches to Human Performance Modeling." National Research Council. 1990. Quantitative Modeling of Human Performance in Complex, Dynamic Systems. Washington, DC: The National Academies Press. doi: 10.17226/1490.
×
Page 41
Suggested Citation:"2. Approaches to Human Performance Modeling." National Research Council. 1990. Quantitative Modeling of Human Performance in Complex, Dynamic Systems. Washington, DC: The National Academies Press. doi: 10.17226/1490.
×
Page 42
Suggested Citation:"2. Approaches to Human Performance Modeling." National Research Council. 1990. Quantitative Modeling of Human Performance in Complex, Dynamic Systems. Washington, DC: The National Academies Press. doi: 10.17226/1490.
×
Page 43
Suggested Citation:"2. Approaches to Human Performance Modeling." National Research Council. 1990. Quantitative Modeling of Human Performance in Complex, Dynamic Systems. Washington, DC: The National Academies Press. doi: 10.17226/1490.
×
Page 44
Suggested Citation:"2. Approaches to Human Performance Modeling." National Research Council. 1990. Quantitative Modeling of Human Performance in Complex, Dynamic Systems. Washington, DC: The National Academies Press. doi: 10.17226/1490.
×
Page 45
Suggested Citation:"2. Approaches to Human Performance Modeling." National Research Council. 1990. Quantitative Modeling of Human Performance in Complex, Dynamic Systems. Washington, DC: The National Academies Press. doi: 10.17226/1490.
×
Page 46
Suggested Citation:"2. Approaches to Human Performance Modeling." National Research Council. 1990. Quantitative Modeling of Human Performance in Complex, Dynamic Systems. Washington, DC: The National Academies Press. doi: 10.17226/1490.
×
Page 47
Suggested Citation:"2. Approaches to Human Performance Modeling." National Research Council. 1990. Quantitative Modeling of Human Performance in Complex, Dynamic Systems. Washington, DC: The National Academies Press. doi: 10.17226/1490.
×
Page 48
Suggested Citation:"2. Approaches to Human Performance Modeling." National Research Council. 1990. Quantitative Modeling of Human Performance in Complex, Dynamic Systems. Washington, DC: The National Academies Press. doi: 10.17226/1490.
×
Page 49
Suggested Citation:"2. Approaches to Human Performance Modeling." National Research Council. 1990. Quantitative Modeling of Human Performance in Complex, Dynamic Systems. Washington, DC: The National Academies Press. doi: 10.17226/1490.
×
Page 50
Suggested Citation:"2. Approaches to Human Performance Modeling." National Research Council. 1990. Quantitative Modeling of Human Performance in Complex, Dynamic Systems. Washington, DC: The National Academies Press. doi: 10.17226/1490.
×
Page 51

Below is the uncorrected machine-read text of this chapter, intended to provide our own search engines and external engines with highly rich, chapter-representative searchable text of each book. Because it is UNCORRECTED material, please consider the following text as a useful but insufficient proxy for the authoritative book pages.

Approaches to Human Performance Modeling

MODELS OF LIMITED SCOPE

The primary concern of this report is models that describe and predict the complex behavior of humans as components of human-machine systems. However, there are a number of models that represent aspects of human information processing in more limited domains. Generally, these models are the products of laboratory research on very specific human tasks, developed to model human information processing rather than human-machine interaction. They therefore tend to ignore aspects of the environment or task that would modify the model's predictions. For example, models of human reaction time typically predict response time primarily as a function of the number of possible signals or their relative probability, and give secondary consideration to physical factors such as how far apart the response keys are, whether eye movements are needed to monitor signal occurrence, or anatomical dimensions of the operator that might affect performance.

Several of the human information processing models have been adapted from engineering models to represent human behavior. Typically, they are based upon a single theory or technique. Such models invoke information theory, the theory of signal detection, sequential decision theory, theories of reaction speed and accuracy, sampling theory, psychophysical scaling theory, and fuzzy set theory. Many are described in Sheridan and Ferrell (1974).

All of the models mentioned above have been used successfully to account for human performance at some time in a laboratory setting. For example, it has been well established that the reaction time of an observer to one of several possible signals is related to the uncertainty of the signals in the way predicted by information theory. This is a highly replicable result (Garner, 1962).
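As a rough illustration of how such a limited-scope model yields quantitative predictions, the sketch below evaluates the standard Hick-Hyman relation between stimulus uncertainty and choice reaction time. It is not taken from the report; the intercept and slope values are placeholders that would normally be fit to data.

```python
import math

def choice_reaction_time(probs, a=0.20, b=0.15):
    """Hick-Hyman prediction RT = a + b * H, where H is the stimulus
    information in bits.  The intercept a (seconds) and slope b (seconds per
    bit) are illustrative placeholders, not values from this report."""
    h = -sum(p * math.log2(p) for p in probs if p > 0)
    return a + b * h

# Four equally likely signals carry log2(4) = 2 bits of uncertainty.
print(choice_reaction_time([0.25] * 4))  # 0.5 s with the placeholder constants
```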

Similarly, the frequency with which observers monitor instruments and the duration of their fixations when they look at an instrument have been predicted by information theory (Senders, 1983). Discrete movements are well described by Fitts' law (Fitts, 1954). Single-axis closed-loop tracking is adequately modeled by the crossover model (McRuer and Krendel, 1957). There are also many models of short-term memory (see, for example, Norman, 1970). Yet, however successfully it is validated in a laboratory setting, each models only a small part of human information processing, and the interaction among models of limited scope cannot be specified, nor can the overall behavior of the human be predicted.

An example of the strengths and limitations of a typical model of limited scope can be found in the application of information theory to predictions of pilot workload and the design of instrument displays. Senders and others (Senders, 1964; Senders, Elkind, Grignetti, and Smallwood, 1964) applied the Nyquist sampling theorem and Shannon's information theory to predict the frequency, duration, and pattern of eye movements when an observer monitored a group of instruments. The observer's task was to report any excursions of instrument pointers to extreme values. The instruments used were driven by band-limited, zero-mean Gaussian white noise forcing functions, with bandwidth differing from one instrument to another. The sampling theorem describes the necessary and sufficient sampling strategy to ensure that all information is extracted from the display, and Senders successfully used it to predict the observer's visual sampling behavior. Senders was also able to predict the relative duration of fixations and the pattern of transitions among instruments.

Clement, Jex, and Graham (1968) applied the model to predict workload in the cockpit of a real aircraft and to predict the optimal layout of instruments. In doing so, they were forced to make a number of arbitrary corrections to the model. In particular, they had to assume, on the basis of empirical evidence, that the sampling rate was considerably higher than that predicted by the sampling theorem. Although they gave no theoretical justification for the values they chose, their predictions of the instrument layout matched the actual cockpit design that evolved for the particular aircraft they studied.

As Senders (1983) himself has pointed out, a number of crucial assumptions were made in the model. Operators were all highly practiced. Forcing functions were statistically stationary. The instruments were all of equal importance and had no intrinsic meaning or interest to the operator, who was not required to reset the instruments or exert any control actions when extreme values were observed. The model is in no sense a general model of human performance or human information processing. Also, insofar as the operators are in situations where costs and payoffs are important, where the conscientious exercise of strategies is important, where emergencies make one instrument more important than another, where monitoring is shared among several operators, or where so many displays must be monitored that there is insufficient time for eye movements and short-term memory becomes important, the model becomes increasingly poor at predicting behavior.
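The bandwidth-driven core of Senders' calculation can be sketched as follows. This is a minimal illustration, assuming fixation frequency equal to the Nyquist rate (twice the signal bandwidth) and analyst-supplied dwell times; it omits the corrections that applications such as Clement, Jex, and Graham (1968) found necessary.

```python
def visual_demand(instruments):
    """Nyquist-based estimate of visual sampling demand.

    instruments: list of (bandwidth_hz, mean_dwell_s) pairs.  Each instrument
    is assumed to be sampled at its Nyquist rate (2 * bandwidth); the dwell
    times are illustrative inputs, not values from the report.
    """
    demand = 0.0
    for bandwidth_hz, mean_dwell_s in instruments:
        fixations_per_s = 2.0 * bandwidth_hz       # required sampling rate
        demand += fixations_per_s * mean_dwell_s   # fraction of time on this dial
    return demand                                  # values near 1.0 imply overload

# Three dials of differing bandwidth, each needing about 0.4 s per look.
print(visual_demand([(0.1, 0.4), (0.25, 0.4), (0.5, 0.4)]))  # 0.68
```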

The most important requirement for applying a model of limited scope is knowledge of the boundary conditions within which the model may be applied. Outside those boundary conditions, other models may be preferred or empirical parameter values must be determined. It is because of such limits that an overall model of human performance, which expressly includes a variety of causal factors, is to be preferred for human-machine system design and assessment. In applied settings, particularly in system design, only a small subset of human behavior can be predicted by a model of limited scope. If other aspects of information processing affect the output of the model in uncertain ways, and if the properties of the environment (such as the spacing of displays or the force required to activate a control) are not represented in the model, it becomes apparent that more elaborate models are required. Those described here must have as their goal to model the performance of a human-machine system as a whole, rather than modeling the behavior of the human alone or understanding the psychological mechanisms by which behavior is mediated.

LARGER, OR INTEGRATIVE, APPROACHES

The remainder of this chapter provides examples of comprehensive, or macro, models. These examples were selected to illustrate the variety of possible approaches to the development of global, overall human performance models (HPMs) and to provide the foundation for subsequent discussion of the current issues and research needs in the field of human performance modeling.

Although all of the modeling approaches discussed here may be employed to model the same general class of problems, they differ in a number of important ways. These differences arise largely from the differing origins of the approaches, both disciplinary and institutional, and from the fact that, in most instances, model development was driven by a particular class of person-machine problems.

Models of limited scope, aimed at the analysis of single-task subsets of the comprehensive problem, serve as a resource for each of these macromodels. The approach is basically eclectic, drawing on various disciplines for theories and techniques.

Four general approaches to macromodeling are described: (1) information processing, (2) control theory, (3) task network, and (4) knowledge-based.

The assumptions of the information processing approach are based on psychological theories of human information processing and the belief that observed or predicted human performance can be explained (i.e., modeled) by the aggregation of the (micro) internal processes required to execute a series of procedures that define the task. A task consists of a set of subtasks, and each subtask can be modeled. All of these models are then employed to explain the overall task behavior. Because the Human Operator Simulator (HOS), the exemplar of the information processing approach, also has a strong systems orientation and includes a system model, it predicts total (closed-loop)[1] performance, which is unusual for psychologically based models.

Control theory models come from an engineering discipline and are principally oriented toward continuous-time descriptions, optimization of closed-loop person-machine performance, and process representations of human performance at a macrotask level (such as state estimation or manual control).

The task network approach, which emerged from operations research, is oriented primarily toward the sequencing of large numbers of discrete tasks arranged in an appropriate network so as to achieve a particular goal; models based on this approach focus on the time required to complete individual and total tasks and the error probabilities associated with performing these tasks.

The knowledge-based approach has roots in cognitive psychology and in computer science/artificial intelligence. The field of cognitive science, which represents the intersection of these disciplines, has as its goal the development of formal representations of human cognitive processes, such as decision making, problem solving, planning, understanding, or reasoning. Sometimes these representations are algorithms; more often, they are expressed in the form of simulations of the processes believed to be undertaken by the human. The tools of the artificial intelligence specialist, such as object-oriented programming, are beginning to be used to implement these simulations and are being applied to the modeling of person-machine systems. They provide the basis for very flexible models that can be tapped easily to produce performance metrics or augmented with computer graphics summary outputs.

[1] A closed-loop system is one in which the output controls or regulates the input. An open-loop system, on the other hand, is one in which there is no feedback control.

Information Processing

Background

A plethora of models of limited scope exists to describe the information-processing abilities of the human operator. Many are described in Boff, Kaufman, and Thomas (1986). Classical information theory describes the relation of signal probability to reaction time (Hick, 1952). Signal detection theory accounts for the relative effects of signal strength and the observer's response bias in the detection of sensory information (Green and Swets, 1966). Quantitative models have been proposed both for short-term memory and for the retrieval of information from long-term memory (Norman, 1970). Models exist for both discrete movements (Fitts, 1954) and continuous tracking (McRuer and Krendel, 1957). In fact, for almost every block in the typical flowchart proposed for human information processing, several models can be found in the literature.

It would therefore seem attractive to create a global, comprehensive model by aggregating a group of models of limited scope so that all aspects of information processing are included. By incorporating anatomical and physiological models as required, it should even be possible to account for such factors as the time required to move about the environment, position the body, reach, and grasp an object such as a control.

Exemplar

The Human Operator Simulator is a computer system, a collection of programs for simulating a user-machine system performing a complex mission. Illustrated in Figure 2-1, HOS simulates the total system: the hardware and software of interest, various "external" systems (friendly, hostile, or neutral), and the behavior of humans operating within the system. It provides a general "shell," a user-oriented Human Operator PROCedure (HOPROC) language, and a resident Human Operator Model (HOM). A model for a particular system is instantiated when the user specifies, via HOPROC, the equipment characteristics and the procedures to be followed by the operator.

HOPROC is an English-like language that can be used to define hardware, software, and human processes or actions at any desired level of generality or specificity. Fortran-like statements can be incorporated in the language, which is useful for describing, where necessary, the dynamic equations of motion of simulated hardware systems and information that can be mentally calculated by humans based upon other knowledge available to them. Human tasks and actions need only be defined and described in HOPROC at a level that might be found in a typical operator's manual.

[FIGURE 2-1: Structure of the Human Operator Simulator (HOS) showing inputs, outputs, and major subsections.]

With HOS, all human responsibilities, functions, and tasks, and all hardware and software processes, regardless of their complexity, are referred to as procedures. The operator's procedures represent an important part of long-term memory for the simulated operator, who is assumed to be fully trained in using those procedures. The locations and types of the operator's displays and controls are also entered by the user and assumed to be in the operator's long-term memory store.

The passage of time during a HOS simulated mission is primarily dependent on time changes determined by submodels of the HOM that is part of the HOS structure. Execution of human actions necessitating human-machine interaction (i.e., the transfer of information from displays or to controls) causes the simulation of the systems to advance to the point in time where the transfer would occur.

What makes HOS more than a simulation language is its resident general-purpose HOM. This contains and controls a highly integrated set of information processing submodels, each with its own set of algorithms and rules of operation.

The rationale underlying development of the HOM process submodels is that, although thousands of different operator tasks exist, they require only a limited number of different microactions, such as reaching for and manipulating control devices, recalling information from short-term memory, looking at displays, and absorbing information from them. Each microaction requires some amount of time to perform. Other things being equal, similar microactions in different tasks should require similar times of any given operator.

[FIGURE 2-2: Major submodels and knowledge lists in the Human Operator Simulator (HOS).]

Thus, efficient and internally consistent predictions of human task performance should be derivable from a HOM organized around a mutually exclusive and exhaustive set of microactions. Furthermore, performance times for each microaction should be predictable by (1) evaluating the physical difficulty of microactions (e.g., the extent of required reorientation/movement of anatomy parts) and (2) knowing, for each type of microaction, the level of skill of the particular operator being simulated.
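In rough terms, this bottom-up aggregation of microaction times can be sketched as follows. The time charges and the skill factor below are hypothetical placeholders; the actual HOS constants come from reanalyses of the experimental literature and are not listed in this chapter.

```python
# Illustrative microaction time charges in seconds (placeholders, not HOS values).
MICROACTION_TIME = {
    "look_at": 0.30,
    "absorb": 0.45,
    "recall": 0.25,
    "reach": 0.60,
    "grasp": 0.20,
}

def task_time(microactions, skill_factor=1.0):
    """Build a task completion time bottom-up from its microactions: each
    microaction carries a time charge, scaled here by a hypothetical skill
    factor for the simulated operator."""
    return skill_factor * sum(MICROACTION_TIME[m] for m in microactions)

# "Read the altitude and set the flap lever" decomposed into microactions.
print(task_time(["look_at", "absorb", "reach", "grasp"], skill_factor=1.1))
```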

The major HOM process submodels in HOS are shown in Figure 2-2 and discussed briefly below. More detailed descriptions of HOM submodels and HOS can be found in Wherry (1976), Lane, Strieb, Glenn, and Wherry (1981), Meister (1985), and Harris, Glenn, Iavecchia, and Zaklad (1986).

· Long-term memory retrieval: Learned procedures and the types and locations of display and control devices are assumed to be resident in the simulated operator's long-term memory.

· Attention and recall of current task responsibilities: The HOM assumes operators can work on, or attend to, only one active procedure at a time, although rapid changes in attention among active tasks are permitted. The attention submodel, when accessed, computes a figure of merit (FOM) for each active procedure and selects the one with the highest FOM to attend to (a toy version of this selection logic is sketched below).

· Statement processing: Compiled HOPROC statements are treated as goals. The statement processing submodel uses its rules and algorithms to determine the next microaction to invoke in its attempt to satisfy the overall goal of the statement.

· Information estimation: This submodel contains strategies for estimating required information. Depending on the current situation and type of information needed, it may invoke short-term memory recall, information absorption, or information calculation to obtain needed estimates. Successful estimation by any of the three methods results in a short-term memory trace for that specific information.

· Short-term memory recall: Probability of recall for a previously estimated value or state is computed by this submodel, based on the strength of the trace when last estimated, the time elapsed since the last estimation, and the capability of the simulated operator for this process. This submodel is also used to determine the need for physical manipulation of controls or displays and hence the need to take account of movement time.

· Information absorption: This HOM process corresponds to the perception of information from external sources such as displays and controls. Anatomy movement submodels for various sense modalities are used, when required, to model touching or visual fixation prior to the actual absorption of displayed information. The time required to absorb information is determined by the nature of the information source and may require several sampling instances to build up sufficient evidence.

· Information calculation: When information cannot be directly absorbed from external sources or accurately recalled, it may be calculated by using HOPROC-written calculation equations. Users must supply the model with the times required to perform these calculations.

· Anatomy movement: This submodel determines the part(s) of the anatomy that must move in order to access a display or control, and whether the desired anatomy part is currently busy. If busy, the submodel may decide to use an alternative method (e.g., swap hands) and determine the appropriate time charges for the movement. For example, the time to perform the procedure LOOK AT is a function of the required angular changes for the head and eyes.

· Decision making: Users can incorporate decision rules into procedures using the HOPROC format "IF (assertion) THEN (consequences)." Assertions can be simple (e.g., ALTITUDE IS LESS THAN 1,000 FEET) or highly complex (i.e., by using logical ANDs, ORs, and NOTs). A decision-making time charge is levied for evaluating each assertion following the IF until the assertion is judged to be TRUE or FALSE. When an assertion is judged to be true, satisfaction of the goal(s) for the consequences following the THEN will be attempted. If assertions are judged to be false, the stated consequences will be ignored.

· Accessing relevant portions of procedures: Complex operator and hardware procedures often have multiple pathways to successful completion. The HOPROC language contains a function that makes it possible to bypass portions of procedures that have become irrelevant to the current situation.

Constants in equations for HOM microaction times are based on reviews and reanalyses of hundreds of research studies found in the open psychological literature and from experiments conducted by HOS development teams. Although HOS is typically run by using default constants representing an average operator, users can manipulate the time equations to determine whether system performance would be dramatically altered by operators having more or less than average skill for completing various microactions. There are parameters or equations, such as those needed to define criticalities for the attention model, that are system, mission, or task specific and must be supplied by the user.

The HOS system provides a number of outputs of use to system designers and analysts. The starting and ending times for all actions and events occurring during a simulated mission are recorded by HOS. Levels of detail for logged events vary from macroevents (e.g., deciding to work on a particular task) to microactions such as orienting the head and eyes to a particular display. A data analysis package that is part of the HOS system yields standard statistical human factors analyses and descriptions of logged events (e.g., time lines, link analyses). Analyses of human and system performance at various levels of aggregation can also be constructed, and descriptive statistics are available for the times of all tasks to be simulated. Because the HOS system simulates a total system, it also produces an expected mission time line as an output rather than requiring it as an input. Existing versions of HOS require mainframe computers, but a microcomputer version is under development (Harris, Iavecchia, Ross, and Schaffer, 1987).
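Two of the submodels listed above, attention and short-term memory recall, can be caricatured as follows. The exponential decay, the criticality weighting, and all constants are stand-ins chosen for illustration; the actual HOS equations are not reproduced in this chapter.

```python
import math

def recall_probability(trace_strength, elapsed_s, operator_capability=1.0,
                       decay_rate=0.05):
    """Hypothetical short-term-memory recall curve: the probability of recall
    decays with time since the value was last estimated.  HOS bases recall on
    trace strength, elapsed time, and operator capability; the exponential
    form and constants here are placeholders."""
    return min(1.0, trace_strength * operator_capability *
               math.exp(-decay_rate * elapsed_s))

def attend_to(active_procedures, figure_of_merit):
    """Attention submodel in miniature: work on the active procedure with the
    highest figure of merit (FOM).  The FOM function is task specific."""
    return max(active_procedures, key=figure_of_merit)

procs = [{"name": "scan_radar", "criticality": 0.9, "time_waiting_s": 12.0},
         {"name": "log_contact", "criticality": 0.4, "time_waiting_s": 30.0}]
# A made-up FOM: criticality weighted by how long the procedure has waited.
print(attend_to(procs, lambda p: p["criticality"] * p["time_waiting_s"])["name"])
# log_contact is selected under this weighting (0.4 * 30 > 0.9 * 12).
```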

Strengths

A major strength of HOS is that it is a complete system and was conceived as such. Much care and effort went into those aspects of HOS that make it both general and relatively easy to use. Thus, the HOPROC language is capable of describing both operator procedures and other constituent portions of the system in an English-like language. A resident, general model of the human operator (HOM) frees the user from developing the operator model, except for specifying procedures and necessary parameters for the HOM. The HOS also includes a package of programs called the "Human Operator Data Analyzer/Collator" (HODAC) for analyzing the human operator data generated by a simulation. Finally, user and programmer manuals exist for each version of HOS.

As a human-machine simulation, HOS can produce data similar to that produced in person-in-the-loop simulations. Thus, its basic output is a time history of the simulation, including significant events as well as human operator actions. These simulation histories can be analyzed to evaluate performance as a function of operating procedures or other system variables. In addition, operator loading, down to individual body parts, can be examined.

A significant advantage of HOS lies in the manner in which task times for the resident HOM are determined. Unlike HPMs that require completion times to be input by an analyst or determined by sampling time distributions provided by the analyst, the time to perform tasks is built up from the times determined from execution of HOS's human performance submodels. This reduces the data input requirement for HOS. Furthermore, at least in theory, it allows HOS to be used to predict completion times for new tasks involving combinations of micromodel activities, rather than requiring that they be estimated or determined empirically. It also guards against invalid conclusions about higher-level system functions that may be drawn from simulations that fail to adequately consider detailed human-machine interactions that must occur in the real system.

Various aspects of HOS have been tested in a series of studies of increasing complexity (Strieb, 1975; Glenn, 1982). These investigations demonstrated several important attributes of HOS:

· The HOPROC language is flexible and robust with respect to modeling operator and equipment procedures, and the sequence of actions generated as a result of these procedures is reasonable.

· The resident micromodels in the HOM reproduce baseline experimental task data from which they were derived with sufficient accuracy to ensure that micromodel interactions do not introduce unanticipated artifacts. Model simulations of additional, carefully controlled, human performance experiments are of sufficient accuracy to continue with application of HOS to more complex situations.

· The HOS simulates full-scale, complex systems, as demonstrated by simulations of operators in Navy and NASA aircraft applications: the Air Tactical Officer in a LAMPS helicopter, sensor station operators 1 and 3 on board three different versions of P-3C ASW (anti-submarine warfare) patrol aircraft, a pilot in a NASA Terminal Configured Vehicle (TCV), and the Tactical Officer (TACCO) on board a P-3C during antisubmarine warfare (ASW) missions.

These studies (see Chapter 3) demonstrate that HOS can identify actual system/operator problems and provide a user with insights that can lead to solutions. The HOS is particularly sensitive to the types and layout of displays and controls in a simulated operator's workstation, as well as to the number and type of multiple-task responsibilities allocated to the simulated operator. Thus, HOS appears to be useful for uncovering problems in control/display design, workstation layout, and task allocations, as well as evaluating ways of improving operator and overall system performance through changes in them.

Although almost every HPM dealing with the prediction of operator performance times assumes either additivity of component activities or a model of the ways in which activities interact, the aggregation of times in HOS concerns microlevel events not represented in other HPMs. A rationale, theoretical basis, and methodology for identifying microprocesses whose times can be aggregated has recently been described (Wherry, 1985).

Caveats

The HOS currently contains no simple way of specifying an operator's mental or internal model for controlling rapidly changing, multidimensional, complex systems. Acquired through experience and practice, such internal models permit operators to determine needed amounts of control device changes and to anticipate system responses without verifying all of them from displayed information. Any internal model can be represented in HOS by using HOPROC-written information calculation equations, but this does not solve the problems of deciding what constitute appropriate equations or how much mental calculation time should be charged when they are invoked.

Although micromodel outputs have been tested against data, and the results of part-task HOS simulations have been compared with experimental results, there has been little quantitative comparison of experimental data and HOS simulation results for complex systems. Further data are needed before the extent of HOS's ability to make statistically valid quantitative predictions of human performance in complex tasks can be evaluated.

The simulation level of HOS includes each human-machine interaction; however, there is no interactivity of components. The HOS simulates human performance at a level that may be inappropriate to those interested only in higher-level system functions. Such detail is required for evaluation of control/display design and layout.

If HOS is to be used for simulating complex systems during early design, when valid simulation data would be most helpful to the design team, the system parameters must be developed. These HOS inputs would include types and locations of displays and controls, written procedures for how the simulated operator will use displays and controls, and procedures for the hardware/software subsystems. For a new, complex system, this can be difficult and time-consuming. Experience with HOS indicates the need for an input development team composed of subsystem engineers and human factors specialists who can rapidly bring their expertise to bear on the decisions to be made. Subsequent modifications of initial inputs can usually be made rapidly to test the impact of suggested changes on any portion of system design, and HOS can have its greatest impact on system design when used in this way.

A goal of HOS development was to minimize the need for users to estimate the means and variances of the hundreds of task times that might be required by a task network approach model. However, for HOS to determine the microprocesses to be invoked, detailed descriptions of each operator task to be simulated, estimates of the criticality of these tasks, and specification of the types and locations of displays and controls are required. Users of HOS, like users of knowledge-based models, find themselves more involved with problems of describing operator protocols and less involved with predicting task times.

It must be recognized that the quality of a model's predictions always depends upon its basic premises. The HOS should produce useful results when three requirements are met:

1. users have adequately described operator procedures and any necessary internal models;
2. the resident HOM
   · attends to appropriate operator procedures at the right times,
   · contains and invokes appropriate microprocesses, and
   · calculates valid microprocess times; and
3. the microprocess times are additive.

Meeting these premises limits generalizability to other situations; however, the robustness of the model when one or more of the above premises is not met is unknown at present.

Control Theory

Background

The modeling of continuous manual control of dynamic systems, such as aircraft or automobiles, has received a great deal of attention in the human performance literature.

Investigations have ranged from modeling human performance in basic and simple tracking tasks to applications of models to complex, multivariable control problems. From the standpoint of human performance modeling, manual control problems have proven to be rich in content and importance, and have provided experimental situations in which extensive measurement of human performance is possible. Thus, they have provided fertile ground for the development of HPMs. Furthermore, the literature reveals that it is a mistake to view the manual control area as one of limited scope which only requires, or is a source of, simple models of human psychomotor performance. On the contrary, manual control models for problems involving several variables and complex, dynamic interactions tend to include submodels for a range of perceptual, cognitive, and motor activities. For example, manual control models include submodels for activities such as instrument scanning, attention sharing, state estimation and prediction, and neuromuscular performance. In addition, techniques used to develop manual control models, as well as some of the models themselves, have been used successfully to model human performance in tasks other than manual control.

The most successful approaches to modeling human manual control performance have drawn on the theory and techniques of control system design, analysis, and evaluation. The resulting class of human performance models is known commonly as control theory models. These models begin with a consideration of the system to be operated and its performance goals, in which the inanimate systems of interest are dynamic in nature and describable by differential or difference equations.

Two central integrating concepts or assumptions underlie control theory models. First, the human operator is viewed as an information-processing or control/decision element in a closed-loop system. This is sometimes referred to as the cybernetic view of the human. In this context, information processing refers to the processes involved in selectively attending to various sensory inputs and using this information, along with the operator's understanding or model of the system, to arrive at an estimate of the current state of the world. Second, in most models based on this approach, it is assumed that trained operators approximate the characteristics and performance of good, or even optimal, inanimate systems performing the same functions, but that their performance, and therefore that of the overall system, is constrained by certain inherent human sensory, cognitive, and response limitations. Control theory models require that these human limitations be described in terms commensurate with other elements of the dynamic system description. This imposes a need for human performance data appropriate to limitations in dynamic processing and response, rather than those appropriate to discrete task completion.

The performance issues of interest in control theory models are associated with overall person-machine performance and tend to relate to such measures as accuracy of control and information processing, system stability and responsiveness, and ability to compensate for disturbances. A major focus is the interaction between system characteristics and human limitations and the consequences that flow therefrom. Thus, these models are intended to help system designers determine whether or not the information provided and the control or handling characteristics of the system are adequate to allow a trained operator to perform the task with a reasonable amount of physical and mental effort.

Without question, the most developed area of control-theoretic modeling is continuous manual control. This field has been dominated by two models, namely, quasilinear describing function models based on frequency-domain techniques (see McRuer and Krendel, 1974, for a review) and the optimal control model (OCM) based on time-domain techniques (see Baron and Levison, 1980, for a review). These two models differ in important respects. One difference is the nature of the submodels that each approach aggregates. Both approaches incorporate submodels for sensory and neuromotor dynamics, but in different ways. Also, the treatment of visual scanning and its impact is quite different in the two approaches. The quasilinear model uses Senders' information theoretic visual sampling model (Senders et al., 1964). The OCM uses an attention sharing model (Levison et al., 1971) oriented toward optimizing control performance. Finally, the OCM incorporates models for state estimation and prediction as well as an explicit representation or model of task requirements. These are basic to the OCM approach but are not generally part of the quasilinear models.

Notwithstanding these important differences, each model has been shown to be capable of describing or predicting human performance in a variety of manual control tasks; however, their predictions have not been compared in the same situations. Both have been extensively validated and applied. They have been used to analyze aircraft and other vehicle control and display problems, to determine the effects of various stressors on performance, to evaluate simulator requirements, and to assist in experimental and simulation planning. Although the results of these applications and tests of the models demonstrate that they can be applied successfully to a large class of manual control problems, each requires further development. The principal areas of manual control modeling needing further work are multivariable control, control of nonlinear systems, control of highly automated and slowly responding systems, and modeling the performance of less than fully trained operators.
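For concreteness, the central empirical result behind the quasilinear describing-function models mentioned above is the crossover law: near the crossover frequency, the combined operator-plus-plant describing function behaves approximately as an integrator with a time delay. The sketch below evaluates that standard form from the manual control literature; the particular values of crossover frequency and effective delay are illustrative, not taken from this report.

```python
import cmath

def crossover_open_loop(omega, omega_c=2.0, tau_e=0.3):
    """Crossover-law describing function Yp*Yc ~ (omega_c / jw) * exp(-j*w*tau_e).
    omega_c (rad/s) and tau_e (s) are illustrative; in the quasilinear
    literature they depend on the plant dynamics and forcing-function bandwidth."""
    jw = 1j * omega
    return (omega_c / jw) * cmath.exp(-jw * tau_e)

y = crossover_open_loop(2.0)   # evaluate at the assumed crossover frequency
print(abs(y))                  # gain is 1.0 at crossover
print(cmath.phase(y))          # about -2.17 rad (-124 deg), i.e., ~56 deg of phase margin
```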

Two distinguishable trends in HPM development using control theory have emerged over the past decade. One trend is the advance from single-variable to multivariable control tasks. The other is a trend from problems concerned mainly with skilled motor performance (i.e., manual control) to those involving a significant degree and variety of additional activities such as monitoring, failure detection, and decision making. These trends may be viewed as a shift from relatively simple manual control problems to complex problems involving higher levels of control that may also include a significant manual control component.

The extension of control-theoretic models to tasks other than manual control has been based largely on the approach and models associated with the optimal control model (OCM). This can be understood in light of the structure of that model as illustrated in Figure 2-3. In this figure the model of the system to be controlled is an integral part of the OCM: it is a person-machine model. The diagram indicates that the OCM of the human operator incorporates submodels for perception, state estimation, and state prediction. These submodels provide an overall model for information processing in a dynamically changing environment that is robust and general. It accounts for human sensory limitations and for selective attention sharing. The information processing model represents the operator's ability to construct from his understanding of the system, and to derive from incomplete and imperfect knowledge of the moment-by-moment state of the system, a set of expectancies concerning the actual system state as needed for control or decision making.

[FIGURE 2-3: Structure of the optimal control model (OCM).]

The OCM structure described above, with the continuous control portion replaced by appropriate decision elements, has been used as a basis for human performance models of failure detection (Gai and Curry, 1976; Wewerinke, 1981), monitoring (Kleinman and Curry, 1977), and decision making (Levison and Tanner, 1971; Pattipati, Ephrath, and Kleinman, 1980).

Recent efforts have been directed at applying control theory approaches to the development of comprehensive models of the type of prime interest here. These models cover a range of operator activities including monitoring, continuous and discrete control, situation assessment, decision making, and communication. The models were developed in a variety of application contexts. Baron (1984) discusses the models and provides the outlines of a general model for supervisory control based on the control theory approach.
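The shared estimation core of these OCM-based models is, in essence, a Kalman filter followed by a predictor. The generic sketch below shows one predict/update cycle with made-up system matrices; it omits the OCM's perceptual time delay, attention-dependent observation noise, and neuromotor dynamics.

```python
import numpy as np

def kalman_step(x_hat, P, y, A, C, Q, R):
    """One discrete-time predict/update cycle of a state estimator of the kind
    used in OCM-style models.  All matrices are generic placeholders, not
    parameters from the report."""
    # Predict the next state and its error covariance from the system model.
    x_pred = A @ x_hat
    P_pred = A @ P @ A.T + Q
    # Correct the prediction with the noisy observation y.
    S = C @ P_pred @ C.T + R
    K = P_pred @ C.T @ np.linalg.inv(S)
    x_new = x_pred + K @ (y - C @ x_pred)
    P_new = (np.eye(len(x_hat)) - K @ C) @ P_pred
    return x_new, P_new

# Example: double-integrator plant sampled at 10 Hz, position observed with noise.
dt = 0.1
A = np.array([[1.0, dt], [0.0, 1.0]])
C = np.array([[1.0, 0.0]])
x_hat, P = np.zeros(2), np.eye(2)
x_hat, P = kalman_step(x_hat, P, np.array([0.05]), A, C, 1e-3 * np.eye(2), np.array([[0.01]]))
print(x_hat)
```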

Exemplar

Of the control-theoretic models developed thus far, the Procedure-Oriented Crew Model (PROCRU) best illustrates how the approach can provide a framework for developing comprehensive models. This model was developed with the goal of providing a tool that would permit systematic investigation of questions concerning the impact of procedural and system design changes on the performance and safety of commercial aircraft operations in the approach-to-landing phase of flight. It is a closed-loop system model incorporating submodels for the aircraft, the approach and landing aids provided by the air traffic control system, three aircraft crew members, and the air traffic controller (ATC).

For convenience in development, only two crew members, the Pilot-Flying (PF) and the Pilot-Not-Flying (PNF), are represented by detailed HPMs. The models for the PF and PNF had the same basic structure. Differences in behavior result from specifying different task assignments, task priorities, and information sources for the two models. The models for the PF and PNF are comprehensive in accounting for the wide range of crew activities associated with conducting a typical commercial ILS (instrument landing system) approach to landing: display monitoring, information processing, decision making, flight control and management, execution of standard procedures, and communication with other crew members and with the ATC.

The PF and PNF models employ derivatives of the basic information processing structure used in the OCM and other control-theoretic models mentioned above. To this structure, mechanisms are added for dealing with the multitask environment, including those necessary to account for task selection and the execution of routine procedures or discrete tasks. The necessary extensions are provided by defining a crew member's overall goals in terms of a set of procedures or subtasks and by incorporating models for procedure selection and execution.

In general, a procedure in PROCRU may be comprised of discrete steps (e.g., execution of a checklist), or it may involve continuous actions (e.g., regulation of the aircraft's flight path). In both cases, procedures consist of several elements: an enabling event, which is a condition that must be satisfied before the procedure is eligible for execution; an expected gain function that determines the importance or urgency of executing the procedure at a given time; a recipe or prescription for carrying out the procedure; and, for discrete procedures, a time to complete the procedure or individual steps in the procedure.

An enabling event may be viewed as a situation or predicate for executing a procedure, thus making procedures analogous to production rules of the form IF (situation) THEN (action). In PROCRU, if more than one situation is evaluated as true at a given time, the expected gain calculations provide the control structure for selecting the appropriate rule to activate. Moreover, situations are assessed or evaluated by the modeled human information processor and, therefore, on the basis of information corrupted by modeled human perceptual and cognitive limitations. Similarly, actions that result from the execution of procedures reflect appropriate human performance limitations. Thus, although not developed from an expert system or artificial intelligence perspective, PROCRU may be viewed as a complex, albeit somewhat unusual, production system whose inputs, outputs, and control structures account for human limitations and goals.
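A toy rendering of that selection logic is given below. The enabling tests, expected gain functions, and state variables are hypothetical illustrations, not those used in PROCRU, and the sketch ignores the corruption of the assessed situation by perceptual and cognitive limitations.

```python
def select_procedure(procedures, situation, t):
    """PROCRU-style procedure selection in miniature: among procedures whose
    enabling event is satisfied, choose the one with the largest expected gain."""
    eligible = [p for p in procedures if p["enabled"](situation)]
    if not eligible:
        return None
    return max(eligible, key=lambda p: p["expected_gain"](situation, t))

# Hypothetical procedures for an approach-to-landing scenario.
procedures = [
    {"name": "regulate_flight_path",
     "enabled": lambda s: True,
     "expected_gain": lambda s, t: abs(s["glideslope_error"])},
    {"name": "flap_checklist",
     "enabled": lambda s: s["altitude_ft"] < 2000,
     "expected_gain": lambda s, t: 0.5},
]

situation = {"glideslope_error": 0.8, "altitude_ft": 1800}
print(select_procedure(procedures, situation, t=0.0)["name"])
# regulate_flight_path wins here because |glideslope_error| exceeds the
# checklist's fixed gain; shrink the error and the checklist is chosen instead.
```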

Strengths

The control-theoretic approach to developing comprehensive HPMs, as exemplified by PROCRU, has several strengths. It leads to modular structures allowing for the inclusion of submodels of limited scope that have been developed and validated separately for such activities as detection, decision making, and control. The principal integrating mechanisms are the information processing and task selection aspects of the model. The information processing model, which has been validated in numerous contexts, provides relatively direct ways of handling multiple sources and types of information (e.g., information available from different sensory modalities). The task selection portion of the model allows system goals and priority structures to be formalized as part of the model specification. With this structure, when a particular task is selected, the comprehensive model will be executing (i.e., will reduce to) a single-task model that has been developed, and possibly validated, for that task. In addition, the structure lends itself to a synthesis of various approaches to modeling human performance. For example, in addition to aspects drawn from existing control theory models, PROCRU models discrete tasks and rule-based procedural activities in fashions that are analogous to those used in the task network and knowledge-based approaches, respectively.

The models account for human limitations in information processing and response execution, often in a manner that allows these limitations to be defined independently of the specifics of the task. This feature increases the predictive potential of the models to the extent that it allows data concerning the operator's inherent performance limitations to be context independent.

The comprehensive models developed with the control theory approach are analogous to person-in-the-loop dynamic simulations. Therefore, they can provide the same kind of performance data that would be available from such simulations. These models also yield predictions of internal states of the operators which, although not verifiable through measurement, can be extremely useful for uncovering or diagnosing system problems.

Finally, the models provide a variety of outputs related to task demands and operator workload. For example, they produce activity time lines which, unlike those provided by traditional human factors analyses, are dynamically generated in response to the model of the evolving situation. Operator actions are not completely preprogrammed but, instead, depend on previous (possibly random) events or disturbances and responses to them. This allows the analyst to change model parameters related to the system, the scenario, or the human operators and have a new, different time line generated automatically.

Caveats

The major caveat concerning comprehensive control-theoretic models such as PROCRU is the lack of experimental validation for the overall integrated models. The core, continuous information-processing model has been validated many times in different contexts, as have some of the single-task, limited-scope models that would also be used. However, even if all submodels have been validated, it does not guarantee that this aggregation and integration will yield a valid comprehensive model.

The control-theoretic approach appears to be well suited to highly structured situations with well-defined goals. However, it is likely to run into difficulties when this is not the case and operators have a great deal of discretion in how they perform their tasks. Even when the goals are well specified, it is unlikely that mathematically "optimal" solutions can be calculated. This imposes a need for developing approximate, or suboptimal, solutions that compromise the normative nature of the model and increase the modeler's subjective input.

An important drawback to the control-theoretic approach has been the level of mathematical and control theory background and sophistication necessary to develop or use the models confidently. This has limited the user population significantly and may continue to do so. Another drawback is that the software required to implement such models is quite complex and, presently, not of a general nature. Although work is in progress to alleviate this problem, unlike some of the other models and methods discussed here, no software package exists that could readily be applied to a new problem. For the near future, a modeler interested in applying this technology faces full development of the computer implementation of the model. This fact is likely to slow model development, validation, and application.

Task Network

Background

The task network approach views the human operator interacting with the environment through a sequence of activities or tasks. The environment includes other operators and equipment as well as the world. A task is usually described by an operator action, an object of that action, and other qualifying or descriptive information, for example, the time to complete the task. A procedure is a collection of tasks required to accomplish some goal. A task network is a collection of procedures and tasks that contains hierarchical and sequence information.

The task network approach has been the basis of many early uses of human performance models in complex, practical, real-world systems.

The primary focus of these early modeling efforts was to determine the time required to complete procedures and tasks, as well as error rates, under different conditions (Siegel and Wolf, 1969). There are several important reasons for the success of these early efforts:

1. Procedures and tasks are simple to comprehend.
2. Task network descriptions are a natural by-product of functional requirements analyses in system design. Furthermore, task analyses are the basis for many equipment designs and human factors and training analyses. A standard for military task analysis has just been proposed (Myers, Tijerina, and Geddie, 1987). These functional requirements and task analyses can be the basis for many task networks.
3. For the above two reasons, procedures and tasks require less investment of analyst time to obtain useful results; moreover, and not insignificantly, the task network approach can easily be comprehended by higher management.
4. The task network paradigm encourages top-down modeling and allows use of existing libraries of models and procedures.
5. The task network approach may be used at many levels of human performance modeling, from high-level mission performance to low-level button-pushing tasks.
6. The task network approach is general enough to accommodate a wide variety of situations that will be necessary in modeling human performance in complex human-machine systems.

These reasons are valid today, more than 20 years after the original applications of the task network approach.

Illustration

The task network approach is described here by means of an example, which is pursued far enough to show the strengths and weaknesses of the approach and to reveal why other macromodels have been developed. The primary outputs of the original task network models were the time and accuracy to complete certain procedures. Suppose it is desired to determine the average time to "Go to Work"; the first step is to construct the basic medium of communication, the task network (see Figure 2-4), which is a diagram of procedures and tasks. The highest-level procedure is "Go to Work." This is composed of the two procedures "Get Up" and "Get to Office." The arrow in the diagram indicates that the procedures must be performed in that order. The lowest-level blocks in any procedure are the tasks. For the "Get to Office" procedure, these tasks are "Leave House," . . ., "Walk to Office."

FIGURE 2-4 Task network for the "Go to Work" example.

The "Get Up" procedure contains two procedures ("Wake Up" and "Get Dressed") followed by the task "Eat Breakfast." This network shows that there is more than one way to "Go to Work." The three paths for the "Get Dressed" procedure show that it is possible to brush teeth before or after taking a shower, or even to skip brushing the teeth, but not the shower.

Time/Accuracy Models

There is no explicit human performance information shown in the task network, but there are some implicit assumptions about human performance: tasks will be done in the order shown, and a procedure/task cannot be started until the preceding procedure/task has been completed. The early applications of the task network approach assigned attributes to each task, such as time to complete a task and probability of correct execution; these attributes were used to compute performance.

Two classes of information are required to compute the average time: the time to complete each task and the path through the procedures. A first approximation is to assume that each task takes a fixed, constant amount of time and that each of the three possible paths is equally likely. Then the time to complete the network is the sum of task times plus the average time to complete the procedure(s). No additional modeling is required if a single point estimate of time will suffice. However, it is easy to see how more realistic estimates can be obtained by using more realistic estimates of the task times. Usually, task times are assumed to come from probability distributions that are estimated by an analyst or derived from real-world measurements. When the time distributions are independent of one another, it is straightforward to perform a Monte Carlo simulation and statistical analysis to estimate the mean, standard deviation, and other properties of the total time.

There is no end to the improvements that can be made to task time distributions or branching decisions. For example, a variety of factors may reduce the time devoted to certain procedures. The branching decision is influenced by the current situation. If the operator is rushed, then an error is more likely to occur. These factors have been identified by Siegel and Wolf (1969) as moderator functions, or functions that change time/accuracy performance in response to the state of the simulation.
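A minimal Monte Carlo sketch of the calculation just described is given below; the task-time distributions, mean values, and equal path probabilities are invented for illustration and stand in for analyst estimates or measured data.

```python
import random
import statistics

# Illustrative numbers only; task-time distributions would normally come from
# analyst estimates or real-world measurements.
TASK_TIME = {                     # mean task times in minutes
    "wake up": 2, "shower": 10, "brush teeth": 3, "get dressed": 5,
    "eat breakfast": 12, "leave house": 2, "drive": 25, "walk to office": 5,
}

# The three equally likely "Get Dressed" paths from the example network.
GET_DRESSED_PATHS = [
    ["brush teeth", "shower", "get dressed"],
    ["shower", "brush teeth", "get dressed"],
    ["shower", "get dressed"],                 # skip brushing teeth, never the shower
]

def sample_task_time(task):
    """Draw one task time; an exponential stand-in for an analyst-supplied distribution."""
    return random.expovariate(1.0 / TASK_TIME[task])

def go_to_work_time():
    path = (["wake up"] + random.choice(GET_DRESSED_PATHS)
            + ["eat breakfast", "leave house", "drive", "walk to office"])
    return sum(sample_task_time(task) for task in path)

samples = [go_to_work_time() for _ in range(10_000)]
print(f"mean  {statistics.mean(samples):6.1f} min")
print(f"stdev {statistics.stdev(samples):6.1f} min")
```

Replacing the exponential stand-in with measured distributions, weighting the paths unequally, or adding moderator functions that shrink task times when the operator is rushed changes only the sampling functions, not the structure of the simulation.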

Other Performance Measures

Time and accuracy were the primary focus of the early applications of the task network approach. However, other performance measures have since been found to be useful. These performance attributes are assigned to each task, and a simulation of the network produces a time history or profile of the attribute.

Operator loading is an example. Workload estimates for aircraft operation have been obtained by developing a task network for piloting an aircraft and operating the on-board equipment (e.g., radios and weather radars). The aircraft/equipment operation has been characterized by the human resources required, typically at the level of right hand, left hand, right foot, left foot, vision, etc. (Miller, 1976). The task network is then executed, usually without random task times, to determine a time profile of operator loading. These models are useful for the identification of points in time at which the normally expected sequence of tasks can lead to operator overload or other problems.

The model based on use of the operator's hands and feet has been criticized because it does not take into account the thinking required by an operator. This observation led to tasks being characterized by the load, or requirements, placed on four information-processing components: vision, audition, cognition, and perception (Corker, Davis, Papazian, and Pew, 1986). Subjective values for each component are provided by subject matter experts. Execution of the network predicts situations in which the information-processing load on the operator may be excessive.

Processing Models

The "Go to Work" and operator loading examples highlight different aspects of a human performance characteristic that is not visible in the task network: task processing. The "Go to Work" example is typical of time-required processing. Each task is done in sequence and may not be started until prior tasks are finished. The time from start to finish depends on the times for the constituent tasks. There is an implicit assumption that the individual is working at full capacity, or else the spare capacity would be used to reduce task execution time.

The operator loading models described above are typical of demand-required processing. Each task is done at a scheduled time, and the demands for all tasks are added together to give the total demand required to accomplish the tasks on schedule. Demand-required processing makes the assumptions that there is no upper limit to operator processing capability and that all tasks can be accomplished in parallel.
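The sketch below illustrates demand-required processing with invented channel names, demand values, and schedule times; it is not the loading scheme of the cited models, only the additive bookkeeping they share.

```python
# Each scheduled task: (name, start, end, {channel: demand on an assumed 0-7 scale}).
# Channels and values are illustrative stand-ins for the resource/loading schemes cited above.
SCHEDULE = [
    ("fly approach",    0, 60, {"vision": 5, "cognition": 4, "right hand": 3}),
    ("tune radio",     10, 20, {"vision": 3, "left hand": 4, "audition": 2}),
    ("ATC readback",   15, 25, {"audition": 4, "cognition": 3}),
    ("monitor weather", 30, 50, {"vision": 4, "cognition": 2}),
]

def loading_profile(schedule, t_end, dt=5):
    """Demand-required model: sum the demands of all tasks active at each time step."""
    profile = []
    for t in range(0, t_end, dt):
        total = {}
        for _, start, end, demands in schedule:
            if start <= t < end:
                for channel, value in demands.items():
                    total[channel] = total.get(channel, 0) + value
        profile.append((t, total))
    return profile

for t, demands in loading_profile(SCHEDULE, 60):
    # Flag any channel whose summed demand exceeds the assumed single-channel ceiling.
    flags = [f"{ch}={d}{'!' if d > 7 else ''}" for ch, d in sorted(demands.items())]
    print(f"t={t:3d}s  " + "  ".join(flags))
```

Because the demands are simply added, the profile flags moments when the scheduled task sequence would overload a channel, which is exactly the kind of diagnostic output described above.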

Open- and Closed-Loop Models

The time-required and demand-required models represent another attribute of human performance models: open loop or closed loop. Both open-loop and closed-loop models of the operator may respond to the environment (e.g., execute an engine-fire procedure in response to an engine-fire warning). However, an open-loop model does not complete the circuit by feeding information about environmental changes due to that response back into the simulation, whereas a closed-loop model generates and incorporates such information. In the engine-fire example, the actions of the open-loop pilot model cannot influence the outcome of the remaining simulation, whereas with a closed-loop model, the outcome of the engine-fire procedure depends on whether or not the model of the pilot selects the correct engine when performing the procedure.

Models of Limited Scope

The task network approach is a useful framework in which to embed isolated and independent single-task models of human performance. The characteristics of each task can be specified by a model of that task, rather than by analyst estimates or an underlying human performance model which, as in HOS, can be applied to all tasks. Workload models, previously discussed, are an example. Examples of other models that can be applied to estimate task performance are manual control models to determine performance in the control of dynamic systems, signal detection models to determine the time and accuracy of detecting events and signals, and information-theoretic models to determine choice reaction time. Decision models, using, for example, multi-attribute utility functions, Luce's choice model, or the Dynamic Decision Model (Pattipati et al., 1980), or knowledge-based rules can be used to determine the path of execution through the task network.

Aggregation Issues and Macromodels

The first aggregation issue is the assumed additivity of task attributes. In the "Go to Work" procedure, it is assumed that the time to "Get Dressed" and "Eat Breakfast" is the sum of the two task times. This may not be the case when these two procedures are in sequence because of shortcuts taken by the human, such as tying shoes while waiting for coffee water to boil. Similarly, the operator loading attributes are assumed to be additive in the demand-required model, but the actual loading could be better or worse when tasks are performed simultaneously. Another aggregation issue is the integration of models of limited scope in the network: Does the aggregate model predict actual human performance? This is especially important because many of these models were developed in isolated laboratory experiments.
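To make the first aggregation issue concrete, the toy arithmetic below (with invented times) shows how a shortcut that overlaps two nominally sequential activities breaks the additive assumption.

```python
# Invented times, in minutes.
get_dressed = 8.0
eat_breakfast = 12.0

# Additive assumption: procedures in sequence simply sum.
serial_total = get_dressed + eat_breakfast

# Shortcut: suppose 5 minutes of "Get Dressed" (e.g., tying shoes) can be done
# while waiting for the coffee water to boil during "Eat Breakfast".
overlap = 5.0
overlapped_total = get_dressed + eat_breakfast - overlap

print(f"additive estimate:  {serial_total:.0f} min")
print(f"with 5 min overlap: {overlapped_total:.0f} min")
```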

There is no single macromodel for the task network approach to human performance modeling because there is no unique method to model the two most important features of a macromodel: task selection and simultaneous task execution. The most direct way to build a macromodel is to have the analyst specify task order in procedural form without variation and with no simultaneous tasks. Other forms of task selection are probabilistic branching and knowledge-based branching (see the section on production systems). A lot of effort has been devoted to modeling task selection logic within the existing macromodels.

Simultaneous task execution is difficult to model. Suppose the person going to work is also attending to another task network called "Asking for a Raise." It can be imagined that a lot of mental activity would be devoted to this task and much of it could be going on during the execution of some of the "Go to Work" network (e.g., during the "Take Shower" task). How is the joint accomplishment of tasks represented? What are the resources being shared? How are they being allocated? What are the effects of one task on another? Task selection and simultaneous task execution are what macromodels must address. Most macromodels avoid these questions by developing the task network down to a level at which it can be argued that the tasks are really performed serially rather than in parallel. This involves much more detail than desired in some instances, and requires setting tasks and task selection logic for ongoing tasks such as monitoring.

Exemplars

The task network approach was extensively developed by Siegel and Wolf (1969), who used simulation and tasks described by completion times and accuracies. The U.S. Air Force sponsored the development of Systems Analysis of Integrated Networks of Tasks (SAINT), a simulation language to support the development of task network models (Pritsker et al., 1974). SAINT has been used to evaluate a variety of systems, including avionics systems (Kuperman, Hann, and Bensford, 1977) and submarine displays (Kraiss, 1981). The task selection logic emphasizes task precedence, resource availability, and random choice, but there is no specification of how to accomplish simultaneous tasks. In addition, SAINT allows the use of resource parameters that could be employed to represent human information-processing resources.

THERP (Technique for Human Error Rate Prediction; Swain and Guttmann, 1980) is an example of the assessment of human reliability by using the task network approach. The network is actually a fault tree, and empirical data are used to predict probabilities of errors. Tasks are not selected and are not done in parallel; rather, the probability of reaching certain nodes is assessed.

Queuing models (e.g., Chu and Rouse, 1979) are another example of the task network approach. The macromodel controls tasks consisting of controlling an aircraft and correcting subsystem faults. The model selects tasks based on a computed priority and processes one task at a time. The tasks queue up until they are processed, as in the time-required model described earlier.
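A minimal sketch of such priority-driven, one-task-at-a-time processing follows; the arrival times, priorities, and service times are invented, and the code does not reproduce the cited model's actual equations.

```python
import heapq

# Invented tasks: (arrival time, priority [lower = more urgent], service time, name).
ARRIVALS = [
    (0.0, 1, 4.0, "control aircraft"),
    (1.0, 2, 3.0, "correct fuel-system fault"),
    (2.5, 2, 2.0, "correct radio fault"),
    (3.0, 1, 4.0, "control aircraft"),
]

def run_queue(arrivals):
    """Priority queue in the spirit of the queuing exemplar: one task processed at a time."""
    pending, log, t = [], [], 0.0
    arrivals = sorted(arrivals)
    i = 0
    while i < len(arrivals) or pending:
        # Admit every task that has arrived by the current time.
        while i < len(arrivals) and arrivals[i][0] <= t:
            arr, pri, service, name = arrivals[i]
            heapq.heappush(pending, (pri, arr, service, name))
            i += 1
        if not pending:                      # idle until the next arrival
            t = arrivals[i][0]
            continue
        pri, arr, service, name = heapq.heappop(pending)
        log.append((t, t + service, name, t - arr))   # last field: time spent waiting
        t += service
    return log

for start, end, name, waited in run_queue(ARRIVALS):
    print(f"{start:5.1f}-{end:5.1f}  {name:28s} waited {waited:4.1f}")
```

The waiting times in the log are the kind of output a queuing macromodel provides: they show how lower-priority fault-correction tasks back up behind the ongoing control task.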

Strengths

The advantages of the task network approach to human performance modeling are its intrinsic generality and the ability to formulate HPMs at any desired level of detail. The task network approach encourages top-down modeling. It also offers a promising approach to system modeling when it includes knowledge-based branching, symbol manipulation capabilities, sampling distributions, and limited-scope models. The task network approach also provides means for the specification and incorporation of uniquely human traits such as task stress, goal gradient, and proficiency to increase the probability of making an accurate prediction.

It can be seen that the task network is quite intuitive and self-explanatory for the procedure/task sequences and hierarchy. This example also demonstrates several advantages of the task network approach: (1) there is a natural and convenient hierarchy to the tasks and procedures; (2) the task network encourages, if not enforces, top-down modeling; and (3) with top-down modeling, it is easy to expand those procedures that must be examined in detail ("Get Up" in the example), whereas other, less important procedures need not be developed ("Get to Office").

Caveats

The disadvantages of the approach also arise from its generality. If interactions between two or more task network modules are known, these interactions can be modeled, in principle. In practice, however, highly interacting modules lead to levels of complexity that make checkout and validation very difficult. As with other models, the quality of the results depends on the quality of the supporting data: many times, subjective estimates of times and probabilities, or data derived from incorrect contexts, are used when more reliable data could be gathered. Other disadvantages of the task network approach include the following:

· The identification and development of subprocedures and tasks are often not unique; that is, there are many possible procedure/task descriptions to determine how long it will take to "Go to Work." This may lead to an inadequate model if important tasks or events are not modeled.

· Libraries of commonly used procedures can be included in the task network, even though this is counter to top-down development. This can be an advantage if the libraries contain assumptions and procedures that are appropriate, but a disadvantage if they do not.

It must be noted again that the task networks represent some, but not all, constraints among tasks and are not, per se, models of human performance, because most HPMs are employed to describe how the network is executed, e.g., what resources are required for each task, how the tasks are sequenced, and how tasks are performed simultaneously. (Note how little human performance modeling is displayed in Figure 2-4.) Inasmuch as each new task network can be, in a sense, a new human performance model, the validity of extrapolations to new domains or modifications of new tasks in a domain must be evaluated carefully.

Knowledge-Based

Background

Knowledge-based models of human performance are explanations of how people decide what is to be done to solve a problem. This is different from the typical goal of human performance modeling, which is usually to predict how accurately or reliably a person will execute a procedure under the assumptions that the person knows what is to be done and that failures occur only because of imperfect sensing or inadequate motor movements. This distinction can be illustrated by a hypothetical example from aviation. Suppose human performance is to be modeled in a situation in which a commercial aircraft requires more than normal power during the climb immediately following takeoff. A traditional modeling question would be to determine the distribution of times before the crew noticed the problem. A knowledge-based study might begin with problem detection and identification, and then ask how the crew diagnosed the situation.

Knowledge-based approaches grew out of Newell and Simon's seminal research on computer simulation of human problem solving (Newell et al., 1958; Newell and Simon, 1963, 1972). Newell and Simon realized that computer programs can be thought of as manipulating symbols, rather than doing arithmetical calculations. They argued that human thought is also an example of symbol manipulation and therefore can be modeled by computer programs. This discussion is restricted to the more limited issue of the impact of their work on modeling human performance.

The basic idea behind the computer simulation approach is that knowledge can be represented by symbol structures and rules for operating on them. To take a trivial example, an automobile driver may have knowledge that says (1) the warning light is on, and (2) when the warning light is on, examine the instrument panel.

FIGURE 2-5 The organization of knowledge in memory: working memory (the representation of the problem currently being attacked) and long-term memory (knowledge of facts and procedures, or productions, for deriving new facts), linked by a pattern recognition process.

This principle can be extended greatly. For instance, some modern expert system programs contain 500 or more rules of the sort just described. The problem-solving processes of experts are modeled by programming computers to execute the expert's knowledge of what to do and are used to alter a symbol structure representing what the expert knows about the current problem.

The idea that thought can be modeled by computer programs does not in any way imply that the machinery of the human brain is logically similar to modern digital devices. In particular, knowledge-based models use an architecture of the mind that is quite different from the architecture of a conventional von Neumann machine. As Figure 2-5 shows, knowledge is organized into two distinct classes: information in working memory and information in long-term memory. Each of these is considered here in turn.

A problem solver (in this context, the person being modeled) is assumed to have a set of beliefs about a problem at hand. These are collectively called the problem representation. The problem representation is stored in working memory as a set of propositions. Propositions may refer to knowledge about the problem to be solved or about the problem solver's own intentions. In addition, the problem solver knows a variety of potentially relevant facts and problem-solving methods. This information about how to go about solving problems is assumed to be resident in long-term memory. The facts and methods are referred to as declarative and procedural information about problem solving.

The basic idea can be grasped by considering how problems are solved in plane geometry. The initial statement of a problem presents certain facts. Geometry students know inference rules (e.g., the side-angle-side

rule) that permit them to derive new facts from old ones. A geometry problem is solved by applying inference rules to deduce new facts from old, until the statement to be proved is generated as a fact. In theory, any geometry problem could be solved by rote application of all inference rules, iteratively, until the desired statement was generated. In practice, however, this is not feasible because it leads to a combinatorial explosion of facts, most of which are irrelevant to the desired proof. Therefore, a good geometry problem solver will give priority to the development of propositions that are related to subgoals chosen because they are likely to be part of the eventual proof. For instance, suppose a geometry student wants to prove that two triangles are congruent and already knows that two corresponding angles are congruent. A good student will then set as a goal proving that the sides between the angles are congruent. This is a specific example of a general problem-solving rule, "If the goal is to prove statement X, and a rule of the form 'statement Y implies statement X' is known, then try to prove statement Y."

The problem-solving procedures stored in long-term memory are coded as "if-then" statements, called productions. Note that the rules of inference in geometry and the general problem-solving rule just illustrated can be stated in if-then format. Goal-directed problem solving can be achieved by making the presence of goals in the working memory part of the "if" section of a production, and the placing of these goals into working memory a possible action of some other production.

The geometry example illustrates another important aspect of knowledge-based models, the distinction between domain-specific rules, such as the side-angle-side rule, and rules that apply to problem solving in general, such as the rule about establishing subgoals. General problem-solving rules are called weak rules because they are only weakly dependent upon the context in which they are used. In expert systems research, weak rules are sometimes referred to, collectively, as the inference engine, because they control the process of inferential reasoning that is applied within a specific problem-solving domain.

Problem solving proceeds by pattern recognition. Time is organized into discrete time cycles. At each cycle the problem representation is examined to see if it contains information that satisfies the "if" condition of any of the productions in long-term memory. When a match is found, the associated action (the "then" part of the production) is taken. A variety of different rules have been proposed for modifying this general scheme, but discussing them would be too detailed for the purposes of this report. The sequence of pattern matches and actions is continued until working memory contains information equivalent to a problem solution.
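As a concrete, much-simplified illustration of this cycle, the sketch below keeps the problem representation as a set of propositions in working memory and, on each cycle, fires a production whose "if" part is matched. The rules and propositions are invented (loosely following the warning-light example above), and taking the first match is only one of many possible conflict resolution schemes.

```python
# Working memory: the problem representation, a set of propositions (here, plain strings).
working_memory = {"warning light is on", "goal: reach destination"}

# Long-term memory: productions coded as if-then pairs.  All contents are invented examples;
# the first rule plays the role of a goal-setting production.
PRODUCTIONS = [
    ({"warning light is on"},            {"goal: examine instrument panel"}),
    ({"goal: examine instrument panel"}, {"oil pressure is low"}),
    ({"oil pressure is low"},            {"goal: stop the car"}),
    ({"goal: stop the car"},             {"problem solved"}),
]

def recognize_act(memory, productions, max_cycles=20):
    """Discrete cycles: match 'if' parts against working memory, add 'then' parts."""
    for cycle in range(max_cycles):
        # Find every production whose condition is satisfied and whose action adds something new.
        matched = [(cond, action) for cond, action in productions
                   if cond <= memory and not action <= memory]
        if not matched or "problem solved" in memory:
            break
        cond, action = matched[0]        # trivial conflict resolution: take the first match
        memory |= action
        print(f"cycle {cycle}: fired {sorted(cond)} -> {sorted(action)}")
    return memory

recognize_act(working_memory, PRODUCTIONS)
```

Goal-directed behavior arises here exactly as described in the text: goals appear in working memory as propositions, so they can occur in the "if" part of one production and be placed there by the action of another.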

When production systems are used to implement knowledge-based models, limits in performance are expressed in three ways: by the complexity of the propositions admissible in working memory, by the accuracy of the pattern recognition process, and by the information that the problem solver is assumed to have stored in long-term memory. In general, knowledge-based models are concerned with the intellectual aspects of knowledge use and response selection. They do not normally contain models of the perceptual detection of signals or the execution of motor movements, although attempts have been made to extend knowledge-based processing to these fields (Hunt and Lansman, 1986).

Because of this limitation, current knowledge-based problem-solving models are likely to be most useful in situations in which system performance is limited by what the human operator decides to do, rather than how quickly or how accurately it is done. Put into the terms of modern systems engineering, these knowledge-based models are appropriate ways to understand the supervisory aspects of operator performance, but are less likely to help in understanding how humans act as detectors or effectors.

In the sense that the term is used in this report, knowledge-based modeling was historically a comprehensive (macroscopic) effort, then became more limited in scope (microscopic), and now shows some signs of becoming macroscopic again. Newell and Simon's initial studies were macroscopic, in the sense that they were aimed at uncovering general laws of problem solving that were applicable to many specific situations. This is shown most clearly in studies of the General Problem Solver (GPS), a program that relied on context-free inference rules to solve problems, given only a minimum of domain-specific knowledge. For example, given appropriate minimal definitions of the domain, the GPS program solved problems in chess, calculus, and symbolic logic (Ernst and Newell, 1969; Newell and Simon, 1972). In terms of this report, the GPS was a macroscopic model of the human problem solver because it could be applied to many tasks. The literature on knowledge-based problem solving puts this somewhat differently, by describing GPS as a program that emphasized weak inferential rules rather than context-bound inference rules.

In the late 1970s, the emphasis shifted from a search for the weak problem-solving methods that humans use to a concern for domain-specific knowledge. The shift was prompted in part by observation of human problem solving in extralaboratory situations, such as elementary physics and thermodynamics, and in part by the desire to create expert system programs that could emulate human reasoning in economically important domains such as medicine. It was found that human problem solving is characterized more by the use of domain-specific heuristics than by reasoning from first principles embodied in weak problem-solving rules. This is particularly true when the human being modeled is familiar with

the problem-solving domain. In general, this would be the appropriate assumption to make in modeling human performance in human-machine systems.

Quite successful domain-specific models of human performance have been constructed, covering areas ranging from problem solving in school arithmetic to college-level physics. In this work the focus has been on the problem solver, working in a fairly simple environment. This contrasts with the typical human performance situation, in which the modeling effort also focuses on person-environment interactions. More recently, the knowledge-based approach has been applied to the latter class of situations. Rouse (1983) provides a review of applications in detection, diagnosis, and compensation for failures in complex systems such as aircraft, process plants, and power plants. Rasmussen (1986) has also offered an ambitious treatment of human performance and problem solving in complex systems.

Knowledge-based models are seldom used to make quantitative predictions about performance. They provide a way of summarizing complex sets of observations about past performance. The model-based summarization is then used to make a qualitative prediction of how people are likely to perform in a new system, on the assumption that they use the same knowledge base and reasoning rules that generated performance in the previously observed system. Models for knowledge use that are derived in this way may also be embedded in computer-based systems for aiding and training personnel in complex systems (Anderson, Boyle, and Reiser, 1985; Rouse, Geddes, and Curry, 1987). In these applications the model should be evaluated by its utility in training and decision making, rather than by scientific evaluation of its truth as a model of human reasoning.

Exemplars

Although the current literature emphasizes microscopic models of single tasks, macroscopic considerations have been introduced in three ways. Each is discussed here in turn.

A number of programs are commercially available for designing expert systems. These programs are shells or inference engines containing weak rules that organize the domain-specific rules established by the user (Alty and Coombs, 1984; Goodall, 1985). Examples of shells include EMYCIN, KAS (Knowledge Acquisition System), and EXPERT. The first two were developed for constructing rule-based diagnostic systems, whereas the third is most suited to classification problems. The programs are intended to facilitate the rapid development of expert systems. They succeed in doing this by removing the burden of programming once the domain-specific rules are known. However, the labor-intensive part of the effort lies in establishing these rules. Some

attempts have been made to develop programs that would conduct formalized interviews with experts, thus reducing the labor costs of establishing the rules themselves (e.g., Boose, 1986). There are at present insufficient data to evaluate the utility of this approach.

The details of production execution imply a model of information processing. From Figure 2-5 it can be seen that a production executing system assumes the existence of certain undefined primitive elements. The chief ones are

· the mechanism for pattern matching, which determines whether or not the "if" part of a production rule is satisfied by the propositions in the problem representation;

· the data structures that are used to state the problem representation; and

· the conflict resolution rules used to determine which productions are to be executed when more than one production's pattern part is recognized in a problem representation.

In an analogy to computer programming, these elements play the role of operations within a programming language. They are organized into a model by writing specific productions, just as the operations of a programming language are organized into a specific computation by a program.

There is another sense in which macromodels can be introduced into knowledge-based modeling. Knowledge-based models, as presented here, depend on the execution of specific productions. Numerous authors have argued that production systems are themselves organized into higher-order frameworks, variously called schemas, frames, and scripts (Minsky, 1968; Schank and Abelson, 1977). These are organized systems that direct the execution of certain rules, as soon as the system itself is seen to apply. Larkin's (1983) study of physics problem solving serves as a good example. She found that experienced physicists classify problems as being of a certain type (e.g., balance-of-force problems). As soon as the classification is made, the problem solver immediately executes the computations appropriate for that type of problem and then examines the results to determine whether or not the problem at hand has been solved. A knowledge-based model of problem solving that uses schemas is in some sense intermediate between our definitions of comprehensive and limited-scope models. Programs have been developed that utilize schemas appropriate for solving certain classes of problems in different fields, ranging from word problems in school arithmetic to problems in elementary physics. These programs are general in the sense that they solve more than one type of problem within a field, but special in the sense that they are still limited to a particular domain of endeavor.
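A toy sketch of schema-directed execution in the spirit of the physics example: the problem is classified by its cues, and the classification immediately selects the stored solution procedure. The schema names, cues, and formulas below are invented for illustration and are not drawn from the cited studies.

```python
# Invented schemas: each pairs classification cues with the procedure it triggers.
def balance_of_forces(problem):
    # Net force is zero, so the unknown force balances the known ones.
    return -sum(problem["known_forces"])

def constant_acceleration(problem):
    # v = v0 + a * t, a simple kinematics computation.
    return problem["v0"] + problem["a"] * problem["t"]

SCHEMAS = {
    "balance of forces":     ({"known_forces"},  balance_of_forces),
    "constant acceleration": ({"v0", "a", "t"},  constant_acceleration),
}

def solve(problem):
    """Classify the problem by its cues, then run the associated stored procedure."""
    for name, (cues, procedure) in SCHEMAS.items():
        if cues <= set(problem):
            return name, procedure(problem)
    return "unclassified", None

print(solve({"known_forces": [12.0, -5.0]}))      # -> ('balance of forces', -7.0)
print(solve({"v0": 3.0, "a": 2.0, "t": 4.0}))     # -> ('constant acceleration', 11.0)
```

The point of the sketch is the control structure, not the physics: once the schema is recognized, execution proceeds directly, which is what distinguishes schema-directed models from rule-by-rule search.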

Strengths

The strength of knowledge-based modeling is that it offers a way of modeling the manner in which people use what they know to solve difficult problems when the person being modeled is in a situation that offers several options in choosing actions. For this reason, knowledge-based modeling is particularly likely to be of assistance in understanding the way in which people execute supervisory control in man-machine systems. This is especially true if the sort of control being exercised depends on complex judgments and cannot be reduced to a set of instructions that anticipate every foreseeable circumstance.

How accurate are knowledge-based models? As a quite broad generalization, it appears that models can be constructed which account for a substantial portion of the actions that people make in certain problem-solving situations. Identifying situations in which a model does not account for performance can be an illuminating exercise in itself. For example, if a model cannot perform successfully when given the knowledge that people would be given in a training course, then the adequacy of the course can be questioned.

The knowledge-based approach to modeling is undergoing extensive and rapid development, particularly in the education and training fields. As the above examples illustrate, knowledge-based modeling has been used to explain how people solve problems in a variety of difficult educational fields. The approach has been extended to learning by including within knowledge-based modeling a model of how the problem solver incorporates new knowledge into old (Anderson et al., 1985). Although the applications of modeling plus learning have thus far been used only in industrial training and conventional school education, there is no reason they could not be extended to modeling the way a person becomes an expert supervisory control operator. Such applications have not yet appeared in the general literature, but they are being explored as basic research endeavors by the U.S. Air Force and Navy. In summary, knowledge-based modeling appears to be a very promising way of modeling human cognitive activities in complex supervisory control situations.

Caveats

Knowledge-based problem-solving models have only recently been applied to traditional human performance problems. Exemplary studies are now being conducted in such areas as aircraft operation (Rouse et al., 1987) and nuclear power plant operation (Woods, Roth, and Pope, 1987), but there is not yet an extensive literature on the success of these ventures.


underway (Lewis, 1986; Lewis and Hammer, 1986) but cannot be regarded as mature at this time.

Fourth, knowledge-based models are extremely hard to evaluate. Proponents of knowledge-based modeling have argued, convincingly, that conventional statistical tests of agreement between model and data are simply inappropriate. Unfortunately, the same proponents have failed to specify what criteria for model evaluation are appropriate. Until this problem is solved, the use of knowledge-based models will, to an unfortunate degree, depend on social acceptance rather than formal validation.

One of the purposes of modeling is to be able to generalize beyond situations that have already been observed. The logic for generalizing knowledge-based models is certainly not as well understood as the logic for generalizing results applicable to conventional mathematical models of information processing. In fact, there may be a logical limit to generalization. The knowledge-based approach was developed to handle behavior that was dependent upon the context-specific knowledge used by particular individuals. To the extent that context specificity and individual problem-solving styles are important, generality should not be expected, no matter what the method of modeling.

Major developments in the use of knowledge-based models in the areas of supervisory control and maintenance operations are expected to occur over the next 10 years.

SUMMARY OF MODELING APPROACHES

In this chapter, four of the more promising or heavily investigated approaches to modeling human performance in the operation of complex systems have been reviewed. As the discussion of the individual approaches shows, each has its strengths and each has caveats that should be borne in mind when considering it for a particular application. Where these strengths and caveats differ significantly across approaches, they are important to note. However, to overemphasize the differences between the approaches discussed herein would, to some extent, miss three important points.

First, in many respects, the various approaches are converging. For example, HOS developers are considering the inclusion of continuous control concepts; control models are incorporating discrete tasks and procedures; and task network models are beginning to include dynamic state variable models. In addition, proponents of the information processing, optimal control, and task network approaches are in various stages of exploring, implementing, and testing methods for including knowledge-based behavior of one sort or another within their respective models.

Second, some significant general concerns apply to all of the approaches:

· As one moves from limited-scope models to comprehensive models, their complexity introduces a new set of problems for designers and users.

· All models, to a greater or lesser degree, require users to supply parameter values prior to execution; the more complex and comprehensive the problem and model, the greater is the number of parameters to be provided.

· None of the approaches can claim an exemplar having full traditional validation.

· None of the approaches, as yet, has effectively dealt with the problems of operator discretion and cognitive behavior.

· Most modeling efforts, thus far, have dealt primarily with the ideal, well-trained operator and have largely ignored individual differences.

Finally, it is significant to note that there are several viable approaches to developing comprehensive human performance models of pragmatic utility to system designers and developers. Presently, and for the foreseeable future, no single approach is likely to dominate the field; rather, it is to be expected that the various approaches will be applied most effectively in problems closest to their original focus of development.
