
1 Introduction

SCOPE

This report discusses human performance models (HPMs) and their potential use in system design, development, and evaluation. The primary focus is modeling system operators performing supervisory and manual control tasks. The report does not address models of the designer or manager of a complex system, and it addresses models of maintainers only briefly (see Elkind, Card, Hochberg, and Huey, 1989, for a discussion of models pertinent to designers and managers). However, if a model cannot be understood by higher management, it is not likely to be used by them.

Of interest are complex technological systems of a dynamic nature in which humans play a central role in any of the functions: monitoring, control, decision making, supervision, and maintenance. Examples include vehicles (air, sea, or land), process control operations, power plants, some weapons systems, and a variety of manufacturing systems. Such systems are invariably costly and time-consuming to design and develop, and substantial risks are often involved in their operation. Faulty design or operation can be very expensive or dangerous, and systematic means of accounting for the performance of the human component in these systems are imperative.

A model is a representation or description of all or part of an object or process. A variety of models have been developed for a variety of reasons. Early models, which were often verbal, statistical, or mathematical descriptions or theories of some limited aspect of human performance, could not represent the complexity and comprehensiveness of human performance. However, modern computer technology is changing this situation. Until fairly recently, most human performance models were numerical or quantitative, but as a result of the progress in artificial intelligence and cognitive science, a substantial body of nonnumerical, qualitative, but calculable, models has been developed. These models are necessary for representing cognitive behavior and, although qualitative, are nevertheless computational.

Although the literature is replete with models that represent paradigms and tasks in which an individual's attention is fully committed to a single process, the challenge addressed here is to represent human performance in typical working settings in which operators perform a collection of tasks that overlap in time. For example, the submarine commander is engaged in navigation, control, and threat detection. At various times, these activities compete for attention. This added level of complexity poses important problems in modeling human performance. In addition to models that are appropriate for single tasks or activities, it is necessary to model the ways in which human operators manage their own resources so as to cope with the changing and sometimes conflicting demands of disparate activities.

A major question that arises is: Can this be accomplished by integrating single-task models that have been developed previously for the activities performed in isolation, or is it necessary or better to model the complex task in a completely unified manner? The extent to which simple task models can be usefully integrated to represent more comprehensive behavior depends on the nature of the gaps in coverage of the models and on the completeness of the linkages between them. A report by Elkind et al. (1989) addresses this issue in the visual and cognitive areas with specific reference to the tasks of a helicopter pilot. On the other hand, most existing comprehensive models contain little detail about specific aspects of human performance, reflecting the trade-off between breadth and depth. Therefore, at present, some trade-off decisions must still be made.

It should be noted that human performance modeling has additional purposes and uses beyond those of prime consideration here. Of special interest and import is the use of models in theory development and evaluation. Indeed, in the psychological literature a model of human performance is often used as a synonym for a theory of performance. In that literature the model frequently is, or is intended to be, independent of the specific system or task context and thus is applicable to a variety of systems. This is an undoubtedly important area of human performance modeling, but it is not of central interest in this report.

WHAT IS HUMAN PERFORMANCE MODELING?

The term human performance models, as used in this report, refers to quantitative (analytic or computer-based) models of human operators or maintainers of complex dynamic systems. Many different kinds of HPMs have been developed.

The characteristics that help distinguish among them can be represented along several important dimensions: output versus process orientation, predictive versus descriptive, prescriptive (normative) versus descriptive, top-down versus bottom-up, and single-task versus multitask. Models can also be characterized according to the types of theories or tools used in their development.

Output Versus Process

The dimension of output versus process relates to the degree to which a model (or modeling approach) focuses on the system output versus the processes by which output is generated. An output model is a set of relationships between input and output states that is capable of (1) beginning with input states and (2) generating output states. This type of model predicts or describes the outputs of a person or a person-machine system for a given set of inputs. Such an output-oriented model places no requirement on the structure, or even the validity, of the internal mechanism (processes) of the model. All that is desired is that the model produce "correct" (i.e., useful in the context of the application) outputs for specified inputs.

On the other hand, a model can be a theory of how people perform certain tasks. The HPMs with this characteristic describe processes by which an output is generated and, as such, describe what humans do within the system, rather than just predicting the results of their actions. In this sense, process models are more complete descriptions than are output prediction models. For many purposes, though, output prediction is all that is needed. Human performance models typically combine output prediction with some degree of process prescription. No general answer can be given to the question, What is the "appropriate" level of internal detail for an HPM? because the necessary level of process description depends on the application of the model.
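For concreteness, the distinction can be sketched in a few lines of code. The sketch below is illustrative only (the function names and parameter values are hypothetical, not drawn from any model in this report): the first function is a pure output model based on Fitts' law of aimed movement, while the second produces the same kind of prediction by stepping through postulated internal stages.

```python
# Illustrative contrast between an output model and a process model.
# All parameter values are notional, chosen for demonstration only.
import math

def fitts_movement_time(distance, width, a=0.1, b=0.15):
    """Output model: predicts movement time (s) from task geometry via
    Fitts' law, making no claim about internal human mechanisms."""
    return a + b * math.log2(2.0 * distance / width)

def staged_movement_time(distance, width,
                         perceive=0.10, decide=0.05, motor_rate=0.15):
    """Process model (toy): decomposes a comparable prediction into
    perceptual, decision, and motor stages executed in sequence."""
    index_of_difficulty = math.log2(2.0 * distance / width)
    stages = {
        "perceive target": perceive,
        "choose response": decide,
        "execute movement": motor_rate * index_of_difficulty,
    }
    return sum(stages.values()), stages

print(fitts_movement_time(200.0, 20.0))      # output only
total, breakdown = staged_movement_time(200.0, 20.0)
print(total, breakdown)                       # output plus a process trace
```

Both functions can yield the same predicted time; only the second exposes a process trace that could itself be checked against data, which is exactly the extra commitment a process model makes.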

Predictive Versus Descriptive

It is important to distinguish between two distinct methods of employing HPMs: (1) predicting human-system performance with the model prior to collection of data and (2) describing (fitting the model to) human-system performance by adjusting free parameters of the model to conform to existing data. Fitting models to data can be an end in itself (i.e., for descriptive purposes). It can also be a step toward developing predictive models.

Virtually all HPMs have some parameters that must be estimated from experimental data. Predictions can be made for new situations by using the parameter estimates available from earlier, descriptive studies. Clearly, predictive models, where they exist or can be developed, are intrinsically of more value than models that merely describe or summarize data; prediction is the real need of the system designer (prior to building the actual system). Moreover, a truly predictive model will also describe actual performance.

Prescriptive (Normative) Versus Descriptive

Models for human performance can either describe how a human is likely to perform a task or predict ideal behavior, given human and situational limitations. In the former case the model is called descriptive, whereas the latter type of model, which prescribes how the human should perform if he were to behave in a rational way that takes into account the information available, the constraints that exist, the risks, rewards, and objectives, is called prescriptive or normative. The distinction between normative and descriptive can be blurred because prescriptive models often describe quite well the performance of humans who have been well trained for the task. This is particularly true when prescriptive models include in their formulation representations of human limitations that constrain performance.

Top-Down Versus Bottom-Up

The top-down/bottom-up distinction refers to the extent to which a model is dictated by system goals or by human performance capabilities. A top-down approach begins with a statement of system goals, then progressively elaborates subgoals and functions until the modeler reaches a level at which functions are accepted as primitives and are not explained further. A bottom-up approach begins by defining a set of primitive elements at both the human performance and the engineering levels. A system model is then developed based on the predefined set of primitive elements. Note that this distinction refers to the evolution of the model, rather than to the final model. Because of the nature of their evolution, top-down models are likely to focus on output (system performance), whereas bottom-up models are likely to focus on the processes leading to performance as well as output.

Single-Task (Limited Scope) Versus Multitask (Comprehensive)

Most quantitative models have been developed with a single task in mind, although that task may involve several subtasks or processes. Single-task models range from models of simple movement to models of manual control or signal detection that can involve perceptual, motor, and even cognitive processes. With respect to the concerns of this report, such single-task models are viewed as being of limited scope. Multitask models, on the other hand, are those that treat a variety of such tasks within a single unifying framework. These models are referred to as comprehensive HPMs.

MODELING METHODOLOGY

Another important way of characterizing HPMs is by the theories or tools that underlie the model or serve as a basis for its development. For example, there are task network models (network and reliability models), information processing models, control-theoretic models, and knowledge-based models. This is a particularly useful way of classifying comprehensive or multitask models and is the basis for much of the discussion of modeling approaches in Chapter 3.

One should not be confused by the many ways that HPMs can be described or defined. In simplest terms, a model may be viewed as a "thing" of which questions are asked about the real world. The ultimate role of a model is to produce simulated performance (output or behavior) data. The resulting data should be sufficiently similar to real performance data to be useful to decision makers. Thus, a model is "good" if the same answers are obtained from the model that would ultimately be obtained from the real world, regardless of the particular modeling approach employed.

One final general point: A model of human performance implies the existence of a model of the environment or system[1] in which that performance takes place. Thus, in this report, human performance modeling will almost always combine human with system performance models. The manner in which the environment is modeled generally will dictate the way in which the human is modeled and vice versa. For example, discrete event modeling of the system will tend to lead to task network models for the operator, whereas continuous time system models would involve corresponding representations of the humans.

[1] "System," in this report, refers to an interconnected set of parts making up a whole entity that has a common purpose. Thus, one example of a human-machine system would consist of human, turbine, reactor, etc., which collectively make up a nuclear power plant.

WHY USE HUMAN PERFORMANCE MODELS?

Processes That May Benefit from Their Use

Human performance models are used in two ways: (1) to develop theories of human performance and (2) to design and evaluate systems. These applications are not mutually exclusive. Lessons learned in theory development can be of benefit to system design and vice versa.

Theory Development and Evaluation

To develop a model, one must be specific about one's theories of human performance. If a working model has been developed, the model may be exercised to determine if the simulated behavior of the modeled constituents corresponds to the behavior of those same constituents in the real world under similar conditions. If the data obtained from the model do not correspond to data obtained from the real world, it may be possible to determine which aspects of the theory need to be reconsidered. If the model is exercised under a variety of conditions and found to yield satisfactory results, then confidence is gained for using the model to predict the behavior of the constituents under novel conditions. Thus, the very attempt at developing a model is highly useful in discovering where ambiguities in the theory exist.

System Design and Evaluation

Human performance models can play a role throughout the life cycle of a system. They can be used in design to help establish system configuration, parameter values, and operating procedures; in operation as integral components of a system (against which actual human performance may be compared); and in evaluation (e.g., of normal performance, accidents and incidents, or specific missions). The greatest contribution, however, is probably in design.

The importance of considering human performance during the design process has become increasingly apparent in recent years. People are an essential part of human-machine systems. It is substantially easier and less expensive to consider how human capabilities will affect system operation and modify the system before it is built than to modify it to conform to human limitations after it has been constructed.

Generally, the first stages of system development involve specifying functional requirements for the system and allocating those functions to human or machine components. Later stages involve translating functional and performance requirements into design specifications; translating proposed design specifications into a statement of projected performance of each component, including people; and comparing projected performance. The sequence, in general, consists of four stages:

1. Analyze the purpose of the system and identify the tasks that must be accomplished to achieve it.
2. Describe the goals or performance requirements for the system.
3. Select a potential method for achieving those goals (i.e., a system configuration at either a gross or a detailed level).
4. Model the configuration to obtain performance estimates and compare the performance estimates to the stated goals. Then,
   · if predicted performance does not satisfy the goals, redefine the goals or rethink the method and try again, or
   · if the predictions and goals seem to match fairly well, simulate the configuration, test it with human subjects, and, based on the results, proceed with development, make additional adjustments to the goals, or modify the model as dictated by the experimental data.

This iterative procedure helps to extract those system characteristics that are essential to meeting predefined system performance goals and are, at the same time, responsive to human performance capacities and limitations. It also provides a mechanism whereby HPMs can be systematically improved.
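The skeleton of this four-stage loop can be sketched in code. The sketch below is purely illustrative: the performance model, the candidate configurations, and the five-second goal are hypothetical placeholders for whatever HPM and requirements a real program would supply.

```python
# Illustrative sketch of the four-stage design sequence above.
# The "model" here is a hypothetical stand-in for a real HPM.

GOAL_SECONDS = 5.0  # stages 1-2: purpose analyzed, goal stated (notional)

def predicted_response_time(configuration):
    """Stand-in for a human performance model: predicted operator
    response time (s) for a candidate system configuration."""
    return configuration["base_time"] + 0.5 * configuration["displays"]

def design_loop(candidates, goal=GOAL_SECONDS):
    for configuration in candidates:                       # stage 3: select a method
        estimate = predicted_response_time(configuration)  # stage 4: model it
        if estimate <= goal:
            # Predictions match goals: proceed to simulation and testing.
            return configuration, estimate
        # Otherwise rethink the method (next candidate) or the goals.
    return None, None

candidates = [
    {"name": "A", "base_time": 4.0, "displays": 6},
    {"name": "B", "base_time": 3.0, "displays": 2},
]
chosen, estimate = design_loop(candidates)
print(chosen, estimate)  # configuration B meets the notional 5 s goal
```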

Alternative (or Complementary) Methodologies to Modeling

Expert Opinion

A relatively straightforward and inexpensive approach to predicting human performance is to have experts predict what people will probably do in a hypothetical system. Unfortunately, there is no way of knowing in advance how valid these opinions will be. Moreover, the inherent complexity and dynamic nature of the systems and problems of interest make it extremely difficult for an expert, or group of experts, to account for the effects of all possible interactions, particularly those with a low probability of occurrence. Nevertheless, the analyses of an expert are usually essential in defining initial alternatives and in evaluating the results obtained by using other design methods.

Simulation

Simulation refers to person-in-the-loop[2] simulations that are, in fact, person-machine models, except that the humans and portions of the environment are real. Simulation has some important, although sometimes overstated, advantages over most modeling techniques. There can be little question about whether the people in the simulation are performing like humans (they obviously are), but whether they are performing like the humans of interest (e.g., fully trained operators of a system) can be questioned. This will depend on the amount of training and practice given to the operators of the simulation and the continuity of operation provided in the experiments.

If the purpose of the simulation is to provide data for system design, the expense of building the simulator and the time consumed in designing it, training the operators, and collecting data on them may preclude drawing conclusions from their performance early enough to properly influence design of the real system. Even when this is not the case, the operating costs associated with person-in-the-loop simulation may severely constrain the amount of data that can be collected, which will adversely affect the scope of the system operation under investigation.

Despite these problems, simulation is, and will continue to be, an essential element in complex system design because of its advantages relative to testing in the real environment. Moreover, human performance modeling will, for the foreseeable future, require experimental verification in simulators (just as simulator results often require real-world verification). Indeed, substantial synergy is possible between human performance modeling and simulation. Models can be used to reduce the required amount of simulation by determining critical areas of investigation, and they can be used to understand and extrapolate the results of simulation. Simulation results can, in turn, be used to verify the model, identify model parameters, and generally advance model development.

Evaluation of Real Systems

Real-world testing and measurement represent the ultimate evaluation of a design. However, the same objections can be raised for collecting data by using real systems as for simulations: namely, that the data can come too late for cost-effective design changes to be made. A more serious objection concerns the potential risks of real-world operation if there is uncertainty about the outcome.

[2] Person-in-the-loop architecture refers to a system in which the human plays a more continuously active role in its control and management.

Laboratory Experimentation

Basic laboratory experiments are also used to aid design decisions. In particular, basic experiments (sometimes involving simple part-task simulations) are often conducted to choose between design alternatives or to test a particular concept or design. Care must be exercised in interpreting the results of these experiments. For example, a laboratory experiment that shows statistically significant differences may, or may not, reflect functionally significant differences in real-world performance. Moreover, because the laboratory context is carefully controlled (i.e., eliminates or holds constant many extraneous variables), the observed difference between alternatives could disappear, or even be reversed, in the real-world setting where these extraneous variables are a part of the task environment.

These comments are not meant to imply that laboratory experimentation is of no benefit but rather to suggest that its usefulness in predicting real-world performance is variable. Laboratory experiments can be a relatively inexpensive way to make early decisions when they must be made. They also can be used to test or develop component models for single tasks that are used in constructing more comprehensive models. In short, they are useful adjuncts to, but not substitutes for, modeling, simulation, and real-world evaluation.

Benefits of Human Performance Modeling

Each of the options discussed above may be appropriately applied to the process of system design and development. However, in some cases modeling offers advantages over other methods for obtaining the same, or similar, data. Examples of the advantages of human performance modeling are (1) its relative speed compared to other nonmodeling methods, (2) its ability to give insight into whole new approaches or applications, and (3) its cost effectiveness relative to dynamic simulation or real system experimentation.

In other cases, human performance modeling can provide benefits not obtainable by other methods. For example, a model can be used to provide one or more of the following:

· a systematic framework around which to organize facts;
· an integrative tool that prompts consideration of aspects of a problem that might otherwise have been overlooked; and
· a basis for extrapolating from the information given to draw new hypotheses about human or system performance.

Broadly speaking, a model is nothing more than some modeler's representation of some thing or process. It may not be necessary for a model to be highly accurate to be useful (for example, a map of some area of the earth that is depicted on a two-dimensional plane surface uses the "flat earth model," which is a misrepresentation, but the map is useful nonetheless). This suggests that the issue of model utility must be considered in addition to its validity, as long as its users recognize that a useful model is not necessarily completely valid in terms of process as well as output. As discussed earlier, a model may accurately predict the output; however, the process used to arrive at this prediction may not accurately reflect the way in which a human would arrive at the same outcome.

Genealogy of Human Performance Models

The history of HPMs dates back to World War II. Of interest are the antecedents, and possible components, of the approaches to modeling described in this report. Figure 1-1 summarizes this history diagrammatically by highlighting four main approaches to human performance modeling: information processing approaches, control theory approaches, task network approaches (network and reliability modeling), and knowledge-based approaches. Each of these developments is considered in turn.

[FIGURE 1-1 Genealogy of human performance models: information processing, control theory, task network, and knowledge-based approaches. Diagram not reproduced in this text.]

Information-Processing Models

The Mathematical Theory of Information (Shannon and Weaver, 1949), together with the ideas of Wiener (1950) concerning feedback-controlled systems that he called cybernetics, were the precursors of a whole new way to think about human behavior. Because it then became possible to think concretely about the abstract concept "information," and because information input, processing, and output represented human activities as well as activities that could be ascribed to a machine, it was only natural for the information-processing analogy to be extended to the analysis of human performance.

This new approach was typified by Broadbent (1958), who formulated a block diagram analysis of information flow in human perception and memory. Although Broadbent's ideas were qualitative, they laid the foundations for quantitative models of elementary human information processing operations. As Neisser (1967) pointed out, this approach is not a computer analogy in the sense that the brain behaves like a computer, but rather a programming analogy that gave rise to a viable research strategy founded on the idea of discovering the algorithms by which human information processing takes place.

This approach spawned models of visual search and identification, short- and long-term memory, reaction time underlying simple decision processes, and movement control, to mention just a few. It has led to numerous attempts to formulate block diagrams of human information processing. From the viewpoint of this report, however, the models were of isolated psychological functions rather than integrative human performance. The Human Operator Simulator (HOS), discussed in Chapter 3, was one of the first attempts to capture component information-processing concepts in the form of an aggregated model that might be applied to system design and evaluation (Lane, Strieb, and Leyland, 1979).

Control Theory Models

Interest in manual control models was first stimulated by the need to understand how humans control antiaircraft guns and other closed-loop systems. The seminal paper on this subject was by Tustin (1947), a British electrical engineer who fit first- and second-order differential equations to the experimentally observed transient response of the human operator to step-input signals. This was an insightful analysis based on the understanding of servomechanisms at the time. During this period a number of experimental studies systematically examined the effects of system variables on human tracking performance (Helson, 1944; Wilson and Hill, 1948; Rockway, 1955). At about this time Birmingham and Taylor (1954) published their landmark paper on "Man-Machine Control Systems." The concepts of quickening and aiding were introduced, and the theory was put forth that man operated most effectively when system constraints permitted performance analogous to that of a simple amplifier.

In 1956, Elkind provided the first comprehensive, systematic data and models of human control as a function of a variety of continuous band-limited Gaussian input signals and different controlled element dynamics. Elkind pioneered in the empirical measurement and analysis of power density and cross-power density spectra, as well as in the technology for measuring human tracking performance. Although the technology for such measuring has made giant strides since the 1950s, Elkind's data and analysis have never been seriously challenged.

Meanwhile, in the early 1950s, McRuer began advocating that analysis of the human pilot could be done in the same terms as analysis of the balance of the aircraft flight control system. He teamed with Krendel to generate new data and to undertake the first comprehensive review and analysis of all the manual control data available at the time. Their report, "Dynamic Response of Human Operators" (McRuer and Krendel, 1957), was the bible for work in this field for at least 10 years. McRuer and Krendel codified and systematized data in the form of quasilinear describing function models, together with rules for their adaptation, as a function of the variety of system variables known at the time. A spin-off of their analysis was the Crossover Model, a simplified conception based on the observation that when the human and the system were represented as a unit, a simpler form of the model resulted (McRuer, Graham, Krendel, and Reisener, 1965). In effect, the human adapted his behavior so that the combination behaved like a simple first-order system with limited bandwidth. It was also found that systems that approximated a simple integration and, therefore, allowed the operator to behave like a proportional controller (i.e., a gain or amplification factor) were preferred. This confirmed Birmingham and Taylor's "simple amplifier" tenet.
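In the frequency-domain notation that later became standard in the manual control literature (an illustrative restatement, not drawn from this report), the Crossover Model asserts that, near the crossover frequency, the combined open-loop describing function of operator and controlled element approximates an integrator with an effective time delay:

$$
Y_p(j\omega)\, Y_c(j\omega) \;\approx\; \frac{\omega_c \, e^{-j\omega \tau_e}}{j\omega}, \qquad \omega \approx \omega_c,
$$

where $Y_p$ is the describing function of the human operator, $Y_c$ is that of the controlled element, $\omega_c$ is the crossover frequency, and $\tau_e$ is an effective time delay. Whatever the controlled-element dynamics, the operator adapts $Y_p$ so that the combination behaves this way, which is the adaptive behavior described above.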

In the 1960s, modern control theory, using a state variable approach and optimization techniques that permitted closed-form solutions to complex control problems, was applied to the manual control problem. Baron and Kleinman (1969) proposed a model for the operator, based on optimal control and estimation theory, to account for both control itself and the information processing necessary to support it. This model was developed further, with contributions from Levison, and has come to be known as the Optimal Control Model (OCM). The OCM introduced the concepts of observation and motor noise as stochastic components of the operator that limited human performance. It also made explicit the need for an internal model of system inputs and dynamics as a prerequisite for successful tracking performance. These concepts have been used for the quantification of attentional workload in the context of manual control (Levison, Elkind, and Ward, 1971) and for exploring the question of what is learned as one acquires tracking skill (Levison, 1979). The OCM has been applied widely, and the information-processing portion of the model has been extended to tasks other than manual control.

The introduction of automation in aircraft cockpits and the vast increase in complexity of the avionics resulting from it have forced consideration of manual aircraft control in the larger context of aircraft systems management. These developments have led to the generalization of models to include the operation of management functions. The Procedure Oriented Crew (PROCRU) model (Baron, Zacharias, Muralidharan, and Lancraft, 1980) was a response to this need. PROCRU, a computer simulation model, is a derivative of the OCM that incorporates the execution of procedures in the context of manual control. It introduces the concept of expected net gain, a generalization of the performance index, as a means of predicting priorities among procedures to be executed.

Task Network Models

In parallel with these advances, the operations research community developed sophisticated models of system processes using a task network approach. With this approach a complex system is represented by a network of component processes, each modeled by statistical distributions of completion time and probability of success.

The resultant computer program is run as a Monte Carlo simulation to predict the statistical distributions of measures of overall system performance. The PERT methodology for management of system development was one outgrowth of this approach. Siegel and Wolf (1969) first applied task network modeling to predict human performance in a systems context. One innovative concept they introduced was that of a moderator function. Human capacities were postulated to be sensitive to certain global variables such as motivation or stress. To explore the impact of these variables, moderator functions shifted the time distributions or completion probabilities for all component tasks to be performed by the human operator based on the setting of the moderator function. This permitted sensitivity analyses to be run easily to test the robustness of performance in the face of variations in stress level or motivation.

At about the same time, Swain and his colleagues working at Sandia became concerned with human reliability in the Navy and, later, in the nuclear power industry. They collected data on the probability of successfully completing some elemental human operations such as closing valves, reading displays, or carrying out simple procedures. System reliability analysis, which predicts the performance of mechanical components in a systems context, proceeds according to methods not unlike network analysis. Swain (1963; Swain and Guttmann, 1980) developed methods for incorporating elements of the network, reflecting the reliability of both human and mechanical components of a system, in order to improve overall system reliability estimates.

The task network approach was further stimulated by the development of Systems Analysis of Integrated Networks of Tasks (SAINT), a simulation language specifically designed to make it easy to build task network models of human and system performance (Pritsker, Wortman, Seum, Chubb, and Seifert, 1974). This language has been used to study performance in a wide range of systems including digital avionics systems, command and control networks, and a hot strip mill; SLAM II represents the current state of the art with respect to task network simulation languages and modeling tools.
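The flavor of the approach, including the moderator-function idea, can be conveyed by a brief Monte Carlo sketch. Everything in it is hypothetical: the task names, distributions, probabilities, and the simple multiplicative form of the stress moderator are illustrative stand-ins, not taken from SAINT, SLAM II, or any published model.

```python
# Illustrative Monte Carlo sketch of a serial task network with a
# stress moderator, in the spirit of Siegel and Wolf (1969).
# All numbers, distributions, and the moderator's form are notional.
import random

# Each task: (name, mean completion time s, time std dev, success prob.)
TASKS = [
    ("read display", 2.0, 0.5, 0.99),
    ("diagnose condition", 8.0, 2.0, 0.95),
    ("close valve", 4.0, 1.0, 0.98),
]

def run_once(stress=1.0):
    """One pass through the network. The moderator `stress` shifts every
    task's time distribution and degrades its success probability."""
    total_time = 0.0
    for name, mean, sd, p_success in TASKS:
        total_time += max(0.0, random.gauss(mean * stress, sd))
        if random.random() > p_success / stress:  # failure at this task
            return total_time, False
    return total_time, True

def simulate(n=10_000, stress=1.0):
    results = [run_once(stress) for _ in range(n)]
    times = [t for t, ok in results if ok]
    reliability = sum(ok for _, ok in results) / n
    mean_time = sum(times) / max(len(times), 1)
    return mean_time, reliability

# Sensitivity analysis over the moderator setting, as Siegel and Wolf
# intended: vary stress and observe time and reliability distributions.
for stress in (1.0, 1.2, 1.5):
    mean_time, reliability = simulate(stress=stress)
    print(f"stress={stress:.1f}  mean time={mean_time:.1f}s  "
          f"reliability={reliability:.3f}")
```

The same skeleton accommodates Swain's human reliability concerns: the per-task success probabilities play the role of elemental human error rates combined through the network structure.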

Knowledge-Based Models

About the same time that component models of information processing were being developed, Newell, Shaw, and Simon (1958) and their colleagues began work on the development of computer programs capable of logical reasoning. This work was based on the realization that a computer is basically a device for manipulating symbols, and that solving numerical problems (the purpose for which computers were developed) is only one example of symbol manipulation. The work of Newell et al. led to the development of the General Problem Solver program (GPS; Newell and Simon, 1972), which was capable of mimicking many of the behaviors observed when people attempt to solve logical problems with the general complexity of those in Scientific American puzzle articles.

Newell, Simon, and their many colleagues and followers have pushed this work on knowledge-based models forward very rapidly, and today many of the logical and programming techniques that they developed are the heart of modern artificial intelligence and expert system programs. In addition, the concepts they developed for thinking about thought are central to today's study of cognitive psychology.

Most of the work in this field has centered on modeling human problem solving rather than human-machine systems. More recently, though, several experimental studies of limited human-machine operations have been conducted. Many people believe that human-machine system modeling is the wave of the future, especially for situations in which the modeling effort views a person as a planner rather than a sensor or movement controller.
