4 Issues and Research Recommendations

OVERVIEW

In examining the state of the art of human performance modeling as it applies to complex dynamic systems, a variety of models and modeling approaches exist and have been, or are being, used in meaningful applications. Nonetheless, there are issues concerning the technology of modeling that need to be addressed before human performance models (HPMs) can have the kind of impact envisioned by their proponents. In this report, the issues of principal interest are generic rather than specific to a particular model or approach.

First, there is a constellation of five interrelated issues that are associated with attempts to extend the scope and applicability of HPMs to the kinds of complex problems that are of concern here.

1. Complex/comprehensive models: Most existing HPMs have been developed only for relatively simple situations. Many of the real-world person-machine systems of interest today are highly complex, involving multiple operators, multiple tasks, and variable environmental or equipment contexts. Preferred methods for developing HPMs for these systems have not been identified.

2. Model parameterization: As models become more complex, the number of parameters related to human performance in the model is likely to increase. The human performance data necessary to specify the parameters, and therefore to support the HPMs, will be more difficult and costly to obtain. Existing data bases are unlikely to be adequate for a priori definition of the HPMs and, in most cases, data appropriate to the technology incorporated in current and anticipated person-machine systems will not be available.

3. Model validation: As models become more complex, they also become more difficult and costly to validate. This is as true for models in economics and physics as it is in human performance modeling. As such, comprehensive HPMs lack the kind of scientific validation that has been achieved for many simpler models. This is unlikely to change, and the feasibility of extensive validation of comprehensive models is problematical.

4. Underutilization/inaccessibility of HPMs: Most complex HPMs have not been used widely or subjected to independent evaluation. Unless some way to simplify their acquisition and facilitate their increased use is devised, this situation is unlikely to change.

5. Potential for misuse/misunderstanding: As models become more complex, they also become significantly more difficult to use. Misuse of models is potentially costly for the user and harmful to the credibility of the modeling community.

Three additional issues, related to modeling the future role of operators of complex systems and to recent developments and emphases in psychology, emerged from the working group's deliberations.

1. Accounting for mental aspects of tasks: In an attempt to deal with cognitive aspects of the operator's tasks, there has been increasing interest in incorporating mental models into HPMs. This is particularly true as the operator undertakes planning and other supervisory roles relative to semiautomatic systems. Methods and data for accomplishing this are ill defined.

2. Developing and using knowledge-based models: Along with the increased interest and popularity of artificial intelligence (AI), there has been a rush toward the development, integration, and use of intelligent or knowledge-based models (or submodels). The popularity of the concept may have outpaced methodological developments in the knowledge engineering (knowledge gathering and representation) necessary to support the development of HPMs.

3. Accounting for individual differences: The effects of individual differences have been largely ignored in HPMs to date, in favor of using average indices of human characteristics representing the ideal, fully trained operator. Many individual characteristics may have a significant impact on human, and therefore on system, performance and need to be considered.

These issues are elaborated below, and recommendations for addressing them are presented.

SPECIFICS

Complex/Comprehensive Human Performance Models

Issues

In the past, HPMs have tended to be designed or selected for specific situations and used to simulate a single-function, person-machine system. Examples include search models with sequential looks, signal detection models with successive samples, game theory models with successive moves and defined payoffs, and tracking models with defined limits. In many, if not most, real-world situations the system operator is faced with a mixture of tasks and inputs that vary along dimensions such as format, validity, importance, redundancy, cost, and response requirements. The underlying truth may be known only vaguely by the operator; intercorrelations may be significant but unknown; critical functions may interrupt routine ones. One result is that the person-machine system must reconfigure itself to handle different types of functions. An appropriate HPM should be capable of similar changes of focus and state. The implication of this is that a comprehensive HPM must incorporate a model to account for properties and performance consequences of human attentional mechanisms. That is, it must account for changes in focus of attention and the resulting effects on both subtask and total system performance.

These considerations and implications give rise to basic questions concerning the direction that the development of comprehensive HPMs should take:

- Should a supermodel be developed, based on a single overarching theory, that will predict all the performance of interest? If so, what are reasonable expectations for, or limitations of, such a model?

- Should comprehensive models be developed by providing a suitable framework for integrating existing unitary or single-task models? In concept, the set of existing single-task models could be combined, like Tinkertoys, into the most appropriate or efficient format for specific modeling tasks. In practice, the questions will be, What is a suitable framework? How does one interface models that have very different bases? and Are the component models additive?

- Should the total system modeling effort and the development of comprehensive HPMs simply be abandoned and existing models used as part-task analysis tools? In other words, should "business as usual" prevail, with research efforts aimed at improving existing models or developing single-task models for the new tasks of interest?

Two other issues relate to the appropriate scope of future complex/comprehensive HPMs. First, until 10 years ago, most HPMs focused on the structure of tasks of interest but not the context. Control theory, signal detection theory, information theory, and other extant approaches tend to capture the structure of tasks in general but not the specific meaning of model parameters in relation to particular situations. Certainly, most models reflect the context, but the context is not explicit. For example, models for performance on psychomotor tasks do not generally account for any differences that might be involved in initiating that task after completing or interrupting a cognitive task as opposed to another psychomotor task. Insofar as context changes may have a significant impact on the ability of humans to change their focus of attention, the explicit modeling of context may be an important component in the development of comprehensive HPMs.

Second, many systems of interest are sufficiently complex to require more than one human for operation. Quantitative models for group performance are seriously lacking. A complete and reliable empirical data base on group performance is not currently available. Moreover, it will be extremely difficult to obtain appropriate and generalizable data. The sources of variability are increased when teams of individuals are involved. Requirements such as the needs for trained subjects and relatively long experiments impose large financial and human burdens. Some important questions related to multiple-operator models include the following: How does one account for a crew member's internal model(s) of other crew members? Can factors such as social interactions and leadership be accounted for quantitatively? Is it possible to conceptualize team activity so as to distinguish clearly the components of performance associated uniquely with the interaction of team members from those associated with members acting individually?

Recommendations

Inasmuch as modeling attention is going to be an important component of any comprehensive modeling effort, it is recommended that fundamental research in the area, having a quantitative perspective, be pursued.

With respect to constructing comprehensive HPMs, it is highly unlikely that a single supermodel incorporating all levels of complexity could be developed in the foreseeable future or, for that matter, that it would be a particularly useful tool. A truly universal HPM is almost certain to be too complex to understand and use efficiently. In addition, it would incorporate large amounts of "excess baggage" for any specific application. Identification of the reasonable limitations to size and complexity in a functional HPM will, most likely, have to wait until models exist that go significantly beyond those of today. On the other hand, the "business as usual" approach to part-task modeling and the refinement of single-task models seems too narrow to have the kind of impact on HPM system design and evaluation that is needed and justified. Thus, models that rely on integrating various submodels should receive the most attention. Because the potential variety of HPM situations is great, and because it is premature to decide on one favored approach to developing comprehensive HPMs, a gradual extension or aggregation of well-validated models to deal with new or compound situations is recommended. The aggregated models should be validated experimentally to the extent possible.

It is also recommended that methods of accounting explicitly for context be explored. In particular, it appears that AI constructs may be relevant to this problem and that linking traditional numerical models with newer symbolic models, in an attempt to incorporate the richness of contextual situations within HPMs, would be an attractive area to explore.

Of the possible extensions to current modeling approaches, the first area to be investigated should be the development of models for tasks involving two persons. This would serve as a foundation for larger modeling efforts and for addressing multiperson modeling issues.

Model Parameterization

Issues

The problems associated with the parameterization of comprehensive HPMs will be substantially more difficult than those for simple, single-process models. To help understand the difficulties somewhat, note that model parameters generally fall into four classes:

1. Parameters that are defined by the initial conditions under study, such as hardware variables for which specification forms a part of the problem statement. Examples include the distance between controls or the maximum speed of a vehicle.

2. Parameters that form an integral part of the human operator model, but that may be assumed to be invariant (or have invariant distributions) over the range of conditions to be studied. These values or distributions may be estimated by aptitude achievement tests, laboratory experiments, theory, or assumption. Human time delays and observation noises in tracking tasks, reach times or eye movement times for a given hardware configuration, or memory recall times for certain tasks and contexts are examples.

3. Parameters that may vary from condition to condition, but for which theory or experiment defines the rules of variation contingent on the context in which they are to be assigned. For example, the parameters describing distributions for task completion times in task network models may be based on empirical data for the specific task/condition or a related one; they may be predicated on some theoretical basis such as Fitts' law; or they may be various parameters of the describing function models for control tasks that are specified on the basis of verbal adjustment rules resulting from theoretical considerations and empirical data.

4. Free or unknown parameters that are given assumed values at the time the model is exercised in order to predict performance, or are adjusted after the fact to produce the best fit to data obtained in experiments or operation. Examples are parameters related to human performance objectives such as cost function weightings or task criticalities.

A general goal in any modeling effort is to limit the free parameters used in predicting behavior to the smallest number possible. For relatively simple situations such as linear, time-invariant systems and normal distributions, there are strong theoretical results to help resolve the questions concerning numbers of free parameters, as well as algorithmic methods for estimating parameters and statistical results to establish confidence limits. Given the complexity of the systems of interest today and for the future (e.g., nonlinear, time varying, mixed discrete plus continuous distributions), existing formal system identification and parameter estimation methods are not likely to be applicable to the problem of rigorously identifying all the parameters of complex HPMs. This fact raises a number of significant questions:

1. How constrained must a model be in order to make useful predictions or generalizations?

2. Is there a reasonable ratio of unconstrained parameters to dependent variables that leads to useful models? How much uncertainty in parameter settings can be tolerated before the predictions lose accuracy and credibility?

3. Simple models have the potential to exploit statistical procedures for identifying parameter values and maximizing the goodness of fit to a data set. However, models of the scope considered here are less amenable to these approaches because of the complex interactions involved. Is it possible to devise systematic approaches to estimate parameters that do not have full statistical rigor, or even the rigor of efficient hill-climbing algorithms, yet provide some bounds on the time, effort, and confidence in the values obtained?
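As a purely illustrative sketch, the four-class scheme above can be expressed as a simple data structure, which makes it straightforward to audit a model's parameterization and count its free parameters. The parameter names and values below are hypothetical, not drawn from any actual HPM.

```python
from dataclasses import dataclass
from enum import Enum

class ParamClass(Enum):
    """The four parameter classes described above."""
    INITIAL_CONDITION = 1  # fixed by the problem statement (e.g., control spacing)
    INVARIANT_HUMAN = 2    # assumed invariant operator property (e.g., time delay)
    RULE_GOVERNED = 3      # varies by condition under known rules (e.g., Fitts' law)
    FREE = 4               # assumed or fit after the fact (e.g., cost weightings)

@dataclass
class ModelParameter:
    name: str
    value: float
    param_class: ParamClass
    source: str  # where the value came from (data base, theory, assumption, fit)

# Hypothetical parameter list for a small tracking-task model.
params = [
    ModelParameter("control_spacing_cm", 12.0, ParamClass.INITIAL_CONDITION, "design spec"),
    ModelParameter("time_delay_s", 0.2, ParamClass.INVARIANT_HUMAN, "laboratory data"),
    ModelParameter("completion_time_mean_s", 3.1, ParamClass.RULE_GOVERNED, "Fitts' law"),
    ModelParameter("cost_weighting", 0.5, ParamClass.FREE, "assumed"),
]

# The quantity the modeler wants to keep as small as possible.
free_count = sum(1 for p in params if p.param_class is ParamClass.FREE)
```

Classifying every parameter this way, as the recommendations below urge, would also expose at a glance which values rest on data and which on assumption.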

Some of these questions seem to depend on the particular model or application domain being considered, but some general statements might be made as experience with alternative model forms accumulates.

An issue closely related to questions about the number and disposition of free parameters within a model is the quality and validity of the data base from which values for those parameters are drawn. Questions related to this issue include the following: What is the quality of the data used to establish values for parameters within a model? How good was the quality of the data base on which the model was first established? From how wide a population were the data collected? Is the data base population representative of the prospective system operator population?

Recommendations

The true degree and nature of the parameterization of particular HPMs is often opaque to all but the model developers. It is recommended that in documenting HPMs, developers be encouraged to identify and classify all parameters of the model. It would be useful if general classification schemes were employed in the process. The four classes given above represent one classification scheme. This scheme may have to be augmented to reflect parameters related specifically to computer implementation of the HPM, such as sample rate and bit size.

Research into systematic methods of parameter identification, estimation, and evaluation for complex HPMs is needed. For example, the impact of trade-offs between the number of parameters that must be estimated from data in live simulations and the number of system performance measures to be predicted from the HPM in simulation should be examined. It should also be a goal of research to develop estimation techniques that aim at uncovering distributions of parameter values, rather than simply point estimates, so that HPMs can be used to predict the range of expected performance and not just average performance.

Existing human performance data bases should be reexamined to determine their relevance for specifying parameters of HPMs. However, efforts will probably be required to develop a more systematic data base for HPM development. Such efforts are to be encouraged.

Problems With Validation

Issues

Issues of validity have been difficult to resolve for simple models. They will be substantially harder to address for the complex models required in future applications. Most models have been validated only for single-task situations. The human's ability to perform a particular task may depend on the nature of the other tasks for which he or she is responsible. For example, it may be that sequentially moving among regulation, recognition, and problem-solving tasks can lead to degraded performance relative to that achieved in single-task situations. On the other hand, different tasks may be complementary in the sense that the performance of one task may make performing another task easier. For example, there may be a natural relationship between tasks, in terms of information requirements, that leads to transfer from one to the other. Although it is probable that the models developed for single tasks will ultimately prove suitable as constituent models in an overall multitask formulation, most models have not yet been validated in this manner. Relevant questions include the following: To what extent can models of limited scope that have been validated independently in a research environment be assumed to be valid when incorporated as submodels into an integrated model? Which single-task models can be combined to yield valid multitask models?

In many computer-based systems, operators serve a supervisory, rather than a direct control, function. As such, the amount of human sensory-motor performance data available for comparison with model performance data will be limited. How is the operator's cognitive contribution to be modeled? How does one validate a model for the long periods in which there is little or no overt behavior?

With mathematical or simulation models one usually looks for quantitative validation or tests of model accuracy. For models of complexity sufficient to represent full-scale human-machine system performance, problems of validation go well beyond selection of the proper goodness-of-fit statistics. The standard theoretical and statistical assumptions and constructs used for testing or validation of simpler models, such as linear systems, normal distributions, and point estimation, may be wholly inadequate. Are existing tools adequate for the validation process? If so, what are they; if not, can they be developed?

As a result of these difficulties, comprehensive HPMs lack extensive scientific validation: a compilation of several independent, critically examined studies showing that in a variety of human-machine systems the crucial statistics on operator performance are in close agreement with the statistics predicted by a comprehensive HPM. Such a body of data does not exist for any current comprehensive model. Furthermore, developing such a data base would be an extremely expensive and time-consuming process requiring extended studies of several large-scale human-machine systems.
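The comparison of predicted and observed "crucial statistics" described above can be sketched in a few lines. All numbers here are hypothetical, and the tolerance check is a crude utility-style criterion, not a substitute for proper statistical validation.

```python
import math

def rmse(predicted, observed):
    """Root-mean-square error between paired model predictions and data."""
    assert len(predicted) == len(observed)
    return math.sqrt(sum((p - o) ** 2 for p, o in zip(predicted, observed))
                     / len(predicted))

def within_tolerance(predicted, observed, rel_tol=0.15):
    """Crude practical-utility check: every prediction within 15% of the data."""
    return all(abs(p - o) <= rel_tol * abs(o)
               for p, o in zip(predicted, observed))

# Hypothetical "crucial statistics" on operator performance, e.g., RMS
# tracking error measured under three experimental conditions.
model_predictions = [1.10, 1.45, 2.00]
experiment_means = [1.02, 1.50, 2.20]

fit_error = rmse(model_predictions, experiment_means)
acceptable = within_tolerance(model_predictions, experiment_means)
```

Even a simple tabulation like this, published for independent judgment, is the kind of model-versus-data comparison the recommendations below call for.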

While one would certainly like to see the results of such a program, it may be unrealistic to expect them. However, there are other ways to evaluate a model. A model may have demonstrated an adequate level of practical utility by repeatedly producing satisfactory answers to real-world engineering questions. Ultimately it is the user, not the model developer, who decides if the model has sufficient utility. To determine whether or not this is true, the user needs access to comparisons of model predictions and experimental results relevant to the applications of interest.

Recommendations

It is recommended that HPM practitioners and users continue basic validation research using standard mathematical and experimental techniques while actively pursuing the development of additional validation tools. In particular, methodological studies to identify and examine the usefulness of new validation concepts are recommended. These studies should allow for varying degrees of precision and accuracy.

To facilitate the user's decision-making process with regard to model utility, there is a need for practitioners or users to collect and publish comparisons of models versus experimentally obtained data for independent judgment of model scope and predictive accuracy. There is also a need for comparative evaluations across models (applications, performance, and validations). It is recommended that the feasibility of benchmark testing for the relative utility of models be explored. One major component of that exploration would be identification of the numbers and types of tasks and tests required to fairly evaluate the comparative strengths and weaknesses of various models and modeling approaches for a variety of applications.

Underutilization/Inaccessibility of Human Performance Models

Issues

Considerable use is made of specialized HPMs, which are often constructed for the task at hand. Relatively little use is made of large, comprehensive HPMs except by their developers and groups associated with them. There are three barriers to more extensive use. Up to the present, comprehensive models have been available only on a few computing systems, and learning how to operate the programs has been difficult. A second barrier to the use of models has been a general unfamiliarity with the concepts of human performance modeling in general and the conceptual basis of the particular comprehensive model of interest. Finally, potential users have not had sufficient faith in the utility of the models to invest time and effort in acquiring and learning how to use them.

The first barrier to using comprehensive models, accessibility and ease of use, is now being addressed. Models are being rewritten to run on the personal computers and workstations that have come into widespread use and are easy to learn how to operate. The issues of learning about models and relying on them are more subtle. The problem is a circular one. Models must be exercised repeatedly to demonstrate their utility. People learn to use techniques that they perceive to be useful. However, until enough use is made of comprehensive models to demonstrate their utility, people will not invest the time required to learn to use them.

Recommendations

In general, efforts should be made to reduce the costs of comprehensive HPMs and to make available to potential users enough information so that they can make an informed decision concerning model use. When relevant experts, not just the original developers, find a comprehensive HPM to be useful, government agencies should support the development of easily used versions on the most inexpensive machines possible. This support should include the development of user-friendly interfaces and documentation. Support should also be provided for the publication of papers describing the scientific basis for the model in sufficient detail so that potential users can evaluate its appropriateness for their own projects.

Users of comprehensive HPMs should be encouraged to publish both positive and negative experiences with them. Sponsors of model use should regard such publications as appropriate activities for funding and should encourage preparation of the necessary reports as part of a systematic program of model improvement. Whenever possible, these publications should be presented in the open, refereed literature. Sponsors of model development and use should insist on this provision. Potential users are at present handicapped in making decisions about model use by the absence of independently evaluated, easily accessible reports by both developers and prior users.

There is also a need to locate, review, and integrate the applications that have already been published in sources such as the IEEE Transactions and the Proceedings of the Annual Conference on Manual Control. Unfortunately, funding is easier to obtain for new efforts than for efforts aimed at determining and integrating what is already known. One possible approach to the necessary integrative effort is to provide an explicit mandate to Department of Defense (DoD) Information Analysis Centers (e.g., CSERIAC) to review, synthesize, and update the HPM efforts that have already been published.

Potential for Misuse or Misunderstanding

Issues

The HPMs discussed here are complex, not completely mature, and not fully documented. Currently, their use requires a significant degree of expertise with, and a detailed understanding of, the model or modeling approach. As problems of underutilization and inaccessibility are resolved, the risk of misapplication or abuse of assumptions and limitations may increase. For many models, the underlying assumptions are fully understood only by model developers. Moreover, key assumptions can exist in any of the following areas: assumptions about the operator (e.g., steady-state behavior, nature of performance limitations, level of training/alertness, error rate); mathematical assumptions (e.g., correlations among certain inputs, randomness of events, linearity of relationships, statistical independence of events/activities); and assumptions concerning the computing facilities and software (e.g., 8 bits versus 32 bits, memory capacity, methods for propagating dynamic equations). Assumptions of this type are required to define the model in an analytically and computationally tractable form. However, user problems can arise from a lack of explicit knowledge of the specific assumptions within the selected model, and a lack of guidelines as to the significance of departures from assumed conditions.

Recommendations

If models are to be used effectively, agencies funding the development of models must begin to provide funding for the production of careful technical documentation on the models. This is a nontrivial cost that must be borne to allow for proper evaluation and application by users other than the model developers. Documentation of fundamental assumptions, theoretical bases, and embedded data, as well as software implementations, should be a deliverable in contracts involving the development of a human performance model that is proposed for immediate or near-term application. However, research efforts in fundamental aspects of HPMs should not be impeded by such requirements.
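One hypothetical way to make the recommended documentation of assumptions deliverable and checkable is to record it in a structured form covering the three assumption areas noted above. The field names and entries below are illustrative only, not a prescribed format.

```python
# A hypothetical, machine-readable record of a model's key assumptions,
# organized by the three areas identified above. Entries are illustrative.
model_documentation = {
    "model": "example tracking-task HPM",
    "assumptions": {
        "operator": [
            "steady-state behavior",
            "fully trained, alert operator",
        ],
        "mathematical": [
            "linearity of relationships",
            "statistical independence of events",
        ],
        "computational": [
            "32-bit arithmetic",
            "fixed 0.1 s integration step",
        ],
    },
}

def undocumented_areas(doc,
                       required=("operator", "mathematical", "computational")):
    """Flag any required assumption area that is empty or missing."""
    stated = doc.get("assumptions", {})
    return [area for area in required if not stated.get(area)]

missing = undocumented_areas(model_documentation)  # no areas missing here
```

A check like this could be run as part of contract acceptance, so an HPM could not be delivered with a whole assumption area left undocumented.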

Given the potential for misunderstanding or misuse of models, they should be exercised by individuals with training specifically related to human performance modeling. One way to ensure this is to require that people having input into the human engineering of systems be trained in the use of HPMs, either as part of their basic educational curriculum or as part of a continuing education effort for established professionals. Any efforts on the part of model developers to provide user-friendly interfaces for their products should be directed at the normative user population.

Regardless of who the user of an HPM is, a need exists for better user interfaces to HPMs. The output of the HPM must be usable by the person who needs the product. The input should be easy to enter and guided or assisted by information embedded in the computer implementation of the HPM. The possibility of model developers providing expertise that is incorporated as inspectable knowledge in the software should be explored (i.e., expert systems to aid model application). It should be recognized that because the development of user interfaces is not a prime research interest of model developers, such efforts will have to be undertaken by those supporting model development and will certainly necessitate additional funding.

Many of the misuses of a model result from factors such as poor input data and lack of awareness as to the range of its validity. These problems can often be overcome by sensitivity analyses with the HPMs. In particular, it is recommended that model results not be accepted unless accompanied by sensitivity analyses with respect to input parameters and data. These analyses should serve to identify the range of expected performance as well as key assumptions or parameters for which highly reliable data are needed. They should also provide the guidelines and forms for follow-up, person-in-the-loop simulations. It is also recommended that the methodology for conducting such sensitivity analyses be investigated. This would provide data on the robustness of the model. Because of the large number of parameters likely to be involved, it is important to perform these analyses effectively. At present, it is not clear how this can be done.
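The recommended sensitivity analyses can be sketched, under simplifying assumptions, as a one-at-a-time perturbation of each input parameter. The toy response-time model and all parameter values here are hypothetical; a real analysis would face the combinatorial issues the text raises.

```python
def one_at_a_time_sensitivity(model, baseline, fraction=0.10):
    """Perturb each parameter by +/- `fraction` while holding the others at
    baseline, and report the resulting spread in the model output."""
    base_out = model(**baseline)
    spread = {}
    for name, value in baseline.items():
        lo = model(**{**baseline, name: value * (1 - fraction)})
        hi = model(**{**baseline, name: value * (1 + fraction)})
        spread[name] = max(abs(lo - base_out), abs(hi - base_out))
    return base_out, spread

# Hypothetical two-parameter response-time model: RT = delay + k / bandwidth.
def toy_model(delay, bandwidth, k=1.0):
    return delay + k / bandwidth

baseline = {"delay": 0.2, "bandwidth": 4.0}
base_rt, spread = one_at_a_time_sensitivity(toy_model, baseline)
ranked = sorted(spread, key=spread.get, reverse=True)  # most influential first
```

The ranking identifies the parameters for which highly reliable data are most needed, which is exactly what the recommendation above asks a sensitivity analysis to deliver; note, though, that one-at-a-time perturbation ignores interactions among parameters.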

OCR for page 72
. 84 QUANTITATIVE MODELING OF HUMAN PERFOR~fANCE The exploration of mental models to account for the cognitive aspects of task performance Is receiving increasing attention in both the psychology and the modeling communities. Unfortunately, the catchall term mental a models, although popular, is not sufficiently well defined and understood to be particularly useful for human performance modeling. There is an underlying assumption that changes In mental models lead to changes in performance. However, there are a number of difficulties involved in attempting to build a mental model of a particular system: mental models tend to be incomplete; they are dynamic (and thus unstable); they are different for different users; they include contradictory, erroneous, or unnecessary concepts; and they are context specific. These characteristics pose some critical questions that must be addressed: What are the requirements for identifying an operator's mental model that may be integrated into an HPM? How does one measure and describe the cognitive behavior or performance of the operator? Recommendation Efforts in describing cognitive functioning in computational terms should be supported. 1b be most useful, cognitive models need to be developed at a concrete, operational level of representation so that they can be incorporated in existing HPMs and model behavior can be com- pared with measurable operator data. In addition, cognitive models that place more emphasis on psychologically valid descriptions of, rather than prescriptions for, behavior are required. Developing and Using Knowledge-Based Models Issues Many developers, regardless of their primary approach, attempt to incorporate elements of the knowledge-based approach into their models. 
One reason for this is that the knowledge-based approach appears to be well suited for implementing the cognitive models discussed above; however, the procedures are very individualistic and the criteria for model validation are unknown at present. Therefore, for knowledge-based models to gain acceptance as a valid approach, additional research and testing are required. The use of linear statistical models and linear control-theoretic models has benefited greatly from the availability of identification methods, as well as ways of testing the goodness of fit of such models. Current practice in knowledge-based modeling suffers from a lack of such methods, relying instead on subjective analysis of protocols and other knowledge-engineering methods.

Recommendation

Some initial work on identification and testing of knowledge-based models has been done. However, much more effort is recommended if this approach to human performance modeling is to achieve a reasonable level of methodological rigor.

Accounting for Individual Differences

Issues

Humans differ from one another in a number of physical, cognitive, and emotional ways. Some of these differences are easily quantified (such as visual acuity or reaction time). Others, such as motivation, are more difficult to quantify. Although not all differences have an impact on the performance of person-machine systems, which differences are significant in a given circumstance is not always clear.

In general, HPMs have not focused on the effects of individual differences for several reasons. First, the problems of interest (e.g., pilot performance) have been ones in which the range of permissible human characteristics and behavior was constrained through selection and training so that the effects of individual differences on system performance, and therefore the need to model them, were minimized. Second, the relationships between model parameters and context-free, measurable individual differences are not known. Third, the relevant data on the range of values for individual characteristics often do not exist and are difficult and expensive to obtain.

As noted elsewhere in this report, the systems of current and future interest inherently allow for more operator discretion. Because of a reduced emphasis on physical ability, new systems may use a greater variety of operators. In addition, system designers are increasingly interested in tailoring their systems to individual operators as advanced automation provides the opportunity to do so.
For these reasons, it is becoming increasingly important that HPMs be able to incorporate individual differences.

Recommendations

Rather than attempting to collect data on all possible individual differences in all relevant contexts, it is recommended that existing HPMs be used to assess the sensitivity of system performance to variations in

operator characteristics. This will entail systematically manipulating the model to determine which human characteristics significantly affect system performance and to identify the range of acceptable variation for each, within which the system functions at an acceptable level. Thus, HPMs can be used to define their own data requirements. A list of key characteristics would enable more economical and more feasible data collection on individual variation. It is recommended that the users of models engage in this sort of experimentation and convey their results to other practitioners for additional testing and evaluation.

CONCLUSION

Given the current state of the art in human performance modeling, is the methodology ready to be an integral part of the system design process? Although the methodology has a number of admitted weaknesses, it also has the ability to make a number of unique contributions to the process of system engineering.

By beginning modeling efforts early in the design process, a formal means is provided for considering the impact of human performance capacities and limitations on the range of design issues that must be confronted while there is still time to resolve them. An early modeling effort can provide quantitative and qualitative analyses that allow design trade-off studies to include a variety of human performance factors along with other system variables. This process forces consideration of the assumptions and design decisions which underlie assertions that the system will work with available personnel.

In all, there are compelling reasons to believe that systematic human performance modeling efforts should be regularly advocated and used, along with expert judgment and manned part- and full-task simulation, as a regular part of the design process for large-scale human-machine systems.
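The sensitivity-analysis strategy recommended in this chapter, for model parameters and for individual operator characteristics alike, can be sketched computationally. The following is a minimal illustration only: the model, its parameter names, and the performance function are all invented stand-ins, and a practitioner would substitute an actual HPM. The sketch perturbs each parameter one at a time about its nominal value and ranks parameters by the resulting swing in a system-performance measure, so that the model defines its own data-collection priorities.

```python
# Hypothetical one-at-a-time (OAT) sensitivity analysis for an HPM.
# The "model" below is a toy stand-in: system performance is an
# arbitrary function of three invented operator characteristics.

def performance(params):
    """Toy HPM: higher output means better system performance (illustrative only)."""
    rt = params["reaction_time_s"]        # operator reaction time, seconds
    acuity = params["visual_acuity"]      # normalized 0..1
    capacity = params["workload_capacity"]  # normalized 0..1
    return 100.0 * acuity * capacity / (1.0 + rt)

def oat_sensitivity(model, nominal, rel_step=0.10):
    """Perturb each parameter +/- rel_step about its nominal value and
    report the swing in model output, largest (most sensitive) first."""
    base = model(nominal)
    swings = {}
    for name, value in nominal.items():
        low = dict(nominal, **{name: value * (1 - rel_step)})
        high = dict(nominal, **{name: value * (1 + rel_step)})
        swings[name] = abs(model(high) - model(low))
    # Parameters with the largest swings are the ones whose empirical
    # ranges are worth the cost of measuring.
    return base, sorted(swings.items(), key=lambda kv: -kv[1])

nominal = {"reaction_time_s": 0.4, "visual_acuity": 0.9,
           "workload_capacity": 0.7}
base, ranked = oat_sensitivity(performance, nominal)
for name, swing in ranked:
    print(f"{name}: output swing {swing:.2f}")
```

A one-at-a-time scan like this ignores interactions among parameters; for the highly parameterized models discussed above, factorial or variance-based designs would be needed, which is precisely the methodological gap the recommendation identifies.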