4
Promising Developments in Human Behavior Research

Although still in the early stages of its deliberations, the panel has identified a number of areas in which researchers have produced models of human behavior that hold promise for future consideration by the Defense Modeling and Simulation Office. Although following the methodology described in Chapter 3 is important, we emphasize that a critical need for further advancement of the field is to introduce more realistic representations of psychological and organizational processes into the models. The areas of human behavior research presented in this chapter are those with which the panel is most familiar; they should not be considered a complete list. In the second phase, the focus of the panel's work will be to conduct a more in-depth review of these and other areas for the purpose of recommending a plan for research and development. The following sections provide a brief description of current human behavior modeling research efforts in learning and memory, attention and performance, decision making, situation awareness, and organizational structure.

Learning and Memory

Learning, one of the oldest topics in psychology, has recently become one of the most intensively studied topics in cognitive science. It is also currently one of the most studied topics in computational organizational theory. As we said earlier, learning can be defined as the gradual accumulation of knowledge through experience, generally as the result of repeated exposure. Memory has been the focus of extensive modeling efforts for the past 30 years. In memory research the concern is usually the storage and retrieval of information presented only once
(or only a very few times), whereas in learning research the concern is usually the training of associations across a fairly large number of trials or examples. However, what is learned must be stored in memory, and learning offers one way of thinking about the process of memory encoding. Thus, there is no clear dividing line between these areas of research. Advances in modeling memory and retrieval are mentioned in this section, although learning through experience may be of the most importance for defense simulation purposes.

Learning

Several different types of learning have been defined in the literature. The most relevant for military purposes is learning the best action to take under a particular environmental condition, based on experience with previous consequences and anticipation of future changes in the environment or task. At the individual level, three basic approaches have been developed for constructing formal or computational models of learning: (1) the rule-based approach developed in the traditional artificial intelligence (machine learning) literature; (2) the exemplar approach, sometimes referred to by computer scientists as case-based learning, developed in the psychological literature; and (3) the neural network approach, developed in the cognitive science and engineering literature. At the unit level, two approaches have been taken: (1) the adaptation of the single unit and (2) the evolution of a set of units. Models of single-unit adaptation use either simulated annealing or neural networks; however, genetic programming can also be used. For evolutionary models, genetic algorithms or combined neural network genetic algorithm procedures have been used. With respect to the military context, these learning models can be applied at different levels. One level is to describe the learning process at the individual agent level. Another is to describe the internal adaptation process within the unit at the organizational or team level.
Still another is to describe the evolutionary process for groups of organizations.

The rule-based approach employs a production rule framework for representing knowledge. There are at least four basic mechanisms used to learn new rules with this approach: (1) to strengthen rules that lead to successful outcomes, (2) to discriminate by adding conditions before a rule can be applied, (3) to generalize by removing conditions before a rule can be applied, and (4) to use chunking, a procedure for creating new rules that encapsulate the lessons learned through searching for a solution. Good examples of a rule-based learning approach can be found in Anderson (1990) and in Laird et al. (1986). The advantage of these approaches is that they readily fit into the production rule framework that is currently used in most military simulations.

The exemplar-based approach employs a multidimensional vector space representation of experience with previous episodes of situations, actions, and outcomes. Memory is based on storage of previous examples, with some decay over
time. Learning occurs by retrieving stored examples that are similar to the present situation and selecting the action that produced the most successful outcome in the past under the current conditions. Good examples of the exemplar approach can be found in Logan (1988) and Nosofsky and Palmeri (in press). The exemplar models have proven to be more successful than rule-based models in experimental studies of learning. The exemplar approach also can be implemented in rule-based systems without too much difficulty.

Neural network models are inspired by elementary principles of neuroscience. Formally, neural networks are dynamic systems that describe the flow of activation from one field of nodes to another. Information from the environment is represented by a distribution of activation across an input field of nodes, and retrieval occurs when the activation in the network settles to an equilibrium output response state. Learning is achieved by updating the weights that connect one field of nodes to another. Learning is formalized as a gradient descent algorithm designed to minimize an objective (error) function. Some advantages of neural networks are their robustness to noisy inputs, their graceful degradation with damage (as opposed to the brittle behavior of rule-based systems), and their ability to react intelligently to novel situations (provided that the situation has some similarity to previous experience). Good examples of neural network models for learning are Plaut et al. (1996) and Kruschke (1992).

Memory and Retrieval

Of central importance for simulation purposes are two facts: (1) new information may be encoded incompletely, poorly, or in a form not conducive to retrieval and (2) new information, poorly encoded information, or even well-known information may not be retrieved (especially under time pressure).
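The exemplar scheme described above — store episodes, retrieve by similarity, favor actions with good past outcomes — can be sketched in a few lines. The exponential similarity function is a common choice in exemplar models, but the feature coding, the action names, and all outcome values below are illustrative assumptions, not drawn from any cited model:

```python
import math

def similarity(a, b):
    """Exponential-decay similarity between feature vectors (an
    illustrative assumption; exemplar models use various metrics)."""
    return math.exp(-sum(abs(x - y) for x, y in zip(a, b)))

def choose_action(episodes, situation):
    """Weight each stored episode by its similarity to the current
    situation, then pick the action with the best similarity-weighted
    mean outcome."""
    totals, weights = {}, {}
    for features, action, outcome in episodes:
        w = similarity(features, situation)
        totals[action] = totals.get(action, 0.0) + w * outcome
        weights[action] = weights.get(action, 0.0) + w
    return max(totals, key=lambda a: totals[a] / weights[a])

# Invented episodes: (situation features, action taken, outcome in [0, 1])
episodes = [
    ((1.0, 0.0), "direct_fire",   0.9),
    ((0.9, 0.1), "direct_fire",   0.8),
    ((0.1, 1.0), "indirect_fire", 0.7),
    ((0.2, 0.9), "direct_fire",   0.1),
]

print(choose_action(episodes, (0.15, 0.95)))  # resembles the last two episodes
```

Note that the choice is driven entirely by stored experience: a novel situation is handled by generalizing from whatever past episodes it resembles, which is the property the text highlights.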
One method of classifying models of memory and retrieval also involves sorting these into rule-based, exemplar-based, and neural-network-based models. Examples of rule-based approaches include ACT (Anderson, 1993) and the rational model approach (Anderson, 1990). Neural network models include those of Anderson et al. (1977), Chappell and Humphreys (1994), McClelland and Chappell (1994), Grossberg and Stone (1986), Murdock (1982), and Metcalfe-Eich (1982). Exemplar-based models include those of Hintzman (1988), Raaijmakers and Shiffrin (1980, 1981), and Shiffrin and Steyvers (in press). Possibly the best worked out and easiest to use model at present is the SAM model of Raaijmakers and Shiffrin. However, for many simulation purposes, it may be less important to incorporate any single model than to recognize the variability and imprecision of retrieval. More generally, learning and memory models may be integrated into a common theoretical framework. The basic principles are the same, but the application to the different types of tasks involves additional assumptions and specifications. For example, exemplar-based models may incorporate the same basic
principles for categorization and memory recall concerning storage and retrieval of examples, although the use of this retrieved information will differ depending on whether the task requires a categorization response or an exact reproduction of a list of items.

Attention and Performance

It is virtually a truism that the chief limitation on human behavior in most tasks is the capacity of the human processing system. Since attention is the mechanism by which this capacity is allocated, whether one conceives of a single pool of capacity or of multiple and relatively separate capacities, it is essential for accurate modeling that attentional processes be represented properly in human behavior representations. Without attentional constraints, the simulated combatant possesses superhuman capacities that would produce grossly unrealistic training and test environments for humans and overly optimistic results for performance. During the past 10 years, psychological research has shown that the limitations of capacity can be overcome in large part through training, learning, and experience, all of which develop automatic processes that can produce actions and decisions while demanding far less capacity than is true prior to training (for descriptions see Shiffrin and Schneider, 1977; Schneider and Shiffrin, 1977; Shiffrin, 1988; Logan, 1988). The development of automatic processes is not without cost, of course, because the actions and decisions that result may reflect a lack of flexibility and adjustment to new circumstances. A proper simulation of human behavior for defense purposes must therefore reflect the tension between the need for flexible and appropriate actions and decisions in light of the local situation on one hand, and the need to free up scarce resources by relying on already learned responses on the other. For example, on the battlefield, soldiers are trained to employ direct fire to engage the enemy.
However, there are many situations in which indirect fire is more appropriate. Selecting the indirect fire option requires evaluating the situation and overriding the automatic, learned response. The need to develop automatic responses can be considered a form of learning, as discussed in the previous section.

In this section we touch on promising and current models of attentional allocation. By far the greatest effort in the field has been aimed at developing models for low-level attention, which governs perception and elementary motor actions. Some additional work has gone into developing models of allocation in short-term memory, although these may not in principle be different. A few partial exceptions to the rule that attentional models are aimed at low-level issues and relatively simple tasks are found in the human factors literature and some military applications (e.g., Wickens, 1984; Schneider and Shiffrin, 1977; Fisk and Eggemeier, 1988). Even in the perceptual and motor attentional approaches, few general simulation models have been proposed, most models being aimed at very specific tasks and issues. For example, a good deal of effort has gone into the question of the allocation of visual attention in time and space. Much research has shown that many performance deficits once thought to lie in limited perceptual and motor capacities instead are due to inevitable decisional limitations imposed by the structure of particular tasks (e.g., Shiffrin and Gardner, 1972; Palmer, 1994; Sperling and Dosher, 1986). In cases in which attentional allocation does play a role in determining performance, considerable effort has gone into distinguishing models in which attention moves continuously in time and space from models in which it jumps in quantal fashion from one time-space location to another (e.g., Sperling and Weichselgartner, 1995). Quite recently some attempts have been made to produce more general models of attention and performance that would better generalize across tasks and settings (e.g., Meyer and Kieras, in press).

Also relevant to the discussion of attention and performance are the issues involved in the scheduling of mental processes. For example, some mental tasks must be performed in serial order (one at a time), whereas other tasks allow for parallel processing (several simultaneously). Various combinations and mixtures of serial and parallel processing are also possible. A good example of work in this area is Liu (1996); for a comprehensive review, see Townsend and Ashby (1983). Attentional models applying to higher-level planning, problem solving, and decision making, still in their infancy, may be of critical importance for military simulations. Some attentional concepts, however, have begun to appear in decision-making models, in the form of weightings on information, differential use of information at different temporal and spatial removes, and differential actions and decisions under different time pressures. These are described in the next section.
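The serial/parallel contrast above can be made concrete with a toy completion-time calculation: under a strictly serial architecture the total time for a set of subprocesses is the sum of their durations, while under an unlimited-capacity parallel architecture it is their maximum. The stage names and durations below are invented for illustration:

```python
# Completion time of a set of mental subprocesses under two
# architectural assumptions (durations in ms, purely illustrative).
durations = {"detect": 120, "identify": 250, "select_response": 180}

def serial_time(d):
    """One process at a time: total time is the sum of the durations."""
    return sum(d.values())

def parallel_time(d):
    """All processes at once, unlimited capacity: total time is the
    duration of the slowest process."""
    return max(d.values())

print(serial_time(durations))    # 550
print(parallel_time(durations))  # 250
```

Real architectures (and the mixtures mentioned above) fall between these two extremes, which is one reason distinguishing them empirically is hard.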
Decision Making

The current military simulation models reviewed by the panel make very little use of sophisticated decision-making mechanisms. Decision making in these models is essentially reduced to checking preference values over a list of proposed actions selected for the current situation. One serious problem with this method is that the preferences have to be acquired from experts for every single rule and situation, which is an extremely costly procedure. Furthermore, this method provides no means for evaluating new actions (without consulting an expert), making it impossible to evaluate innovative plans generated in response to new contingencies. Another problem with the current approach is that actions are picked deterministically for any given situation, so that an opponent can easily anticipate and exploit them. Incorporating more sophisticated decision models, such as those described below, would allow more general procedures for evaluating new actions. Both normative and behavioral approaches can be used to address two types of decision problems: one is competitive games (decisions involving several intelligent competing agents), and the other is individual decision making (from the point of view of a single individual agent). With respect to the military context, the literature on individual decision making is most useful for individual combatant decisions; at the level of command and control, the literature on organizational learning is most relevant.

Command-Level Models

In principle, command-level decisions should reflect a fair degree of rationality or optimality with respect to the evaluation of plans and the allocation and reallocation of resources and tasks. At the command level, however, the quality of the decisions is a function of a variety of factors, including situation awareness and constraints imposed by the rules of operation. Some command decisions involve response to an opposing force; others involve coordination with supporting forces. No single theoretical tradition is sufficient or has a comprehensive model for all types of command-level decisions. For example, work in operations research and in distributed artificial intelligence has addressed the issue of dynamic resource allocation. Work in game theory has addressed issues of competition against a single enemy. Work in computational organizational theory has addressed issues of coordination and evaluation of changes in command, control, and communication structures. Elements of command, such as the need to assess OPFOR operations, make strategic decisions, and reassign resources, need to be represented in team- and unit-level models, as well as in models of commanders. These representations can be used in different ways and so can require different types of models. For example, if a representation of the commander is to be part of a training model for a subordinate, it may be desirable to model the commander as making decisions that, though satisfactory, are not necessarily optimal.
For models of the OPFOR commander, it may be desirable to make the commander act in an optimal manner. Depending on how they are to be used, models of high command thus may or may not be based on optimality or learning models. An optimality model is one in which the simulated agent tries to locate the optimal solution for a problem; game theory and operations research optimization methods are examples. A learning model is one in which the simulated agent tries to locate a satisfactory or optimal solution and uses past experience and knowledge to guide behavior. There are also constrained optimization models that can be adjusted to act like satisficers, such as simulated annealing. Depending on the goal, there are a number of tools for approaching this problem. A variety of models essentially try to locate the optimal solution given a set of constraints. One of the earliest to attract attention is game theory. More recent approaches, which may be of greater value with regard to command and
control, include dynamic programming, simulated annealing, and genetic programming. Incorporating some form of optimality modeling into models of high command may improve their performance.

Game theory is concerned with rational solutions to decisions involving multiple intelligent agents that have conflicting objectives. Models employing the principle of subgame perfect equilibrium (e.g., Myerson, 1991) could be used for simulating command-level decisions because they provide an optimal solution for planning strategies with competing agents. Decision scientists employ decision trees to represent decisions in a complex dynamic environment. Although the number of stages used to represent the past and future and the number of branches extending from each node in a tree can be large, these numbers must be limited by the computational constraints of the decision maker. The planning horizon and the branching factor are important individual differences among decision makers.

Game theoretic approaches tend to be useful for simple trade-offs and situations in which the payoff function is known. However, for many practical problems, a decision tree approach to the entire problem may not be particularly valuable: the tree either cannot be constructed in its entirety, becomes arbitrarily complex (the curse of dimensionality), requires information that is not available, or assumes that the choices are static and unchanging. In many instances, however, it is possible to use partial trees for only a part of the decision. That is, the trees can be cut off at a short horizon; game theory can then be applied to the truncated tree. Even the use of partial trees may improve the performance of many systems over heuristic-based approaches. When decisions, or parts of decisions, cannot be easily represented as a tree, the value of game theory may diminish.
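A minimal sketch of the truncated-tree idea: backward induction over a tree cut off at a short horizon, with heuristic payoff estimates at the cut-off leaves. The actions, the two-ply structure, and all payoff numbers are invented for illustration:

```python
def backward_induction(node, maximizing):
    """Evaluate a truncated game tree. Leaves carry heuristic payoff
    estimates assigned at the horizon; interior nodes alternate between
    own moves (maximize) and the opponent's replies (minimize)."""
    if not isinstance(node, dict):   # leaf: heuristic value at the horizon
        return node
    values = [backward_induction(child, not maximizing)
              for child in node.values()]
    return max(values) if maximizing else min(values)

# Two-ply truncated tree: own move, then the opponent's best reply.
tree = {
    "advance": {"counterattack": 2, "withdraw": 8},
    "hold":    {"counterattack": 4, "withdraw": 5},
}
print(backward_induction(tree, maximizing=True))
```

Here "hold" is preferred: "advance" looks better against a passive opponent, but a rational opponent counterattacks, so its guaranteed value is lower. That is exactly the kind of conclusion a full tree could also deliver, but the truncated version stays tractable.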
For very complex decisions, representing the decision in game theoretic terms may be extremely cumbersome, often requiring the decision to be represented as a set of games and metagames. Furthermore, specifying the decision as a game requires being able to specify the payoff function. This is not always obvious, particularly for decisions for which the payoff is a function of previous actions. Having to specify the payoff or objective function is, in fact, a problem for most optimization, constrained optimization, and learning algorithms.

A commander's job may involve balancing conflicting objectives in high-risk situations. In rational decision theory, the prescribed model for making risky decisions involving multiple objectives is the multiattribute utility model (e.g., Clemen, 1996). The states of the tree are represented by multiattribute consequences (i.e., each state is a vector of values, and each coordinate represents the value achieved on a different objective). In this way, optimal strategies are selected that are designed to maximize a multiobjective criterion (e.g., a weighted sum of values). Empirical research has been concerned with determining the form of the multiattribute function. The weighted sum is just an example and not the only form.
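In the weighted-sum case, evaluating a consequence reduces to a dot product of attribute weights and attribute values. The objectives, weights, and course-of-action values below are invented for illustration:

```python
def weighted_sum_utility(consequence, weights):
    """Multiattribute utility as a weighted sum over objectives
    (the simplest form; the text notes it is not the only one)."""
    return sum(weights[attr] * value for attr, value in consequence.items())

# Invented attribute weights (must reflect the decision maker's priorities).
weights = {"mission_success": 0.5, "force_protection": 0.3, "speed": 0.2}

# Invented courses of action, each a vector of attribute values in [0, 1].
options = {
    "frontal_assault": {"mission_success": 0.9, "force_protection": 0.2, "speed": 0.8},
    "flanking_move":   {"mission_success": 0.7, "force_protection": 0.8, "speed": 0.5},
}

best = max(options, key=lambda o: weighted_sum_utility(options[o], weights))
print(best)
```

With these numbers the flanking move wins narrowly (0.69 vs. 0.67): its lower mission-success value is offset by much better force protection, which is the trade-off structure the multiattribute form is designed to expose.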
Other research suggests that humans do not behave in this way (Axelrod, 1976; Abelson, 1976). That is, they often cannot specify a criterion function, may attend only to a few criteria at a time, may choose the first satisfactory solution and not continue to search for the optimal solution, and so on. Thus, if the objective is to have a representation that is natural (seems like the way humans represent information) or an artificial agent that is descriptive (acts as a human might), then learning, satisficing, and constraint-based approaches would be called for. If the objective is to be normative, or to find the optimal solution, then straight optimization, rational decision theory, or game theoretic models might be called for. In other words, different modeling tools are needed to model both how the commander does make a decision and how the commander should make a decision.

Individual-Level Models

Representations of low-level combatant decisions should reflect the fundamental abilities and limitations that are empirically observed in human decision-making research. There is a long history of empirical research concerned with the discovery of lists of human biases and simplistic heuristic rules employed by human decision makers (e.g., Kahneman et al., 1982). Unfortunately, this work on heuristics and biases has failed to accumulate and evolve into any coherent formal or computational model of individual decision making. At best, the list of heuristic rules may be used to guide the selection of condition-action rules programmed into current production rule systems of individual decision making. More recently, some simple heuristic decision rules have been coded as production rules and then inserted into computer simulation models of decision making (e.g., Payne et al., 1993). In fact, this type of decision model has already been incorporated into some production rule simulation models used by NASA (e.g., MIDAS).
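As an illustration of a heuristic of this kind — not a rule from MIDAS or any cited system — the "choose the first satisfactory solution" strategy can be written as a single condition-action loop. Option names, payoffs, and the aspiration level are all invented:

```python
def satisfice(options, evaluate, aspiration):
    """Return the first option whose evaluated worth meets the
    aspiration level; if none does, fall back to the best option seen.
    Mirrors the 'first satisfactory solution' heuristic in the text."""
    best = None
    for option in options:
        worth = evaluate(option)
        if worth >= aspiration:
            return option            # satisfactory: stop searching
        if best is None or worth > evaluate(best):
            best = option
    return best                      # nothing satisfactory: best so far

payoffs = {"route_a": 0.4, "route_b": 0.75, "route_c": 0.9}
order = ["route_a", "route_b", "route_c"]  # order options come to mind
print(satisfice(order, payoffs.get, aspiration=0.7))  # stops at route_b
```

Note the deliberately non-optimal behavior: the search stops at route_b even though route_c is better, which is precisely how a satisficing agent differs from an optimizing one.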
A problem with these models is that they are based on overly simplistic assumptions. For example, the information-processing approach used in MIDAS is based on a strictly serial processing assumption, which is known to be inconsistent with a large body of research on human cognition that emphasizes the importance of parallel distributed processing for fast but accurate cognitive performance. Progress has also been made on the development of dynamic decision models that are highly relevant to the goals of the Defense Modeling and Simulation Office (Busemeyer and Townsend, 1993; Townsend and Busemeyer, 1995; Grossberg and Gutowski, 1987). These models provide a description of decision making under real time constraints; they include effects of time pressure on the speed and accuracy of decision making; they allow for individual differences in impulsiveness and risk aversion; they are driven by emotional and motivational factors; and they can be used to model the effects of stress resulting from fear of
impending negative consequences. Another advantage of these dynamic decision models is that they can be readily integrated with learning models (e.g., Busemeyer and Myung, 1992). Thus, adaptive learning models and dynamic decision models can be synthesized into a general model of learning and motivation for real-time adaptive decision making.

Situation Awareness

Situation awareness is defined as the individual's state of knowledge or mental model of the surrounding situation or environment. It includes an understanding of the dynamics of the situation and the actions that are expected to take place in the future, as well as cues for spatial orientation (Endsley, 1995a). A recent report on tactical displays for the individual soldier (National Research Council, 1997) identifies situation awareness as an element critical to successful performance in the combat environment. Other research also shows that situation awareness dominates tactical planning activities (Fallesen et al., 1992; Fallesen, 1993; Deckert et al., 1994). For example, awareness of uncertain assumptions contributes to more flexible planning. Better use of available information results in more elaborate war gaming. Explicit prediction of events is aided by active seeking of evidence to confirm or reject greater awareness of enemy activities. Maintenance of situation awareness also plays a key role in high-tempo battlefield activities that are more reactive in nature and involve less planning (e.g., the battle drills noted in the training document prepared by IBM, 1993).

Several studies have focused on scenarios and conditions under which the decision maker must make dynamic decisions under high levels of uncertainty, high time pressure, and rapid change. These studies span the theoretical-to-applied spectrum and cover many domains. Endsley (1989, 1993, 1995a, b) and Adams et al.
(1995) discuss psychological models of situation awareness and the impact of particular system characteristics on operator workload, attention and memory requirements, and the likelihood of errors. Klein and colleagues (1986) have studied a particular type of decision making predicated on the quick extraction of salient cues from a complex environment and a mapping of these cues to a set of procedures. A variety of situation awareness models have been hypothesized and developed by psychologists and human factors researchers, primarily through empirical studies in the field, but increasingly with computational modeling tools. Because of the critical role of situation awareness in air combat, the U.S. Air Force has taken the lead in studying the measurement and trainability of situation awareness (Carretta et al., 1994). Numerous studies have been conducted to develop situation awareness models and metrics for air combat (Stiffler, 1988; Spick, 1988; Harwood et al., 1988; Endsley, 1989, 1990, 1993, 1995a; Fracker, 1990; Hartman and Secrist, 1991; Zacharias et al., 1992; Klein, 1994).
Situation awareness modeling can be roughly categorized into two efforts: descriptive and prescriptive (or computational). Table 4.1 summarizes the major features of the two model classes and presents their relative advantages and disadvantages.

TABLE 4.1 Characteristics of Situation Awareness (SA) Models

Class         Features                     Advantages                                Disadvantages
Descriptive   Data driven                  Reflect actual SA process                 Lack of predictive capability
              Qualitative                  Capable of handling complex scenarios     Provide vague or non-extensible conclusions
              Empirical basis                                                        Do not support computational implementation
Prescriptive  Assumption or theory driven  Prescribe "optimum" SA process            High development cost
              Quantitative                 Support computational implementation      Limitations in applicability
                                           Support objective SA metric development   Many validity issues

Most developed situation awareness models are descriptive. Endsley (1995a) presents a descriptive model of situation awareness in a generic dynamic decision-making environment, depicting the relevant factors and underlying mechanisms. The relationship between situation awareness and numerous individual and environmental factors is explored. Among these factors, attention and working memory are considered the critical factors limiting effective situation awareness. Mental model and goal-directed behavior are hypothesized as important mechanisms for overcoming these limits. Although the descriptive models are capable of identifying the dependent relationships between subsystem modifications and situation awareness enhancements, they do not support a quantitative evaluation of such relationships when no empirical data are available. There currently exists no descriptive model that has been developed into a computational model for actual emulation of pilot decision-making behavior in real-time simulation studies. In contrast to the status of descriptive models, few prescriptive models of situation awareness have been proposed or developed.
Early attempts use production rules (Baron et al., 1980; Milgram et al., 1984). In these efforts, the situation awareness model was developed as a forward-chaining production rule system. These models lack long-term memory and an internal mental model; thus, they cannot make use of event histories or event cue information. Recognizing that situation awareness is fundamentally a diagnostic reasoning process, Zacharias and colleagues (1992, 1994, 1996) and Mulgund et al. (1996a, b) used belief networks to develop prescriptive situation awareness models for two widely different domains: counter-air operations and nuclear power plant diagnostic monitoring. Both efforts modeled situation awareness as an integrated inferential diagnostic process, in which situations are considered as hypothesized reasons, events as effects, and sensory (and sensor) data as symptoms (detected effects). Situation awareness starts with the detection of event occurrences. After events are detected, their impacts on the likelihood (belief) of each candidate situation are evaluated by backward tracing of the situation-event relation (diagnostic reasoning) using Bayesian logic. The updated situation likelihood assessments then drive the projection of future event occurrences by forward inferencing along the situation-event relation (inferential reasoning) to guide the next step of event detection. A more recent application of this modeling approach is described by Mulgund et al. (1996a) in an effort directed at developing a situation-driven display for fighter cockpits.

Organizational Models

The behavior of units has been modeled with some success for the purpose of testing ideas from organizational theories. Achieving greater realism in military simulations will require major adaptations. However, some of the findings from the tests of organizational theory can contribute to efforts in the areas of flexibility, adaptation, and performance monitoring. For example, research in this area has provided insight into the ways in which units change when faced with stress and should be structured to withstand stress (Carley and Lin, 1994, 1995, forthcoming a). Further work in this area may provide insight into factors affecting the success of joint task forces and coordinated responses by multiple forces. Most of the computational modeling work on unit-level models employs multiagent models.
These models range from the more symbolic distributed artificial intelligence models to models using one of the various complex adaptive agent techniques, such as genetic algorithms (Holland et al., 1986; Macy, 1991a, b; Crowston, 1994, in press), neural networks (Karayiannis and Venetsanopoulos, 1993; Kontopoulos, 1993), simulated annealing (Carley and Svoboda, 1996), chunking (Tambe, 1996), and other stochastic learning models (Carley and Lin, forthcoming a, b; Lin and Carley, forthcoming; Glance and Huberman, 1993, 1994). Some of this work rests on, or is related to, mathematical models of distributed teams (Pete et al., 1993, 1994; Tang et al., 1993) and social psychological experimental work on teams (Hollenbeck et al., 1995). Most of these models perform specific stylized tasks, and many of them assume a particular command and control structure. There are two types of models commonly used at the unit level. The first, intellective models, are small models intended to show proof of concept for some theoretical construct. Such models enable the user to make general predictions about the relative benefit of changes in generic command and control structures.
The second, emulation models, are large models intended to emulate a specific organization in order to locate specific limitations of that unit's structure. Such models enable the user to make specific predictions about a specific organization. Intellective models provide general guidance and an understanding of the basic principles of organizing; emulation models, which are difficult and time-consuming to build, provide policy predictions, but only for a single case. In these models, the artificial agents, although attempting to act like human agents, are not perfect human analogs (Moses and Tennenholtz, 1990). The basic research issue is how accurate the agents in these models need to be for a unit composed of many of them to act like a unit of humans. Some research suggests that complete veridicality at the individual level may not be needed for reasonable veridicality at the unit level (Castelfranchi and Werner, 1992; Carley, forthcoming). Moreover, the cognitive capabilities of the individual agents, as well as their level and type of training, interact with the command and control structure and the task the agents are performing, to the extent that different types of agent models may be sufficient for modeling unit-level response to different types of tasks (Carley and Newell, 1994; Carley and Prietula, 1994; Carley and Lin, forthcoming b).

There have been a few models of unit-level adaptation that are not multiagent models (for a review, see Lant, 1994). These models typically employ autonomous agents acting as the unit or various search procedures. This work has been useful in demonstrating various points about organizational learning (Lant and Mezias, 1990, 1992). None of the computational unit-level models is at the point of plugging directly into a current Defense Modeling and Simulation Office platform as the model of the unit.
In part this is because existing unit models have either too limited a repertoire of command and control structures or too limited a model of task. However, the network-based representation and the various network measures of structure could be incorporated into some current Defense Modeling and Simulation Office simulations.
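As a toy illustration of the network-based representation and structural measures mentioned above (the units and reporting links are invented), statistics such as span of control and chain-of-command depth can be read directly off the graph:

```python
# A command-and-control structure as a directed reporting network:
# each unit maps to the units it directly commands (invented example).
c2 = {
    "HQ":        ["brigade_1", "brigade_2"],
    "brigade_1": ["bn_11", "bn_12"],
    "brigade_2": ["bn_21"],
    "bn_11": [], "bn_12": [], "bn_21": [],
}

def span_of_control(net):
    """Out-degree of each node: how many units it directly commands."""
    return {unit: len(subs) for unit, subs in net.items()}

def depth(net, root):
    """Longest chain of command below the root."""
    subs = net[root]
    return 0 if not subs else 1 + max(depth(net, s) for s in subs)

print(span_of_control(c2)["HQ"])  # 2
print(depth(c2, "HQ"))            # 2
```

Measures of this kind are what such a representation makes cheap to compute and compare across alternative command and control structures, without committing to any particular agent model.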