2 Modeling Needs for Human Behavior Representation

What are the Modeling Needs?

The Under Secretary of Defense for Acquisition and Technology sets as an objective to "develop authoritative representations of individual human behavior" and to "develop authoritative representations of the behavior of groups and organizations" (U.S. Department of Defense, 1995:4-19–4-21). Presentations made at three workshops held by the Defense Modeling and Simulation Office, formal briefings to the panel, and informal conversations among panelists, Department of Defense representatives, and Department of Defense contractor personnel suggest that users of human behavior representation do not consider the current generation of these representations to reflect the scope or realism required for the range of applications of interest. The panel interprets this as a need for representation of larger units and organizations, as well as for better agreement between the behavior of modeled forces (individual combatants and teams) and that of real forces; for less predictability of modeled forces, to prevent trainees from gaming the training simulations; for more variability, due not just to randomness but also to reasoned behavior in a complex environment and to realistic individual differences among human agents; for more intelligence, to reflect the behavior of capable, trained forces; and for more adaptivity, to reflect the dynamic nature of the simulated environment and intelligent forces.
Levels of Aggregation

Authoritative behavioral representations are needed at different levels of aggregation for different purposes. At various times, representations are needed for:

- Individual combatants, including dismounted infantry;
- Squads, platoons, and/or companies;
- Individual combat vehicles;
- Groups of combat vehicles and other combat support and combat service support elements;
- Aircraft;
- Aircraft formations;
- The output of command and control elements; and
- Large units, such as Army battalions, brigades, or divisions, Air Force squadrons and wings, and Navy battle groups.

They are needed for OPFOR (opposing forces, or hostiles), for BLUFOR (own forces, or friendlies) to represent adjacent units, and for GRAYFOR (neutrals or civilians) to represent operations other than war, as well as for the interactions among these forces. The panel recognizes that this initial articulation of needs is very broad, diffuse, and unranked. More structure and prioritization will be needed to drive the development of the program plan to be presented in our final report. However, the panel as it is constituted lacks the military background and knowledge required to establish broad-based priorities for military human behavior representation requirements. This will be addressed by the panel in collaboration with the military sponsors in the final report.

Observable Behaviors

When viewed from the perspective of the simulation user (exclusive of developers), the characteristics of behavior that are visible and interpretable depend on the level of aggregation at which the behavior is defined. We consider first the user who views the simulation as an individual player, either dismounted or associated with a vehicle. This player may be an individual combatant, a ground vehicle or air system commander, a squad or platoon leader, or a commander at a higher level, and may be observing units at different levels of aggregation as well.
Issues of unit-level modeling, aggregation, and scalability are discussed later in this chapter. The most obvious behavior to be observed is the physical movement in the battlespace. It must be at an appropriate speed, and the path followed must make sense in light of the current situation and mission.
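The movement requirements just described lend themselves to simple automated plausibility checks. The following is a minimal sketch under invented assumptions: the speed limits, entity kinds, and the "close distance to the objective" heuristic are illustrative, not values or criteria from any fielded simulation.

```python
import math

# Hypothetical plausibility check for the movement of a simulated entity.
# Speed limits (m/s) per entity kind are invented for illustration.
MAX_SPEED_MS = {"dismounted": 3.0, "ground_vehicle": 18.0, "aircraft": 300.0}

def movement_is_plausible(kind, positions, timestamps, objective):
    """Check speed limits and gross path direction toward an objective.

    positions: list of (x, y) in metres; timestamps: seconds, same length.
    """
    limit = MAX_SPEED_MS[kind]
    for (x0, y0), (x1, y1), t0, t1 in zip(positions, positions[1:],
                                          timestamps, timestamps[1:]):
        speed = math.hypot(x1 - x0, y1 - y0) / (t1 - t0)
        if speed > limit:
            return False  # faster than the platform could move
    # Crude mission-sense heuristic: the path should, on the whole,
    # close distance to the objective.
    start = math.hypot(*(a - b for a, b in zip(positions[0], objective)))
    end = math.hypot(*(a - b for a, b in zip(positions[-1], objective)))
    return end <= start

print(movement_is_plausible("dismounted",
                            [(0, 0), (2, 0), (4, 0)], [0, 1, 2], (10, 0)))
```

A production check would of course also consult terrain, cover, and doctrine; this sketch only illustrates that "appropriate speed" and "path makes sense" can be operationalized as testable predicates.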
The detection and identification of enemy or friendly individual units by the human behavior representation must appear reasonable to the observer. The visual search should depend on awareness of the situation, current task demands, and external environmental factors such as the field of view, distance, the weather, visibility, the time of day, the display mode (unaided vision versus night vision goggles), etc. Decision-making outcomes should reflect situation awareness and real environmental conditions. The decisions concern such matters as which way to move given the plan and the situation presented by the opposing forces; they also concern whether to shoot, seek cover (evade, in the case of aircraft or ships), or retreat. Movement decisions should be consistent and coordinated with the behavior of others in the same unit. Decisions should be consistent with the currently active goals. Ideally, individuals would exhibit behavior that reflects rational analysis and evaluation of alternative courses of action, including evaluation of alternative enemy actions given the context. In practice, in time-critical, high-stakes situations, individual decisions are more likely to be "recognition-primed," that is, made on the basis of previously successful actions in similar situations. For example, Klein et al. (1986) showed how experienced fire team commanders used their expertise to characterize a situation and generate a "workable" course of action without explicitly generating multiple options for comparative evaluation and selection. More recently, Kaempf et al. (1996) described how naval air defense officers spent most of their time deciding on the nature of the situation; when decisions had to be made about course of action plans, fewer than 1 in 20 decisions focused on option evaluation. Representation of communication processes also depends on the specific purposes of the simulation but should follow doctrine associated with the particular element.
Communication need be represented only when it is providing relevant objective status, situation assessment, or unit status information that will affect action at the level of the unit being represented. Communication may take several forms and employ several modes, including direct verbal communication, hand gestures, radio communication, and data link. High-resolution models of small teams may require explicit representation of message content, form, and mode.

How Process Models of the Individual Support These Needs

Requirements for behavioral representation at the individual level are met by a number of specific components of human performance, including active processes and supporting memories. Realistic models of behavior at the individual level require the representation of at least these components to support the models that execute the behavior. There is room for disagreement among experts concerning the scope and specification of a comprehensive set of components such as these. We claim only that the processes described here are critical determiners of the observable behaviors that are important to military simulations. The relevant components of human behavior are briefly discussed in the following paragraphs.

Sensing and perception are processes that transform stimulus energy into internal representations that can be operated on by cognitive processes. Key elements of individual combatant behavior in military simulations are the detection, identification, and classification of targets; the determination of battlefield situation awareness by observation; and the recognition of patterns of OPFOR movement and positioning. In addition, we consider communications (verbal and gestural) a special case of sensory input critical to modeling the interactions of individual combatants with team members. To be consistent with the behavior of real humans, individual combatant models must also exhibit selective attention, a process whereby information processing is focused on a relatively small subset of the sensory stimuli available at any time. The models should account for both visual and auditory attention.

Working memory is the functional component that temporarily holds information for cognitive processing; it is limited in information capacity and persistence. Working memory is significant to military simulation because it is a cognitive bottleneck and therefore limits human performance. For example, a fighter pilot may momentarily be overloaded with competing tasks, forget about the presence of a threat for an instant, and perform an inappropriate maneuver. To the extent that such errors are desirable in modeled behavior, some explicit representation of working memory is essential in an individual combatant model.

Long-term memory is the functional component responsible for holding large amounts of information for long periods of time.
There is clearly a need for individual combatant models to represent the storage and retrieval of declarative (factual) knowledge (e.g., weapon characteristics and topography) and procedural knowledge (e.g., tactics and procedures).

Situation assessment is the process by which situation awareness is achieved. It should reflect realistic and available information about the status and position of other friendly forces, own forces, and opposing forces. It should reflect the weather, the mission and objectives of the force, and reasonable inferences based on this information. Ideally it should reflect the expected actions of own and opposing forces in the near future. This implies that the modeled soldier must seek and acquire the relevant knowledge and retain it in memory. Similarly, situation awareness should be changed by observing the results of one's own unit actions. For example, if a particular movement results in unexpected exposure to intense counter fire, that result should revise the assessment of the strength of the opposing force.

Decision making is the process for generating and selecting alternatives and is perhaps the most universally needed process for individual combatant models. Some type of decision process is needed to select the next set of actions based on
an evaluation of current information; this process is needed for the selection of actions by individual combatants as well as for high-level command and control.

Task management is the process of managing multiple, concurrent tasks and is ubiquitous in combat operations. It is a cognitive activity that operates at a higher level than attention allocation, since it involves planning and prioritization as well as moment-to-moment task direction. An infantryman confronting an enemy may have to decide concurrently on a general course of action, plan his path of movement, execute tactical movement, and fire his weapon. When engaging multiple targets, a tank crew on an active battlefield must continuously navigate and control the vehicle, search for targets, aim and fire the gun, and assess battle damage. A pilot must simultaneously control the aircraft, plan maneuvers, navigate, communicate with his wingman, control sensors, aim and fire weapons, and monitor and manage other aircraft systems. A commander responsible for several units must continually divide his attention among those units.

Motor response, broadly speaking, refers to the functions performed by the neuromuscular system to carry out the physical actions selected by the processes described above. Decision making, planning, and other invisible cognitive behaviors are ultimately manifested in observable behaviors that must be simulated with varying degrees of realism depending on the application. Aspects of this realism include response delays, speed-accuracy trade-offs, and anthropometric considerations, such as constraints on movement due to limited reach and strength.

How Process Models of the Unit Support These Needs

The behavioral requirements at the unit level are met by representing specific structures and processes.
As at the individual level, our claim is simply that these structures and processes are critical determiners of observable unit-level behaviors that are important to military simulations. The processes and structures that facilitate and limit communication and coordination affect many unit-level behaviors, including speed and accuracy of response. Furthermore, the vulnerability of the unit when under attack or faced with a novel situation depends, in part, on the specific communication and coordination structures and processes. In addition, the likelihood of a unit being observed by OPFOR depends, in part, on the mode of communication chosen and the frequency of communication.

Shared cognition is the set of information and processes held in common by unit members. The accuracy of the shared cognition, together with the amount and type of information that is jointly held, can affect the ability of unit members to coordinate without communicating.

Members of the unit act concurrently. The concurrency of action can lead to
coordination difficulties, particularly when unit members act to achieve their own goals without knowing the goals and constraints of other unit members.

Group decision making, unlike individual decision making, takes into account the role of compromise, negotiation, power, and social influence in determining the joint outcome. Some type of group decision process is needed, even if it is a process that aggregates individual decisions using some weighting function.

Resource allocation is the process of reassigning resources as tasks or goals change. Resource reallocation may be necessitated by the depletion of some resources or because some subunits have become incapacitated.

Need to Represent Behavioral Moderators

A variety of team- and individual-level factors may have systematic effects on performance and may alter behavior in a number of ways. For example, they may act as multipliers, moderators, or mediators, or they may interact and influence behavior in complex nonlinear ways. Excessive workload, fatigue, or generalized emotional stress can be expected to degrade overall performance. The level of achievement motivation will also affect performance. In general, these variables involve individual differences, but performance can also be affected in the aggregate by the quality of leadership and morale. This list is only partial; there is a need to enumerate the full range of personality and task moderators that affect overall performance, to define them operationally, and to determine which ones are important for which simulation applications. At the unit level, such factors as the average level of training, whether or not standard operating procedures are followed, the level of detail in those procedures, and the degree of coupling between procedures can all affect performance.

Need to Represent Learning

The term learning is generally used to mean a change in behavior as a result of experience.
In the context of human behavior simulations, it implies that an individual or unit is capable of evaluating the results of its own actions and can change behavior in order to improve the result when the same or a similar circumstance is presented more than once. Current simulation models under review by the Defense Modeling and Simulation Office lack any capability of learning from experience and adapting appropriately to changes in the contingencies in the environment. This causes serious problems for the simulations in many ways. Learning is clearly one of the most important capabilities for survival. An adaptive human behavior representation must be able to learn to anticipate the moves of an opponent in order to ultimately defeat the opponent. An adaptive human behavior representation must also be able to respond appropriately to changes in
its model of the environment. Learning is important both within a simulation run and across sequential or related simulation runs.

The addition of learning models to the simulation also provides several other important benefits. First, they offer one method for including individual differences in skill levels. Second, learning provides a principled way to introduce variability into the behavior of the human behavior representation. The action taken by an agent on a new occasion will vary depending on the specific history of consequences from previous actions, and this history will vary in time and across agents. An effective learning process may also provide support in developing human behavior representations by substituting for some aspects of the knowledge acquisition process. Currently, rules are abstracted from human experts in an extremely costly and time-consuming manner and then are handwritten into the programs. This is an inefficient way to acquire knowledge. A more efficient and natural way would be to have the simulated agent learn these rules, either from examples or by imitation of an expert's behavior. Although we are a long way from building learning models that could replace knowledge acquisition in this way, it is a direction to be explored.

A key concern to be addressed in dealing with human behavior representations that learn is level-of-expertise control and certification. That is, the user will need to know the experience level of the human behavior representation, in terms of what schoolhouse training it has been given, what engagements it has fought, and so forth, to be able to assess battle outcomes in context.
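The within-run learning described above can be sketched, under simplifying assumptions, as a tabular value update in which an agent revises its estimate of each action's payoff from observed outcomes. The states, actions, payoffs, and parameter values here are hypothetical illustrations, not elements of any fielded model.

```python
import random

# Minimal sketch of an adaptive agent: it revises action-value estimates
# from experience, so its behavior changes within a run and its history of
# outcomes introduces variability across agents and runs.
class AdaptiveAgent:
    def __init__(self, actions, learning_rate=0.3, rng=None):
        self.q = {a: 0.0 for a in actions}   # estimated payoff per action
        self.alpha = learning_rate
        self.rng = rng or random.Random(0)

    def choose(self, epsilon=0.1):
        # Mostly exploit the best-known action, occasionally explore;
        # exploration is one source of behavioral variability.
        if self.rng.random() < epsilon:
            return self.rng.choice(list(self.q))
        return max(self.q, key=self.q.get)

    def learn(self, action, outcome):
        # Move the estimate toward the observed outcome.
        self.q[action] += self.alpha * (outcome - self.q[action])

agent = AdaptiveAgent(["advance", "flank", "withdraw"])
# Suppose flanking repeatedly succeeds against this opponent:
for _ in range(20):
    a = agent.choose()
    agent.learn(a, 1.0 if a == "flank" else -0.5)
print(agent.choose(epsilon=0.0))
```

After a few engagements the agent comes to anticipate that flanking works against this opponent, illustrating within-run adaptation; learning across runs would simply carry the value table forward.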
Need to Represent Variability in Decision Making

The great majority of models that have been used to represent human behavior and decision making in military simulations have employed a fixed and predictable set of rules to govern the next action or decision for a fixed set of conditions that hold at that time (e.g., MIDAS [1] by the National Aeronautics and Space Administration; SOAR [2]/IFOR by Tambe et al., 1995). Such an approach has many desirable properties: it is relatively easy to understand what the model is doing and why at any given moment; it is relatively easy to catch and repair programming errors; the model is more understandable and transparent; intuitions are easy to develop concerning why the model acts as it does; and when the model needs to be altered, it may be easier to predict what a given change will produce.

[1] MIDAS (Man-machine Integration Design and Analysis System) is a fully constructive operator performance simulation designed to support engineering design decisions.
[2] SOAR (State, Operator, and Result) is a cognitive architecture that attempts to provide a general theory of cognition, not tied to any specific problem space. It provides a set of principles and constraints on cognitive processing, and it is an artificial intelligence programming language.

Despite these advantages, there are even better reasons to think that effective models will have to incorporate unpredictability in various ways and for various reasons. First, human behavior is inherently variable, unpredictable, and sometimes irrational (March and Simon, 1958; Simon, 1955, 1956; Townsend and Busemeyer, 1995). A model will therefore fail to reproduce normal and realistic patterns of actions and decisions unless it incorporates a sufficient degree of random variability; this variability probably needs to be introduced at all levels of the simulation.

Second, the use of simulations for training purposes is harmed by fixed and predictable patterns of simulated behavior. It is easy for a trainee to learn exactly what sequences of actions and behaviors will work in a given class of settings, based not on valid general rules but on idiosyncratic details that are fixed and immutable properties of a given simulation.

Third, even when human behavior can be modeled accurately at a microanalytic level, the observed behavior at a macroanalytic level may appear to be randomly variable. Introducing variability macroscopically can save enormous programming effort and still capture the essence of behavior. This approach is also useful when the individual micro mechanisms that drive behavior are unknown and/or in dispute. Similarly, game theory has identified many situations in which there is no rational strategy (such as an extended-play prisoner's dilemma with a fixed number of games against the same opponent; e.g., Myerson, 1991); behavior in such situations can appear to be, and might well be, quite probabilistic. Related to such situations are those with infinite planning regressions; for example, one's plans might depend on what one assumes the opponent knows and will do based on his or her knowledge of what one knows and will do, and so on.
Finally, there are well-known demonstrations in nonlinear systems of chaotic and pseudochaotic behavior of simple fixed rules applied in iterative fashion: as time proceeds, the present state of the system becomes less and less predictable from the starting states of the process (Beltrami, 1987); human behavior in general is certainly nonlinear and undoubtedly shares this property. In all these cases, the unpredictability of behavior can be captured relatively simply and reasonably accurately by the introduction of random or stochastic noise in the simulation.

Fourth, mathematical models and computer simulations of human behavior have been carried out since the 1950s by cognitive psychologists (and, more recently, cognitive scientists). Great dividends have accrued through the introduction of stochastic variability and probabilistic choice. In the 1980s, it became fairly common to propose fixed and deterministic neural network and connectionist models, and the general complexity of such systems (the large number of connections and nodes and the difficulty of anticipating the macro changes that occur during learning) seemed to provide adequate accounts of variable human
behavior, within certain limits. However, more recently it has become clear that even these systems require stochastic variability (e.g., McClelland, 1993).

Fifth, it is sometimes claimed that fixed and deterministic models are adequate even in the face of known variability in the modeled behavior, to the extent that they represent linear and mean approximations of the underlying behavior. Human behavioral mechanisms, however, are anything but simple and linear, and there are numerous demonstrations that qualitatively incorrect predictions are produced by mean approximations to behavioral distributions. Just one of many examples occurs in behavioral models in which decisions are modeled as random walks or diffusion processes. The kinds of qualitative changes in predictions that can occur through the introduction of variability are well illustrated, for example, by the research of Ratcliff and Van Zandt (1995).

Sixth, at the unit level, learning can interact with various characteristics of the unit (such as the form of the command structure) in nonobvious ways. For example, the allocation of resources can affect what skills, perspectives, and problem-solving strategies are learned (Brewer and Kramer, 1985), which can, in turn, affect the unit's flexibility in a crisis (Carley, 1991b). As another example, spatial or command and control structure can enable faster and more robust unit-level learning (Collins, 1992).

Finally, present models and simulations used by the services, for the most part, do not take into account personality variables that are well known to influence behavior in important ways (e.g., cautious and adventurous decision makers may choose very different actions). One approach to this problem involves incorporation of personality variables into the models. This would increase the complexity of the models and would require knowledge of personality factors that may go beyond the present state of knowledge.
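The contrast between a fixed decision rule and a probabilistic one can be made concrete with a small sketch. The option names, utility values, and temperature parameter below are invented for illustration; the softmax rule is one standard way, among several, of adding probabilistic choice to the same underlying utilities.

```python
import math
import random

# The same utility estimates drive either a fixed, deterministic rule
# (always pick the best option) or a probabilistic softmax rule that
# yields the behavioral variability discussed in the text.

def deterministic_choice(utilities):
    return max(utilities, key=utilities.get)

def softmax_choice(utilities, temperature=1.0, rng=random):
    # Sample an option with probability proportional to exp(u / T);
    # lower temperature approaches the deterministic rule.
    opts = list(utilities)
    weights = [math.exp(utilities[o] / temperature) for o in opts]
    r = rng.random() * sum(weights)
    for o, w in zip(opts, weights):
        r -= w
        if r <= 0:
            return o
    return opts[-1]

utilities = {"shoot": 1.0, "seek_cover": 0.8, "retreat": 0.1}
rng = random.Random(42)
picks = [softmax_choice(utilities, temperature=0.5, rng=rng)
         for _ in range(1000)]
# The deterministic rule always answers "shoot"; the stochastic rule
# mostly shoots but sometimes seeks cover or retreats.
print(deterministic_choice(utilities), picks.count("shoot"))
```

Because every draw consults the same utilities, the stochastic agent remains goal-directed and roughly rational while no longer being exploitable by a trainee who has memorized its fixed response.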
It may be possible to apply a much simpler approach that could lead the models to produce appropriate variability in behavior simply through the addition of variability, probabilistic choice, and stochastic noise.

These arguments in favor of the incorporation of noise and probabilistic variability in the models appear compelling. Nevertheless, such changes in approach do not come without cost. When these features are incorporated, it is harder to tell why the model produces its behaviors, and error correction during model development is much harder. There will be a similar need for certification of the level of variability represented in a particular human behavior representation. Finally, the running of a noisy and probabilistic simulation requires essentially the same computer time as a fixed and deterministic one. This area will be further explored and elaborated in the panel's final report.

Need for Team Behavior Representation

A team should manifest the range of behaviors required to be consistent with the degree of autonomy it is assigned, including detecting and responding to
expected and unexpected threats. It should be capable of carrying out actions on the basis of communications typically received from its next higher-echelon commander.

Much of the research on team or small-unit adaptation leads to the following conclusions. Social dynamics, both equilibrium and nonequilibrium, depend on the rate of agent learning (adaptation or evolution) (e.g., de Oliveira, 1992; Collins, 1992; Carley, 1991a). Various constraints on agent actions can enable faster and more robust unit-level learning (Collins, 1992; Carley, 1992). Social chaos is reduced by having intelligent adaptive agents determine their actions using strategies based on observations and beliefs about others (Kephart et al., 1992) or by having their actions constrained by their position in the unit. This suggests that cognition and the command and control structure both play a defining role in emergent phenomena (Carley, 1992, forthcoming).

Many current military models and simulations that portray actions at the team or small-unit level are inflexible and cannot adapt to different environments. Further, unit-level models of opposing forces are static and reflect performance and decision outcomes based on the same behavior as BLUE forces. Developing models of different forces takes a considerable amount of time. Unit-level learning models have the potential to change this. They can be used to examine the relative effectiveness of one form of command and control when pitted against opposing forces with different command and control forms. Unit-level learning models may also be particularly useful in examining issues of unit effectiveness and flexibility in a dynamic and volatile environment.

When one is modeling teams or units, it is important to consider the following research findings.
For example, it has been repeatedly demonstrated that there is no single correct command and control structure for all tasks (Lawrence and Lorsch, 1967); rather, specific structures vary in their effectiveness depending on the environment and the training of the humans in the unit (Carley and Lin, forthcoming a). How the unit should be organized depends on the specific tasks being done; the volatility of the environment; the extent to which humans move into and out of various jobs in the unit; the amount, quality, and accuracy of information available for making decisions; and the level and type of training or experience acquired by the participants (Malone, 1987; Roberts, 1990). Command and control structures tend to change over time whether or not the environment is changing (Cohen et al., 1972). These changes often mean that lessons learned in previous engagements are no longer useful. Finally, the command and control structure is an amalgamation of structures, so no single measure adequately captures its behavior.

Need for Large-Unit Behavior Representation

The representation of larger units, such as squadrons and battalions, brigades and wings, and divisions and higher echelons, is manifest in the structures through
which information is disseminated and communications are transmitted to other relevant units, either higher or lower. Such communications can include:

- Mission statements,
- Commander's guidance,
- Task organizations,
- Force packages and availability (what forces may be assigned in a particular battle plan),
- Target lists and priorities,
- Air Tasking Orders (detailed assignment of aircraft missions),
- Operational Orders (detailed assignment of ground missions),
- Alternative Courses of Action (candidate COAs to be evaluated),
- Logistics status,
- Transportation schedules and movement tables,
- Intelligence updates and situation reports,
- Weather briefings, and
- Battle damage assessments (results from battle actions).

Models are needed that reflect the impact of alternative communication structures with realistic time lags and accurately represent transmission error probabilities. Similarly, models are needed that reflect alternative command and control structures. Current military simulations rarely model the command, control, and communication structure.

There are two common approaches to representing the command, control, and communication structure in models that examine unit behavior and decision making. The most common approach is to represent it by using one or more networks (or matrices) representing the linkages among the humans in the unit. For example, a communication structure might be represented as a matrix in which each cell contains a 1 if the row person can communicate with the column person and a 0 otherwise. This approach could be expanded to the full command, control, and communication structure by specifying the set of matrices needed to represent it. This approach has many desirable properties. First, the representation facilitates measuring many factors that influence unit behavior, such as throughput, breaks in communication, structural redundancy, and workload assignment errors (e.g., Krackhardt, 1994).
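The matrix representation just described can be sketched directly. The four-person unit, its links, and the "break in communication" measure below are invented for illustration; real analyses of this kind (e.g., in the network literature cited above) compute many such structural measures from the same matrices.

```python
# Sketch of the network representation described above: cell [i][j] is 1
# if person i can send a communication to person j. The unit is invented:
# 0 = commander, 1 = squad leader A, 2 = squad leader B, 3 = rifleman.
comms = [
    [0, 1, 1, 0],   # commander can reach both squad leaders
    [1, 0, 0, 1],   # squad leader A reaches commander and the rifleman
    [1, 0, 0, 0],   # squad leader B reaches the commander only
    [0, 0, 0, 0],   # the rifleman has no outgoing link (receive-only)
]

def reachable(matrix, start):
    """Everyone `start` can reach through any chain of links."""
    seen, frontier = {start}, [start]
    while frontier:
        i = frontier.pop()
        for j, linked in enumerate(matrix[i]):
            if linked and j not in seen:
                seen.add(j)
                frontier.append(j)
    return seen

# One simple structural measure: a "break in communication" exists
# wherever some member cannot reach another by any path.
n = len(comms)
breaks = [(i, j) for i in range(n) for j in range(n)
          if j not in reachable(comms, i)]
print(breaks)
```

Here the analysis flags the rifleman, who can be ordered about but can report to no one, which is exactly the kind of structural problem that becomes obvious once the matrix is written down.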
Second, having commanders or team members specify the structure often makes specific problems in it obvious to them.

A second approach is to use a rule-based representation for procedures. For example, communication protocols could be represented as a series of rules for when to communicate what to whom and how to structure the communication. This approach is complementary to the network approach. In many applications, both the structure and the procedures need to be represented. Representing procedures as rules facilitates linking unit-level models with models of individual humans, because the unit-level rules can be added to the individual-level models as additional constraints on behavior. For example, this is the approach taken in team SOAR (Tambe, 1996) and AAIS (Masuch and LaPotin, 1989).

There is a need to represent changes in unit behavior as a function of scale; this issue is often overlooked. A common assumption about unit behavior is that behavior, problems, and solutions differ only in scale, not in kind, as the size of the unit grows. By this line of reasoning, large units of hundreds of individuals should make decisions in the same way, and face the same types of problems, that small units of 3 to 10 individuals do. Carried to its extreme, this reasoning suggests that units and individuals act in the same way and that a unit is just an aggregate of individual-level actions and reactions. This assumption seems unwarranted in many cases: numerous studies indicate the presence of unit or team effects not accounted for by simple aggregation of individual behavior. Even in a less extreme form, the scalability assumption may not hold in all cases. For example, research in crisis management suggests that disasters are not scalable (Carley and Harrald, 1997). When scalability can be assumed, the same model of unit behavior can be used regardless of unit size. When scalability cannot be assumed, different models are needed for units of different sizes. To date there has been little research on what types of problems, processes, and behaviors are scalable with unit size and what types are not. Determining which factors are scalable is an important step in determining when new models are or are not needed.
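The rule-based representation of procedures described above can be sketched as predicates layered onto an individual-level model: each rule states when a communication is permitted, and a message may be sent only if every rule allows it. The rule set and message fields are invented for illustration; doctrinal rules would of course be far more numerous and detailed.

```python
# Sketch of communication protocol rules as constraints on an
# individual-level model. Rules and message fields are invented.

def rule_report_contact(msg):
    # If the message is a contact report, it must go up one echelon.
    return msg["kind"] != "contact" or msg["to"] == "higher"

def rule_radio_silence(msg):
    # Under radio silence, only flash-priority traffic may be sent.
    return not msg["radio_silence"] or msg["priority"] == "flash"

PROTOCOL_RULES = [rule_report_contact, rule_radio_silence]

def may_send(msg):
    """A message is allowed only if every protocol rule permits it."""
    return all(rule(msg) for rule in PROTOCOL_RULES)

msg = {"kind": "contact", "to": "higher",
       "priority": "flash", "radio_silence": True}
print(may_send(msg))
```

Because each rule is an independent predicate, unit-level doctrine can be added to, or removed from, an individual agent simply by editing the rule list, which is the complementarity with the network approach that the text notes.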