5
Micro-Level Formal Models

In this chapter we discuss several micro-level formal models of human behavior, models that most often are concerned with the behavior of individuals. We begin with cognitive architectures, followed by cognitive-affective models that consider the effect of human emotions on cognition and behavior, as well as of behavior on emotions. We then discuss expert systems, a legacy modeling approach that provides a framework for representing human expertise and that now is often used as a programming paradigm in decision aiding systems. Finally, we discuss decision theory and game theory and their limited applicability to individual, organizational, and societal modeling in general. For each model or approach, we follow the same discussion framework as in Chapters 3 and 4: we present the current state of the art, the most common applications of the approach, its strengths and limitations for the problems described in Chapter 2, and suggestions for further research and development.

Cognitive Architectures

Cognitive architectures are simulation-based models of human cognition. Their distinguishing feature is the broad focus on modeling the full sequence of information processing (stimulus-to-behavior) mediating adaptive, intelligent behavior. Cognitive architectures are built both for basic research and for applied purposes. Different architectures typically emphasize distinct aspects of human cognition (e.g., memory, multitasking,
attention, learning, etc.), depending on their research objectives or application goals.

Typically, cognitive architectures are used to model individual cognition. Less often, the applicability of this approach to modeling collective behavior has also been explored, that is, using a cognitive architecture to model the behavior of a group, team, or organization. The utility and appropriateness of this approach to modeling group cognition has yet to be demonstrated, however, and so we have restricted our discussion here to the use of individual cognitive architectures to model individual behavior.

Cognitive architectures have their roots in the early artificial intelligence (AI) models of human problem solving developed in the 1950s. These models combined a number of key ideas emerging from observations of human problem solving and behavior, including symbolic processing, hierarchical organization of goals, problem spaces, rule- and heuristic-based behavior, and parallel and distributed representation and computation.

A number of cognitive models were developed in the 1970s and 1980s, such as the Model Human Processor (MHP) and Goals, Operators, Methods, and Selection rules (GOMS) (Card, Moran, and Newell, 1986), focusing on modeling a single function in the context of a single task and most often applied to models of human-computer interaction and, in particular, to the design and evaluation of user interfaces. Although limited in scope, these models provided the necessary methodological foundations for the more broadly scoped cognitive architectures of today, by demonstrating the feasibility and benefits of computational cognitive models, primarily in the context of human-computer interface design.

What Are Cognitive Architectures?

Cognitive architectures are computational, simulation models of human information processing and behavior.
Cognitive architectures are also referred to as agent architectures, computational cognitive models, and human behavior models.

[Footnote: Indeed, this report's focus on models and simulations that can contribute to some element of improving forecasting or explanation in a Department of Defense context may limit the ultimate utility of applying some of the models described herein (and elsewhere in the report) in a broader nonmilitary context. Some researchers may argue that this is not the case because of inherent model generality, but this general issue goes beyond the original scope of the study and clearly deserves further study.]

[Footnote: Researchers are beginning to suggest future work in this area; see, for example, MacMillan (2007).]

[Footnote: Specific connotations may exist with each of these terms regarding the motivation and use of the cognitive architecture.]

These simulation-based models aim to implement
some version of a unified theory of cognition (Newell, 1990) by modeling the entire end-to-end human information-processing sequence, beginning with the current set of stimuli and ending with a specific behavior.

Cognitive architectures are typically classified into three broad categories, depending on their approach to knowledge representation and inferencing: symbolic, subsymbolic (also referred to as parallel-distributed), or hybrid (combining elements of the former two). Symbolic architectures use one or more propositional knowledge representation formalisms, such as rules, belief nets, or semantic nets. Subsymbolic, parallel-distributed architectures typically use some type of connectionist representation and inferencing (e.g., recurrent neural networks), in which the mapping between conceptual entities and the representation is not one-to-one, because the knowledge is distributed over multiple representational elements (e.g., nodes within the network). Hybrid architectures use elements of both representational formalisms and are becoming increasingly common, as the benefits of combined symbolic-subsymbolic knowledge representation and inferencing are recognized.

The specific functions represented in a particular architecture depend on its objective, level of resolution, and theoretical underpinnings. These also determine the specific modules that make up a given architecture. In most symbolic architectures, the modules and process structure correspond to (a subset of) the functions comprising human information processing. Most architectures thus contain some subset of the following broad cognitive and perceptual processes: attention, situation assessment, goal management, planning, metacognition, learning, action selection, and necessarily some form of memory (or memories), such as sensory, working, and long-term.
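The symbolic-subsymbolic distinction described above can be made concrete with a small sketch. In the symbolic fragment, each concept maps one-to-one onto an explicit, inspectable structure (a toy semantic net); in the subsymbolic fragment, a concept exists only as a pattern of activity distributed over many units, and matching is graded rather than all-or-none. All names, links, and values here are invented for illustration:

```python
import math
import random

# Symbolic: one-to-one mapping between a concept and its representation.
# Each proposition is an explicit, inspectable structure.
semantic_net = {
    "tank": {"isa": "vehicle", "armed": True},
    "vehicle": {"isa": "object"},
}

def isa(net, concept, category):
    """Follow 'isa' links through the semantic net to test membership."""
    while concept in net:
        if concept == category:
            return True
        concept = net[concept]["isa"]
    return concept == category

# Subsymbolic: a concept is a pattern distributed over many units;
# no single unit "means" tank, and similar concepts share overlap.
random.seed(0)
tank = [random.gauss(0, 1) for _ in range(50)]
noise = [random.gauss(0, 1) for _ in range(50)]
truck = [0.8 * t + 0.2 * n for t, n in zip(tank, noise)]  # similar pattern

def cosine(a, b):
    """Graded (not all-or-none) similarity between two patterns."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)
```

In the symbolic net, `isa(semantic_net, "tank", "object")` succeeds by explicit link-following; in the distributed encoding, `cosine(tank, truck)` yields a high but partial similarity, which is what makes hybrid architectures attractive: crisp inference where propositions exist, graded generalization where they do not.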
Thus, for example, an architecture attempting to model recognition-primed decision making (RPD) would have a module dedicated to situation assessment, since that is a core component of the RPD theory (Klein, 1997); an architecture focusing on models of learning would have corresponding modules responsible for such functions as credit assignment and the creation of new schemas in memory. It should be noted here that most existing cognitive architectures are not capable of learning (Morrison, 2003). While some architectures, such as Soar, do contain elements of learning (e.g., creation of new operators by combining existing operators), typically there is no direct learning resulting from the agent's interactions with the environment. However, the cognitive modeling community is beginning to recognize the limitations of human-constructed long-term memories in these models, and researchers are beginning to address the problem of automatic knowledge acquisition and learning in cognitive architectures (e.g., Anderson et al., 2003; Langley and Choi, 2006).

Depending on the architecture's control structure, the modules may execute in a fixed sequence, or in parallel, or anywhere between these two
extremes. Figure 5-1 illustrates the module structure of a notional sequential cognitive architecture, frequently referred to as a "see-think-do" control structure. An alternative to this sequential approach is a parallel-distributed control structure, in which a number of parallel processes access a common memory structure (frequently referred to as a blackboard, and hence the term "blackboard architectures," Corkill, 1991). As with the sequential architectures, the specific processes represented, as well as the structure of the memory blackboard, depend on the architecture objectives, the level of resolution, and theoretical foundations. Figure 5-2 shows an example of a blackboard architecture, illustrating examples of possible associated processes.

Historically, cognitive architectures have focused on the middle stage of the see-think-do metaphor, frequently simplifying the perceptual input and motor output components. However, as cognitive architectures expand in model complexity and desired functionality (e.g., operating in a real-world environment), they increasingly incorporate sensory and motor models to become full-fledged agent architectures, capable of autonomous, intelligent, and adaptive behavior in a real or a simulated world. Cognitive architectures thus contrast with the more narrowly scoped cognitive models (also referred to as micro models of cognition), which

[FIGURE 5-1 Example of a notional sequential cognitive architecture. Sensing and perception modules (vision, hearing, perception) feed cognition modules (multitasking, attention, memory and learning, situation awareness, decision making, planning, behavior moderators), supported by working memory and long-term memory (world model, other declarative knowledge, procedural knowledge), and produce motor behavior; stimuli, goals/tasks, and responses connect the architecture to external world events, supporting tasks such as maintaining situation awareness, reporting important events, assessing threats to goals, assessing alternatives, and managing goals and tasks.]
[FIGURE 5-2 A blackboard architecture. Processes for situation assessment, goal selection, action planning, and action selection, along with visual and auditory sensors and motor processes (right hand, left hand, gaze), communicate through a shared blackboard.]

focus on a single function, such as attention, visual search, visual perception, language acquisition, or memory recall and retrieval, and implement micro theories of cognition, rather than unified theories of cognition.

This figure shows a high-level view of a parallel-distributed cognitive architecture, which represents an alternative to the sequential see-think-do model. In parallel-distributed models, processing occurs in multiple, concurrent processes, and coordination among these processes is achieved through the intermediate results posted on the blackboard, which represents the architecture memory. The structure of the blackboard varies, depending on a particular architecture, to represent the desired types of distinct memories.

State of the Art

A large number of cognitive architectures have been developed in both academic and industrial settings, and new architectures are rapidly emerging due to increasing demand, particularly in human-computer interaction (HCI) and decision support contexts, with emphasis on training, decision aiding, interactive gaming, and virtual environments. Three recent reviews provide a comprehensive catalogue of a number of established or commercially available cognitive architectures: a report focusing on U.S.-developed systems (Andre, Klesen, Gebhard, Allen, and Rist, 2000, pp. 51-111), a supplementary report focusing on systems developed in Europe, primarily in the United Kingdom (Ritter et al., 2003), and a review by Morrison that covers architectures in both the United States and Europe and includes some of the lesser known systems (Morrison, 2003). All three reviews provide detailed descriptions of the architectures in terms of the cognitive processes
modeled, their historical context, applications, implementation languages, and any validation studies. A large number of research-oriented architectures also exist in laboratories around the world. The best sources for information regarding these architectures are conferences and workshops, such as the International Conference on Cognitive Modeling, the annual meeting of the Cognitive Science Society, symposia and conferences of the American Association for Artificial Intelligence, Autonomous Agents and Multi-Agent Systems, Human Factors, and BRIMS. See Table 2-1 for an overview of cognitive architectures used in military contexts.

Existing cognitive architectures are being used to support research on both human cognition and, more recently, emotion (see the next section on cognitive-affective models). They are also used in applied settings to control the behavior of synthetic agents and robots in a variety of contexts, including gaming and virtual reality environments, to enable user modeling in adaptive systems, and as replacements for human users and subjects for training, assessment, and system design purposes.

It is beyond the scope of this chapter to describe in detail the large number of architectures that have been developed over the past 25 years. The three reviews mentioned above are excellent sources of in-depth information regarding a number of architectures that are sufficiently established to be included in comprehensive reviews. Below we briefly discuss a subset of these, to provide a sense of the breadth of theoretical orientations, representational formalisms and modeling methodologies, and applications.

It should be noted that each architecture elaborates a particular subset of cognitive processing and that the architectures vary in their ease of transition to other domains and ease of use.
These factors must be taken into consideration when a particular architecture is being considered as a modeling tool for a specific problem in a particular domain. For example, ACT-R focuses on relatively low-level processing and is particularly concerned with memory modeling. EPIC emphasizes models of multitasking. Soar emphasizes a particular model of learning, cast in relatively high-level symbolic terms. Thus, before a particular architecture is adopted for a specific modeling effort, it is necessary to carefully assess its ability to model the processes of interest at the desired level of resolution.

The most established architectures in the United States are ACT-R and Soar, each having a large and active academic research community, with annual workshops and tutorials, and each having an increasing presence in industry, primarily the defense industry. These are described below, followed by several other prominent architectures.
ACT-R

The historical focus of ACT-R (Atomic Components of Thought or Adaptive Character of Thought) has been on basic research in cognition and modeling of a variety of fundamental psychological processes, such as learning and memory (e.g., priming) (Anderson, 1983, 1990, 1993). ACT-R combines a semantic net representation with a rule-based representation to support declarative and procedural memory representation and associated inferencing. ACT-R is probably the cognitive architecture that is "best grounded in the experimental research literature" (Morrison, 2003, p. 24). Primary early applications were tutoring in mathematics and computer programming (see www.carnegielearning.com). Gradually, ACT-R evolved into a full-fledged cognitive architecture, with increasing emphasis on sensory and motor components and applications in military settings (e.g., modeling adversary behavior in military operations on urban terrain (MOUT), tactical action officers in submarines, and radar operators on ships; Andre et al., 2000; Anderson et al., 2004).

Soar

Soar (State, Operator, and Results) development was initially motivated by the desire to demonstrate the ability of generalized problem spaces, rules, and heuristic search capabilities to solve a wide range of problems and by the desire to develop an implementation of the unified theory of cognition of Newell (1990). Soar uses production rules to implement this problem-solving paradigm, via application of "operators" to states within a problem space. Soar represents all three types of long-term memory (declarative, procedural, and episodic) in terms of rules. A distinguishing feature of Soar is its ability to form new operators (rules) from existing operators (rules) when it reaches an impasse in its problem solving (an impasse being defined as either no applicable operator being selected or a conflict among operators).
It is thus one of the few architectures that explicitly addresses learning, albeit in the limited context of combining existing elements within its own knowledge base, rather than the bona fide acquisition of new knowledge from its interaction with the environment. Soar models both reactive and deliberative reasoning and is capable of planning (Hill, Chen, Gratch, Rosenbloom, and Tambe, 1998).

While Soar was in part motivated by theoretical considerations, particularly Newell's unified theory of cognition, the architecture has become a more traditional AI system, in its increasing emphasis on performance rather than accurate emulation of human information processing. A frequent criticism of Soar is its large number of free variables, which enables a large number of specific models to match empirical data, thereby making
it difficult to unequivocally establish the validity of a given model. This is the case with most computational cognitive architectures.

Soar's capabilities progressed from simple toy tasks (puzzles), through expert systems applications (medical diagnosis, software design), to architectures capable of controlling autonomous agents. Soar is among the most extensively applied cognitive architectures and includes a number of training installations or exercises in which it has replaced human role players or autonomous air entities: TacAir-Soar at the Air Force Research Laboratory (AFRL) training laboratory and at Williams Air Force Base (fixed-wing missions), Joint Forces Command (JFCOM) J9 exercises, MOUTBot (soldier models) for VIRTE MOUT at the Office of Naval Research, JCATS at the Defense Modeling and Simulation Office, SOFSoar at JFCOM, RWA-Soar (rotary-wing missions), STEVE for training simulations, and Quakebot for interactive computer games (Jones et al., 1999; Laird, 2000). The applications in the military are being developed by Soar Technology, Inc. (http://www.soartech.com). Soar also serves as the core technology at the Institute for Creative Technologies at the University of Southern California, where it acts as an agent architecture, controlling synthetic characters in virtual environments, primarily applied to training and game-based training environments. Soar has also been applied in a nondefense context, to develop a decision support system for businesses (KB Agent, developed by ExpLore Reasoning Systems, Inc.).

While the emphasis in Soar applications has been on individual models, it has also been applied in modeling multiagent environments, in which explicit representations exist of shared structures among team members (e.g., goals, plans).
The STEAM model (Shell for TEAMwork) (1996) implements these enhancements and has been applied to military simulations (models of helicopter pilots) and to modeling soccer players in the RoboCup competition (Tambe et al., 1999).

EPIC

EPIC (Executive-Process/Interactive Control), developed from the MHP (Card et al., 1986), focuses on models of human behavior in multitasking contexts in human-computer interaction. A distinguishing feature is its emphasis on integrating cognition with perceptual and motor processes. EPIC's sensorimotor capabilities have motivated its inclusion in some Soar models, to provide an interface with the real world. EPIC uses production rules to represent both its long-term memory and the control of processing within the architecture. It is primarily focused on research and is a good example of a more constrained architecture with a strong focus on validation against human performance data. Recently EPIC has also been used in more applied settings, for the design of undersea ship systems.
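Production-rule processing of the kind used by EPIC (and by Soar and ACT-R) can be illustrated with a minimal forward-chaining match-fire loop. This is a generic sketch rather than any architecture's actual implementation; the rule contents and fact names are invented, and real architectures use far more elaborate conflict-resolution schemes (e.g., utilities in ACT-R, preferences in Soar):

```python
# Minimal forward-chaining production system sketch.
# Each rule is (conditions, additions): if all conditions are present
# in working memory, firing the rule adds its actions to memory.

def condition_met(conds, memory):
    """A rule matches when all of its condition elements are in memory."""
    return all(c in memory for c in conds)

def run(rules, memory, max_cycles=20):
    """Repeat the match-fire cycle until no rule adds anything new."""
    for _ in range(max_cycles):
        fired = False
        for conds, additions in rules:
            # Fire only if the rule would actually change memory.
            if condition_met(conds, memory) and not additions <= memory:
                memory |= additions   # fire: add the rule's actions
                fired = True
                break                 # toy conflict resolution: first match wins
        if not fired:
            break                     # quiescence: no rule can fire
    return memory

# Invented example rules, loosely in the style of a multitasking model:
rules = [
    ({"stimulus-tone"}, {"goal-respond"}),
    ({"goal-respond", "hands-free"}, {"action-press-key"}),
]
memory = run(rules, {"stimulus-tone", "hands-free"})
```

Each pass through the loop corresponds to one recognize-act cycle: rules are matched against working memory, one is selected, and its actions modify working memory for the next cycle, so that a chain of rules carries processing from stimulus toward action.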
COGNET

The COGNET (COGnition as a Network of Tasks) architecture was developed by CHI Systems and combines several knowledge representation formalisms in a blackboard-oriented framework. It was initially applied in user interface design (Zachary, Jones, and Taylor, 2002) but has been expanded to include models of multitasking in the context of air traffic control (Zachary, Santarelli, Ryder, Stokes, and Scolaro, 2001) and intelligent tutoring (Zachary et al., 1999). COGNET has an associated development environment, iGEN, which is commercially available from CHI Systems.

OMAR

OMAR (Operator Model Architecture) is a task-goal network model with a focus on multitasking, developed by BBN, Inc. (Deutsch, Cramer, Keith, and Freeman, 1999), from an earlier conceptual prototype, the CHAOS model (Hudlicka, Adams, and Feehrer, 1992). OMAR and its later distributed version, D-OMAR, have been used to model air traffic control and pilot error (Deutsch et al., 1999; Deutsch and Pew, 2001). It was one of the systems participating in the AMBR (Agent-based Modeling and Behavior Representation) validation project, in which its performance was compared with other cognitive architectures and with human subjects in the context of air traffic control (Gluck and Pew, 2005). Recent versions of OMAR were expanded with models of auditory and visual inputs, and the system was reimplemented in Java (from the original LISP version) to improve performance.

MIDAS

MIDAS (Man-machine Integrated Design and Analysis System) uses a goal-task network model to model simple, reactive decision making.
It includes sensory inputs (visual and auditory) and simple motor outputs and has been applied in human-computer interaction to model pilot behavior in support of cockpit design (Corker and Smith, 1992; Corker, Gore, Fleming, and Lane, 2000; Laughery and Corker, 1997), air traffic control, the design of emergency communication systems, and the design of automation systems for nuclear power plants. MIDAS is also capable of modeling multiple, interacting agents.

SAMPLE

SAMPLE (Situation Awareness Model for Pilot-in-the-Loop Evaluation) is a sequential hybrid model developed by Charles River Analytics,
using several knowledge representation mechanisms, including fuzzy logic, belief nets, and rules. It has been applied to model air traffic control, pilot behavior, unmanned aerial vehicles, and soldier behavior in MOUT operations (Zacharias, Miao, Illgen, and Yara, 1995; Harper, Ton, Jacobs, Hess, and Zacharias, 2001). SAMPLE implements the recognition-primed decision-making model (Klein, 1997) and does not include complex planning. Sensorimotor components are represented at highly abstracted levels. SAMPLE has a drag-and-drop development environment, GRADE, for rapid application prototyping and is available commercially.

APEX

APEX is an architecture supporting the creation of intelligent, autonomous systems and serves also as a development environment. One of its goals is to reduce the effort required to develop agent architectures. Its primary applications are in human-computer interaction, to help design user interfaces and human-machine systems (Freed, Dahlman, Dalal, and Harris, 2002), and it has been applied in air traffic control.

Other Architectures

Several other architectures should be mentioned briefly. D-COG (Distributed Cognition) was developed at AFRL (Eggleston, Young, and McCreight, 2000) to model complex adaptive behavior. It was one of the architectures evaluated in the AMBR experiment (see Validation below). BRAHMS (Business Redesign Agent-Based Holistic Modeling System) is an environment developed by the National Aeronautics and Space Administration (NASA) for modeling multiple, interacting entities (Sierhuis and Clancey, 1997; Sierhuis, 2001) and emphasizes the interaction among entities rather than individual cognition.

Several well-established cognitive architectures have been developed in Europe.
COGENT (COGnitive Objects within a Graphical EnviroNmenT) is a development environment for constructing cognitive models, developed by Cooper and colleagues (Cooper, Yule, and Sutton, 1998; Cooper, 2002). It supports the construction of cognitive architectures from individual, independent "modules," each responsible for a particular cognitive (or perceptual) function, and includes explicit support for systematic evaluation of the resulting models. COGENT offers a number of representational formalisms, including connectionist formalisms supporting the representation of distributed, subsymbolic knowledge. It has been applied to model medical diagnosis, models of memory, and models of concept learning.

The architectures outlined above are primarily symbolic and represent the most common approach to the development of integrated cognitive
architectures. There are also examples of architectures that use connectionist formalisms, either exclusively or in combination with symbolic representations. We briefly mention two of these below. An example of the former is the ART (Adaptive Resonance Theory) architecture, developed by Grossberg (1999, 2000). ART emphasizes learning and parallel processing, both being key benefits of connectionist formalisms. An example of a hybrid connectionist-symbolic architecture is CLARION (Connectionist Learning with Adaptive Rule Induction ON-line), developed to support research in combined representations of symbolic knowledge (via rules) and subsymbolic knowledge (via connectionist networks) and inductive learning (Sun, 2003, 2005).

Current Trends

Several current trends in cognitive architecture development promise to contribute to more efficient development of these complex simulation systems, as well as to more effective applications:

• Efforts to incorporate individual differences and behavior moderators, such as personalities and emotions, both to support basic research and to produce more realistic and robust agents (see next section).
• Efforts to provide broadly scoped end-to-end architectures, with increasing emphasis on sensory and motor processes, to enable the associated synthetic agent or robot to function in a virtual or actual environment (e.g., the variety of Soar-based agents being developed at the Institute for Creative Technologies).
• Use of shared ontologies to facilitate the labor-intensive effort of cognitive task analysis and domain-specific model construction.
• Use of development environments to facilitate cognitive architecture construction, which may include automatic knowledge acquisition/knowledge engineering (KA/KE) facilities, visualizations, and model performance assessment and analysis tools.
• Increasing emphasis on empirical validation, frequently with respect to human performance data, and the development of validation methodologies and metrics (e.g., Gluck and Pew, 2005).

Verification and Validation Issues

As stated above, verification refers to ensuring that the architecture functions as intended, that is, that the model has been implemented according to the specifications. Validation refers to the degree to which the model specifications reflect reality, at the desired level of resolution. We focus
here on model validation and, more broadly, on model evaluation. While there is increasing emphasis on validation of cognitive architectures, validation remains one of the most challenging aspects of cognitive architecture research and development: "HBR [human behavior representation] validation is a difficult and costly process [and] most in the community would probably agree that validation is rarely, if ever, done" (Campbell and Bolton, 2005, p. 365). Campbell goes on to point out that there is no general agreement on exactly what constitutes an appropriate validation of a cognitive architecture. Since cognitive architectures are developed for a wide variety of reasons, there is a correspondingly wide set of validation (and evaluation) objectives, metrics, and associated methods. The lack of established benchmark problems and criteria exacerbates this problem. It is interesting to note that a set of recommendations for model accreditation and validation was made in the 1998 National Research Council report on modeling human and organizational behavior, but these have yet to be implemented. The same report also emphasizes that a general validation of these complex models is not possible and that the models must be evaluated in the specific context for which they were developed.

Within these constraints, several approaches exist for cognitive architecture validation, varying in the data requirements, time, and effort required and in the quality of the validation results. We list these below, in order of decreasing overall quality.

• Comparative empirical studies: The architecture's performance is compared with human performance on the same task and in the same context. Both outcome and process measures can be used: the former include time, mean time between errors, accuracy and error types, and behavioral choices.
The latter include assessments of internal and intermediate states, such as emotions, workload, situation assessments, etc. The empirical data used can be obtained from a variety of sources; the ideal sources are parallel empirical studies, conducted in the same task context as the model development. As these types of studies become more common, guidelines are emerging regarding the methods (and criteria) for establishing the goodness of fit between the human and the model performance.
• Performance-based evaluation: The architecture's effectiveness is assessed with respect to selected performance criteria, which are defined on the basis of the architecture's role and objectives (e.g., improved training, degree of agent realism, improved prediction of the modeled decision maker's behavior, more robust and effective behavior).
• Heuristic evaluation: The architecture's performance is evaluated by a panel of experts (or users). This is the weakest form of validation
but is frequently used because of resource limitations. Even with this weak method of validation, certain principles must be followed (e.g., judgments must be collected from individuals who were not involved in system development, and data should be collected independently). When these guidelines are not followed, this approach is sometimes referred to as BOGSAT, "bunch of guys sitting around a table," and is clearly to be avoided (National Research Council, 2003; Campbell and Bolton, 2005).

Validation studies also vary with respect to the scope of the validated components. The architecture may be evaluated as a whole, or selected modules or submodules may be evaluated. Table 3.1 in the 1998 NRC report on human behavior modeling (National Research Council, 1998, p. 104) provides a useful summary of validation studies performed prior to 1998. A word of caution is in order, however, since not all the validation studies use the same criteria; in other words, a model fully validated using a panel of experts does not reflect the same degree of validity as a model partially validated using actual human performance data.

To date, none of the existing cognitive architectures has been fully validated against generalized human performance. There are, however, a number of task-specific validation studies for many of the established architectures and a larger number of validation studies for single-process cognitive models (e.g., models of memory retrieval, visual attention models, GOMS-based models of user performance on specific tasks using a particular interface). The GOMS family of models has proved to be particularly useful in HCI, in which they have been used to evaluate and select from candidate designs, often saving large amounts of money (e.g., Gray, John, and Atwood, 1993; see also Olson and Olson, 1990).
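Establishing goodness of fit between model and human performance, as discussed above, is often summarized with two complementary statistics: a correlation coefficient, which captures whether the model reproduces the trend across conditions, and the root-mean-square deviation (RMSD), which captures absolute agreement. The sketch below assumes paired condition means; the response-time values are invented for illustration:

```python
import math

def goodness_of_fit(human, model):
    """Pearson correlation (trend agreement) and root-mean-square
    deviation (absolute agreement) between paired performance
    measures, e.g., mean response times per task condition."""
    n = len(human)
    mh = sum(human) / n
    mm = sum(model) / n
    cov = sum((h - mh) * (m - mm) for h, m in zip(human, model))
    sh = math.sqrt(sum((h - mh) ** 2 for h in human))
    sm = math.sqrt(sum((m - mm) ** 2 for m in model))
    r = cov / (sh * sm)
    rmsd = math.sqrt(sum((h - m) ** 2 for h, m in zip(human, model)) / n)
    return r, rmsd

# Invented data: mean response times (seconds) in four task conditions.
human_rt = [1.20, 1.45, 1.90, 2.40]
model_rt = [1.10, 1.50, 2.00, 2.30]
r, rmsd = goodness_of_fit(human_rt, model_rt)
```

A high correlation combined with a large RMSD indicates a model that captures the pattern but not the magnitudes of human performance; both statistics are therefore usually reported together.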
One of the earliest examples of a cognitive architecture validated against human performance is EPIC, which successfully predicted multitasking performance in telephone operators (Kieras, Wood, and Meyer, 1997). Validation against empirical data continues to be a focus of EPIC research.
As cognitive architectures proliferate in mission-critical contexts, more opportunities exist for their validation in complex task settings. For example, Purtee and colleagues (Purtee, Krusmark, Gluck, Kotte, and Lefebvre, 2003) validated an ACT-R model controlling unmanned aerial vehicle operation, using verbal human data and protocol analysis. Andre et al. (2000) discuss validation studies of ACT-R, Soar, COGNET, and MIDAS.
In general, three factors hinder systematic cognitive architecture validation studies:
1. Lack of established validation metrics and associated methods, including benchmark problems, and an understanding of when to
apply which metric, using a particular method, in a specific task context. Different validation criteria are appropriate for different system objectives and operational characteristics. Currently, however, no systematic taxonomy exists of either the system objectives or the operational contexts.
2. Frequent confusion between verification (Does the system do what it was programmed to do?) and validation (Does the system accurately represent the modeled system?) (Campbell and Bolton, 2005). Verification studies are often presented as proofs of model validity, with the architecture developers showing how the system generates behavior that is consistent with the behavior of human agents in some limited context. Such studies are almost meaningless, however, in establishing the model validity.
3. The extensive effort required to conduct studies comparing human and cognitive architecture performance on a given task. These studies require first the development of a simulation environment for the particular task (e.g., air traffic control), and the development of the human-task and cognitive-architecture-task interfaces, to enable both the humans and the architecture to perform the task. In addition, the system must support human subject performance tracking and data collection. Given the general lack of interoperability among cognitive architectures, establishing these interfaces is a labor-intensive endeavor, and only one case exists in which several architectures were systematically compared on a set of benchmark problems: the AMBR project.
The AMBR project represents the more promising validation approach: the systematic comparison of the cognitive architecture performance on a particular task with human performance on the same task and under identical circumstances (Gluck and Pew, 2005).
Relevance, Limitations, and Future Directions

Relevance

Cognitive architectures are built both for basic research and for applied purposes. Research architectures aim to develop a model of some aspect of human information processing, to enhance understanding of these phenomena by identifying the mediating structures and mechanisms. Specific applications of cognitive architectures include the control of autonomous synthetic agents and robots in a variety of settings, including operational systems in hostile or adverse environments, control of synthetic characters and agents in virtual reality environments, stand-ins for humans to enhance realism and believability in simulation-based training and assessment environments, and as alternatives to human subjects in empirical studies supporting human factors analyses (e.g., user interface design and operation, task allocation between human and machine, risk assessment and reduction, personnel-task matching). Recent advances in gaming technologies and the proliferation of games into a variety of settings, including military training, enable the integration of interactive gaming, virtual environments, and cognitive architectures to create immersive environments with increasing levels of realism. Such environments are increasingly being used in training, assessment, rehabilitation, and human factors analyses.
Cognitive architectures can also be used for behavior prediction in a variety of settings, both individual and team, and across a range of task types and contexts. While some success has been achieved in predicting simple behavior in highly constrained task contexts, primarily HCI contexts (e.g., EPIC has generated accurate predictions of reaction times in simple dual-task contexts; Kieras et al., 1997), forecasting individual behavior in complex, under-constrained contexts is difficult, and often impossible. In spite of recent attempts (e.g., Silverman, Bharathy, and Nye, 2007), "it is currently not within the state of the art to develop a model of a particular person, or to predict the likelihood of a single act at a particular point in time. Instead, the predictive value of cognitive architectures lies more in their ability to generate probabilistic distributions of a range of possible behaviors that a particular type of individual might exhibit in given circumstances, rather than to generate predictions of the likelihood of single acts by particular individuals" (Hudlicka, 2006b, p. 14).
The increasing emphasis on complex cognitive processes in military modeling is creating a broad range of applications for cognitive architectures modeling individual entities.
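The distinction between predicting single acts and generating distributions of possible behaviors can be made concrete with a small Monte Carlo sketch. The agent model, behavior labels, and parameter values below are entirely hypothetical, not drawn from any architecture discussed here:

```python
# Minimal sketch (hypothetical agent, not an actual cognitive architecture):
# rather than predicting a single act, run a stochastic agent model many
# times and report the distribution of behaviors that a given "type" of
# individual tends to produce.
import random
from collections import Counter

def toy_agent_step(aggressiveness, rng):
    """One simulated decision by an agent of a given personality type."""
    roll = rng.random()
    if roll < aggressiveness:
        return "confront"
    elif roll < aggressiveness + 0.3:
        return "negotiate"
    return "withdraw"

def behavior_distribution(aggressiveness, runs=10_000, seed=1):
    """Estimate the probability of each behavior by repeated simulation."""
    rng = random.Random(seed)
    counts = Counter(toy_agent_step(aggressiveness, rng) for _ in range(runs))
    return {behavior: n / runs for behavior, n in counts.items()}

# A "high-aggressiveness" individual type yields a distribution over acts,
# not a point prediction of any single act.
print(behavior_distribution(aggressiveness=0.5))
```

The output is a probability estimate per behavior, which is the form of prediction the quoted passage argues cognitive architectures can realistically provide.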
Both the research and the applied cognitive architectures are relevant. Cognitive architectures are relevant for three of the core areas in military modeling: analysis and forecasting in planning, simulation for training and rehearsal, and design and evaluation for acquisition. These architectures are critical components of specific modeling and simulation applications: disruption of terrorist networks, prediction of adversaries' responses to specific courses of action, prediction of societal reactions to specific events, crowd behavior modeling and crowd control training, and organizational design. These modeling needs, along with the increasing transitions to teams and nontraditional warfare, also highlight the increasing importance of modeling individual motivation and behavior variability, via explicit focus on models of emotion and personality traits. Both of these are addressed in the emerging cognitive-affective architectures, discussed in the next major section.
Major Limitations

While there have been great theoretical, methodological, and technological advances in the development of cognitive architectures, many limitations remain.
The most critical one is in the area of validation. This includes a lack of established validation criteria and methodologies, frequent confusion between verification and validation, a lack of methods for validating architecture memory and knowledge bases, and the lack of any fully validated, domain-independent cognitive architectures. Currently no validated cognitive architecture exists, and systematic validation efforts, including validation methods and appropriate metrics, are just beginning to emerge (e.g., Gluck and Pew, 2005).
Another limitation is the time and effort required to develop a cognitive architecture and the associated bottleneck of knowledge engineering required for these models. As discussed above, the instantiation of an architecture in a new domain requires large amounts of human performance and task data, as well as information about the nature of internal problem solving and decision making. Whether obtained from empirical studies or from cognitive task analyses and knowledge elicitation interviews, the process of obtaining the necessary human data is highly labor-intensive and represents a major bottleneck in the development of cognitive architectures capable of emulating human problem solving, decision making, and performance. In addition, once built, the resulting long-term memories typically require extensive tuning to produce the desired behavior and match human performance data.
Even with the required tuning, cognitive architectures exhibit the "brittleness" problem that plagues expert systems: a lack of graceful degradation when the limits of the domain knowledge (the model's long-term memory) are reached.
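The brittleness problem can be illustrated with a deliberately tiny, hypothetical rule base (the rules and task below are invented for illustration): inside its knowledge limits the system answers; one step outside, it fails abruptly, with nothing resembling a best guess.

```python
# Toy illustration (hypothetical rules) of the "brittleness" problem:
# a rule-based long-term memory answers correctly inside its domain and
# fails abruptly, with no partial or approximate answer, just outside it.

RULES = {  # hypothetical domain knowledge for a vehicle-identification task
    ("fast", "small"): "fighter",
    ("slow", "large"): "transport",
}

def classify(speed, size):
    try:
        return RULES[(speed, size)]
    except KeyError:
        # No graceful degradation: no similarity-based fallback, no guess.
        return None

print(classify("fast", "small"))   # inside the knowledge base -> "fighter"
print(classify("fast", "large"))   # just outside it -> None (abrupt failure)
```

Graceful degradation would require something beyond exact rule lookup, such as similarity-based retrieval or learned generalization.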
This is one of the factors that limit the scope and degree of realism, and it applies equally to nonlearning systems and architectures with limited learning capabilities, such as Soar.
Some researchers question whether the process of "manual" long-term memory construction can ever produce long-term memories capable of supporting robust performance, as is the case in biological agents. It is possible that long-term memories may need to be automatically constructed (learned) from ongoing, long-term interaction with the environment, as is the case with intelligent biological agents, including humans (Mathews, 2006), to produce robust knowledge bases capable of matching human performance and to enable the accurate representation of a range of behavior moderators, including emotions and personalities.
Regardless of one's theoretical position on this matter, it is becoming apparent that automated construction of cognitive architecture memories
or knowledge bases may be the most pragmatic solution to the difficult and labor-intensive task of knowledge base development.
A related challenge is posed by the differences in representational resolution between the cognitive architecture representational capabilities and needs on one hand, and the empirical methods available for knowledge extraction on the other. Computational models offer a higher degree of representational resolution for the internal processes than currently available human empirical data. In other words, while it is now possible to build detailed models of situation assessment, planning, learning, metacognition, and similarly complex cognitive processes, one cannot unequivocally identify the internal mechanisms and structures that mediate these functions in biological agents. This state of affairs has serious implications for model validation, as discussed below.
While extensive human performance data exist at the periphery of human problem solving and performance, that is, the sensory and motor data that define the model inputs and outputs, these are more suitable for black box input-output models. Cognitive architectures enable, and frequently require, the specification of the detailed nature of internal mental processes, at a level of resolution that is currently not matched by the ability to obtain the required data. The data required to represent the internal mental structures and processes (e.g., situations, expectations, goals, beliefs) can be obtained only via indirect inference from observable behavioral data or self-reports. It should also be noted here that the current enthusiasm for in vivo brain imaging techniques (such as fMRI or PET scans) being able to provide these data at the required level of resolution is considered premature by many neuroscientists.
A more pragmatic limitation is the lack of established domain ontologies, standardized modeling languages, and scenario and data repositories, which further hinders the architecture development process. Similarly, the lack of model standardization and the lack of interoperability limit the ability to exchange components across architectures and research groups. Both of these contribute to the fragmented state of affairs in architecture development, as well as the lack of established benchmark problems against which different architectures could be compared, both to establish their validity and to facilitate systematic comparisons of the capabilities of different architectures.
Another factor limiting the realism and fidelity of cognitive architectures, as well as the believability of the associated agents, is the lack of models of many mental processes that influence human perception, cognition, and behavior and give rise to the type of variability and adaptability observed in humans. This is discussed further in the next section.
Performance can also be an issue, particularly in applied agent and robotic systems that require real-time responses. New hardware and
non-von Neumann machine architectures are likely to contribute to solving this problem in the future (Martínez, Gomes, and Linderman, 2005).
Last is a limitation that is particularly relevant in the context of this study: the relative lack of interaction and collaboration among the research communities centered on particular architectures. The two most established architecture communities, Soar and ACT-R, have until very recently dominated the market (as evidenced by the ICCM biennial conference). Newcomer architectures often have a difficult time getting established and recognized, and potentially productive interactions among architectures with complementary strengths are not exploited. Morrison highlights this issue when discussing the BRAHMS architecture (Sierhuis, 2001), noting that its focus on social interaction and the focus of Soar and ACT-R on detailed models of cognition would make for an ideal collaboration, which has not occurred (Morrison, 2003, p. 39). The "everyone in his or her own sandbox" phenomenon is a common social one. However, it is important to recognize to what extent this situation limits the continued development of these important models and the successful addressing of the limitations outlined above. The development of standardized problem sets for architecture comparison would go a long way toward addressing this situation, as would the development of shared memories and domain ontologies. A concerted effort to promote long-term collaborations among different research groups is probably the single most critical element in advancing the state of the art.

Future Directions

Expanding on the earlier discussions, we briefly list the main points and augment them with additional suggestions resulting from a recent workshop that brought together researchers from the cognitive science and architecture-development communities (Martínez et al., 2005).
• Facilitate architecture development: via the use of standardized domain representation languages (e.g., human modeling markup languages), interchangeable plug-and-play components of generic architectures, and construction of cognitive architecture development environments.
• Facilitate architecture instantiation: via shared domain ontologies and human performance data repositories.
• Facilitate knowledge base development: via the use of automatic knowledge acquisition methods and machine learning, to eliminate the need for labor-intensive knowledge engineering.
• Enhance model explanation capabilities: via the development of visualization and explanation tools that support the understanding of the complex processing in a cognitive architecture.
• Address the brittleness problem: via a combination of hybrid knowledge representation approaches (symbolic and connectionist), learning and automatic knowledge acquisition to develop architecture knowledge bases, and the representation of commonsense knowledge.
• Enhance realism: by integrating architectures with embodied agents, either synthetic agents in virtual environments or robots, and by including emotion, personality, and cultural factors to produce the type of behavioral patterns and variabilities characteristic of human behavior.
• Validation: develop validation methods, metrics, accreditation procedures, and environments facilitating the comparison of model performance with human data and with other architectures on a set of well-defined benchmark problems. Support the development of such validation suites, in terms of shared simulation environments and benchmark test suites, broadly available to researchers and model developers. Support validation of the system as a whole, but also component validation, such as function-based or module-based validation.
• Explore new modeling formalisms: explore the applicability of additional representational and inferencing mechanisms to enhance cognitive architecture performance, including nonsymbolic approaches such as chaos theory, and learning methods, such as genetic algorithms.
• Models of groups and teams: apply cognitive architectures to models of groups and teams, in which the decision-making processes of the entity of interest can be sufficiently abstracted to enable the development of a cognitive architecture model representing the group as a whole.
• Context and task models: enhance the understanding of model limitations by specifying the range of tasks and operational contexts for which a particular model is applicable and defining task and context taxonomies.
Identify situations in which behavior can or cannot be predicted with varying degrees of specificity and accuracy.

Affective Models and Cognitive-Affective Architectures

Computational models of emotion represent a relatively recent development in computational models of mental phenomena. This development follows a rapid growth in emotion research in both psychology and neuroscience over the past 15 years. Although the computational approach
to emotion research represents a recent development, the recognition of the importance of emotion in decision making and individual and social behavior is not new (e.g., Simon, 1967), nor is the recognition that understanding emotion is critical for understanding cognition and adaptive behavior in general (Norman, 1981).
Like architectures focused on cognition, cognitive-affective architectures are simulation-based models of human information processing. In contrast to purely cognitive architectures, cognitive-affective architectures also include some aspects of affective processing. Like their purely cognitive counterparts, cognitive-affective architectures are used for both research and applied purposes. In addition to the objectives discussed for cognitive architectures, these models also serve to explore the nature of affective processes and the mechanisms of cognition-emotion interaction and, in more applied contexts, to enhance the realism, believability, and effectiveness of synthetic agents and robots. Given the critical role of emotion in interpersonal communication, these architectures are thus particularly relevant for organizational modeling (Hudlicka and Zacharias, 2005).
In spite of their relatively recent appearance in cognitive science and AI research, significant progress has been made in computational emotion modeling and cognitive-affective architectures, particularly in the more applied areas of synthetic and believable agents (e.g., Dautenhahn, Bond, Cañamero, and Edmonds, 2002; de Rosis, Pelachaud, Poggi, Carofiglio, and De Carolis, 2003; Prada, 2005).

What Are Cognitive-Affective Architectures?
Cognitive-affective architectures are computational simulation models of particular affective phenomena (e.g., effects of emotions on behavior), some aspects of affective information processing (e.g., generation of emotion via cognitive appraisal of the current situation), and associated affective factors (i.e., specific emotions, moods, or affective personality traits). The processes modeled most frequently are the generation of emotion via cognitive appraisal and the effects of emotion on behavior (e.g., Bates, Loyall, and Reilly, 1992; Gratch and Marsella, 2004b; Reilly, 2006). Less frequently, these architectures also include models of emotion effects on perception and cognition (Hudlicka, 1998, 2002a, 2002b, 2007b; Ritter, Avramides, and Councill, 2002).
The affective factors modeled in cognitive-affective architectures include both transient states and more permanent traits. The states include short-lasting emotions, such as joy, fear, anger, and sadness, as well as longer lasting moods (e.g., fearful, happy, sad). Traits include affective personality traits, such as the emotional stability and extraversion of the five-factor personality model (Costa and McCrae, 1992). Some models also include
mental states that have both cognitive and affective components, such as attitudes.
It is beyond the scope of this section to discuss the extensive literature in emotion research in psychology and neuroscience, both theoretical and empirical, which serves as the basis for computational emotion models. The reader is referred to the excellent recent handbooks on research in emotion and the affective sciences (Ekman and Davidson, 1995; Lewis and Haviland-Jones, 2000; Scherer, Schorr, and Johnstone, 2001; Davidson, Scherer, and Goldsmith, 2003). Briefly, however, we define emotions at the most abstract level as mental states that involve evaluations of current situations (internal or external; past, present, or future) with respect to the agent's goals, beliefs, values, and standards. Note that this evaluation does not imply conscious, deliberative cognitive processes.
A key aspect of emotions, and affective factors in general, is their multimodal nature. These complex phenomena involve physiological components associated with changes in the autonomic nervous system processes (e.g., heart rate, blood pressure, galvanic skin response); cognitive components (e.g., changes in attention and working memory properties); behavioral components associated with the expression of emotions, moods, and traits (e.g., facial expressions, effects on speech, gestures, posture, behavioral choices); and subjective components (e.g., idiosyncratic individual feelings associated with particular emotions and moods). It is critical to keep in mind this multimodal nature of emotions, since many misunderstandings of these complex phenomena can be traced to a focus on only a subset of these modalities, for example, misleading questions such as "Is emotion a thought or a feeling?" It is both and more. Izard (1993) provides a framework for integrating the multiple modalities of emotion in the context of emotion generation.
Emotions play a number of critical roles in biological agents, both intrapsychic and interpersonal. Examples of the former include goal management, reallocation of resources, and rapid activation of fixed behavior repertoires, all designed to enhance adaptive behavior (Hudlicka, 2003a). Examples of the latter include the mediation of attachment behaviors and the communicative and expressive functions of emotion (e.g., rapid communication of behavioral intent to facilitate coordination). See Hudlicka (2007a, 2007b) for a more in-depth discussion of emotion research background from a computational perspective.
Emotion research in psychology and neuroscience provides strong evidence that cognitive and affective processes function in parallel and in a closely coupled manner (e.g., LeDoux, 1998; Phelps and LeDoux, 2005). Most modern theories of emotion therefore consider cognition to be an important component of affective processing, and vice versa. This, along with a definition of cognition that includes both conscious/deliberative
and unconscious/automatic processing, makes earlier debates regarding the primacy of cognition (Lazarus, 1984) versus the primacy of emotion (Zajonc, 1984) in the generation of emotion a matter of semantics. The current consensus regarding this issue is that these debates were largely a result of terminological vagueness and misunderstanding regarding exactly what constitutes cognitive processes.
Cognitive-affective architectures share a number of features with the cognitive architectures discussed above. Like their cognitive counterparts, emotion models can be standalone models of particular aspects of emotions, particular affective processes, or affect-related phenomena. Cognitive-affective architectures are most frequently symbolic, but they can also contain connectionist components and thus be characterized as hybrid architectures. Purely connectionist approaches are used only for limited-scope models of single phenomena, rather than for entire architectures. The specific constructs and processes represented in a particular cognitive-affective architecture depend on its objective, level of resolution, the specific processes modeled and their theoretical underpinnings, and any particular application, as well as the particular implementation approaches. Like their purely cognitive counterparts, cognitive-affective architectures typically include modules and functions that correspond to specific functions identified in biological agents, for example, emotion generation via cognitive appraisal and generation of facial expressions.
Given the broad range of proposed roles and characteristics of emotions, a systematic description of the variety of existing models addressing these phenomena can be challenging.
Below we structure the description of existing models in terms of a categorization of core affective processes proposed by Hudlicka (2007b): processes mediating emotion generation, and processes mediating emotion effects on cognition and behavior. Hudlicka further suggests that "the mechanism mediating these two fundamental processes then enables the variety of emotion roles identified in biological agents, such as resource re-allocation, goal management, etc." (Hudlicka, 2007a).
The majority of existing cognitive-affective architectures focus on the generation of emotions, most frequently via cognitive interpretive processes, termed cognitive appraisal. The state-of-the-art section below discusses examples of these models and architectures. In the majority of these architectures the outcomes of the generated emotions, the emotion effects, are typically limited to influences on observable behavior. This includes specific behavioral choices by synthetic agents or robots, as well as "emotion expression" in terms of distinct facial expressions, speech, gestures, and movement (e.g., Andre et al., 2000; Paiva, 2000; de Rosis et al., 2003; Breazeal and Brooks, 2005). A few cognitive-affective architectures focus also, or instead, on modeling the effects of emotions on the perceptual
and cognitive processes that mediate decision making and action selection, problem solving, and learning (e.g., the MAMID architecture; Ritter et al., 2002; Bach, 2007; Hudlicka, 2007a, 2003a, 1998). Figure 5-3 illustrates an example of a cognitive-affective architecture with a dedicated affect appraiser module for emotion generation, a number of cognitive modules for the cognitive and perceptual functions supporting the necessary interpretive processes, and a range of modulating parameters that implement the effects of emotions on cognitive processing.
Given the tight integration between cognitive and affective information processing, it follows that cognitive-affective architectures necessarily include purely cognitive processes, such as attention, planning, situation assessment, action selection, and different types of memories (working memory and long-term memories). These functions are necessary to provide the cognitive infrastructure in which the affective processes can be modeled. Thus, for example, cognitive appraisal necessarily requires representation of the actual current state of the world and self (referred to as "situation assessment" or sometimes "beliefs"), and the desired state of the world (referred to as "goals" or "desires"). Cognitive appraisal models also require knowledge about the mappings between specific stimuli (elicitors) and the resulting emotions (e.g., a large, rapidly approaching unknown object is likely to induce fear). More complex models of appraisal may also require the representation and generation of expectations and of the agent's own abilities to cope with a particular situation. Depending on a particular research objective or application, specific cognitive processes of interest may need to be represented (e.g., learning, planning).
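The two core processes described above, emotion generation via appraisal and emotion effects on processing, can be sketched in a few lines of code. Everything below is a hypothetical toy, not an implementation of MAMID or any other architecture: the rules, intensities, and parameter mappings are invented for illustration, with the elicitor example (a rapidly approaching unknown object inducing fear) borrowed from the text.

```python
# Hedged sketch of (1) emotion generation by appraising a situation
# against goals, and (2) emotion effects expressed as changes to
# architecture processing parameters. All rules and numbers are invented.

def appraise(situation, goals):
    """Map an appraised situation to toy emotion intensities (0..1)."""
    emotions = {"fear": 0.0, "joy": 0.0}
    if situation.get("approaching_object") and situation.get("object_unknown"):
        emotions["fear"] = 0.8  # elicitor-to-emotion mapping from the text
    if situation.get("goal_achieved") in goals:
        emotions["joy"] = 0.6
    return emotions

def modulate_parameters(emotions, baseline):
    """Translate emotional state into processing parameters (toy mapping):
    here, fear narrows attention and reduces working-memory capacity."""
    params = dict(baseline)
    params["attention_capacity"] = baseline["attention_capacity"] * (1 - 0.5 * emotions["fear"])
    params["wm_capacity"] = baseline["wm_capacity"] * (1 - 0.3 * emotions["fear"])
    return params

baseline = {"attention_capacity": 7.0, "wm_capacity": 7.0}
emotions = appraise({"approaching_object": True, "object_unknown": True}, goals=set())
params = modulate_parameters(emotions, baseline)
print(emotions["fear"])                         # 0.8
print(round(params["attention_capacity"], 2))   # reduced from the baseline 7.0
```

The point of the sketch is the architecture of the loop: appraisal reads beliefs and goals and writes an affective state; the affective state then parameterizes the cognitive modules rather than directly selecting actions.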
Depending on the particular implementation approach, there may or may not be a one-to-one correspondence between the modeled process (e.g., appraisal) and an architecture module (e.g., an appraisal module).
As with cognitive architectures, cognitive-affective architectures aim to be domain-independent, and their instantiation in a particular domain requires the specification and development of domain-specific long-term memories that contain the problem-solving knowledge required to perform a particular task.

Applications and Benefits of Cognitive-Affective Architectures

The applications and benefits of cognitive-affective architectures are similar to those of purely cognitive architectures, in both the theoretical and the applied realms. In addition, there are further categories of benefits, which follow from the primary roles of emotion in biological agents, as outlined above. The intrapsychic roles of emotion, such as goal management, rapid resource reallocation, and coordination across multiple cognitive functions, enable more robust and effective autonomous behavior by facilitating agent adaptive behavior in complex, uncertain environments (e.g., Velásquez, 1999; Scheutz, 2004; Scheutz and Schermerhorn, 2004; Scheutz, Schermerhorn, Kramer, and Middendorff, 2006; Bach, 2007).

FIGURE 5-3 MAMID, a cognitive-affective architecture and its modulating parameters. Part A illustrates the modules, data flow, and mental constructs that mediate emotion generation via cognitive appraisal and decision making. Part B illustrates how the effects of emotions, personality traits, and other individual differences are translated into architecture parameters that control processing in the individual modules. SOURCE: Adapted from Hudlicka (2003a).

The rationale for using emotion to enhance agent autonomy rests on the assumption that since emotions mediate critical adaptive mechanisms in biological agents (e.g., goal monitoring and management, reward and punishment processes, resource reallocation), they are likely to enhance adaptive behavior in synthetic agents and robots.
The interpersonal roles of emotion, such as communication of internal mental states and behavioral intent, help improve human-machine interaction by enhancing the synthetic agents' realism and believability. The integration of emotions into purely cognitive architectures also enables affective expressiveness and behavioral variability that begins to resemble human behavior and thus enhances agent realism and believability, thereby promoting more engaging human-machine interactions. Examples of these applications include work in pedagogical applications (Prada, 2005; Prendinger and Ishizuka, 2005; Zoll, Enz, Schaub, Paiva, and Aylett, 2006), adviser and recommender systems (e.g., de Rosis et al., 2003), and training (Gratch and Marsella, 2004b). As mentioned above, models of the interpersonal role of emotions are particularly critical in organizational modeling, in which explicit models of social interactions must be represented. Augmenting purely cognitive architectures and models with emotion also enables more accurate and realistic modeling of users in a variety of training and tutoring applications.
Finally, since emotions play critical roles in biological agents, any computational model of biological information processing must necessarily take into consideration affective factors.
This view reflects the current consensus in the neurosciences: to understand cognition, one must also understand emotion (e.g., Phelps and LeDoux, 2005). Representation of emotion is thus necessary to develop realistic models of human information processing and behavior, whether for research or applied purposes. The results of the theoretically motivated models of cognition-emotion interactions have a range of practical applications that include the following:

• Improved pedagogical strategies in education and training.
• Design of more effective and safer human-computer systems through improved human-machine function allocation, task design, and user interface design.
• Improved decision making and performance through the development of affect- and workload-adaptive decision support systems.
• More effective personnel selection for both team and individual tasks.
• More realistic models of social groups, teams, and larger organizations.
• Assessment and treatment for a range of affective and cognitive-affective disorders.

Like their purely cognitive counterparts, cognitive-affective architectures can also be used for behavior prediction in a variety of settings, both individual and team, and across a range of contexts, from simple task behavior prediction to adversary modeling for a variety of purposes, including counterterrorism. Since they include affective factors, which are considered key sources of human behavioral variability and an essential component of motivation, it can be argued that these models are superior to purely cognitive architectures for behavior prediction. The caveat raised in the cognitive architecture section regarding the current limits of individual behavior prediction also applies to cognitive-affective architectures, perhaps even more strongly, since behavioral variability and emotion-induced individual idiosyncrasies make accurate prediction of single acts virtually impossible. However, the addition of emotion to purely cognitive models does enable more realistic modeling and prediction of the possible ranges of behavior, due to varying individual personalities, emotions, and moods, and the consequent variability in interpretive processes, motivation, and behavioral expression (Hudlicka, 2007a).

State of the Art

Existing emotion models and cognitive-affective architectures are being used both as research platforms, to investigate the mechanisms and social roles of emotions, and in a wide range of applications to enhance agent and robot behavior and HCI. The latter are primarily in the form of cognitive-affective user models and cognitive-affective agents, used to enhance some aspect of HCI in training, education, and gaming environments.
The majority of emotion models have been developed in academia, with some in industry research laboratories. The recent emergence of gaming and virtual environments has been a key factor in stimulating interest in applied models of emotion and affective factors (e.g., personalities). No comprehensive review of emotion models and cognitive-affective architectures currently exists, analogous to the reviews of cognitive architectures (i.e., National Research Council, 1998; Morrison, 2003; Ritter et al., 2003). An earlier review by Hudlicka and Fellous (1996) provides descriptions of several older models, a more recent review of some cognitive-affective models can be found in Bach (2007), and Hudlicka (in preparation) will include an overview of existing emotion models and cognitive-affective architectures. Mellers, Schwartz, and Cooke (1998) provide a review of some models of
emotion effects on decision making but focus on more traditional, decision-theoretic models rather than cognitive architecture models.

This section provides a brief overview of the state of the art in emotion modeling, not an exhaustive catalogue of the large number of existing models. Cognitive-affective architectures are most frequently developed de novo (e.g., Velásquez, 1999; Breazeal and Brooks, 2005; Sloman, Chrisley, and Scheutz, 2005; Bach, 2007), although they frequently follow an established structure used for cognitive or agent architectures (e.g., the Belief-Desire-Intention agent architecture is often used as a starting point) or a particular model of information processing (e.g., RPD; Hudlicka, 2007b). In some cases, emotions are integrated into existing established architectures. For example, the Soar cognitive architecture has served as a framework for the implementation of several models of appraisal and emotion effects on behavior (e.g., Henninger, Jones, and Chown, 2003; Gratch and Marsella, 2004a). ACT-R has been used to model effects of emotion on cognition (Belavkin, 2001; Ritter et al., 2002).

Given the complexity of affective phenomena, the wide range of roles that emotions play in adaptive behavior and social interactions, and the limited understanding of these processes, it is challenging to present the wide range of models in a systematic manner. Below we follow Hudlicka's approach (2006a, 2007a) and divide the discussion of existing models into two categories, based on the fundamental affective processes emphasized in the model: emotion generation via appraisal, and emotion effects on perception, cognition, and behavior (Hudlicka, 2008). We conclude the section with a brief discussion of two broadly scoped cognitive-affective architectures.

Models of Cognitive Appraisal

Cognitive appraisal is the dominant theory of emotion generation and the most frequently modeled aspect of emotion.
A few architectures aim to incorporate additional modalities into the appraisal process (e.g., the "somatic marker" hypothesis: Damasio, 1994; Breazeal and Brooks, 2005; Stocco and Fum, 2005), and other noncognitive components (Velásquez, 1999). In computational terms, the objective of appraisal is to map the emotion elicitors (stimuli relevant for the generation of emotion) to the resulting emotion(s). This mapping may be either direct or via an intermediate stage of domain-independent appraisal dimensions (Scherer et al., 2001), which include novelty, valence, goal relevance and goal congruence, responsible agent, coping, and individual and social norms. The specific elicitors may also be mapped onto a set of two or three dimensions that can be used to characterize emotions; typically these are valence and arousal. These mappings are determined in the context of a specific set of the agent's goals and beliefs.
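The two-stage mapping described above can be sketched in a few lines of code. The sketch below is a minimal, hypothetical illustration (the rules, weights, dimension set, and emotion labels are all assumptions, not taken from any published model): a domain-specific elicitor is first scored on domain-independent appraisal dimensions, which are then matched to emotion intensities.

```python
# Hypothetical two-stage appraisal sketch: a domain elicitor is first
# scored on domain-independent appraisal dimensions (novelty, valence,
# goal congruence), which are then mapped to emotion intensities.
# All rules and numeric values are illustrative assumptions.

def appraise_dimensions(elicitor, goals):
    """Stage 1: map a domain-specific elicitor to appraisal dimensions."""
    congruent = elicitor["helps"] in goals
    return {
        "novelty": 1.0 if elicitor["unexpected"] else 0.2,
        "valence": 1.0 if congruent else -1.0,
        "goal_congruence": 1.0 if congruent else 0.0,
    }

def appraise_emotions(dims):
    """Stage 2: map appraisal dimensions to emotion intensities."""
    emotions = {"fear": 0.0, "joy": 0.0, "surprise": 0.0}
    if dims["valence"] < 0:
        emotions["fear"] = dims["novelty"] * abs(dims["valence"])
    else:
        emotions["joy"] = dims["goal_congruence"]
    emotions["surprise"] = dims["novelty"]
    return emotions

goals = {"reach_waypoint"}
threat = {"unexpected": True, "helps": "none"}
print(appraise_emotions(appraise_dimensions(threat, goals)))
```

A direct-mapping model would collapse the two functions into one, trading the reusability of the domain-independent layer for simplicity.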
Different models of appraisal vary in the following: the theoretical foundations used as the basis for the computational model (different theories vary in the degree of elaboration of the processes involved, the stages of processing, and the specific functions included); the specific methods used to implement the elicitor-to-emotion mapping (e.g., rules, vector spaces, decision-theoretic formulations, belief nets); the degree to which goals and beliefs are represented explicitly by the model and the complexity of their representation and relationships; the capability of the model to generalize across ambiguous triggers, reason under uncertainty, and perform approximate matches; whether domain-specific triggers are mapped directly onto the emotions or via a domain-independent "layer" of appraisal dimensions (e.g., novelty, valence, goal congruence); the specific triggers, appraisal dimensions, emotions, or affective dimensions represented in the model; the ability to represent appraisal idiosyncrasies in terms of variability of the matching functions from elicitors to emotions; and whether the model exists in isolation or is integrated in an overall architecture (Hudlicka, 2007a, 2007b).

The OCC model of appraisal (Ortony et al., 1988) remains the most widely used theoretical basis for computational appraisal models. The OCC model defines an elaborate taxonomy of emotion triggers and clusters them into three broad categories: event-based emotions, reflecting the desirability (or lack thereof) of an event with respect to the agent's current goals; attribution emotions, reflecting the praiseworthiness (or lack thereof) of an event or situation with respect to the agent's values; and attraction emotions, reflecting the degree of like or dislike of an entity.
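The three OCC branches just described can be caricatured as a small classifier. The sketch below is only an illustration of the taxonomy's top-level split; the trigger fields and the particular emotion labels chosen per branch are assumptions for the example, not the full OCC type system.

```python
# Illustrative sketch of the three top-level OCC trigger categories:
# event-based emotions (desirability of an event vs. the agent's goals),
# attribution emotions (praiseworthiness of an action vs. standards),
# and attraction emotions (appeal of an entity). Field names and the
# emotion labels returned for each branch are simplifying assumptions.

def occ_classify(trigger):
    if trigger["kind"] == "event":
        return "joy" if trigger["desirable"] else "distress"
    if trigger["kind"] == "action":
        return "admiration" if trigger["praiseworthy"] else "reproach"
    if trigger["kind"] == "object":
        return "liking" if trigger["appealing"] else "disliking"
    raise ValueError("unknown trigger kind")

print(occ_classify({"kind": "event", "desirable": False}))  # distress
```

The full OCC taxonomy further subdivides each branch (e.g., by whether an event concerns the agent's own goals or another agent's), which a real implementation would encode as additional trigger attributes.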
Models of appraisal using the OCC theory include the Affective Reasoner (Bates et al., 1992, the first implementation of the OCC theory), the EM (Reilly, 2006), the personality and emotion model of Andre et al. (2000), and the work of Paiva and colleagues (Martinho, Machado, and Paiva, 2000), all of which have been used to enhance the believability of synthetic agents.

Recently, the appraisal theories of Scherer (Sander, Grandjean, and Scherer, 2005; Scherer et al., 2001) and of Smith and colleagues (Smith and Kirby, 2001) have begun to be used as theoretical bases for modeling. Scherer's theories provide an elaborate description of the domain-independent appraisal variables of novelty, valence, goal congruence, and coping potential, whose values are extracted from the domain-dependent stimuli. The theories of Smith and colleagues, based on the previous work of Arnold and Lazarus, are similar but emphasize the role and mechanisms of coping. Both theories reflect a trend toward more process-oriented theories, which lend themselves to computational implementations by providing more detailed descriptions of the mechanisms of the appraisal processes. These theories have recently served as the basis for several computational models, including EMA (Gratch and Marsella, 2004a).
Increasingly, the theoretical bases for particular appraisal models combine elements of multiple theories and approaches. Examples of these architectures include MAMID (see Figure 5-4) (Hudlicka and Canamero, 2004), which combines elements of the Scherer and Smith models of appraisal; the EMA architecture (Gratch and Marsella, 2004a), which combines elements of the Scherer, Smith and Lazarus, and OCC models of appraisal; the architecture for the robot KISMET (Breazeal and Brooks, 2005), which uses elements of the somatic marker hypothesis and a three-dimensional model of the emotion space (arousal, valence, and dominance); and the robot Yuppy (Velásquez, 1999), which uses emotion as a core component of the robot control system and integrates both cognitive and noncognitive triggers in the emotion generation process.

Another promising trend in computational models of appraisal is the attempt to develop abstract formalisms in which different theories can be compared. The work of Broekens and DeGroot (2006) represents an example of this trend.

A number of appraisal models have been developed in the past decade, and it is beyond the scope of this section to describe all of them. The interested reader is referred to the following recent publications, which include descriptions of a number of cognitive-affective architectures and a variety of approaches to the implementation of emotion generation via appraisal (Dautenhahn et al., 2002; Trappl, Petta, and Payr, 2003; Hudlicka and Canamero, 2004; Fellous and Arbib, 2005).

[Figure 5-4: diagram of the MAMID affect appraiser, showing an automatic valence appraisal of "universal" elicitors and an expanded appraisal of individual-specific elicitors, modulated by the current emotion and valence state and by the trait profile, producing intensities for anxiety, anger, sadness, and happiness.]

FIGURE 5-4  Affect appraiser module of the MAMID cognitive-affective architecture. SOURCE: Hudlicka (2005).
In general, several trends are evident in recent models of appraisal. First, there is increased complexity and (one hopes) fidelity of the emotion dynamics (i.e., the functions calculating emotion intensity and decay rates). Second, increased effort is made to integrate multiple emotions and to model appraisal as an evolving, dynamic process. Third, modelers are recognizing the need to differentiate among emotion states based on their duration and to model both emotions (lasting seconds to minutes) and longer lasting moods, as well as stable personality dispositions (traits). Fourth, increasing attempts are made by psychologists to develop more mechanism-oriented theories of appraisal. These so-called process models provide more of the details necessary to develop computational versions, and can in turn benefit from the empirical hypotheses generated by computational models. Fifth, attempts are made to identify domain-independent appraisal dimensions as the intervening variables between domain-specific situations and the resulting emotions. While early models provided primarily domain-specific triggers and mapped them directly to specific emotions, more recent models interpose an intermediate step, whereby more abstract appraisal dimensions are first identified, such as relevance, novelty, unexpectedness, desirability, and ego involvement, which are then linked to specific emotions.

Models of Emotion Effects on Cognition and Cognitive-Affective Interactions

Architectures that focus on appraisal typically link the resulting emotion to specific behavioral results, most often to facial expressions, gestures, speech, or behavioral choices by the associated agents. The effective and realistic expression of emotion by synthetic agents represents a considerable technological challenge. Much progress has been made in this area in the social agent and robot research community.
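The emotion intensity and decay functions mentioned at the start of this subsection are often implemented as simple exponential dynamics. The sketch below is one common, hypothetical formulation (the half-life values and update rule are illustrative assumptions): each update blends new appraisal input with exponential decay of the current intensity, so a short half-life yields emotion-like transients and a long one yields mood-like persistence.

```python
import math

# Hypothetical emotion-dynamics sketch: intensity rises with appraisal
# input and decays exponentially between updates. Half-life values are
# arbitrary; they control whether the state behaves like a brief
# emotion or a longer-lasting mood.

def update_intensity(current, appraisal_input, dt, half_life):
    decayed = current * math.exp(-math.log(2) * dt / half_life)
    return min(1.0, decayed + appraisal_input)

fear = 0.0
fear = update_intensity(fear, 0.8, dt=1.0, half_life=5.0)  # threat appears
for _ in range(10):                                        # threat removed
    fear = update_intensity(fear, 0.0, dt=1.0, half_life=5.0)
print(round(fear, 3))  # intensity has decayed to a quarter of its peak
```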
It is beyond the scope of this section to address the theoretical, methodological, and technical challenges. A recent book provides an overview of the methods and challenges (Prendinger and Ishizuka, 2003), and a brief overview of the state of the art is provided by Gratch and colleagues (Gratch, Rickel, Cassell, Petajan, and Badler, 2002). We focus here on an aspect of affective processing that remains underemphasized in cognitive-affective architectures: models of the effects of emotion on perception, cognition, and the appraisal processes themselves.

One of the earliest models in this category was the work of Araujo (1991, 1993), who implemented a connectionist (recurrent associative network) model of two phenomena in cognitive-affective interaction: the effect of emotional state on performance and the effect of emotional state on memory and recall, based on neuroscience data. The model represented two separate but interacting systems mediating cognitive and affective processing, each with different characteristics: fast processing of survival-related stimuli in the affective system, yielding approach/avoidance output, and slower, differentiated processing in the cognitive system.

MAMID (Hudlicka, 1998, 2002b, 2003a) represents an example of a cognitive-affective architecture whose primary focus is the modeling of the multiple, interacting effects of emotions and affective traits on perception, cognition, and behavior. MAMID is a domain-independent architecture that implements a generic methodology for modeling a broad range of individual differences (also referred to as behavior moderators), in terms of a series of external parameters that control processing within the individual modules (see Figure 5-2). MAMID dynamically generates emotions via the affect appraiser module (see Figure 5-4). The resulting configuration of emotions (and prespecified personality traits) is translated into specific values of the architecture parameters, which then control aspects of fundamental processes within the architecture: speed, capacity, and specific content biases (e.g., a bias for processing threatening information). MAMID's primary purpose is to elucidate the mechanisms mediating emotion-cognition interaction, with particular emphasis on the effects of emotions on the cognitive appraisal process itself and on emotion regulation.

Two other examples of parameter-based models of emotion effects are the work of Ritter and colleagues (Ritter and Avraamides, 2000; Ritter et al., 2002) and the MicroPsi architecture (Bach, 2007). Ritter follows the model proposed by Hudlicka and applies it to the modeling of emotion effects in the ACT-R architecture. The focus is on models of stress, and the parameters modeling these effects influence the ACT-R rule selection and conflict resolution algorithms (Ritter, Reifers, Klein, and Schoelles, 2007).
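The parameter-based approach used by MAMID and by Ritter's ACT-R overlays can be caricatured in a few lines: state and trait values are translated into processing parameters (capacity, bias, speed), which then gate what a module processes. Everything below (the mapping weights, the cue-ranking rule, the cue names) is an illustrative assumption in the spirit of these models, not a reproduction of either one.

```python
# Illustrative parameter-based modulation sketch: emotion and trait
# values are translated into processing parameters, which then control
# a simple attention module. Weights and rules are assumptions.

def derive_parameters(anxiety, extraversion):
    return {
        "wm_capacity": max(1, round(4 - 3 * anxiety)),  # anxiety narrows capacity
        "threat_bias": 0.5 + 0.5 * anxiety,             # anxiety biases toward threat
        "speed": 1.0 + 0.2 * extraversion,              # trait speeds processing
    }

def attend(cues, params):
    """Rank cues by salience, boosting threat cues, and keep the top k."""
    def score(cue):
        boost = params["threat_bias"] if cue["threat"] else 0.0
        return cue["salience"] + boost
    ranked = sorted(cues, key=score, reverse=True)
    return [c["name"] for c in ranked[: params["wm_capacity"]]]

cues = [
    {"name": "radio_call", "salience": 0.6, "threat": False},
    {"name": "smoke",      "salience": 0.5, "threat": True},
    {"name": "gauge",      "salience": 0.4, "threat": False},
]
calm = derive_parameters(anxiety=0.0, extraversion=0.0)
anxious = derive_parameters(anxiety=1.0, extraversion=0.0)
print(attend(cues, calm))     # all three cues fit in working memory
print(attend(cues, anxious))  # capacity narrows to the threat cue alone
```

The design point the sketch makes is the one MAMID emphasizes: the modules themselves stay generic, and individual differences live entirely in the externally supplied parameter values.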
In addition to the traits, states, and cognitive individual differences modeled in MAMID, Ritter also includes such factors as fatigue. The MicroPsi architecture uses four parameters to model emotion effects: arousal, which determines the degree of action readiness; resolution level, which influences the degree of elaboration of perceptual and memory processes; selection threshold, which influences the extent to which an agent persists in its current activity (versus changing its goals and behavior); and sampling rate/securing behavior, which controls the agent's orienting and novelty-seeking behavior. The MicroPsi architecture controls the behavior of simple agents in simulated environments, focusing on navigation and searching for objects of interest (e.g., food sources).

Several agent and robot cognitive-affective architectures also model some aspects of emotion-cognition interaction. For example, the Yuppy robot's attention and perceptual processes are influenced by emotions and display differences in orienting response and perceptual biases (Velásquez, 1999).

Several attempts have been made to model emotion effects on decision making in the context of decision-theoretic models, which need to be augmented to allow the utility functions to vary as a function of the current emotion or mood. Busemeyer, Dimperio, and Jessup (2007) have developed an augmented decision-theoretic formalism, termed "decision field theory" (DFT), to model affective and motivational dynamics over time. Specifically, DFT models both the changing goals and the differences in the time required to meet particular goals as a result of specific actions. DFT currently models affective states in terms of valence (positive/negative). Behavioral alternatives are evaluated in terms of the anticipated valence that would be generated, and the alternative that generates the most positive valence is selected. The work of Lisetti and Gmytrasiewicz (2002) provides another example of augmenting older utility models with affective factors.

Work in modeling behavior moderators (termed "performance moderator functions" or PMFs) represents another attempt to model the effects of personalities on behavior (Silverman, Johns, Cornwell, and O'Brien, 2006; Silverman et al., 2007). The PMF-based models combine a variety of theoretical models, including the OCC appraisal model and decision-theoretic formalisms, and apply the resulting models to simulations of individual and group behavior.

Cognitive-Affective Architectures

Several cognitive-affective architectures have already been mentioned in the context of controlling agent or robot behavior and are described above in the context of either emotion generation or emotion effects on cognition and behavior (Velásquez, 1999; Breazeal and Brooks, 2005; Bach, 2007).
Here we highlight two additional cognitive-affective architectures that aim to provide a broad model of intelligent behavior and integrate both cognitive and affective processing: the implemented Cog_Aff architecture (Sloman, 2003; Sloman et al., 2005) and a design for a cognitive-affective architecture proposed by Ortony, Norman, and Revelle (2005). Both architectures share a number of features, and it is interesting to note that, although they were developed independently, there is a degree of convergence in their designs.

Both models propose three levels of functioning, with a reactive stimulus-response layer mediating simple, hardwired behaviors; an intermediate level handling simple and routine, but learned, behavior (termed "deliberative" by Sloman and "routine" by Ortony); and a third level handling complex reasoning and problem solving (termed "meta-management" by Sloman and "reflective" by Ortony). Processing occurs in parallel at all three layers, with complex feedback mechanisms among the layers coordinating the independent processes and influencing the final outcome. Both models also propose different degrees of complexity in the affective reactions arising at each level, with the reactive level generating rather undifferentiated
affective states corresponding to positive and negative valence; the middle level generating simple, primary emotions such as fear, joy, sadness, and anger; and the top level generating both complex versions of the primary emotions and complex emotions that require explicit representations of the self and have a strong cognitive component (e.g., shame, pride, guilt). Existing agent and robot architectures typically implement a subset of these levels, usually just one, although multilevel processing is increasingly being implemented; for example, the FearNot! agent implements both a reactive and a deliberative level of processing in emotion generation (Paiva et al., 2005).

Relevance to Modeling Requirements

Cognitive-affective architectures are relevant to three core areas in military modeling: analysis and forecasting in planning, simulation for training and rehearsal, and design and evaluation for acquisition. In addition, the ability of cognitive-affective agents to enhance autonomous behavior is also critical for applications such as unmanned vehicle control. As mentioned above, cognitive-affective architectures are particularly relevant for modeling team and organization behavior, in which emotion not only influences individual behavior but also plays a key role in interpersonal interactions. The extensive existing work in social agents (e.g., Dautenhahn et al., 2002; de Rosis et al., 2003) is directly relevant here. One can envision integrating existing social network models with aspects of cognitive-affective architectures to improve the validity and utility of larger organizational models.

Of particular importance in the case of cognitive-affective architectures are training and assessment systems, in which the addition of affective factors increases the effectiveness of the training system by enhancing the realism of any social aspects of the training environment.
A critical role is also played by affect-adaptive systems, capable of assessing the user's (trainee's) emotional state and adapting the pedagogical strategies accordingly. This is also a critical factor in operational decision support systems. One may wonder whether incidents such as the downing of an Iranian airliner by the U.S.S. Vincennes would have happened if an affect-adaptive decision-support system had been in place. The potential for reducing accidents via the use of such systems needs to be explored (e.g., see Hudlicka, 2002a).

Finally, the application of these models to behavior prediction, in both friendly and adversary situations, is also critical. Given the importance of emotion in motivation and behavior control, one can argue that any models attempting prediction must in fact include affective factors, while keeping in mind the general limitations of predictions of individual behavior already discussed. These applications include those outlined in Chapter 9: disruption of terrorist networks, prediction of adversaries' reactions to specific courses of action, prediction of societal reactions to specific events, crowd behavior modeling and crowd control training, and organizational design.

Major Limitations

Emotion models and cognitive-affective architectures have the same limitations as their cognitive counterparts, already discussed in the cognitive architecture section, exacerbated by the difficulties associated with modeling transient, idiosyncratic, and poorly understood affective processes. These include the lack of an underlying theory to support model development, difficulties in obtaining required data, brittleness, the labor-intensive nature of model development, and lack of validation. The issue of data is particularly critical: while increasing amounts of empirical data are available about affective effects at the periphery (attention and behavior), the effects of emotions on internal cognitive states (e.g., situation assessment, learning, goal management) are difficult to assess unequivocally. Furthermore, it is unlikely that the exact nature of these internal states can be identified to the extent required for process-level models in the near future.

As with cognitive architectures, the most critical limitation is architecture and model validation, although progress is being made in this area. This includes the same issues already discussed with respect to cognitive architectures: lack of established validation criteria and methodologies, frequent confusion between verification and validation, and the lack of a fully validated, domain-independent cognitive-affective architecture. These issues are discussed in more detail below.

Verification and Validation Issues

In spite of the challenges associated with validation of emotion models and cognitive-affective architectures, progress is being made in this area.
A promising trend in emotion modeling is the increasing emphasis on including evaluation and validation studies in publications. As is the case with cognitive architectures, no existing emotion models or cognitive-affective architectures have been validated across multiple contexts or a broad range of metrics. However, some important evaluation and validation approaches and studies exist.

First, it is important to make the distinction between evaluation and validation. Given the increasing proliferation of cognitive-affective architectures in synthetic agents, there is increasing emphasis on evaluating the effectiveness of the resulting models in improving HCI. These evaluation studies do not necessarily address model validity or, if they do, they focus on limited black box validation approaches. They are nevertheless critical
in establishing the need for, and effectiveness of, augmenting synthetic agents with affective processes for particular purposes and applications: enhancing agent likeability, realism, believability, empathy, and so on. Examples of these types of evaluation studies include the work of Prendinger and Ishizuka (2005) in evaluating the effectiveness of a synthetic agent capable of limited emotion recognition in reducing user frustration. Results of these studies indicate that users experience less stress and perceive the task as less difficult when provided with "empathic" feedback from the synthetic agent. Additional examples of this approach to agent evaluation include the work of de Rosis et al. (2003). Studies have also addressed the degree to which a social agent can improve human performance in a mixed human-robot team. Scheutz and colleagues (2006) have demonstrated improved effectiveness of human team members' performance when a robot team member "expresses" emotions. Some evaluation studies also focus on assessing the degree to which cognitive-affective agents are better able than purely cognitive agents to negotiate complex, novel, and uncertain environments. Examples of these studies include work by Hille (1999), cited in Bach (2007).

In addition to these evaluation studies, attempts are beginning to be made to validate the underlying models themselves. As is the case with cognitive architectures, these validation studies are performed via a range of methods, including the weaker heuristic and qualitative evaluations, and increasingly focus on comparisons with human data.
Examples of these efforts include evaluation of MAMID's parameter-based model of emotion effects, which used a heuristic evaluation approach to evaluate the model's ability to match human data at a qualitative level; establishing the validity of an augmented ACT-R architecture to model the effects of stress on subtraction, using data from existing empirical studies (Ritter et al., 2002); and recent work by Gratch and Marsella (2004a) establishing a correspondence between aggregated empirical data from coping questionnaires and a model of emotion generation and coping implemented in the EMA architecture.

The key challenge in these validation studies is the selection of the most appropriate dataset. This includes selecting data from a comparable context, as well as selecting the appropriate method and degree of data aggregation. It is not clear to what extent comparisons of performance at the aggregated level can be used to reflect model validity when such highly variable phenomena as emotions are considered. Cognitive-affective architecture validation has not yet reached the stage of systematic comparisons that is beginning to be used for cognitive architectures, such as the AMBR project (Gluck and Pew, 2005). However, given the recent emphasis on validation in the computational emotion research community, such studies are likely to take place in the near future.
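Comparisons with aggregated human data of the kind described above often come down to computing agreement between model outputs and empirical condition means, for example via a correlation across conditions. The sketch below shows the basic computation; the numbers are fabricated purely for illustration and do not come from any of the studies cited.

```python
import statistics

# Minimal validation sketch: compare model-predicted anxiety levels
# against aggregated empirical means across four conditions, using a
# hand-rolled Pearson correlation. All data values are fabricated
# placeholders for illustration only.

def pearson_r(xs, ys):
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

model_pred = [0.2, 0.5, 0.7, 0.9]   # model anxiety per condition (made up)
human_mean = [0.1, 0.4, 0.8, 0.9]   # aggregated empirical means (made up)
print(round(pearson_r(model_pred, human_mean), 3))
```

A high correlation at this aggregated level is only weak evidence of validity, which is exactly the concern raised above: aggregation can mask the individual variability that emotion models are supposed to capture.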
Future Research and Development Requirements

Future research and development requirements for cognitive-affective architectures are similar to those for cognitive architectures. Additional requirements reflect the challenges in building these architectures mentioned throughout the text, as well as the major limitations discussed: issues related to the lack of an underlying theory of emotion and emotion-cognition interactions to support model development, difficulties in obtaining the required data for these transient and multimodal processes, brittleness, the labor-intensive nature of model development, and lack of validation. In addition, there are technical (and theoretical) issues associated with accurate recognition of emotion in humans in affect-adaptive applications in training and gaming, as well as issues in the realistic generation of affective behaviors (e.g., facial expressions, effects on natural language generation and speech). These represent important issues in the development of cognitive-affective agents and robots capable of social interaction. Some of these issues are discussed in a recent review of requirements for modeling synthetic agents (Gratch et al., 2002).

The very nature of emotion and affective processes as complex, multiple-modality phenomena makes modeling affective processes and cognitive-affective architectures more challenging than modeling purely cognitive architectures. It is not clear to what extent the types of abstractions typically made in these models (e.g., using sequential processes to model inherently parallel and distributed phenomena, or abstracting an identified function as a single module within an architecture) hold when it comes to modeling the multimodal nature of affective processes. Cognitive-affective architecture development may also experience a more pronounced split between research-oriented and application-oriented architectures.
Due to the increasing demand for more realistic and believable agents enabled by incorporating affective factors into agent architectures, future developments in these models are likely to be driven by practical considerations for rapidly developing such agents for applications such as interactive gaming. This is likely to contribute to emerging standards for affective markup languages and other tools that facilitate rapid development of largely black box models of these phenomena.

Expert Systems

A key feature that differentiates expert systems (ESs) from more traditional software programs is the explicit representation of knowledge, stored in knowledge bases that are distinct from the inferencing mechanisms that control how the knowledge is used. This feature facilitates editing the knowledge base to accommodate additional or changing task knowledge and provides flexibility in how the knowledge embedded in the system can be used (e.g., to answer previously unanticipated questions about the problem).

ESs have increasingly become integrated with more traditional software development. Yet it would be a mistake to think of them as simply another programming paradigm, analogous, for example, to object-oriented programming, since a number of important factors distinguish ESs from these lower-level paradigms, including the architectures of these systems, the emphasis on explicit representation of knowledge and the associated knowledge representation formalisms, the separation of knowledge and control, and the frequent use of human expertise and heuristics. These factors also make ESs well suited for modeling both individual and organizational behavior (see Hudlicka and Zacharias, 2005, for a discussion of how expert systems can be used in these contexts).

ESs should not be confused with cognitive architectures. The two differ in both their objectives and their architectures. The objective of an ES is to solve a particular problem, frequently by simulating human expertise and the use of heuristics. The objective of cognitive architectures is to emulate human perceptual and decision-making capabilities, frequently in the context of basic research aimed at advancing understanding of these processes, or to control the behavior of synthetic agents or robots. ES architectures are typically much simpler than cognitive architectures, the latter typically containing modules that correspond to functional components of the decision-making process (e.g., situation assessment, goal selection) or of the mind (e.g., attention, long-term memory).

What Is an Expert System?
ESs are software programs that aim to simulate the decision making and problem solving of human experts on highly specialized tasks, such as medical diagnosis or mechanical system troubleshooting. ESs achieve their "expert" performance by applying large amounts of domain-specific knowledge to a particular problem. They are therefore also known as knowledge-based systems. Three essential components define ESs:

1. Knowledge base: an explicit representation of domain and problem-solving knowledge for a particular task. This knowledge is typically represented in a modular, symbolic format, such as rules, frames (objects), logical propositions, semantic nets, constraints, or cases, and includes factual knowledge as well as heuristics used by human experts. For example, "IF (patient has high fever) AND (patient is covered with red spots) AND (patient is a child not vaccinated against chicken pox) THEN (patient has chicken pox with probability 80%)." A typical rule base can contain thousands of rules.

2. Working memory: the component containing the specific data representing the current problem at hand (e.g., the current case), along with particular goals to satisfy or specific constraints. The data must be in a format that is compatible with the knowledge base format (e.g., "Patient's fever is 104," "Patient is covered with red spots," "Patient is 6 years old," "Patient has not had the chicken pox vaccine").

3. Inference engine: an inferencing mechanism capable of combining the existing knowledge with the current data to derive conclusions of interest and thereby solve the problem at hand (e.g., derive a diagnosis or interpretation of the data in the framework of the knowledge provided). In the example above, a forward-chaining rule interpretation mechanism would derive that there is an 80 percent chance of the patient's having chicken pox. Other inferencing mechanisms include theorem proving for knowledge bases using predicate calculus or case-based reasoning for cases.

ESs may also include one or more of the following components:

• A (graphical) user interface and intelligent front end to facilitate the developers' and end users' interaction with the ES during development, refinement, and use.

• Explanation capabilities to explain the inferencing chains to the end user, to ensure that the reasoning process is transparent and that the final conclusions are accepted by the users.

[Footnote: To the extent that some systems may contain elements of both ESs and cognitive architectures (e.g., knowledge bases, rule-based problem solving, characteristics of the working memory), they may be considered to partially fall within both categories (e.g., Soar; Hill et al., 1998).]
• Knowledge acquisition capabilities to facilitate the acquisition (from existing technical materials) or the elicitation (from human experts) of the necessary knowledge and its modification during the knowledge base refinement stage.

• Learning capabilities to help acquire additional knowledge from patterns identified as the system performs its tasks.

ESs have been developed for a range of problem types (e.g., diagnosis, design) across a variety of domains, including medicine, computer engineering, process control, banking, law enforcement, and others. ESs are useful
as decision aids, for training purposes, and to capture knowledge and preserve expertise in a particular area.

ESs can be developed using any computer programming language, typically a language that facilitates symbolic representation and inferencing, such as LISP. However, the use of ES shells is more common. Shells are development environments that facilitate ES development by providing system components and templates for structuring the necessary knowledge, thereby facilitating the knowledge engineering required to obtain the necessary knowledge from the expert(s) and encode it within a particular representational formalism. Shells also help maintain and modify the knowledge base and may provide a range of additional functionalities, such as a graphical user interface and explanation facilities.

Specific ESs differ along a number of dimensions. Most important are the domain represented and the type of problems the system can solve. Additional differences include the following:

• Representational formalism used to encode the task knowledge (e.g., rules, frames, procedural knowledge sources, predicate calculus).

• Reasoning mechanisms implemented within the inference engine and the type of control implemented by the inference engine (e.g., forward versus backward chaining, implemented in rule-based ESs; mixed or opportunistic, implemented in blackboard systems).

• Type of knowledge represented (e.g., deep versus shallow domain knowledge).

• Source of the knowledge (e.g., acquisition from existing technical materials or elicitation from human experts).

• Type of problem-solving (control) knowledge used to help determine which of several competing pieces of knowledge should be used at a given point in the inferencing.
• Management of uncertainty in both the knowledge representation and the reasoning (e.g., use of representational mechanisms inherently capable of representing uncertainty, such as Bayesian belief nets; explicitly representing uncertainty in terms of certainty factors; using fuzzy logic).

• Knowledge about the structure of the ES itself (meta-knowledge).

• Degree to which intermediate results are available for explanatory purposes (e.g., unstructured versus highly structured, allowing the tracing of the inferencing processes).

• Ability to learn additional knowledge or to acquire knowledge automatically.
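As a concrete illustration of the separation between knowledge base, working memory, and inference engine described above, the following is a minimal forward-chaining sketch in Python. All rule and fact names are invented for illustration and loosely follow the chicken pox example; the certainty factor is simply carried along with the conclusion, not propagated as a real ES shell would.

```python
# Minimal sketch of the three ES components: knowledge base, working
# memory, and a forward-chaining inference engine. Illustrative only.

# Knowledge base: rules as (conditions, (conclusion, certainty)) pairs.
RULES = [
    ({"high fever", "red spots", "child", "not vaccinated"},
     ("chicken pox", 0.80)),
]

# Working memory: facts describing the current case.
working_memory = {"high fever", "red spots", "child", "not vaccinated"}

def forward_chain(rules, facts):
    """Inference engine: repeatedly fire any rule whose conditions are all
    present in working memory, adding its conclusion as a new fact."""
    conclusions = {}
    changed = True
    while changed:
        changed = False
        for conditions, (conclusion, cf) in rules:
            if conditions <= facts and conclusion not in conclusions:
                conclusions[conclusion] = cf
                facts = facts | {conclusion}
                changed = True
    return conclusions

print(forward_chain(RULES, working_memory))  # {'chicken pox': 0.8}
```

Note how the knowledge base can be edited (rules added or removed) without touching the inference engine, which is the separation of knowledge and control that the text emphasizes.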
State of the Art

ESs represent one of the more successful applications of AI and are used extensively in multiple types of industrial and government applications in the United States and abroad, particularly in Asia. ESs have been applied to a range of problem types and across a broad range of domains. The generic problem types (Chandrasekaran, 1986) include diagnosis and troubleshooting, data interpretation, design, and prediction and induction. Some domains spanning the time from early ESs to the present include

• Medicine, including diagnosis (e.g., web-based self-diagnosis programs), medication management systems (Hagland, 2003), medical emergency management, and toxicology (the DEREK system in England provides in silico testing of the adverse effects of chemicals and drugs, thereby avoiding live animal testing; Buckle, 2004).

• Image interpretation, such as the TriPath FocalPoint system, which screens about 10 percent of Pap smear slides in the United States.

• Chemistry, including interpretation of spectroscopic data.

• Computer engineering, including the early XCON system for configuring computers (Barker, O'Connor, Bachant, and Soloway, 1989) and software development and database design.

• The oil industry, including identification of promising wells for oil drilling (Cannon et al., 1989).

• Agriculture and land management, including interpretation of satellite images and hurricane damage assessment (Drake, 1996).

• Real-time process control, including system monitoring and performance optimization in power plants, such as a Japanese steel plant that uses an expert system, SAFIA, to control the operation of a blast furnace (Feigenbaum et al., 1993).

• Manufacturing, troubleshooting, maintenance, and performance optimization for a variety of electromechanical systems and telecommunication networks, for example, NASA's space shuttle engine diagnosis (Marsh, 1988).
• Law enforcement and homeland security, for example, PortBlue (http://www.portblue.com/pub/solutions-law-enforcement).

• Training and tutoring in various subjects, for example, the MITRE Corporation's F-16 Maintenance Skills Tutor used to train Air Force technicians (Marsh, 1999).

• The automotive industry, for example, diagnosis (Gelgele and Wang, 1998).

• Financial advising and insurance underwriting analysis (Pandey, Ng, and Lim, 2000).
• Route planning and scheduling (Nuortio et al., 2006; Sheng et al., 2006).

• Contract administration and management (Trimble, Allwood, and Harris, 2002).

• Organizational design (Burton and Obel, 2004).

Terms such as "knowledge technology," "hybrid intelligent systems," and "business-process reengineering" frequently indicate the use of ES technologies (Liebowitz, 1997). A number of recent advances in ES development contribute toward more rapid development, flexibility and extensibility, improved performance, enhanced interaction with human users, and more natural integration in the work flow. We discuss the most critical ones below.

Expert System Shells and Development Environments

A wide variety of shells are now available, which greatly speed up the development of ESs. The shells facilitate the knowledge engineering process required to build and maintain the knowledge bases by providing knowledge templates required for particular tasks. By enforcing consistency, these templates reduce common knowledge-base errors. The shells vary along a number of dimensions, including overall complexity, number and type of knowledge representation formalisms supported, number and types of problem-solving tasks supported, ease of knowledge base development and maintenance, degree of automatic knowledge engineering supported, and cost. A number of freeware shells are available, ranging from general rule-based languages, such as NASA's CLIPS, to specialized shells. The costs of commercial shells range from $50 to over $100,000. Increasingly, shells are tailored for a particular type of problem (e.g., diagnosis, design, scheduling, real-time control, planning) to support more efficient knowledge engineering and performance.

Automatic Knowledge Acquisition and Learning

Knowledge acquisition is the major bottleneck in building ESs.
To help address this problem, a number of automatic knowledge engineering tools have been developed, some of which use established domain ontologies (Puerta et al., 1993), and researchers are exploring the application of machine learning methods to the automatic development of knowledge bases from training cases. In some cases, the learning methods may involve the use of additional representational and inferencing schemes, such as connectionist approaches or artificial neural nets.
Hybrid and Embedded Systems

Frequently, the most successfully deployed ESs are those that are integrated as components of larger, conventional systems. These embedded systems represent an important trend in which multiple methodologies or representations and inferencing mechanisms are applied to the solution of a particular problem. Examples of technologies that may augment an ES include fuzzy logic, neural networks, case-based reasoning, database management systems, genetic algorithms, chaos theory, statistical analysis, and data mining.

Representing and Reasoning Under Uncertainty

An essential aspect of expert reasoning is the ability to manage uncertainty. ESs must therefore be able to represent uncertainty in the facts and the knowledge and propagate uncertainties through the inferencing process. Early approaches included rather ad hoc "certainty factors" associated with rules. More recently, formalisms capable of integrating uncertainty representation and inferencing have become popular. These include multivalued fuzzy logic (Zadeh, 1965) and Bayesian belief nets (BBNs) (Pearl, 1986). BBNs especially have found extensive use in the development of "soft" ES-based decision-aiding systems in the DoD because of their intuitive graphical representation of causality and their ability to "reason" in the face of sometimes vague rules and often uncertain information.

Relevance, Limitations, and Future Directions

Relevance

From the list of current applications of ESs above, it is clear that those dealing with human individual or social behavior could be useful in many ways. ESs might be used with knowledge bases comprising profiles of individuals (e.g., political or military leaders) or groups to support what-if exercises estimating the probability of various behaviors, given different courses of action.
They might be applicable to diagnosis of the intentions of adversaries, given knowledge of those adversaries' former behavior and current intelligence information. They might also be applicable to organizational design problems. Because of the ability to support the capture and direct representation of the knowledge of experts in DoD (e.g., strategic planners, counterintelligence specialists, psychological operations officers, etc.), ES-based assessment tools and decision aids are likely to continue to be developed for specialized DoD applications in all of these areas. This
will be driven by the many benefits afforded by ESs already demonstrated in other domains, including:

• Improved quality and consistency of solutions, because of the ability to explicitly store and retain expertise over time and situation, ensuring permanence and the capturing and distribution of critical knowledge throughout an organization (Stylianou, Madey, and Smith, 1992).

• Increased availability of limited expertise, reduced down time, and increased reliability of human-system decision-making performance.

• Improved training via ES-based tutoring systems supporting situation assessment, planning, and decision making in understanding individual, group, and organizational behaviors.

• Extensibility and flexibility, the ability to explain its reasoning, and the ability to handle uncertainty in data and knowledge (Georgeff and Firschein, 1985; Giarratano and Riley, 1998).

Major Limitations

In spite of the successes and the potential for the future, some researchers have expressed the opinion that the idea of ESs is futile (Dreyfus and Dreyfus, 2004) and that such systems are doomed to perpetual mediocre performance simply by virtue of the fact that they are not human. This may well be true, but one must remember that their aim is to perform routine, well-established tasks, not to behave like Renaissance men. Perhaps the best solution to this problem is to have the system simply recognize the limits of its expertise and refer the problem to another ES. Nonetheless, several limitations contribute to this pessimistic view of ES potential.

One major limitation of ESs is the rapid degradation of their performance once the limits of their expertise (knowledge base) are reached. This is referred to as the brittleness problem.
Unlike human experts, who display "graceful degradation" in their performance when faced with an unknown problem (by drawing on their large amounts of stored knowledge and general problem-solving methods), ESs can function well only within the very narrow scope of the specific task for which their knowledge base was developed. ESs thus resemble idiot savants: they may match or exceed the performance of human experts in a very narrow area of expertise, but they cannot perform simple tasks outside this area of expertise.

Another limitation is the extensive effort required to build the necessary knowledge bases and to maintain consistency when the knowledge base is modified. Ideally, the developer or user could add, delete, or modify
the knowledge base as desired, taking advantage of its symbolic, modular structure, and the system would still derive the correct conclusions using the preexisting inference mechanism. In practice, this is not always the case. Frequently, when a particular piece of knowledge is added, deleted, or modified, the dependencies in the knowledge base cause unintended inferences, requiring further modification of the knowledge base (tweaking) and, less frequently, changes in the inference engine control algorithms. The main approaches addressing this problem are automatic knowledge engineering tools, shared ontologies, and standardized domain languages.

Finally, one of the major limitations is the difficulty in deciding whether an ES-based system is the most appropriate solution to the problem at hand, given the costs and effort often required for ES development. Depending on the task difficulty and problem stability, access to appropriate experts, and use of appropriate tools, the required time may range from weeks to many person-years. It is therefore critical that ES technology is applied appropriately. Several characteristics of the problem help determine whether an ES is the appropriate solution:

• Stability or persistence of the problem: Is the problem likely to exist long enough to justify the investment required to develop an ES?

• Appropriate problem complexity: Is the problem sufficiently difficult to warrant the development of an ES, yet sufficiently routine that the necessary knowledge and procedures can be obtained and encoded within the ES formalisms? It has been said that ESs are appropriate for tasks that would take an expert an hour or two (Bobrow, Mittal, and Stefik, 1986).

• Appropriate problem familiarity: Is the problem sufficiently familiar and can a sequence of steps be defined for solving it?
ESs are not suitable for situations in which each problem is unique and novel methods must be developed to solve each problem. They are appropriate for automating tasks that are fairly routine and mundane, not exotic and rare (Bobrow et al., 1986).

• Availability of the necessary knowledge: Is the required knowledge available, either from technical materials or from human experts? Are the experts capable of articulating the necessary knowledge, and are they available as necessary throughout the system development process, including evaluation and validation?

• Availability of test cases: Are sufficient test cases available to support a systematic evaluation and validation process?

• Type of knowledge: Is the knowledge highly task-specific or is a high degree of commonsense knowledge required? ESs are appropriate for problems that can be solved with highly domain-specific
knowledge, rather than the creative application of a broad range of commonsense knowledge.

It is critical to understand that ESs do not perform magic. ESs can solve only problems for which well-defined solutions already exist and the necessary knowledge can be obtained and encoded in the knowledge base.

Future Research and Development Requirements

To ensure continued use of ES technologies, the limitations outlined above need to be addressed.

To address the general issue of the narrow scope of applicability, effort needs to be devoted to developing technologies and systems that can recognize the limits of expertise and, when exceeded, refer the problem to another ES. This attempt at "self-awareness" is the underlying motivation of the emerging multiagent systems in ES research.

To address the issue of brittleness, one can pursue several strategies, including the development of:

• An ability of the ES to automatically acquire additional knowledge or problem-solving strategies by automatic knowledge acquisition and learning.

• An ability to represent large amounts of commonsense knowledge.

• An ability to draw on deep models of the domain and reason from first principles about an unfamiliar problem.

To address the issue of the extensive effort needed to build and maintain ESs, guidelines need to be developed to determine if an ES-based solution is appropriate to the problem at hand. In addition, effort needs to be put into the development of shared ontologies, standardized domain languages, and automatic knowledge engineering tools. Finally, effort needs to be invested in developing methods for dealing with uncertainty and for addressing verification and validation to ensure consistency and correctness of the knowledge bases underlying ESs.

Decision Theory and Game Theory

Overview

This section provides a brief overview of decision theory and game theory and their relevance to the individual and organizational modeling problem.
In the earlier sections of this chapter we discussed many of the
ongoing efforts in developing cognitive architectures and affective models to support the understanding of individual behavior within a psychological and situational context, and, in Chapter 3, the importance of culture as a means of providing social context and as a determinant of both individual and group behavior. And, as we have discussed, even these multidimensional approaches often prove too stark to capture the rich variety of individual human and collective group behavior that we observe out in the real world.

Hence, one cannot but be surprised when one looks at the formal modeling literature in economics and political science. Most of that literature ignores culture entirely, and only recently have cognitive models become part of the mainstream in these areas (Camerer, 2003). Instead, the standard assumption in these disciplines is that people maximize their payoffs. Payoff-maximizing behavior is not to be confused with self-interested behavior. A person can be both payoff maximizing and altruistic at the same time. Knowing payoffs requires an understanding of the motivations of the other players. That may not always be possible. Nevertheless, game theory and decision theory can handle this type of uncertainty, as we discuss below.

Those outside economics and political science criticize the rational choice assumption, that is, the assumption of payoff-maximizing behavior, on the grounds that it lacks descriptive accuracy. People don't make optimal decisions given a payoff function. Sometimes people make mistakes. Sometimes they don't have well-defined payoffs. That is true. Nevertheless, the assumption of optimizing behavior has several reasons to recommend it, at least as a baseline model. First, it is well defined, which means that analytically tractable models can be built.
These models may not be 100 percent accurate, but they serve as gold standards against which models that relax this assumption can be tested. Second, it enables prescriptive reasoning. Thus, using this model we can assess what people should do and then use this to generate hypotheses against which to compare actual behavior and identify the sources of deviation. Third, some theoreticians argue that even though people do not optimize initially, they should head in that direction over time, particularly as the fallacy of their behavior is pointed out. In this way, the model becomes a forecaster of ultimate behavior. Fourth, under special circumstances, there is some empirical evidence that people may act as if they optimize. The empirical evidence is strongest when the stakes are large and when the situation is repeated or familiar.

[Footnote: Often factors that are not typically thought of as rational, including religious and political beliefs, have major motivating effects on behavior. Ignorance (of the actual situation, the relative costs and payoffs of carrying out a decision, and other factors) may contribute to choices and behavior as well.]
In general, to measure cognitive and cultural effects, a benchmark is needed for behavior (in the absence of those effects). The two most widely used benchmarks are that people behave randomly and that people behave optimally. Myerson (1999) argues that rationality (i.e., acting optimally, given imperfect and/or incomplete information available to them) makes more sense. Many economists and game theorists use the optimal behavior assumption. However, for much of the social, statistical, and computer sciences, and for the network models and link analysis models discussed later in Chapter 6, the random behavior assumption is used as the baseline.

We can distinguish between two types of models within the rational actor paradigm: decision theory models and game theory models. In a decision theory model, the payoff to a person or group's action does not depend on the actions of others (Raiffa, 1997). In a game theory model, payoffs depend both on the person or group's own action and on the actions of the other players (Bierman and Fernandez, 1998). We call the former insulated actions and the latter interdependent actions. This distinction creates a demarcation line between decision theory and game theory. Two highly simplified examples illustrate the difference. A military commander confronted with the problem of how to assign troops to responsibilities during peacetime faces a decision problem. That same commander allocating troops in the heat of battle often plays a game: the payoffs from the commander's action depend on the actions of the adversary.

What Are Decision Theory Models?

In decision theory models, the actor chooses from among a set of possible actions in order to satisfy some objective. In many situations that objective is to maximize a payoff function. Without uncertainty, decision theory models are not very interesting: the actor chooses the action with the highest payoff.
The payoff depends on the action as well as on the state. The state literally means the state of the world: the set of factors that are payoff relevant. A country's oil reserves, its military strength, and its cash reserves would all be part of its state. Formally, we write the payoff as a function, f(a|s), of the action, a, conditional on the state, s. In other situations, an actor's objective might be to minimize regret. The concept of regret can be formalized as the difference between what the agent receives and what the agent could have received with perfect information.

In a decision theory model, an agent has beliefs over the set of possible states. Formally, beliefs represent what someone thinks is likely to be true either at present or in the future. These beliefs are captured in a probability distribution over the possible outcomes. The expected payoff of an action equals the payoff of the action in each state multiplied by the probability of that state occurring as a result of the action taken.

[Footnote: Many would say there is no such thing: all behavior occurs in a cognitive and cultural environment.]

Consider a military commander who must decide whether or not to enter a hostile village. The commander has three options: to enter with firepower, to enter with a small group and attempt to negotiate, or to enter the village with food and medical supplies. We define these as actions: attack, negotiate, and supply. The value of each of these actions depends on the hostility level of the village leaders. The village leadership might be hostile, moderate, or accepting. The leadership's attitude can be thought of as the state. We assume that payoffs to the military commander equal the number of lives lost, making lower payoffs better. We further assume that the military commander can make accurate assessments of the number of lives lost by following each action conditional on each state and that those are shown in Table 5-1.

This scenario illustrates why some consider decision theory a useful modeling tool and the reasons why decision theory ends up being not that useful in practice. First, the decision theorist must be able to specify the complete set of states and the consequences of the actions. Such information is generally not known by the military commander, and the time to gather such information may inhibit rapid response. For military actions, timeliness is often at least as important as accuracy. Second, the decision theorist needs to assume that the military commander knows the probability distribution over the attitudes of the village leadership, that is, that the commander has accurate beliefs. In general, the commander does not have such information; that is, the commander and his staff do not have well-founded beliefs over all of the states.
Finally, the decision theorist needs to assume that only first-order effects are critical; that is, the second-order effects of the actions are negligible. However, as most commanders will tell you, there are unintended consequences of actions (second-order effects) that are often more critical than the first-order effects. For example, a second-order effect of putting in to port in a city and enabling shore leave is an increase in money in the city and a consequent increase in corruption. To deal with this, the decision theorist has to make the model more complex so that it captures these second-order effects. The problem here is that these effects are not known a priori. There is simply insufficient understanding to predict the consequences of any action on any population or actor, especially given the influence of groupthink and social influence on behaviors.

[Footnote: We consider an expanded version of this scenario later in our discussion of model verification and validation.]

[Footnote: In fact, assessing the attitudes of the population correctly is a key challenge facing today's military and requires a multidisciplinary approach not including decision theory.]
TABLE 5-1 Number of Lives Lost Depending on State and Action

State/Action                   Negotiate   Attack   Supply
Hostile leaders (p = 0.25)         20        16        6
Moderate leaders (p = 0.25)        10         8        6
Accepting leaders (p = 0.5)         2         4        6

If, however, we were to assume that these many obstacles could somehow be overcome, then decision theory might still be a useful tool. For example, one can assume that the probability of an accepting leadership equals one-half and that the probability of each of the other types equals one-fourth. Given these assumptions, entering with food and medical supplies is the best action. It results in a loss of only six lives regardless of the leadership type, whereas negotiating and attacking result in expected losses of eight and a half and eight lives, respectively. Let a denote the probability of an accepting leadership, m denote the probability of a moderate leadership, and h the probability of a hostile leadership. These probabilities must sum to one. With a little effort, it can be shown that attacking is not optimal for any beliefs. Thus, the question is whether to supply or to negotiate with the village leader.

An important caveat is that, were it possible to overcome the obstacles to applying decision theory, the commander would still need computational support to correctly apply a decision theory model. That is, the scenario assumes that people think in Bayesian terms and that they do not make mistakes when computing probabilities. Ample evidence suggests that people are not Bayesian and that they're particularly bad at computing conditional probabilities and at estimating very low-probability events (Camerer, 2003). Thus, even if the commander could get the requisite information and knew all the probabilities, the commander would still not find the answer that decision theory would suggest.
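The expected-loss arithmetic in the scenario can be reproduced directly. The following sketch encodes the beliefs and the Table 5-1 payoffs from the text (lower is better, since payoffs are lives lost); the variable names are our own.

```python
# Expected losses for the commander's three actions, using the beliefs
# and Table 5-1 payoffs from the text.
beliefs = {"hostile": 0.25, "moderate": 0.25, "accepting": 0.50}

lives_lost = {  # state -> {action: lives lost}
    "hostile":   {"negotiate": 20, "attack": 16, "supply": 6},
    "moderate":  {"negotiate": 10, "attack": 8,  "supply": 6},
    "accepting": {"negotiate": 2,  "attack": 4,  "supply": 6},
}

def expected_loss(action):
    """Sum over states of (probability of state) x (lives lost)."""
    return sum(p * lives_lost[state][action] for state, p in beliefs.items())

for action in ("negotiate", "attack", "supply"):
    print(action, expected_loss(action))
# negotiate 8.5
# attack 8.0
# supply 6.0
```

This recovers the figures in the text: supplying loses six lives in expectation, versus eight and a half for negotiating and eight for attacking.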
Many decision aids being developed currently are designed to overcome these two limitations and provide for the commander a "recommendation" based on decision theoretic reasoning. However, as noted, the recommendation will be faulty if the assumptions of knowing all states, all responses, and the probabilities are not met. At the current time, little is known about how to put confidence intervals around such recommendations. (We arrive at these expected losses by multiplying the probability of each type of leader by the number of lives lost. In the case of the attack strategy, we multiply 16 by 0.25, 8 by 0.25, and 4 by 0.5 to get the expected value.) Another potential role for decision theory is in determining the value of perfect or improved information. Suppose that the military commander
has beliefs about the village leadership based on incomplete information but that he can become perfectly informed at some cost. To be specific, suppose that at the cost of four lives the military commander can find out the attitude of the village leadership. Returning to the initial assumption about beliefs that the leadership was accepting half of the time, one can show that the cost of gathering the information exceeds its value. The information changes the military leader's action only when it reveals the leadership to be accepting. In that instance, the leader should negotiate rather than bring supplies. This action saves four lives. However, the possibility of saving four lives half of the time would not be worth the certain cost of four lives. Thus, the value of the information is less than the cost. In practice, however, there are several problems with this argument. First, it is often as difficult to determine the cost of information as it is to assess the probabilities. Second, the cost of information and the impact of the decisions may not be measurable in commensurate ways. That is, while an action may save lives, it may not cost lives to gather information; rather, it may be an issue of time, purchasing surveillance cameras, etc. If this is the case, another limitation of the decision approach arises: that of converting all outcomes into the same currency. In theory, decision theory can also be used to model the decisions of an adversary. However, to use decision theory effectively, one needs to know the adversary's beliefs over the relevant states of the world. One of the key difficulties in adversarial modeling in general is understanding the adversary's beliefs, capabilities, available resources, etc. If these were known, behavior modeling would not be as difficult as it is.
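This value-of-information argument can be restated as a short computation. The sketch below is our framing (the helper names are not from the book); the loss values come from Table 5-1 and the beliefs from the text.

```python
# Lives lost per leader type and action (Table 5-1), and the beliefs
# assumed in the text.
LOSSES = {
    "hostile":   {"negotiate": 20, "attack": 16, "supply": 6},
    "moderate":  {"negotiate": 10, "attack": 8,  "supply": 6},
    "accepting": {"negotiate": 2,  "attack": 4,  "supply": 6},
}
beliefs = {"hostile": 0.25, "moderate": 0.25, "accepting": 0.5}

def expected_loss(action):
    return sum(p * LOSSES[t][action] for t, p in beliefs.items())

# Uninformed commander: commit to the single action with the lowest
# expected loss (supplying, at 6 expected lives).
uninformed = min(expected_loss(a) for a in ("negotiate", "attack", "supply"))

# Perfectly informed commander: learn the leadership type first, then
# pick the best action for that type (negotiate if accepting, else supply).
informed = sum(p * min(LOSSES[t].values()) for t, p in beliefs.items())

print(uninformed - informed)  # 2.0 expected lives saved by the information
```

The information is worth 2 expected lives, which is less than its assumed cost of 4 lives; hence, as the text concludes, gathering it is not worthwhile here.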
Since the adversary tends to operate in a deceptive framework, hiding actions, and is adaptive, with beliefs and attitudes that change in practice, decision theoretic models of the adversary tend not to be that valuable. In fact, historic models of this type tended to assume that the adversary had beliefs similar to those held by one's own forces, similar ways of engaging in battle, etc. Basing decisions on "mirror beliefs" can readily lead to disastrous consequences. Even if the adversary's beliefs were known, the model is incomplete, because the adversary's attitude toward risk must also be known. Current models do not take into account the adversary's goals or strategies but take them as fixed. To model strategic adversaries, game theory is used, which we cover next.

In our earlier discussion of cultural and cognitive models, we noted that people and cultures differ in how they respond in uncertain environments. Substantial evidence shows that people exhibit uncertainty aversion (Ellsberg, 1961). People prefer to take risks with known odds rather than risks with unknown odds. They will even take actions that appear unreasonable, from a rational choice perspective, in order to avoid uncertainty (Bewley, 1986). However, the extent to which they act in this way may be a function of the culture, and the relation of culture to risk-taking still needs exploration. A second aspect of risk that is typically overlooked in decision theory is the role of emotions. The emotional state of the actors can alter dramatically, and possibly in complex, nonlinear ways, their risk-taking behavior.

A second branch of decision theory, multiattribute decision theory, takes a more normative approach. It considers how to make good decisions when those decisions influence multiple dimensions. A military action has military, economic, political, and social implications. Rarely will one action be dominant, that is, lead to a better outcome on every outcome dimension. Thus, decision makers must come up with some process that either explicitly or implicitly assigns weights to the various outcome dimensions (Edwards and Barron, 1994). Multiattribute decision theory suffers from all the limitations already discussed. Nevertheless, a multiattribute approach is part of the most sophisticated of the agent-based models discussed in Chapter 6. In these multiattribute decision models, the actors in the models pursue multiple but simple and very well-defined goals. We must be careful not to be overconfident in our predictions, a point we discuss at length in our analysis of voting models in Chapter 6.

What Are Game Theory Models?

In a game theory model, as in decision theory, one assumes that each actor has a payoff function. And the same caveats apply as with decision theory: it may be impossible to know this function, or it may take too much time to determine it. In game theory, an actor's payoff depends not only on his own action, a, but also on the action of other actors, which is called o. For this reason, the actors are referred to as players, and the payoff function is written formally as f(a, o).
One can differentiate between games in several ways. Games can be sequential, like chess, or simultaneous, like rock, paper, scissors. This distinction is important because in some games advantages accrue to either the first mover or to the second mover. Games can also be one shot or repeated. In a standard repeated game, the same game is played in every period. Repeated games can be finitely or infinitely repeated; in the latter, cooperation is easier to sustain as the future always casts a shadow; that is, there are always more rounds to play that can create incentives. Games can be zero-sum or nonzero-sum. In a zero-sum game, for every winner there is a loser. In a nonzero-sum game, it is possible for the total payoff to all players to increase or decrease. Negotiation is often improperly seen as zero-sum, when in fact bargains can be reached that benefit both parties
(win-win). Below we discuss Colonel Blotto, a zero-sum game, and show how the competitive nature of such games makes predictions difficult. Game theorists distinguish between two types of uncertainty. In games of imperfect information, the players do not know the actions of the other players, even if those actions have happened in the past. For example, a military might not know where an adversary has stored its weapons. In contrast, in games of incomplete information, players do not know the state of the world. For example, the military might not know how many weapons the adversary possesses. The state of the world most often influences payoffs, but it can also change the set of possible actions. The distinction between these two types of uncertainty is of more than academic interest. Problems due to incomplete information can be overcome by gathering data about preferences, capabilities, and relevant environmental features. Problems due to imperfect information require observation of the other players. Alternatively, such problems can be overcome by changing the rules of the game, for example, by creating new rules that reduce the amount of imperfect information. A problem for the commander, however, is that he may not know what he does not know. Adversaries may act deceptively and provide evidence that makes it appear that information is more complete or perfect than it is, thus limiting further the value of game theoretic models. Game theory distinguishes between actions (what the players do) and strategies (the rules they use to decide what to do). For example, in a repeated prisoners' dilemma, in which players must either cooperate or defect in each period, a player's action is either to cooperate or to defect, but the player's strategy is the rule used to decide what action to take based on the past history of plays.
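The action/strategy distinction can be made concrete in code: a strategy is a rule mapping the opponent's past actions to a next action. The sketch below is our own illustration; the payoff numbers are the usual textbook prisoners' dilemma values, not taken from the chapter.

```python
# (my action, other's action) -> my payoff; C = cooperate, D = defect.
# Standard textbook values: mutual cooperation 3, mutual defection 1,
# defecting against a cooperator 5, being exploited 0.
PAYOFFS = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def tit_for_tat(opponent_history):
    """Strategy: cooperate first, then mimic the opponent's last action."""
    return "C" if not opponent_history else opponent_history[-1]

def always_defect(opponent_history):
    """Strategy: defect in every period, regardless of history."""
    return "D"

def play(strategy_a, strategy_b, rounds=5):
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        # Each strategy sees only the other player's past actions.
        a, b = strategy_a(hist_b), strategy_b(hist_a)
        score_a += PAYOFFS[(a, b)]
        score_b += PAYOFFS[(b, a)]
        hist_a.append(a)
        hist_b.append(b)
    return hist_a, hist_b, score_a, score_b

hist_a, hist_b, sa, sb = play(tit_for_tat, always_defect)
print(hist_a)  # ['C', 'D', 'D', 'D', 'D']: one cooperation, then retaliation
```

The same strategy produces different actions in different histories, which is exactly the distinction the text draws.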
A player might use the strategy tit for tat, mimicking the action of the other player in the previous play of the game. This distinction hides an important assumption of game theory: that the actors always act strategically and are not just reactive. However, heightened emotional states, exhaustion, religious fervor, a reduction in basic needs (no water, food, or shelter), and countless other conditions can all lead the players simply to react rather than to behave strategically.

A key limitation of game theory arises because of these different types of games: in practice, one must know what type of game is being played; however, the commander often does not have the information to make that determination, and assuming the wrong type of game can lead to suboptimal and indeed disastrous results.

When one thinks of a game, be it football or chess, one thinks of sequences of moves, of ebbs and flows. Game theory focuses instead on equilibria. In what follows, we assume that the game has only two players, even though these models can handle any number of players. Normally, an equilibrium is written in terms of strategies. Here we simplify the definition
and write it with respect to actions. In this formulation, an equilibrium (a*, o*) consists of an action for each player such that each action is optimal given the action of the other player. We write this as follows:

The actions (a*, o*) are an equilibrium if a* optimizes f(·, o*) and o* optimizes f(a*, ·).

Equilibria in which both players take single actions are called pure strategy equilibria. Often pure strategies fail to exist, and players must randomize their strategies across multiple actions. Each player still takes only a single action but chooses that action from a set of possible actions so as to confuse the other player. For example, an attack might be either by land or by sea. These equilibria are called mixed strategy equilibria. In general, pure strategy equilibria need not exist, but under some fairly mild conditions some type of equilibrium always exists. These mild conditions are mathematical assumptions about continuous payoff functions and the like. Although one thinks of equilibria as places of rest, they can include finite punishment regimes, in which one player punishes the other for a fixed period of time, and bubbles, in which everyone is overly optimistic about the future (Blanchard and Watson, 1982; Green and Porter, 1984). Game theorists use equilibrium as their solution concept. It is what they think the outcome of a game will be. However, equilibrium is a very strong assumption. Most social systems do not reach equilibrium. Natural disasters, scientific advances, belief changes, learning, external political coups, etc., all lead to adaptations that inhibit equilibrium from being reached. Understanding what behavior will be at equilibrium does not help the commander understand the ebbs, flows, and adaptations with which he is faced. Equilibrium as a solution concept for games has been justified on either of two grounds. First, optimizing players would locate equilibria.
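The mutual-best-response condition above can be checked mechanically for small games. This sketch (our own illustration; the function names are not from the book) enumerates every pure action pair in rock, paper, scissors and confirms that none is an equilibrium, which is why the players must mix.

```python
from itertools import product

ACTIONS = ["rock", "paper", "scissors"]
BEATS = {"rock": "scissors", "paper": "rock", "scissors": "paper"}

def payoff(a, o):
    """Row player's payoff f(a, o): +1 win, -1 loss, 0 tie (zero-sum)."""
    return 0 if a == o else (1 if BEATS[a] == o else -1)

def is_equilibrium(a_star, o_star):
    """(a*, o*) is an equilibrium if each action is a best response
    to the other; the column player's payoff is -f(a, o)."""
    best_a = all(payoff(a_star, o_star) >= payoff(a, o_star) for a in ACTIONS)
    best_o = all(-payoff(a_star, o_star) >= -payoff(a_star, o) for o in ACTIONS)
    return best_a and best_o

pure_equilibria = [p for p in product(ACTIONS, ACTIONS) if is_equilibrium(*p)]
print(pure_equilibria)  # [] -- no pure strategy equilibrium exists
```

The mixed strategy equilibrium here is for each player to choose each action with probability one-third, so that no deviation can be exploited.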
However, as we discussed, there is little evidence that the players actually optimize. Second, at least in some cases, agents who learn can locate equilibria. However, this tends to be true only for highly simplistic games, and generally those in which each individual game has only two players, even if many play in the overall tournament. Thus, a key difference between game theory models and non-game-theoretic agent-based models, which we discuss later, is what is assumed about behavior. Game theorists assume that behavior somehow gets the players to equilibrium, whereas, in most agent-based models, equilibrium is never reached and behavior is the result of various processes: cognitive, social, political, cultural, and so on. Most game theory models assume two-person games. However, in most realistic situations, the adversary is not a single entity. In Iraq and Bosnia, for example, the adversary consists of sets of groups that come together and break apart, with varying strengths of alliances. Then there are coalition partners, nongovernment organizations, the population that might harbor
insurgents or terrorists or not, and so on. In other words, there are multiple actors with ever-changing agendas.

Relevance, Limitations, and Future Directions

Relevance

Decision theory and game theory can play four roles in constructing behavioral models:

1. They oblige us to define the actors, their possible actions and strategies, the states of the world, and payoffs.
2. They force us to think through what optimal behavior would be given our assumptions.
3. They enable us to gain a quick and powerful understanding of the primary incentives and their implications.
4. They provide simplistic, often mathematically tractable models against which deviations that engender greater realism can be assessed.

To see these four roles in an example, consider the Colonel Blotto game, which can be used to model military strategy and the actions of terrorist groups. Colonel Blotto is a simultaneous zero-sum game. Two players allocate fixed resources to a finite number of fronts. Whichever player allocates more resources to a front wins that front. A player's payoff equals the number of fronts it wins. In trench warfare, a front might literally represent a wall of troops. In using Colonel Blotto to model terrorism, one can think of a front as a potential target. This second, more modern application of Blotto is used here. The two players would be the terrorist organization and the host country. Their possible actions would not necessarily be easy to characterize, but subject matter experts might be able to identify the set of potential targets. In the standard Blotto game, all fronts are of equal value. In a real-world scenario, that would not be true. Nevertheless, we might start by assuming that all targets take on equal value. If we assume that the host country can prevent terrorist acts on a target if it has sufficient resources, then the payoffs in the real world are approximated by Blotto.
Already we see the value in using game theory in that it forced us to define the targets and their relative payoffs. Computational game theory does allow for more complex multiplayer games. However, the line between agent-based models and computational multiplayer game models is more a matter of theoretical intent than methodology.
Next, we can use game theory to solve for optimal behavior. In Blotto, a player tries to mismatch the actions of its opponent. Blotto can be thought of as a higher dimensional rock, paper, scissors game. The equilibrium to both games is for both players to play mixed strategies: to randomly choose from among several actions. This solution provides a valuable insight: a smart player would randomize, making it impossible to know what action it will take. Formal analyses of Blotto reveal a second insight: if one player has more resources than the other, it does not necessarily win. To see why, suppose that the terrorist group has 5 units of the resource and that the host country has 20 units in an environment with 5 targets. If the host country evenly allocates its resources across the targets (as shown in Table 5-2), the terrorist group can win a front by putting all of its resources on one target. Even though the terrorists win on only one front and the host country would officially be declared the winner of the game, most countries would not see this outcome as a win. They want to avoid any terrorist attacks. Blotto shows why that outcome is difficult to achieve. Given that no pure strategy can win with certainty, the host country must still play a mixed strategy. In fact, if we assume that the host country wins ties, its optimal strategy in this case would be to assign five units to each of four targets and leave one target completely exposed. Thus, 20 percent of the time, the terrorist group succeeds and the host country has no resources allocated to the target, despite having a four to one advantage in resources and behaving optimally. Counterintuitive results like this are a hallmark of game theory models. Often what we think is optimal will not be, when we think through all of the implications of actions. We might note that leaving a target uncovered might be politically infeasible.
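The allocations discussed here can be checked with a few lines of code. This is our own sketch; the resource numbers follow the chapter's example, and the tie-breaking rule (the defender wins ties) follows the text.

```python
def fronts_won(attacker, defender):
    """Number of fronts where the attacker strictly outnumbers the
    defender (the defender is assumed to win ties, as in the text)."""
    return sum(a > d for a, d in zip(attacker, defender))

host = [4, 4, 4, 4, 4]       # 20 units spread evenly over 5 targets
terrorist = [0, 5, 0, 0, 0]  # 5 units concentrated on target 2

print(fronts_won(terrorist, host))  # 1 -- the concentrated attack wins a front

# The host's better reply covers four targets with 5 units each and
# leaves one exposed; a concentrated attack on target j then succeeds
# only when it happens to hit the exposed target (1 target in 5).
host_mixed = [5, 5, 5, 5, 0]
hits = [fronts_won([5 if i == j else 0 for i in range(5)], host_mixed)
        for j in range(5)]
print(hits)  # [0, 0, 0, 0, 1]
```

Randomizing which target is left exposed yields the 20 percent success rate for the attacker described above, despite the host's four to one resource advantage.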
TABLE 5-2 Example of Allocation of Resources for the Host Country and Terrorist That Results in the Terrorist Winning on the Target 2 Front

Player         Target 1   Target 2   Target 3   Target 4   Target 5
Host country       4          4          4          4          4
Terrorist          0          5          0          0          0

If so, that can be handled by changing the host country's payoff function. In principle, we can use this model as a basis for a more elaborate and realistic model. Such a model could include more general payoffs, it could include externalities between the targets (perhaps resources at one target also partially guard another target), and it could make the game repeated. In that repeated game model, we might also restrict the ability of players to allocate resources. If so, we have a situation quite different from that
described by Colonel Blotto but nevertheless informed by Colonel Blotto. However, as we have noted, the severe limitations of decision theory and game theory make this move to a more elaborate and realistic model impractical and not as trivial a step as the formal theorists might wish. Game theory has been of moderate use in analyzing institutions. The game theoretic approach consists of four steps (Diermeier and Krehbiel, 2003):

1. Assume behavior.
2. Define the game generated by the institution.
3. Deduce the equilibria.
4. Compare the regularities to data.

If behavior is assumed to be optimizing, then equilibrium is achieved and institutions can be thought of as equivalent to equilibria. To compare two institutions, we need only compare their equilibria: the better the equilibrium (e.g., the greater the utility to the relevant actors), the better the institution, and the more the actors will prefer it. The institutions-as-equilibria approach proves powerful. If we want to compare a parliament with an open rule system, in which anyone can make a proposal, with a closed rule system, in which amendments are not allowed, or to compare a parliamentary system with a presidential system, we construct models of the two types of institution and compare their equilibria using game theory (Baron and Ferejohn, 1989). The institutions-as-equilibria approach of game theory can be extended to include the game over institutions. In this game, the players first decide which institution to use. This meta-institutional game can explain not only how institutions perform but also why they may have been chosen in the first place. For example, we might use such a model to explain why a military leader chooses an open rule system even though that system allows greater voice to members of his cabinet. However, as noted, the assumptions that need to be made here are highly unrealistic, hence calling the entire approach into question.
When we expand game theory to include learning models, we can capture some forms of cultural transference. Many game theorists think of culture as beliefs. That characterization provides some leverage, but it is far from adequate. More recent work considers cultural learning, in which players learn from one another (Gintis, 2000). They can even learn from the other games that they play (Bednar and Page, 2007). Game theoretic models can also be expanded to include networks that can evolve over time. In sum, game theoretic models can include cultural forces, but those forces must be well defined and analytically tractable. The movement to expand game theory by taking networks and culture into account is promising. However, the research here is in its infancy.
Major Limitations

Decision theory models and game theory models tend to be overly simplistic, with few "moving parts" and with assumptions about player behavioral characteristics that are often driven more by ease of solution than by fidelity of representation. Otherwise, the models become difficult or impossible to solve. For example, most game theory models assume either two players or an infinite number of players. The real world often takes place in the space in between, except for extremely artificial situations (e.g., chess games, two-candidate political races, etc.). Decision theory and game theory models require data about actors that often cannot be gathered with any reliability or within a reasonable amount of time determined by the decision window of the commander. A further problem with game theory models is that they produce multiple equilibria. The Folk Theorem states that, for repeated games, almost any outcome can be supported as an equilibrium. To overcome this problem of multiple equilibria, game theorists rely on refinements, such as symmetry. An equilibrium is symmetric if both players get the same payoff. Or they invoke Pareto efficiency: an equilibrium is Pareto efficient if no other equilibrium makes every player better off. Game theoretic models also often ignore the stability and attainability of the equilibria that they predict. Although game theorists have recently begun to study learning models, they tend to consider simple two-person games and not the more complex, multiplayer situations characteristic of the real world.

Future Research and Development Requirements

The potential for decision theory and game theory hinges on their ability to capture the complexities of real people and the real world. A concern with realism would seem to undercut the mathematical strength of these two approaches: their ability to cut to the heart of a situation.
Nevertheless, the few degrees of freedom that these models allow can be tugged in the direction of greater realism, with potentially large benefits. In decision theory, we can look to cultural and cognitive explanations to explain beliefs. We can also look to culture as a determinant of what is possible: some actions may be unlikely to occur in some cultures. Therefore, we can rule those actions out. However, as decision theory and game theoretic models become more nuanced to include cultural factors, they become less mathematically tractable, require more data or more unrealistic assumptions, and require more effort for validation. As already mentioned, game theorists have begun including culture in the form of beliefs, networks, and behaviors. This can also be accomplished less formally. For example, Calvert and Johnson (1999) argue for culture
as a means of coordinating on an equilibrium. By coordination, they mean selection of one equilibrium from among many. In their approach, game theory becomes a preliminary tool: it defines the set of possible outcomes. Detailed historical and cultural knowledge from subject matter experts then selects from among those equilibria.

References

Anderson, J.R. (1983). The architecture of cognition. Cambridge, MA: Harvard University Press.
Anderson, J.R. (1990). The adaptive character of thought. Hillsdale, NJ: Lawrence Erlbaum Associates.
Anderson, J.R. (1993). Rules of the mind. Hillsdale, NJ: Lawrence Erlbaum Associates.
Anderson, J.R., Bothell, D., Byrne, M.D., Douglass, S., Lebiere, C., and Qin, Y. (2004). An integrated theory of the mind. Psychological Review, 111(4), 1036–1060.
Andre, E., Klesen, M., Gebhard, P., Allen, S.A., and Rist, T. (2000). Exploiting models of personality and emotions to control the behavior of animated interactive agents. Paper presented at the International Workshop on Affective Interactions (IWAI), Siena, Italy.
Araujo, A.F.R. (1991). Cognitive-emotional interactions using the subsymbolic paradigm. In Proceedings of Student Workshop on Emotions, University of Southern California, Los Angeles.
Araujo, A.F.R. (1993). Emotions influencing cognition: Effect of mood congruence and anxiety upon memory. Presented at the Workshop on Architectures Underlying Motivation and Emotion (WAUME '93), University of Birmingham, England.
Bach, J. (2007). Principles of synthetic intelligence: Building blocks for an architecture of motivated cognition. Unpublished doctoral dissertation, Universität Osnabrück.
Barker, V.E., O'Connor, D.E., Bachant, J., and Soloway, E. (1989). Expert systems for configuration at Digital: XCON and beyond. Communications of the ACM, 32(3), 298–318.
Baron, D.P., and Ferejohn, J.A. (1989). Bargaining in legislatures. American Political Science Review, 89(4), 1181–1206.
Bates, J., Loyall, A.B., and Reilly, W.S. (1992). Integrating reactivity, goals, and emotion in a broad agent. Presented at the Fourteenth Annual Conference of the Cognitive Science Society, July, Bloomington, IN.
Bednar, J., and Page, S.E. (2007). Can game(s) theory explain culture?: The emergence of cultural behavior within multiple games. Rationality and Society, 19(1), 65–97.
Belavkin, R.V. (2001). The role of emotion in problem solving. In Proceedings of the AISB '01 symposium on emotion, cognition and affective computing (pp. 49–57), Heslington, York, England.
Bewley, T.F. (1986). Cowles Foundation discussion paper no. 807: Knightian decision theory: Part 1. New Haven, CT: Cowles Foundation for Research in Economics at Yale University.
Bierman, H.S., and Fernandez, L.F. (1998). Game theory with economic applications. Reading, MA: Addison-Wesley.
Blanchard, O.J., and Watson, M.W. (1982). Bubbles, rational expectations, and speculative markets. In P. Watchel (Ed.), Crisis in the economic and financial structure (pp. 295–316). Lanham, MD: Lexington Books.
Bobrow, D.G., Mittal, S., and Stefik, M.J. (1986). Expert systems: Perils and promise. Communications of the ACM, 29(9), 880–894.
Breazeal, C., and Brooks, R. (2005). Robot emotions: A functional perspective. In J.-M. Fellous and M.A. Arbib (Eds.), Who needs emotions? The brain meets the robot (pp. 271–310). New York: Oxford University Press.
Broekens, J., and DeGroot, D. (2006). Formalizing cognitive appraisal: From theory to computation. Paper presented at Agent Construction and Emotions (ACE 2006): Modeling the Cognitive Antecedents and Consequences of Emotion Workshop, April, Vienna, Austria.
Buckle, G. (2004). A different kind of laboratory mouse. Available: http://digitaljournal.com/article/35501/A_Different_Kind_of_Laboratory_Mouse [accessed Feb. 2008].
Burton, R.M., and Obel, B. (2004). Strategic organizational diagnosis and design: The dynamics of fit, third edition. Boston: Kluwer Academic.
Busemeyer, J.R., Dimperio, E., and Jessup, R.K. (2007). Integrating emotional processes into decision making models. In W.D. Gray (Ed.), Integrated models of cognitive systems (pp. 213–229). New York: Oxford University Press.
Camerer, C. (2003). Behavioral game theory: Experiments in strategic interaction. Princeton, NJ: Princeton University Press.
Campbell, G.E., and Bolton, A.E. (2005). HBR validation: Integrating lessons learned from multiple academic disciplines, applied communities, and the AMBR project. In K.A. Gluck and R.W. Pew (Eds.), Modeling human behavior with integrated cognitive architectures: Comparison, evaluation, and validation (pp. 365–395). Mahwah, NJ: Lawrence Erlbaum Associates.
Cannon, R.L., Moore, P., Tansathein, D., Strobel, J., Kendall, C., Biswas, G., and Bezdek, J. (1989). An expert system as a component of an integrated system for oil exploration. In Proceedings of Southeastcon 1989: Energy and Information Technologies in the Southeast (volume 1, pp. 32–35). Los Alamitos, CA: IEEE Publications.
Card, S.K., Moran, T.P., and Newell, A. (1986). The model human processor: An engineering model of human performance. In K.
Boff, L. Kaufman, and J. Thomas (Eds.), Handbook of perception and human performance, volume II. Hoboken, NJ: John Wiley & Sons.
Cooper, R.P. (2002). Modeling high-level cognitive processes. Mahwah, NJ: Lawrence Erlbaum Associates.
Cooper, R., Yule, P., and Sutton, D. (1998). COGENT: An environment for the development of cognitive models. In U. Schmid, J.F. Krems, and F. Wysotzki (Eds.), A cognitive science approach to reasoning, learning, and discovery (pp. 55–82). Lengerich, Germany: Pabst Science.
Corker, K.M., and Smith, B. (1992). An architecture and model for cognitive engineering simulation analysis: Application to advanced aviation analysis. Presented at American Institute of Aeronautics and Astronautics (AIAA) Conference on Computing in Aerospace, San Diego, CA.
Corker, K.M., Gore, B., Fleming, K., and Lane, J. (2000). Free flight and the context of control: Experiments and modeling to determine the impact of distributed air-ground air traffic management on safety and procedures. In Proceedings of the 3rd FAA Eurocontrol International Symposium on Air Traffic Management, Naples, Italy.
Corkill, D.D. (1991). Blackboard systems. AI Expert, 6(9), 40–47.
Costa, P.T., and McCrae, R.R. (1992). Four ways five factors are basic. Personality and Individual Differences, 13, 653–665.
Damasio, A. (1994). Descartes' error: Emotion, reason, and the human brain. New York: Avon Books.
Dautenhahn, K., Bond, A.H., Cañamero, L., and Edmonds, B. (Eds.). (2002). Socially intelligent agents: Creating relationships with computers and robots. Dordrecht, The Netherlands: Kluwer Academic.
Davidson, R.J., Scherer, K.R., and Goldsmith, H.H. (2003). Handbook of affective sciences. New York: Oxford University Press.
de Rosis, F., Pelachaud, C., Poggi, I., Carofiglio, V., and De Carolis, B. (2003). From Greta's mind to her face: Modelling the dynamics of affective states in a conversational embodied agent. International Journal of Human-Computer Studies, 59(1–2), 81–118.
Deutsch, S.E., and Pew, R.W. (2001). Modeling human error in D-OMAR. (Report No. 8328.) Cambridge, MA: BBN Technologies.
Deutsch, S.E., Cramer, N.L., Keith, G., and Freeman, B. (1999). The distributed operator model architecture. Available: http://stinet.dtic.mil/cgi-bin/GetTRDoc?AD=ADA364623&Location=U2&doc=GetTRDoc.pdf [accessed Feb. 2008].
Diermeier, D., and Krehbiel, K. (2003). Institutionalism as a methodology. Journal of Theoretical Politics, 15(2), 123–144.
Drake, B.J. (1996). Expert system shell, multipurpose land information systems for rural. In GIS/LIS '96 annual conference and exposition proceedings (pp. 998–1005). Bethesda, MD: American Society for Photogrammetry and Remote Sensing.
Dreyfus, H.L., and Dreyfus, S.E. (2004). From Socrates to expert systems: The limits and dangers of calculative rationality. Available: http://socrates.berkeley.edu/~hdreyfus/html/paper_socrates.html [accessed April 2008].
Edwards, W., and Barron, F.H. (1994). SMARTS and SMARTER: Improved simple methods for multiattribute utility measurement. Organizational Behavior and Human Decision Processes, 60(3), 306–325.
Eggleston, R.G., Young, M.J., and McCreight, K.L. (2000). Distributed cognition: A new type of human performance model. In Proceedings of the 2000 AAAI Fall Symposium on Simulating Human Agents, North Falmouth, MA. (AAAI Technical Report #FS-00-03.)
Ekman, P., and Davidson, R.J. (1995). The nature of emotion: Fundamental questions. Oxford, England: Oxford University Press.
Ellsberg, D. (1961). Risk, ambiguity, and the savage axioms. Quarterly Journal of Economics, 75(4), 643–669.
Feigenbaum, E., Friedland, P.E., Johnson, B.B., Nii, H.P., Schorr, H., and Shrobe, H.
(1993). Knowledge-based systems in Japan. Baltimore, MD: World Technology Evaluation Center.
Fellous, J.-M., and Arbib, M.A. (2005). Who needs emotions? The brain meets the robot. New York: Oxford University Press.
Freed, M., Dahlman, E., Dalal, M., and Harris, R. (2002). Apex reference manual for Apex version 2.2. Moffett Field, CA: NASA Ames Research Center.
Gelgele, H.L., and Wang, K. (1998). An expert system for engine fault diagnosis: Development and application. Journal of Intelligent Manufacturing, 9(6), 539–545.
Georgeff, M.P., and Firschein, O. (1985). Expert systems for space station automation. IEEE Control Systems Magazine, 5(4), 3–8.
Getoor, L., and Diehl, C.P. (2005). Introduction: Special issue on link mining; Link mining: A survey. SIGKDD Explorations Special Issue on Link Mining, 7(2), 1–10. Available: http://www.sigkdd.org/explorations/issues/7-2-2005-12/1-Getoor.pdf [accessed Feb. 2008].
Giarratano, J.C., and Riley, G.D. (1998). Expert systems: Principles and programming, third edition. Boston, MA: PWS.
Gintis, H. (2000). Game theory evolving: A problem-centered introduction to modeling strategic interaction. Princeton, NJ: Princeton University Press.
Gluck, K.A., and Pew, R.W. (Eds.). (2005). Modeling human behavior with integrated cognitive architectures: Comparison, evaluation, and validation. Mahwah, NJ: Lawrence Erlbaum Associates.
Gratch, J., and Marsella, S. (2004a). A domain independent framework for modeling emotion. Journal of Cognitive Systems Research, 5(4), 269–306.
MICRO-LEVEL FORMAL MODELS 209

Gratch, J., and Marsella, S. (2004b). Evaluating a computational model of emotion. Journal of Autonomous Agents and Multiagent Systems, Special Issue on the Best of AAMAS 2004.
Gratch, J., Rickel, E.A., Cassell, J., Petajan, E., and Badler, N. (2002). Creating interactive virtual humans: Some assembly required. IEEE Intelligent Systems, 17(4), 54–63.
Gray, W.D., John, B.E., and Atwood, M.E. (1993). Project Ernestine: Validating a GOMS analysis for predicting and explaining real-world task performance. Human-Computer Interaction, 8(3), 237–309.
Green, E.J., and Porter, R.H. (1984). Noncooperative collusion under imperfect price information. Econometrica, 52(1), 87–100.
Grossberg, S. (1999). The link between brain learning, attention, and consciousness. Consciousness and Cognition, 8, 1–44.
Grossberg, S. (2000). Linking mind to brain: The mathematics of biological intelligence. Notices of the American Mathematical Society, 47, 1361–1372.
Hagland, M. (2003). Doctor's orders. Healthcare Informatics, 39(January).
Harper, K.A., Ton, N., Jacobs, K., Hess, J., and Zacharias, G.L. (2001). Graphical agent development environment for human behavior representation. In Proceedings of the 10th Conference on Computer Generated Forces and Behavioral Representation, Orlando, FL: Simulation Interoperability Standards Organization.
Henninger, A.E., Jones, R.M., and Chown, E. (2003). Behaviors that emerge from emotion and cognition: Implementation and evaluation of a symbolic-connectionist architecture. In Proceedings of the Second International Joint Conference on Autonomous Agents and Multiagent Systems (pp. 321–328), Melbourne, Australia.
Hill, R., Chen, J., Gratch, J., Rosenbloom, P., and Tambe, M. (1998). Soar-RWA: Planning, teamwork, and intelligent behavior for synthetic rotary-wing aircraft. In Proceedings of the 7th Conference on Computer Generated Forces and Behavioral Representation, Orlando, FL: Simulation Interoperability Standards Organization.
Hille, K. (1999). Artificial emotions: Angry and sad, happy and anxious behaviour. In Proceedings of ICONIP/ANZIIS/ANNES Workshop and Expo: Future Directions for Intelligent Systems and Information Sciences, Dunedin, New Zealand, November 22–23, University of Otago.
Hudlicka, E. (1998). Modeling emotion in symbolic cognitive architectures. In Proceedings from AAAI Fall Symposium: Emotional and Intelligent: The Tangled Knot of Cognition. (Technical Report #SS-98-02.) Menlo Park, CA: AAAI Press.
Hudlicka, E. (2002a). Increasing SIA architecture realism by modeling and adapting to affect and personality. In A.H. Dautenhahn, L. Bond, and B.E. Canamero (Eds.), Multiagent systems, artificial societies, and simulated organizations. Dordrecht, The Netherlands: Kluwer Academic.
Hudlicka, E. (2002b). This time with feeling: Integrated model of trait and state effects on cognition and behavior. Applied AI, 16, 1–31.
Hudlicka, E. (2003a). Modeling effects of behavior moderators on performance: Evaluation of the MAMID methodology and architecture. In Proceedings of the 2003 Conference on Behavior Representation in Modeling and Simulation (BRIMS), Scottsdale, AZ.
Hudlicka, E. (2003b). Personality and cultural factors in gaming environments. In Proceedings of the Workshop on Cultural and Personality Factors in Military Gaming, Alexandria, VA: Defense Modeling and Simulation Office.
Hudlicka, E. (2005). The rationality of emotion . . . and the emotionality of reason. Presented at the MICS Symposium, March 4–6, Saratoga Springs, NY. Available: http://www.cogsci.rpi.edu/cogworks/IMoCS/talks/Hudlicka.ppt#479,22,AffectAppraisal [accessed April 2008].
Hudlicka, E. (2006a). Depth of feelings: Alternatives for modeling affect in user models. Presented at the 9th International Conference, TSD 2006, September, Brno, Czech Republic.
Hudlicka, E. (2006b). Summary of factors influencing decision-making and behavior. Psychometrix report #0612. Blacksburg, VA: Psychometrix Associates.
Hudlicka, E. (2007a). Guidelines for modeling affect in cognitive architectures. Psychometrix report #0706. Blacksburg, VA: Psychometrix Associates.
Hudlicka, E. (2007b). Reasons for emotions. In W. Gray (Ed.), Advances in cognitive models and cognitive architectures. New York: Oxford University Press.
Hudlicka, E. (2008). What are we modeling when we model emotion? In Proceedings of the AAAI Spring Symposium on Emotion, Personality, and Social Behavior. (Technical report #SS-08-04.) Menlo Park, CA: AAAI Press.
Hudlicka, E. (in preparation). Affective computing: Theory, methods, and applications. Boca Raton, FL: Taylor and Francis/CRC Press.
Hudlicka, E., and Canamero, L. (2004). Preface: Architectures for modeling emotion. Presented at the AAAI Spring Symposium, Palo Alto, CA: AAAI Press, Stanford University.
Hudlicka, E., and Fellous, J.-M. (1996). Review of computational models of emotion. Arlington, MA: Psychometrix Associates.
Hudlicka, E., and Zacharias, G. (2005). Requirements and approaches for modeling individuals within organizational simulations. In W.B. Rouse and K.R. Boff (Eds.), Organizational simulation (pp. 79–138). Hoboken, NJ: John Wiley & Sons.
Hudlicka, E., Adams, M.J., and Feehrer, C.E. (1992). Computational cognitive models: Phase I. BBN report 7752. Cambridge, MA: BBN Technologies.
Izard, C.E. (1993). Four systems for emotion activation: Cognitive and noncognitive processes. Psychological Review, 100(1), 68–90.
Jones, R.M., Laird, J.E., Nielsen, P.E., Coulter, K.J., Kenny, P., and Koss, F.V. (1999).
Automated intelligent pilots for combat flight simulation. AI Magazine, 20(1), 27–41.
Kieras, D.E., Wood, S.D., and Meyer, D.E. (1997). Predictive engineering models based on the EPIC architecture for a multimodal high-performance human-computer interaction task. Transactions on Computer-Human Interaction, 4(3), 230–275.
Klein, G.A. (1997). The recognition-primed decision (RPD) model: Looking back, looking forward. In C. Zsambok and G. Klein (Eds.), Naturalistic decision making. Mahwah, NJ: Lawrence Erlbaum Associates.
Laird, J.E. (2000, March). It knows what you're going to do: Adding anticipation to a quakebot. (AAAI 2000 Spring Symposium Series: Artificial Intelligence and Interactive Entertainment, Technical Report #SS-00-02.) Palo Alto, CA: AAAI Press, Stanford University.
Langley, P., and Choi, D. (2006). A unified cognitive architecture for physical agents. In Proceedings of the Twenty-First National Conference on Artificial Intelligence, Boston: AAAI Press. Available: http://cll.stanford.edu/~langley/papers/icarus.aaai06.pdf [accessed April 2008].
Laughery, K.R., Jr., and Corker, K. (1997). Computer modeling and simulation of human/system performance. In G. Salvendy (Ed.), Handbook of human factors and ergonomics, second edition (pp. 1375–1408). Hoboken, NJ: John Wiley & Sons.
Lazarus, R.S. (1984). On the primacy of cognition. American Psychologist, 39(2), 124–129.
LeDoux, J. (1998). Fear and the brain: Where have we been, and where are we going? Biological Psychiatry, 44(12), 1229–1238.
Lewis, M., and Haviland-Jones, J.M. (2000). Handbook of emotions, second edition. New York: Guilford Press.
Liebowitz, J. (1997). Worldwide perspectives and trends in expert systems: An analysis based on the three world congresses on expert systems. AI Magazine, 18(2), 115–119.
Lisetti, C.L., and Gmytrasiewicz, P. (2002). Can a rational agent afford to be affectless? A formal approach. Applied Artificial Intelligence, 16, 577–609.
MacMillan, J. (2007). Technical briefing: Modeling the group. Available: http://www.tsjonline.com/story.php?F=2724207 [accessed Feb. 2008].
Marsh, C. (1988). The ISA expert system: A prototype system for failure diagnosis on the space station. In Proceedings of the 1st International Conference on Industrial and Engineering Applications of Artificial Intelligence and Expert Systems, volume 1, New York: ACM Press.
Marsh, C. (1999). The F-16 maintenance skills tutor. The Edge (MITRE Newsletter), 3(1).
Martínez, J., Gomes, C., and Linderman, R. (2005). Workshop on research directions in architectures and systems for cognitive processing. Organized by Computer Systems Laboratory, Intelligent Information Systems Institute, on behalf of the Air Force Research Laboratory, July 14–15, Cornell University, Ithaca, NY.
Martinho, C., Machado, I., and Paiva, A. (2000). Affective interactions: Towards a new generation of affective interfaces. New York: Springer Verlag.
Mathews, R.B. (2006). The People and Landscape Model (PALM): An agent-based spatial model of livelihood generation and resource flows in rural households and their environment. Ecological Modelling, 194, 329–343.
Mellers, B.A., Schwartz, A., and Cooke, A.D.J. (1998). Judgment and decision making. Annual Review of Psychology, 49, 447–477.
Morrison, J.E. (2003). A review of computer-based human behavior representations and their relation to military simulations. (IDA Paper P-3845.)
Alexandria, VA: Institute for Defense Analyses.
Myerson, R.B. (1999). Nash equilibrium and the history of economic theory. Journal of Economic Literature, 37(3), 1067–1082.
National Research Council. (1998). Modeling human and organizational behavior: Application to military simulations. Washington, DC: National Academy Press.
National Research Council. (1999). Funding a revolution: Government support for computing research. Committee on Innovations in Computing and Communications: Lessons from History. Computer Science and Telecommunications Board, Commission on Physical Sciences, Mathematics, and Applications. Washington, DC: National Academy Press.
National Research Council. (2003). The role of experimentation in building future Naval forces. Committee for the Role of Experimentation in Building Future Naval Forces. Naval Studies Board, Division on Engineering and Physical Sciences. Washington, DC: The National Academies Press.
Nawab, S.H., Wotiz, R., and De Luca, C.J. (2004). Improved resolution of pulse superpositions in a knowledge-based system EMG decomposition. In Engineering in Medicine and Biology Society, Proceedings of the 26th Annual International Conference of the EMBS '04 IEEE (September, 1, 69–71), San Francisco, CA. Available: http://ieeexplore.ieee.org/iel5/9639/30462/01403092.pdf?isNumber= [accessed Feb. 2008].
Newell, A. (1990). Unified theories of cognition. Cambridge, MA: Harvard University Press.
Norman, D.A. (1981). Steps towards a cognitive engineering: System images, system friendliness, mental models. Technical report for Program in Cognitive Science, University of California, San Diego.
Nuortio, T., Kytöjoki, J., Niska, H., and Bräysy, O. (2006). Improved route planning and scheduling of waste collection and transport. Expert Systems with Applications, 30(2), 223–232.
Olson, J.R., and Olson, G.M. (1990). The growth of cognitive modeling in human-computer interaction since GOMS. Human-Computer Interaction, 5(2–3), 221–265.
Ortony, A., Clore, G.L., and Collins, A. (1988). The cognitive structure of emotions. New York: Cambridge University Press.
Ortony, A., Norman, D.A., and Revelle, W. (2005). Affect and proto-affect in effective functioning. In J.-M. Fellous and M.A. Arbib (Eds.), Who needs emotions?: The brain meets the machine (pp. 173–202). New York: Oxford University Press.
Paiva, A. (2000). Affective interactions, towards a new generation of computer interfaces. New York: Springer.
Paiva, A., Dias, J., Sobral, D., Aylett, R., Woods, S., Hall, L., and Zoll, C. (2005). Learning by feeling: Evoking empathy with synthetic characters. Applied Artificial Intelligence Journal, 19(3–4), 235–266.
Pandey, V., Ng, W.-K., and Lim, E.-P. (2000). Financial advisor agent in a multi-agent financial trading system. In Proceedings of the 11th International Workshop on Database and Expert Systems Applications (pp. 482–486). Available: http://ieeexplore.ieee.org/iel5/7035/18943/00875070.pdf?isNumber= [accessed Feb. 2008].
Pearl, J. (1986). Fusion, propagation, and structuring in belief networks. Artificial Intelligence, 29(3), 215–288.
Phelps, E.A., and LeDoux, J.E. (2005). Contributions of the amygdala to emotion processing: From animal models to human behavior. Neuron, 48(2), 175–187.
Prada, R. (2005). Teaming up humans and synthetic characters. Unpublished doctoral dissertation, UTL-IS-Technical University of Lisbon, Portugal.
Prendinger, H., and Ishizuka, M. (2003). Life-like characters: Tools, affective functions, and applications. New York: Springer.
Prendinger, H., and Ishizuka, M. (2005). Human physiology as a basis for designing and evaluating affective communication with life-like characters. IEEE Transactions on Information and Systems, E88-D(11), 2453–2460.
Puerta, A.R., Neches, R., Eriksson, H., Szekely, P., Luo, P., and Musen, M.A. (1993). Toward ontology-based frameworks for knowledge-acquisition tools. Available: http://bmir.stanford.edu/publications/view.php/toward_ontology_based_frameworks_for_knowledge_acquisition_tools [accessed Feb. 2008].
Purtee, M.D., Krusmark, M.A., Gluck, K.A., Kotte, S.A., and Lefebvre, A.T. (2003). Verbal protocol analysis for validation of UAV operator model. In Proceedings of the 25th Interservice/Industry Training, Simulation, and Education Conference (pp. 1741–1750), Orlando, FL: National Defense Industrial Association.
Raiffa, H. (1997). Decision analysis: Introductory lectures on choices under uncertainty. New York: McGraw-Hill.
Reilly, W.S.N. (2006). Modeling what happens between emotional antecedents and emotional consequents. Paper presented at Agent Construction and Emotions (ACE 2006): Modeling the Cognitive Antecedents and Consequences of Emotion Workshop, April, Vienna, Austria.
Ritter, F.E., and Avraamides, M.N. (2000). Steps towards including behavior moderators in human performance models in synthetic environments. (Technical report #ACS-2000-1.) State College, PA: Pennsylvania State University.
Ritter, F., Avramides, M., and Councill, I. (2002). Validating changes to a cognitive architecture to more accurately model the effects of two example behavior moderators. In Proceedings of 11th CGF Conference, Orlando, FL.
Ritter, F.E., Reifers, A.L., Klein, L.C., and Schoelles, M. (2007). Lessons from defining theories of stress. In W.D. Gray (Ed.), Integrated models of cognitive systems (IMoCS) (pp. 254–262). New York: Oxford University Press.
Ritter, F.E., Shadbolt, N.R., Elliman, D., Young, R.M., Gobet, F., and Baxter, G.D. (2003). Techniques for modeling human performance in synthetic environments: A supplementary review. Wright-Patterson Air Force Base, OH: Human Systems Information Analysis Center.
Sander, D., Grandjean, D., and Scherer, K.R. (2005). A systems approach to appraisal mechanisms in emotion. Neural Networks, 18(4), 317–352.
Scherer, K.R., Schorr, A., and Johnstone, T. (2001). Appraisal processes in emotion: Theory, methods, research. New York: Oxford University Press.
Scheutz, M. (2004). Useful roles of emotions in artificial agents: A case study from artificial life. In Proceedings of the Nineteenth National Conference on Artificial Intelligence, Sixteenth Conference on Innovative Applications of Artificial Intelligence. Cambridge, MA: AAAI Press/MIT Press.
Scheutz, M., and Schermerhorn, P. (2004). The more radical, the better: Investigating the utility of aggression in the competition among different agent kinds. Available: http://citeseer.ist.psu.edu/cache/papers/cs/30796/http:zSzzSzwww.nd.eduzSzzCz7EairolabzSzpublicationszSzscheutzschermerhorn04sab.pdf/the-more-radical-the.pdf [accessed Feb. 2008].
Scheutz, M., Schermerhorn, P., Kramer, J., and Middendorff, C. (2006). The utility of affect expression in natural language interactions in joint human-robot tasks. In Proceedings of IEEE/ACM 1st Annual Conference on Human-Robot Interactions (HRI2006) (pp. 226–233), Salt Lake City, UT.
Sheng, H.-M., Wang, J.-C., Huang, H.-H., and Yen, D.C. (2006). Fuzzy measure on vehicle routing problem of hospital materials. Expert Systems with Applications, 30(2), 367–377.
Sierhuis, M. (2001). Modeling and simulating work practice. BRAHMS: A multiagent modeling and simulation language for work system analysis and design (SIKS Dissertation Series No. 2001-10). Unpublished doctoral dissertation, The University of Amsterdam, Amsterdam. Available: http://www.agentisolutions.com/documentation/papers/BrahmsWorkingPaper.pdf [accessed Feb.
2008].
Sierhuis, M., and Clancey, W.J. (1997). Knowledge, practice, activities and people. In Proceedings of AAAI Spring Symposium on Artificial Intelligence in Knowledge Management, Stanford University, CA. Available: http://ksi.cpsc.ucalgary.ca/AIKM97/AIKM97Proc.html [accessed Feb. 2008].
Silverman, B.G., Bharathy, G., and Nye, B. (2007). Profiling as "politically correct" agent-based modeling of ethno-political conflict. Paper presented at the Interservice Industry Training, Simulation and Education Conference, Orlando, FL.
Silverman, B., Johns, M., Cornwell, J., and O'Brien, K. (2006). Human behavior models for agents in simulators and games: Part I, enabling science with PMFserv. PRESENCE, 15(2), 139–162.
Simon, H.A. (1967). Motivational and emotional controls of cognition. Psychological Review, 74, 29–39.
Sloman, A. (2003). How many separately evolved emotional beasties live within us? Paper presented at the Workshop on Emotions in Humans and Artifacts, Vienna, Austria, August 1999, and to appear in Emotions in Humans and Artifacts, R. Trappl and P. Petta (Eds.), Cambridge, MA: MIT Press. Available: http://citeseer.ist.psu.edu/cache/papers/cs/21550/http:zSzzSzwww.cs.bham.ac.ukzSzresearchzSzcogaffzSzsloman.vienna99.pdf/sloman02how.pdf [accessed Feb. 2008].
Sloman, A., Chrisley, R., and Scheutz, M. (2005). The architectural basis of affective states and processes. In J.-M. Fellous and M.A. Arbib (Eds.), Who needs emotions?: The brain meets the robot (pp. 203–244). New York: Oxford University Press.
Smith, C.A., and Kirby, L.D. (2001). Toward delivering on the promise of appraisal theory. In K.R. Scherer, A. Schorr, and T. Johnstone (Eds.), Appraisal processes in emotion: Theory, methods, research (pp. 121–140). New York: Oxford University Press.
Stocco, A., and Fum, D. (2005). Somatic markers and memory for outcomes: Computational and experimental evidence.
In Proceedings of the XIV Annual Conference of the European Society for Cognitive Psychology (ESCoP 2005), September, Leiden University.
Stylianou, A.C., Madey, G.R., and Smith, R.D. (1992). Selection criteria for expert system shells. Communications of the ACM, 32(10), 30–48.
Sun, R. (2003). Tutorial on the Clarion 5.0 architecture. Technical Report, Cognitive Science Department, Rensselaer Polytechnic Institute.
Sun, R. (2005). Cognition and multi-agent interaction. New York: Cambridge University Press.
Tambe, M., Adibi, J., Al-Onaizan, Y., Erdem, A., Kaminka, G.A., Marsella, S.C., and Muslea, I. (1999). Building agent teams using an explicit teamwork model and learning. Artificial Intelligence, 110, 215–239.
Trappl, R., Petta, P., and Payr, S. (2003). Emotions in humans and artifacts. Cambridge, MA: MIT Press.
Trimble, E.G., Allwood, R.J., and Harris, F.C. (2002). Expert systems in contract management: A pilot study. Defense Technical Information OAI-PMH Repository (United States). (Accession #ADA149363.) Available: http://stinet.dtic.mil/oai/oai?verb=getRecord&metadataPrefix=html&identifier=ADA149363 [accessed April 2008].
Velásquez, J.D. (1999). An emotion-based approach to robotics. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, Kyongju, Korea.
Zacharias, G.L., Miao, A.X., Illgen, C., and Yara, J.M. (1995). SAMPLE: Situation awareness model for pilot-in-the-loop evaluation. In Proceedings of the 1st Conference on Situation Awareness in the Tactical Air Environment, Wright-Patterson Air Force Base, OH: CSERIAC.
Zachary, W., Cannon-Bowers, J., Bilazarian, P., Drecker, D., Lardieri, P., and Burns, J. (1999). The Advanced Embedded Training System (AETS): An intelligent embedded tutoring system for tactical team training. Journal of Artificial Intelligence in Education, 10, 257–277.
Zachary, W., Jones, R.M., and Taylor, G. (2002). How to communicate to users what is inside a cognitive model. In Proceedings of the Eleventh Conference on Computer-Generated Forces and Behavior Representation (pp. 375–382), Orlando, FL: UCF Institute for Simulation and Training.
Zachary, W., Santarelli, T., Ryder, J., Stokes, J., and Scolaro, D. (2001). Developing a multitasking cognitive agent using the COGNET/iGEN interactive architecture. In Proceedings of 10th Conference on Computer Generated Forces and Behavioral Representation (pp. 79–90), Norfolk, VA: Simulation Interoperability Standards Organization (SISO).
Zadeh, L.A. (1965). Fuzzy sets. Information and Control, 8, 338–353.
Zajonc, R.B. (1984). On the primacy of affect. American Psychologist, 39(2), 117–123.
Zoll, C., Enz, S., Schaub, H., Paiva, A., and Aylett, R. (2006). Fighting bullying with the help of autonomous agents in a virtual school environment. Paper presented at 7th International Conference on Cognitive Modeling, Trieste, Italy.