





5
Micro-Level Formal Models

In this chapter we discuss several micro-level formal models of human behavior, models that most often are concerned with the behavior of individuals. We begin with cognitive architectures, followed by cognitive-affective models that consider the effect of human emotions on cognition and behavior, as well as of behavior on emotions. We then discuss expert systems, a legacy modeling approach that provides a framework for representing human expertise and that is now often used as a programming paradigm in decision-aiding systems. Finally, we discuss decision theory and game theory and their limited applicability to individual, organizational, and societal modeling in general. For each model or approach, we follow the same discussion framework as in Chapters 3 and 4: we present the current state of the art, the most common applications of the approach, its strengths and limitations for the problems described in Chapter 2, and suggestions for further research and development.

COGNITIVE ARCHITECTURES

Cognitive architectures are simulation-based models of human cognition. Their distinguishing feature is their broad focus on modeling the full sequence of information processing (stimulus-to-behavior) that mediates adaptive, intelligent behavior. Cognitive architectures are built both for basic research and for applied purposes. Different architectures typically emphasize distinct aspects of human cognition (e.g., memory, multitasking,

attention, learning, etc.), depending on their research objectives or application goals.¹

Typically, cognitive architectures are used to model individual cognition. Less often, the applicability of this approach to modeling collective behavior has also been explored, that is, using a cognitive architecture to model the behavior of a group, team, or organization. The utility and appropriateness of this approach to modeling group cognition has yet to be demonstrated, however,² and so we have restricted our discussion here to the use of individual cognitive architectures to model individual behavior.

Cognitive architectures have their roots in the early artificial intelligence (AI) models of human problem solving developed in the 1950s. These models combined a number of key ideas emerging from observations of human problem solving and behavior, including symbolic processing, hierarchical organization of goals, problem spaces, rule- and heuristic-based behavior, and parallel and distributed representation and computation.

A number of cognitive models were developed in the 1970s and 1980s, such as the Model Human Processor (MHP) and Goals, Operators, Methods, and Selection rules (GOMS) (Card, Moran, and Newell, 1986). These focused on modeling a single function in the context of a single task and were most often applied to models of human-computer interaction and, in particular, to the design and evaluation of user interfaces. Although limited in scope, these models provided the necessary methodological foundations for the more broadly scoped cognitive architectures of today by demonstrating the feasibility and benefits of computational cognitive models, primarily in the context of human-computer interface design.

What Are Cognitive Architectures?

Cognitive architectures are computational, simulation models of human information processing and behavior. Cognitive architectures are also referred to as agent architectures, computational cognitive models, and human behavior models.³

¹ Indeed, this report's focus on models and simulations that can contribute to some element of improving forecasting or explanation in a Department of Defense context may limit the ultimate utility of applying some of the models described herein (and elsewhere in the report) in a broader nonmilitary context. Some researchers may argue that this is not the case because of inherent model generality, but this general issue goes beyond the original scope of the study and clearly deserves further study.
² Researchers are beginning to suggest future work in this area; see, for example, MacMillan (2007).
³ Specific connotations may exist with each of these terms regarding the motivation and use of the cognitive architecture.

These simulation-based models aim to implement

some version of a unified theory of cognition (Newell, 1990) by modeling the entire end-to-end human information-processing sequence, beginning with the current set of stimuli and ending with a specific behavior.

Cognitive architectures are typically classified into three broad categories, depending on their approach to knowledge representation and inferencing: symbolic, subsymbolic (also referred to as parallel-distributed), or hybrid (combining elements of the former two). Symbolic architectures use one or more propositional knowledge representation formalisms, such as rules, belief nets, or semantic nets. Subsymbolic, parallel-distributed architectures typically use some type of connectionist representation and inferencing (e.g., recurrent neural networks), in which the mapping between conceptual entities and the representation is not one-to-one, because the knowledge is distributed over multiple representational elements (e.g., nodes within the network). Hybrid architectures use elements of both representational formalisms and are becoming increasingly common as the benefits of combined symbolic-subsymbolic knowledge representation and inferencing are recognized.

The specific functions represented in a particular architecture depend on its objective, level of resolution, and theoretical underpinnings. These also determine the specific modules that make up a given architecture. In most symbolic architectures, the modules and process structure correspond to (a subset of) the functions comprising human information processing. Most architectures thus contain some subset of the following broad cognitive and perceptual processes: attention, situation assessment, goal management, planning, metacognition, learning, action selection, and necessarily some form of memory (or memories), such as sensory, working, and long-term. Thus, for example, an architecture attempting to model recognition-primed decision making (RPD) would have a module dedicated to situation assessment, since that is a core component of the RPD theory (Klein, 1997); an architecture focusing on models of learning would have corresponding modules responsible for such functions as credit assignment and the creation of new schemas in memory.

It should be noted here that most existing cognitive architectures are not capable of learning (Morrison, 2003). While some architectures, such as Soar, do contain elements of learning (e.g., creation of new operators by combining existing operators), typically there is no direct learning resulting from the agent's interactions with the environment. However, the cognitive modeling community is beginning to recognize the limitations of human-constructed long-term memories in these models, and researchers are beginning to address the problem of automatic knowledge acquisition and learning in cognitive architectures (e.g., Anderson et al., 2003; Langley and Choi, 2006).
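To make the symbolic, rule-based style of processing described above concrete, the following minimal Python sketch implements a generic recognize-act cycle: rules whose conditions match the contents of working memory fire and post new facts until quiescence. All rule and fact names are invented for illustration; no particular architecture's formalism is implied.

    # Minimal sketch of the recognize-act cycle underlying symbolic,
    # rule-based architectures. Rule and fact names are invented.
    from dataclasses import dataclass
    from typing import Callable, FrozenSet

    @dataclass(frozen=True)
    class Rule:
        name: str
        conditions: FrozenSet[str]        # facts that must be in working memory
        action: Callable[[set], None]     # mutates working memory when fired

    def recognize_act(working_memory: set, rules: list, max_cycles: int = 10) -> set:
        """Fire one matching, not-yet-fired rule per cycle until quiescence."""
        fired = set()
        for _ in range(max_cycles):
            matches = [r for r in rules
                       if r.conditions <= working_memory and r.name not in fired]
            if not matches:               # no applicable rules: quiescence
                break
            rule = matches[0]             # trivial conflict resolution: first match
            rule.action(working_memory)
            fired.add(rule.name)
        return working_memory

    rules = [
        Rule("assess-threat", frozenset({"contact-detected", "contact-closing"}),
             lambda wm: wm.add("threat-assessed")),
        Rule("select-response", frozenset({"threat-assessed"}),
             lambda wm: wm.add("response-selected")),
    ]

    print(recognize_act({"contact-detected", "contact-closing"}, rules))

Real architectures differ greatly in how they resolve conflicts among matching rules and in how richly conditions are expressed, but this match-select-apply loop is the common core.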

Depending on the architecture's control structure, the modules may execute in a fixed sequence, in parallel, or anywhere between these two extremes. Figure 5-1 illustrates the module structure of a notional sequential cognitive architecture, frequently referred to as a "see-think-do" control structure.

[FIGURE 5-1 Example of a notional sequential cognitive architecture. Stimuli from external world events flow through sensing and perception (vision, hearing, perception) to cognition (multitasking, memory and learning, attention, situation awareness, decision making, planning, behavior moderators), supported by working and long-term memory (declarative knowledge, including a world model and goals/tasks such as maintain situation awareness, report important events, assess threat to goals, assess alternatives, and manage goals/tasks, plus procedural knowledge), and end in motor behavior that produces responses.]

An alternative to this sequential approach is a parallel-distributed control structure, in which a number of parallel processes access a common memory structure (frequently referred to as a blackboard, and hence the term "blackboard architectures"; Corkill, 1991). As with the sequential architectures, the specific processes represented, as well as the structure of the memory blackboard, depend on the architecture's objectives, level of resolution, and theoretical foundations. Figure 5-2 shows an example of a blackboard architecture, illustrating examples of possible associated processes.

Historically, cognitive architectures have focused on the middle stage of the see-think-do metaphor, frequently simplifying the perceptual input and motor output components. However, as cognitive architectures expand in model complexity and desired functionality (e.g., operating in a real-world environment), they increasingly incorporate sensory and motor models to become full-fledged agent architectures, capable of autonomous, intelligent, and adaptive behavior in a real or a simulated world.
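The sequential see-think-do control structure just described can be sketched in a few lines of Python. The stage functions below are placeholders invented for illustration, not any specific architecture's modules.

    # Minimal sketch of a sequential "see-think-do" control structure
    # of the kind shown in Figure 5-1. All labels are invented.
    def see(stimuli):
        """Perception: reduce raw stimuli to percepts."""
        return [s for s in stimuli if s["salience"] > 0.5]   # crude attention filter

    def think(percepts, memory):
        """Cognition: assess the situation and select an action."""
        memory["situation"] = {p["label"] for p in percepts}  # update working memory
        return "evade" if "threat" in memory["situation"] else "continue-patrol"

    def do(action):
        """Motor behavior: emit a response to the simulated world."""
        print(f"executing: {action}")

    memory = {}
    stimuli = [{"label": "threat", "salience": 0.9},
               {"label": "noise", "salience": 0.2}]
    do(think(see(stimuli), memory))    # stages execute in a fixed sequence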

Cognitive architectures thus contrast with the more narrowly scoped cognitive models (also referred to as micro models of cognition), which focus on a single function, such as attention, visual search, visual perception, language acquisition, or memory recall and retrieval, and which implement micro theories of cognition rather than unified theories of cognition.

[FIGURE 5-2 A blackboard architecture. The figure shows a high-level view of a parallel-distributed cognitive architecture, an alternative to the sequential see-think-do model: parallel processes (situation assessment, goal selection, action selection, planning, visual and auditory sensors, gaze, and right- and left-hand effectors) access a shared blackboard.]

In parallel-distributed models, processing occurs in multiple, concurrent processes, and coordination among these processes is achieved through the intermediate results posted on the blackboard, which represents the architecture's memory. The structure of the blackboard varies, depending on the particular architecture, to represent the desired types of distinct memories.
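The coordination-through-shared-memory idea can be illustrated with a small sketch: independent knowledge sources read and post intermediate results on a shared structure, and control simply runs whichever source can currently contribute. The source names and blackboard entries below are invented for illustration.

    # Minimal sketch of blackboard-style control as in Figure 5-2.
    blackboard = {"percepts": ["contact-bearing-045"]}   # shared memory

    def situation_assessment(bb):
        if "percepts" in bb and "situation" not in bb:
            bb["situation"] = "possible-threat"
            return True
        return False

    def goal_selection(bb):
        if bb.get("situation") == "possible-threat" and "goal" not in bb:
            bb["goal"] = "investigate"
            return True
        return False

    def action_selection(bb):
        if "goal" in bb and "action" not in bb:
            bb["action"] = f"plan-for-{bb['goal']}"
            return True
        return False

    knowledge_sources = [action_selection, goal_selection, situation_assessment]

    progress = True
    while progress:                       # run any source that can contribute;
        progress = any(ks(blackboard)     # the order of sources does not matter
                       for ks in knowledge_sources)
    print(blackboard)

Note that no fixed stage ordering is imposed: each knowledge source triggers opportunistically off the blackboard's current contents, which is the essential contrast with the sequential structure of Figure 5-1.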

State of the Art

A large number of cognitive architectures have been developed in both academic and industrial settings, and new architectures are rapidly emerging due to increasing demand, particularly in human-computer interaction (HCI) and decision support contexts, with emphasis on training, decision aiding, interactive gaming, and virtual environments. Three recent reviews provide a comprehensive catalogue of a number of established or commercially available cognitive architectures: a report focusing on U.S.-developed systems (Andre, Klesen, Gebhard, Allen, and Rist, 2000, pp. 51–111), a supplementary report focusing on systems developed in Europe, primarily in the United Kingdom (Ritter et al., 2003), and a review by Morrison that covers architectures in both the United States and Europe and includes some of the lesser known systems (Morrison, 2003). All three reviews provide detailed descriptions of the architectures in terms of the cognitive processes modeled, their historical context, applications, implementation languages, and any validation studies. A large number of research-oriented architectures also exist in laboratories around the world. The best sources of information regarding these architectures are conferences and workshops, such as the International Conference on Cognitive Modeling, the annual meeting of the Cognitive Science Society, symposia and conferences of the American Association for Artificial Intelligence, Autonomous Agents and Multi-Agent Systems, Human Factors, and BRIMS. See Table 2-1 for an overview of cognitive architectures used in military contexts.

Existing cognitive architectures are being used to support research on both human cognition and, more recently, emotion (see the next section on cognitive-affective models). They are also used in applied settings to control the behavior of synthetic agents and robots in a variety of contexts, including gaming and virtual reality environments, to enable user modeling in adaptive systems, and as replacements for human users and subjects for training, assessment, and system design purposes.

It is beyond the scope of this chapter to describe in detail the large number of architectures that have been developed over the past 25 years. The three reviews mentioned above are excellent sources of in-depth information regarding a number of architectures that are sufficiently established to be included in comprehensive reviews. Below we briefly discuss a subset of these, to provide a sense of the breadth of theoretical orientations, representational formalisms, modeling methodologies, and applications.

It should be noted that each architecture elaborates a particular subset of cognitive processing and that the architectures vary in their ease of transition to other domains and ease of use. These factors must be taken into consideration when a particular architecture is being considered as a modeling tool for a specific problem in a particular domain. For example, ACT-R's focus is on relatively low-level processing, and it is particularly concerned with memory modeling. EPIC emphasizes models of multitasking. Soar emphasizes a particular model of learning, cast in relatively high-level symbolic terms. Thus, before a particular architecture is adopted for a specific modeling effort, it is necessary to carefully assess its ability to model the processes of interest at the desired level of resolution.

The most established architectures in the United States are ACT-R and Soar, each having a large and active academic research community, with annual workshops and tutorials, and each having an increasing presence in industry, primarily the defense industry. These are described below, followed by several other prominent architectures.

ACT-R

The historical focus of ACT-R (Atomic Components of Thought, or Adaptive Character of Thought) has been on basic research in cognition and the modeling of a variety of fundamental psychological processes, such as learning and memory (e.g., priming) (Anderson, 1983, 1990, 1993). ACT-R combines a semantic net representation with a rule-based representation to support declarative and procedural memory representation and associated inferencing. ACT-R is probably the cognitive architecture that is "best grounded in the experimental research literature" (Morrison, 2003, p. 24). Primary early applications were tutoring in mathematics and computer programming (see www.carnegielearning.com). Gradually, ACT-R evolved into a full-fledged cognitive architecture, with increasing emphasis on sensory and motor components and applications in military settings (e.g., modeling adversary behavior in military operations on urban terrain (MOUT), tactical action officers in submarines, and radar operators on ships; Andre et al., 2000; Anderson et al., 2004).

Soar

Soar (State, Operator, and Results) development was initially motivated by the desire to demonstrate the ability of generalized problem spaces, rules, and heuristic search capabilities to solve a wide range of problems, and by the desire to develop an implementation of Newell's (1990) unified theory of cognition. Soar uses production rules to implement this problem-solving paradigm, via the application of "operators" to states within a problem space. Soar represents all three types of long-term memory (declarative, procedural, and episodic) in terms of rules. A distinguishing feature of Soar is its ability to form new operators (rules) from existing operators (rules) when it reaches an impasse in its problem solving (an impasse being defined as either no applicable operators or a conflict among operators). It is thus one of the few architectures that explicitly addresses learning, albeit in the limited context of combining existing elements within its own knowledge base, rather than the bona fide acquisition of new knowledge from its interaction with the environment. Soar models both reactive and deliberative reasoning and is capable of planning (Hill, Chen, Gratch, Rosenbloom, and Tambe, 1998).
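The impasse-driven learning idea just described can be sketched schematically: when no single operator achieves the goal from the current state, a composite operator is formed by chaining existing ones and cached in the knowledge base. This is a loose conceptual sketch, not Soar's actual chunking mechanism or syntax; the operator names and state facts are invented for illustration.

    # Schematic sketch of impasse-driven formation of a new operator
    # by combining existing operators. Not Soar's actual mechanism.
    operators = {
        # name: (preconditions, effects)
        "move-to-object": (frozenset(), frozenset({"at-object"})),
        "pick-up":        (frozenset({"at-object"}), frozenset({"holding-object"})),
    }

    def achieves(state, op, goal):
        pre, eff = operators[op]
        return pre <= state and goal <= (state | eff)

    def resolve_impasse(state, goal):
        """Try two-step chains of existing operators; cache a successful chain."""
        for a, (pre_a, eff_a) in list(operators.items()):
            for b, (pre_b, eff_b) in list(operators.items()):
                if pre_a <= state and pre_b <= (state | eff_a) \
                        and goal <= (state | eff_a | eff_b):
                    operators[f"{a}+{b}"] = (pre_a, eff_a | eff_b)  # learned operator
                    return f"{a}+{b}"
        return None

    state, goal = frozenset(), frozenset({"holding-object"})
    if not any(achieves(state, op, goal) for op in list(operators)):
        print("impasse -> learned operator:", resolve_impasse(state, goal))

As the text notes, learning of this kind only recombines elements already in the knowledge base; nothing new is acquired from the environment.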

While Soar was in part motivated by theoretical considerations, particularly Newell's unified theory of cognition, the architecture has become a more traditional AI system in its increasing emphasis on performance rather than accurate emulation of human information processing. A frequent criticism of Soar is its large number of free variables, which enables a large number of specific models to match empirical data, thereby making it difficult to unequivocally establish the validity of a given model. This is the case with most computational cognitive architectures.

Soar's capabilities progressed from simple toy tasks (puzzles), through expert systems applications (medical diagnosis, software design), to architectures capable of controlling autonomous agents. Soar is the most extensively applied cognitive architecture, with a number of training installations or exercises in which it has replaced human role players or autonomous air entities: TacAir-Soar at the Air Force Research Laboratory (AFRL) training laboratory and at Williams Air Force Base (fixed-wing missions), Joint Forces Command (JFCOM) J9 exercises, MOUTBot (soldier models) for VIRTE MOUT at the Office of Naval Research, JCATS at the Defense Modeling and Simulation Office, SOFSoar at JFCOM, RWA-Soar (rotary-wing missions), STEVE for training simulations, and Quakebot for interactive computer games (Jones et al., 1999; Laird, 2000). The military applications are being developed by Soar Technology, Inc. (http://www.soartech.com). Soar also serves as the core technology at the Institute for Creative Technologies at the University of Southern California, where it acts as an agent architecture controlling synthetic characters in virtual environments, primarily applied to training and game-based training environments. Soar has also been applied in a nondefense context, to develop a decision support system for businesses (KB Agent, developed by ExpLore Reasoning Systems, Inc.).

While the emphasis in Soar applications has been on individual models, Soar has also been applied to modeling multiagent environments, in which explicit representations exist of structures shared among team members (e.g., goals, plans). The STEAM model (Shell for TEAMwork) (1996) implements these enhancements and has been applied to military simulations (models of helicopter pilots) and to modeling soccer players in the RoboCup competition (Tambe et al., 1999).

EPIC

EPIC (Executive-Process/Interactive Control), developed from the MHP (Card et al., 1986), focuses on models of human behavior in multitasking contexts in human-computer interaction. A distinguishing feature is its emphasis on integrating cognition with perceptual and motor processes. EPIC's sensorimotor capabilities have motivated its inclusion in some Soar models, to provide an interface with the real world. EPIC uses production rules to represent both its long-term memory and the control of processing within the architecture. It is primarily focused on research and is a good example of a more constrained architecture with a strong focus on validation against human performance data. Recently, EPIC has also been used in more applied settings, for the design of undersea ship systems.

COGNET

The COGNET (COGnition as a Network of Tasks) architecture was developed by CHI Systems and combines several knowledge representation formalisms in a blackboard-oriented framework. It was initially applied in user interface design (Zachary, Jones, and Taylor, 2002) but has been expanded to include models of multitasking in the context of air traffic control (Zachary, Santarelli, Ryder, Stokes, and Scolaro, 2001) and intelligent tutoring (Zachary et al., 1999). COGNET has an associated development environment, iGEN, which is commercially available from CHI Systems.

OMAR

OMAR (Operator Model Architecture) is a task-goal network model with a focus on multitasking, developed by BBN, Inc. (Deutsch, Cramer, Keith, and Freeman, 1999) from an earlier conceptual prototype, the CHAOS model (Hudlicka, Adams, and Feehrer, 1992). OMAR and its later distributed version, D-OMAR, have been used to model air traffic control and pilot error (Deutsch et al., 1999; Deutsch and Pew, 2001). It was one of the systems participating in the AMBR (Agent-based Modeling and Behavior Representation) validation project, in which its performance was compared with other cognitive architectures and with human subjects in the context of air traffic control (Gluck and Pew, 2005). Recent versions of OMAR were expanded with models of auditory and visual inputs, and the system was reimplemented in Java (from the original LISP version) to improve performance.

MIDAS

MIDAS (Man-machine Integrated Design and Analysis System) uses a goal-task network model to model simple, reactive decision making. It includes sensory inputs (visual and auditory) and simple motor outputs, and has been applied in human-computer interaction to model pilot behavior in support of cockpit design (Corker and Smith, 1992; Corker, Gore, Fleming, and Lane, 2000; Laughery and Corker, 1997), air traffic control, the design of emergency communication systems, and the design of automation systems for nuclear power plants. MIDAS is also capable of modeling multiple, interacting agents.

SAMPLE

SAMPLE (Situation Awareness Model for Pilot-in-the-Loop Evaluation) is a sequential hybrid model developed by Charles River Analytics,

using several knowledge representation mechanisms, including fuzzy logic, belief nets, and rules. It has been applied to model air traffic control, pilot behavior, unmanned aerial vehicles, and soldier behavior in MOUT operations (Zacharias, Miao, Illgen, and Yara, 1995; Harper, Ton, Jacobs, Hess, and Zacharias, 2001). SAMPLE implements the recognition-primed decision-making model (Klein, 1997) and does not include complex planning. Sensorimotor components are represented at highly abstracted levels. SAMPLE has a drag-and-drop development environment, GRADE, for rapid application prototyping, and is available commercially.

APEX

APEX is an architecture supporting the creation of intelligent, autonomous systems and serves also as a development environment. One of its goals is to reduce the effort required to develop agent architectures. Its primary applications are in human-computer interaction, to help design user interfaces and human-machine systems (Freed, Dahlman, Dalal, and Harris, 2002), and it has been applied in air traffic control.

Other Architectures

Several other architectures should be mentioned briefly. D-COG (Distributed Cognition) was developed at AFRL (Eggleston, Young, and McCreight, 2000) to model complex adaptive behavior. It was one of the architectures evaluated in the AMBR experiment (see Validation below). BRAHMS (Business Redesign Agent-Based Holistic Modeling System) is an environment developed by the National Aeronautics and Space Administration (NASA) for modeling multiple, interacting entities (Sierhuis and Clancey, 1997; Sierhuis, 2001); it emphasizes the interaction among entities rather than individual cognition.

Several well-established cognitive architectures have been developed in Europe. COGENT (Cognitive Objects within a Graphical EnviroNmenT) is a development environment for constructing cognitive models, developed by Cooper and colleagues (Cooper, Yule, and Sutton, 1998; Cooper, 2002). It supports the construction of a cognitive architecture from individual, independent "modules," each responsible for a particular cognitive (or perceptual) function, and includes explicit support for systematic evaluation of the resulting models. COGENT offers a number of representational formalisms, including connectionist formalisms supporting the representation of distributed, subsymbolic knowledge. It has been applied to model medical diagnosis, memory, and concept learning.

The architectures outlined above are primarily symbolic and represent the most common approach to the development of integrated cognitive

architectures. There are also examples of architectures that use connectionist formalisms, either exclusively or in combination with symbolic representations. We briefly mention two of these below. An example of the former is the ART (Adaptive Resonance Theory) architecture, developed by Grossberg (1999, 2000). ART emphasizes learning and parallel processing, both being key benefits of connectionist formalisms. An example of a hybrid connectionist-symbolic architecture is CLARION (Connectionist Learning with Adaptive Rule Induction On-Line), developed to support research in combined representations of symbolic knowledge (via rules) and subsymbolic knowledge (via connectionist networks) and inductive learning (Sun, 2003, 2005).

Current Trends

Several current trends in cognitive architecture development promise to contribute to more efficient development of these complex simulation systems, as well as to more effective applications:

• Efforts to incorporate individual differences and behavior moderators, such as personalities and emotions, both to support basic research and to produce more realistic and robust agents (see next section).
• Efforts to provide broadly scoped end-to-end architectures, with increasing emphasis on sensory and motor processes, to enable the associated synthetic agent or robot to function in a virtual or actual environment (e.g., the variety of Soar-based agents being developed at the Institute for Creative Technologies).
• Use of shared ontologies to facilitate the labor-intensive effort of cognitive task analysis and domain-specific model construction.
• Use of development environments to facilitate cognitive architecture construction, which may include automatic knowledge acquisition/knowledge engineering (KA/KE) facilities, visualizations, and model performance assessment and analysis tools.
• Increasing emphasis on empirical validation, frequently with respect to human performance data, and the development of validation methodologies and metrics (e.g., Gluck and Pew, 2005).

Verification and Validation Issues

As stated above, verification refers to ensuring that the architecture functions as intended, that is, that the model has been implemented according to the specifications. Validation refers to the degree to which the model specifications reflect reality, at the desired level of resolution. We focus […]

[…] described by Colonel Blotto but nevertheless informed by Colonel Blotto. However, as we have noted, the severe limitations of decision theory and game theory make this move to a more elaborate and realistic model impractical and not as trivial a step as the formal theorists might wish.

Game theory has been of moderate use in analyzing institutions. The game theoretic approach consists of four steps (Diermeier and Krehbiel, 2003):

1. Assume behavior.
2. Define the game generated by the institution.
3. Deduce the equilibria.
4. Compare the regularities to data.

If behavior is assumed to be optimizing, then equilibrium is achieved and institutions can be thought of as equivalent to equilibria. To compare two institutions, we need only compare their equilibria: the better the equilibrium (e.g., the greater the utility to the relevant actors), the better the institution, and the more the actors will prefer it. The institutions-as-equilibria approach proves powerful. If we want to compare a parliament with an open rule system, in which anyone can make a proposal, to one with a closed rule system, in which amendments are not allowed, or to compare a parliamentary system with a presidential system, we construct models of the two types of institution and compare their equilibria using game theory (Baron and Ferejohn, 1989).

The institutions-as-equilibria approach of game theory can be extended to include the game over institutions. In this game, the players first decide which institution to use. This meta-institutional game can explain not only how institutions perform but also why they may have been chosen in the first place. For example, we might use such a model to explain why a military leader chooses an open rule system even though that system allows greater voice to members of his cabinet. However, as noted, the assumptions that need to be made here are highly unrealistic, calling the entire approach into question.

When we expand game theory to include learning models, we can capture some forms of cultural transference. Many game theorists think of culture as beliefs. That characterization provides some leverage, but it is far from adequate. More recent work considers cultural learning, in which players learn from one another (Gintis, 2000). They can even learn from the other games that they play (Bednar and Page, 2007). Game theoretic models can also be expanded to include networks that can evolve over time. In sum, game theoretic models can include cultural forces, but those forces must be well defined and analytically tractable. The movement to expand game theory by taking networks and culture into account is promising. However, the research here is in its infancy.
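The institutions-as-equilibria comparison can be illustrated computationally. The sketch below, with payoff matrices invented purely for illustration, finds the pure-strategy Nash equilibria of two 2 x 2 games standing in for two institutions and reports the payoffs those equilibria deliver, which is the basis on which the institutions would be compared.

    # Sketch of comparing two "institutions" via the equilibria of the
    # games they generate. Payoff numbers are invented for illustration.
    import itertools

    def pure_nash(payoffs):
        """Return (row, col) cells where neither player gains by deviating."""
        rows, cols = len(payoffs), len(payoffs[0])
        eq = []
        for r, c in itertools.product(range(rows), range(cols)):
            u1, u2 = payoffs[r][c]
            best_row = all(payoffs[rr][c][0] <= u1 for rr in range(rows))
            best_col = all(payoffs[r][cc][1] <= u2 for cc in range(cols))
            if best_row and best_col:
                eq.append(((r, c), (u1, u2)))
        return eq

    # Hypothetical "open rule" vs. "closed rule" institutions as 2x2 games.
    open_rule   = [[(3, 3), (1, 2)],
                   [(2, 1), (2, 2)]]
    closed_rule = [[(2, 2), (0, 3)],
                   [(3, 0), (1, 1)]]

    for name, game in [("open rule", open_rule), ("closed rule", closed_rule)]:
        print(name, "equilibria:", pure_nash(game))

In this invented example, the open rule game supports an equilibrium paying (3, 3) while the closed rule game's only equilibrium pays (1, 1), so the first institution would be judged better and preferred by the actors.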

Major Limitations

Decision theory models and game theory models tend to be overly simplistic, with few "moving parts" and with assumptions about player behavioral characteristics driven more by ease-of-solution criteria than by fidelity of representation. Otherwise, the models become difficult or impossible to solve. For example, most game theory models assume either two players or an infinite number of players. The real world usually falls in the space in between, except for extremely artificial situations (e.g., chess games, two-candidate political races, etc.). Decision theory and game theory models also require data about actors that often cannot be gathered with any reliability, or within a reasonable amount of time as determined by the decision window of the commander.

A further problem with game theory models is that they produce multiple equilibria. The Folk Theorem states that, for repeated games, almost any outcome can be supported as an equilibrium. To overcome this problem of multiple equilibria, game theorists rely on refinements, such as symmetry: an equilibrium is symmetric if both players get the same payoff. Or they invoke Pareto efficiency: an equilibrium is Pareto efficient if no other equilibrium makes every player better off. Game theoretic models also often ignore the stability and attainability of the equilibria that they predict. Although game theorists have recently begun to study learning models, they tend to consider simple two-person games rather than the more complex, multiplayer situations characteristic of the real world.
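The refinements just described are straightforward to state computationally. The following sketch applies the symmetry and Pareto-efficiency criteria from the text to prune an invented set of candidate equilibrium payoffs.

    # Sketch of pruning multiple equilibria with the refinements described
    # above. The candidate equilibrium payoff pairs are invented.
    equilibria = [(3, 3), (4, 1), (1, 4), (2, 2)]   # (player 1, player 2) payoffs

    def symmetric(eq):
        """An equilibrium is symmetric if both players get the same payoff."""
        return eq[0] == eq[1]

    def pareto_efficient(eq, candidates):
        """Pareto efficient: no other equilibrium makes every player better off."""
        return not any(o[0] > eq[0] and o[1] > eq[1] for o in candidates)

    print("symmetric:       ", [e for e in equilibria if symmetric(e)])
    print("Pareto efficient:", [e for e in equilibria if pareto_efficient(e, equilibria)])

Here (2, 2) is eliminated because (3, 3) makes both players better off, while symmetry would instead retain (3, 3) and (2, 2); the two refinements prune the set differently, which is itself a reminder that refinement choice is a modeling decision.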

Future Research and Development Requirements

The potential of decision theory and game theory hinges on their ability to capture the complexities of real people and the real world. A concern with realism would seem to undercut the mathematical strength of these two approaches: their ability to cut to the heart of a situation. Nevertheless, the few degrees of freedom that these models allow can be tugged in the direction of greater realism, with potentially large benefits. In decision theory, we can look to cultural and cognitive explanations to explain beliefs. We can also look to culture as a determinant of what is possible: some actions may be unlikely to occur in some cultures, and we can therefore rule those actions out. However, as decision theory and game theoretic models become more nuanced to include cultural factors, they become less mathematically tractable, require more data or more unrealistic assumptions, and require more effort for validation.

As already mentioned, game theorists have begun including culture in the form of beliefs, networks, and behaviors. This can also be accomplished less formally. For example, Calvert and Johnson (1999) argue for culture as a means of coordinating on an equilibrium. By coordination, they mean the selection of one equilibrium from among many. In their approach, game theory becomes a preliminary tool: it defines the set of possible outcomes. Detailed historical and cultural knowledge from subject matter experts then selects from among those equilibria.

REFERENCES

Anderson, J.R. (1983). The architecture of cognition. Cambridge, MA: Harvard University Press.
Anderson, J.R. (1990). The adaptive character of thought. Hillsdale, NJ: Lawrence Erlbaum Associates.
Anderson, J.R. (1993). Rules of the mind. Hillsdale, NJ: Lawrence Erlbaum Associates.
Anderson, J.R., Bothell, D., Byrne, M.D., Douglass, S., Lebiere, C., and Qin, Y. (2004). An integrated theory of the mind. Psychological Review, (4), 1036–1060.
Andre, E., Klesen, M., Gebhard, P., Allen, S.A., and Rist, T. (2000). Exploiting models of personality and emotions to control the behavior of animated interactive agents. Paper presented at the International Workshop on Affective Interactions (IWAI), Siena, Italy.
Araujo, A.F.R. (1991). Cognitive-emotional interactions using the subsymbolic paradigm. In Proceedings of Student Workshop on Emotions, University of Southern California, Los Angeles.
Araujo, A.F.R. (1993). Emotions influencing cognition: Effect of mood congruence and anxiety upon memory. Presented at the Workshop on Architectures Underlying Motivation and Emotion (WAUME '93), University of Birmingham, England.
Bach, J. (2007). Principles of synthetic intelligence: Building blocks for an architecture of motivated cognition. Unpublished doctoral dissertation, Universität Osnabrück.
Barker, V.E., O'Connor, D.E., Bachant, J., and Soloway, E. (1989). Expert systems for configuration at Digital: XCON and beyond. Communications of the ACM, (3), 298–318.
Baron, D.P., and Ferejohn, J.A. (1989). Bargaining in legislatures. American Political Science Review, (4), 1181–1206.
Bates, J., Loyall, A.B., and Reilly, W.S. (1992). Integrating reactivity, goals, and emotion in a broad agent. Presented at the Fourteenth Annual Conference of the Cognitive Science Society, July, Bloomington, IN.
Bednar, J., and Page, S.E. (2007). Can game(s) theory explain culture?: The emergence of cultural behavior within multiple games. Rationality and Society, (1), 65–97.
Belavkin, R.V. (2001). The role of emotion in problem solving. In Proceedings of the AISB '01 Symposium on Emotion, Cognition and Affective Computing (pp. 49–57), Heslington, York, England.
Bewley, T.F. (1986). Cowles Foundation discussion paper: Knightian decision theory: Part I. New Haven, CT: Cowles Foundation for Research in Economics at Yale University.
Bierman, H.S., and Fernandez, L.F. (1998). Game theory with economic applications. Reading, MA: Addison-Wesley.
Blanchard, O.J., and Watson, M.W. (1982). Bubbles, rational expectations, and speculative markets. In P. Watchel (Ed.), Crisis in the economic and financial structure (pp. 295–316). Lanham, MD: Lexington Books.
Bobrow, D.G., Mittal, S., and Stefik, M.J. (1986). Expert systems: Perils and promise. Communications of the ACM, (9), 880–894.

Breazeal, C., and Brooks, R. (2005). Robot emotions: A functional perspective. In J.-M. Fellous and M.A. Arbib (Eds.), Who needs emotions? The brain meets the robot (pp. 271–310). New York: Oxford University Press.
Broekens, J., and DeGroot, D. (2006). Formalizing cognitive appraisal: From theory to computation. Paper presented at Agent Construction and Emotions (ACE 2006): Modeling the Cognitive Antecedents and Consequences of Emotion Workshop, April, Vienna, Austria.
Buckle, G. (2004). A different kind of laboratory mouse. Available: http://digitaljournal.com/article/35501/A_Different_Kind_of_Laboratory_Mouse [accessed Feb. 2008].
Burton, R.M., and Obel, B. (2004). Strategic organizational diagnosis and design: The dynamics of fit, third edition. Boston: Kluwer Academic.
Busemeyer, J.R., Dimperio, E., and Jessup, R.K. (2007). Integrating emotional processes into decision making models. In W.D. Gray (Ed.), Integrated models of cognitive systems (pp. 213–229). New York: Oxford University Press.
Camerer, C. (2003). Behavioral game theory: Experiments in strategic interaction. Princeton, NJ: Princeton University Press.
Campbell, G.E., and Bolton, A.E. (2005). HBR validation: Integrating lessons learned from multiple academic disciplines, applied communities, and the AMBR project. In K.A. Gluck and R.W. Pew (Eds.), Modeling human behavior with integrated cognitive architectures: Comparison, evaluation, and validation (pp. 365–395). Mahwah, NJ: Lawrence Erlbaum Associates.
Cannon, R.L., Moore, P., Tansathein, D., Strobel, J., Kendall, C., Biswas, G., and Bezdek, J. (1989). An expert system as a component of an integrated system for oil exploration. In Proceedings of Southeastcon—Energy and Information Technologies in the Southeast (volume 1, pp. 32–35). Los Alamitos, CA: IEEE Publications.
Card, S.K., Moran, T.P., and Newell, A. (1986). The model human processor: An engineering model of human performance. In K. Boff, L. Kaufman, and J. Thomas (Eds.), Handbook of perception and human performance, volume II. Hoboken, NJ: John Wiley & Sons.
Cooper, R.P. (2002). Modeling high-level cognitive processes. Mahwah, NJ: Lawrence Erlbaum Associates.
Cooper, R., Yule, P., and Sutton, D. (1998). COGENT: An environment for the development of cognitive models. In U. Schmid, J.F. Krems, and F. Wysotzki (Eds.), A cognitive science approach to reasoning, learning, and discovery (pp. 55–82). Lengerich, Germany: Pabst Science.
Corker, K.M., and Smith, B. (1992). An architecture and model for cognitive engineering simulation analysis: Application to advanced aviation analysis. Presented at the American Institute of Aeronautics and Astronautics (AIAA) Conference on Computing in Aerospace, San Diego, CA.
Corker, K.M., Gore, B., Fleming, K., and Lane, J. (2000). Free flight and the context of control: Experiments and modeling to determine the impact of distributed air-ground air traffic management on safety and procedures. In Proceedings of the 3rd FAA Eurocontrol International Symposium on Air Traffic Management, Naples, Italy.
Corkill, D.D. (1991). Blackboard systems. AI Expert, (9), 40–47.
Costa, P.T., and McCrae, R.R. (1992). Four ways five factors are basic. Personality and Individual Differences, 13, 653–665.
Damasio, A. (1994). Descartes' error: Emotion, reason, and the human brain. New York: Avon Books.
Dautenhahn, K., Bond, A.H., Cañamero, L., and Edmonds, B. (Eds.). (2002). Socially intelligent agents: Creating relationships with computers and robots. Dordrecht, The Netherlands: Kluwer Academic.
Davidson, R.J., Scherer, K.R., and Goldsmith, H.H. (2003). Handbook of affective sciences. New York: Oxford University Press.

de Rosis, F., Pelachaud, C., Poggi, I., Carofiglio, V., and De Carolis, B. (2003). From Greta's mind to her face: Modelling the dynamics of affective states in a conversational embodied agent. International Journal of Human-Computer Studies, 5(1–2), 81–118.
Deutsch, S.E., and Pew, R.W. (2001). Modeling human error in D-OMAR. (Report No. 8328.) Cambridge, MA: BBN Technologies.
Deutsch, S.E., Cramer, N.L., Keith, G., and Freeman, B. (1999). The distributed operator model architecture. Available: http://stinet.dtic.mil/cgi-bin/GetTRDoc?AD=ADA364623&Location=U2&doc=GetTRDoc.pdf [accessed Feb. 2008].
Diermeier, D., and Krehbiel, K. (2003). Institutionalism as a methodology. Journal of Theoretical Politics, 5(2), 123–144.
Drake, B.J. (1996). Expert system shell, multipurpose land information systems for rural. In GIS/LIS '96 annual conference and exposition proceedings (pp. 998–1005). Bethesda, MD: American Society for Photogrammetry and Remote Sensing.
Dreyfus, H.L., and Dreyfus, S.E. (2004). From Socrates to expert systems: The limits and dangers of calculative rationality. Available: http://socrates.berkeley.edu/~hdreyfus/html/paper_socrates.html [accessed April 2008].
Edwards, W., and Barron, F.H. (1994). SMARTS and SMARTER: Improved simple methods for multiattribute utility measurement. Organizational Behavior and Human Decision Processes, 0(3), 306–325.
Eggleston, R.G., Young, M.J., and McCreight, K.L. (2000). Distributed cognition: A new type of human performance model. In Proceedings of the 2000 AAAI Fall Symposium on Simulating Human Agents, North Falmouth, MA. (AAAI Technical Report #FS-00-03.)
Ekman, P., and Davidson, R.J. (1995). The nature of emotion: Fundamental questions. Oxford, England: Oxford University Press.
Ellsberg, D. (1961). Risk, ambiguity, and the savage axioms. Quarterly Journal of Economics, 5(4), 643–669.
Feigenbaum, E., Friedland, P.E., Johnson, B.B., Nii, H.P., Schorr, H., and Shrobe, H. (1993). Knowledge-based systems in Japan. Baltimore, MD: World Technology Evaluation Center.
Fellous, J.-M., and Arbib, M.A. (2005). Who needs emotions? The brain meets the robot. New York: Oxford University Press.
Freed, M., Dahlman, E., Dalal, M., and Harris, R. (2002). Apex reference manual. Moffett Field, CA: NASA Ames Research Center.
Gelgele, H.L., and Wang, K. (1998). An expert system for engine fault diagnosis: Development and application. Journal of Intelligent Manufacturing, (6), 539–545.
Georgeff, M.P., and Firschein, O. (1985). Expert systems for space station automation. IEEE Control Systems Magazine, 5(4), 3–8.
Getoor, L., and Diehl, C.P. (2005). Introduction: Special issue on link mining; Link mining: A survey. SIGKDD Explorations Special Issue on Link Mining, (2), 1–10. Available: http://www.sigkdd.org/explorations/issues/7-2-2005-12/1-Getoor.pdf [accessed Feb. 2008].
Giarratano, J.C., and Riley, G.D. (1998). Expert systems: Principles and programming, third edition. Boston, MA: PWS.
Gintis, H. (2000). Game theory evolving: A problem-centered introduction to modeling strategic interaction. Princeton, NJ: Princeton University Press.
Gluck, K.A., and Pew, R.W. (Eds.). (2005). Modeling human behavior with integrated cognitive architectures: Comparison, evaluation, and validation. Mahwah, NJ: Lawrence Erlbaum Associates.
Gratch, J., and Marsella, S. (2004a). A domain independent framework for modeling emotion. Journal of Cognitive Systems Research, 5(4), 269–306.

Gratch, J., and Marsella, S. (2004b). Evaluating a computational model of emotion. Journal of Autonomous Agents and Multiagent Systems, Special Issue on the Best of AAMAS 2004.
Gratch, J., Rickel, E.A., Cassell, J., Petajan, E., and Badler, N. (2002). Creating interactive virtual humans: Some assembly required. IEEE Intelligent Systems, (4), 54–63.
Gray, W.D., John, B.E., and Atwood, M.E. (1993). Project Ernestine: Validating a GOMS analysis for predicting and explaining real-world task performance. Human-Computer Interaction, (3), 237–309.
Green, E.J., and Porter, R.H. (1984). Noncooperative collusion under imperfect price information. Econometrica, 5(1), 87–100.
Grossberg, S. (1999). The link between brain learning, attention, and consciousness. Consciousness and Cognition, , 1–44.
Grossberg, S. (2000). Linking mind to brain: The mathematics of biological intelligence. Notices of the American Mathematical Society, 4, 1361–1372.
Hagland, M. (2003). Doctor's orders. Healthcare Informatics, (January).
Harper, K.A., Ton, N., Jacobs, K., Hess, J., and Zacharias, G.L. (2001). Graphical agent development environment for human behavior representation. In Proceedings of the 10th Conference on Computer Generated Forces and Behavioral Representation, Orlando, FL: Simulation Interoperability Standards Organization.
Henninger, A.E., Jones, R.M., and Chown, E. (2003). Behaviors that emerge from emotion and cognition: Implementation and evaluation of a symbolic-connectionist architecture. In Proceedings of the Second International Joint Conference on Autonomous Agents and Multiagent Systems (pp. 321–328), Melbourne, Australia.
Hill, R., Chen, J., Gratch, J., Rosenbloom, P., and Tambe, M. (1998). Soar-RWA: Planning, teamwork, and intelligent behavior for synthetic rotary-wing aircraft. In Proceedings of the 7th Conference on Computer Generated Forces and Behavioral Representation, Orlando, FL: Simulation Interoperability Standards Organization.
Hille, K. (1999). Artificial emotions: Angry and sad, happy and anxious behaviour. In Proceedings of ICONIP/ANZIIS/ANNES Workshop and Expo: Future Directions for Intelligent Systems and Information Sciences, Dunedin, New Zealand, November 22–23, University of Otago.
Hudlicka, E. (1998). Modeling emotion in symbolic cognitive architectures. In Proceedings from AAAI Fall Symposium: Emotional and Intelligent: The Tangled Knot of Cognition. (Technical Report #SS-98-02.) Menlo Park, CA: AAAI Press.
Hudlicka, E. (2002a). Increasing SIA architecture realism by modeling and adapting to affect and personality. In A.H. Dautenhahn, L. Bond, and B.E. Canamero (Eds.), Multiagent systems, artificial societies, and simulated organizations. Dordrecht, The Netherlands: Kluwer Academic.
Hudlicka, E. (2002b). This time with feeling: Integrated model of trait and state effects on cognition and behavior. Applied AI, , 1–31.
Hudlicka, E. (2003a). Modeling effects of behavior moderators on performance: Evaluation of the MAMID methodology and architecture. In Proceedings of the 2003 Conference on Behavior Representation in Modeling and Simulation (BRIMS), Scottsdale, AZ.
Hudlicka, E. (2003b). Personality and cultural factors in gaming environments. In Proceedings of the Workshop on Cultural and Personality Factors in Military Gaming, Alexandria, VA: Defense Modeling and Simulation Office.
Hudlicka, E. (2005). The rationality of emotion . . . and the emotionality of reason. Presented at the MICS Symposium, March 4–6, Saratoga Springs, NY. Available: http://www.cogsci.rpi.edu/cogworks/IMoCS/talks/Hudlicka.ppt#479,22,AffectAppraisal [accessed April 2008].

Hudlicka, E. (2006a). Depth of feelings: Alternatives for modeling affect in user models. Presented at the 9th International Conference, TSD 2006, September, Brno, Czech Republic.
Hudlicka, E. (2006b). Summary of factors influencing decision-making and behavior. Psychometrix report #0. Blacksburg, VA: Psychometrix Associates.
Hudlicka, E. (2007a). Guidelines for modeling affect in cognitive architectures. Psychometrix report #00. Blacksburg, VA: Psychometrix Associates.
Hudlicka, E. (2007b). Reasons for emotions. In W. Gray (Ed.), Advances in cognitive models and cognitive architectures. New York: Oxford University Press.
Hudlicka, E. (2008). What are we modeling when we model emotion? In Proceedings of the AAAI Spring Symposium—Emotion, Personality, and Social Behavior. (Technical Report #SS-08-04.) Menlo Park, CA: AAAI Press.
Hudlicka, E. (in preparation). Affective computing: Theory, methods, and applications. Boca Raton, FL: Taylor and Francis/CRC Press.
Hudlicka, E., and Canamero, L. (2004). Preface: Architectures for modeling emotion. Presented at the AAAI Spring Symposium, Palo Alto, CA: AAAI Press, Stanford University.
Hudlicka, E., and Fellous, J.-M. (1996). Review of computational models of emotion. Arlington, MA: Psychometrix Associates.
Hudlicka, E., and Zacharias, G. (2005). Requirements and approaches for modeling individuals within organizational simulations. In W.B. Rouse and K.R. Boff (Eds.), Organizational simulation (pp. 79–138). Hoboken, NJ: John Wiley & Sons.
Hudlicka, E., Adams, M.J., and Feehrer, C.E. (1992). Computational cognitive models: Phase I. BBN report 5. Cambridge, MA: BBN Technologies.
Izard, C.E. (1993). Four systems for emotion activation: Cognitive and noncognitive processes. Psychological Review, 100(1), 68–90.
Jones, R.M., Laird, J.E., Nielsen, P.E., Coulter, K.J., Kenny, P., and Koss, F.V. (1999). Automated intelligent pilots for combat flight simulation. AI Magazine, 20(1), 27–41.
Kieras, D.E., Wood, S.D., and Meyer, D.E. (1997). Predictive engineering models based on the EPIC architecture for a multimodal high-performance human-computer interaction task. Transactions on Computer-Human Interaction, 4(3), 230–275.
Klein, G.A. (1997). The recognition-primed decision (RPD) model: Looking back, looking forward. In C. Zsambok and G. Klein (Eds.), Naturalistic decision making. Mahwah, NJ: Lawrence Erlbaum Associates.
Laird, J.E. (2000, March). It knows what you're going to do: Adding anticipation to a Quakebot. (AAAI 2000 Spring Symposium Series: Artificial Intelligence and Interactive Entertainment, Technical Report #SS-00-02.) Palo Alto, CA: AAAI Press, Stanford University.
Langley, P., and Choi, D. (2006). A unified cognitive architecture for physical agents. In Proceedings of the Twenty-First National Conference on Artificial Intelligence, Boston: AAAI Press. Available: http://cll.stanford.edu/~langley/papers/icarus.aaai06.pdf [accessed April 2008].
Laughery, K.R., Jr., and Corker, K. (1997). Computer modeling and simulation of human/system performance. In G. Salvendy (Ed.), Handbook of human factors and ergonomics, second edition (pp. 1375–1408). Hoboken, NJ: John Wiley & Sons.
Lazarus, R.S. (1984). On the primacy of cognition. American Psychologist, (2), 124–129.
LeDoux, J. (1998). Fear and the brain: Where have we been, and where are we going? Biological Psychiatry, 44(12), 1229–1238.
Lewis, M., and Haviland-Jones, J.M. (2000). Handbook of emotions, second edition. New York: Guilford Press.
Liebowitz, J. (1997). Worldwide perspectives and trends in expert systems: An analysis based on the three world congresses on expert systems. AI Magazine, (2), 115–119.

Lisetti, C.L., and Gmytrasiewicz, P. (2002). Can a rational agent afford to be affectless? A formal approach. Applied Artificial Intelligence, , 577–609.
MacMillan, J. (2007). Technical briefing: Modeling the group. Available: http://www.tsjonline.com/story.php?F=2724207 [accessed Feb. 2008].
Marsh, C. (1988). The ISA expert system: A prototype system for failure diagnosis on the space station. In Proceedings of the 1st International Conference on Industrial and Engineering Applications of Artificial Intelligence and Expert Systems, volume 1. New York: ACM Press.
Marsh, C. (1999). The F-16 maintenance skills tutor. The Edge (MITRE Newsletter), (1).
Martínez, J., Gomes, C., and Linderman, R. (2005). Workshop on research directions in architectures and systems for cognitive processing. Organized by the Computer Systems Laboratory, Intelligent Information Systems Institute, on behalf of the Air Force Research Laboratory, July 14–15, Cornell University, Ithaca, NY.
Martinho, C., Machado, I., and Paiva, A. (2000). Affective interactions: Towards a new generation of affective interfaces. New York: Springer Verlag.
Mathews, R.B. (2006). The People and Landscape Model (PALM): An agent-based spatial model of livelihood generation and resource flows in rural households and their environment. Ecological Modelling, 4.
Mellers, B.A., Schwartz, A., and Cooke, A.D.J. (1998). Judgment and decision making. Annual Review of Psychology, 49, 447–477.
Morrison, J.E. (2003). A review of computer-based human behavior representations and their relation to military simulations. (IDA Paper P-3845.) Alexandria, VA: Institute for Defense Analyses.
Myerson, R.B. (1999). Nash equilibrium and the history of economic theory. Journal of Economic Literature, (3), 1067–1082.
National Research Council. (1998). Modeling human and organizational behavior: Application to military simulations. Washington, DC: National Academy Press.
National Research Council. (1999). Funding a revolution: Government support for computing research. Committee on Innovations in Computing and Communications: Lessons from History. Computer Science and Telecommunications Board, Commission on Physical Sciences, Mathematics, and Applications. Washington, DC: National Academy Press.
National Research Council. (2003). The role of experimentation in building future Naval forces. Committee for the Role of Experimentation in Building Future Naval Forces, Naval Studies Board, Division on Engineering and Physical Sciences. Washington, DC: The National Academies Press.
Nawab, S.H., Wotiz, R., and De Luca, C.J. (2004). Improved resolution of pulse superpositions in a knowledge-based system for EMG decomposition. In Proceedings of the 26th Annual International Conference of the IEEE EMBS (September, pp. 69–71), San Francisco, CA. Available: http://ieeexplore.ieee.org/iel5/9639/30462/01403092.pdf?isNumber= [accessed Feb. 2008].
Newell, A. (1990). Unified theories of cognition. Cambridge, MA: Harvard University Press.
Norman, D.A. (1981). Steps towards a cognitive engineering: System images, system friendliness, mental models. Technical report, Program in Cognitive Science, University of California, San Diego.
Nuortio, T., Kytöjoki, J., Niska, H., and Bräysy, O. (2006). Improved route planning and scheduling of waste collection and transport. Expert Systems with Applications, 30(2), 223–232.
Olson, J.R., and Olson, G.M. (1990). The growth of cognitive modeling in human-computer interaction since GOMS. Human-Computer Interaction, 5(2–3), 221–265.
Ortony, A., Clore, G.L., and Collins, A. (1988). The cognitive structure of emotions. New York: Cambridge University Press.

Ortony, A., Norman, D.A., and Revelle, W. (2005). Affect and proto-affect in effective functioning. In J.-M. Fellous and M.A. Arbib (Eds.), Who needs emotions?: The brain meets the machine (pp. 173–202). New York: Oxford University Press.
Paiva, A. (2000). Affective interactions, towards a new generation of computer interfaces. New York: Springer.
Paiva, A., Dias, J., Sobral, D., Aylett, R., Woods, S., Hall, L., and Zoll, C. (2005). Learning by feeling: Evoking empathy with synthetic characters. Applied Artificial Intelligence Journal, (3–4), 235–266.
Pandey, V., Ng, W.-K., and Lim, E.-P. (2000). Financial advisor agent in a multi-agent financial trading system. In Proceedings of the 11th International Workshop on Database and Expert Systems Applications (pp. 482–486). Available: http://ieeexplore.ieee.org/iel5/7035/18943/00875070.pdf?isNumber= [accessed Feb. 2008].
Pearl, J. (1986). Fusion, propagation, and structuring in belief networks. Artificial Intelligence, (3), 215–288.
Phelps, E.A., and LeDoux, J.E. (2005). Contributions of the amygdala to emotion processing: From animal models to human behavior. Neuron, 48(2), 175–187.
Prada, R. (2005). Teaming up humans and synthetic characters. Unpublished doctoral dissertation, UTL-IST-Technical University of Lisbon, Portugal.
Prendinger, H., and Ishizuka, M. (2003). Life-like characters: Tools, affective functions, and applications. New York: Springer.
Prendinger, H., and Ishizuka, M. (2005). Human physiology as a basis for designing and evaluating affective communication with life-like characters. IEICE Transactions on Information and Systems, E88-D(11), 2453–2460.
Puerta, A.R., Neches, R., Eriksson, H., Szekely, P., Luo, P., and Musen, M.A. (1993). Toward ontology-based frameworks for knowledge-acquisition tools. Available: http://bmir.stanford.edu/publications/view.php/toward_ontology_based_frameworks_for_knowledge_acquisition_tools [accessed Feb. 2008].
Purtee, M.D., Krusmark, M.A., Gluck, K.A., Kotte, S.A., and Lefebvre, A.T. (2003). Verbal protocol analysis for validation of UAV operator model. In Proceedings of the 25th Interservice/Industry Training, Simulation, and Education Conference (pp. 1741–1750), Orlando, FL: National Defense Industrial Association.
Raiffa, H. (1997). Decision analysis: Introductory lectures on choices under uncertainty. New York: McGraw-Hill.
Reilly, W.S.N. (2006). Modeling what happens between emotional antecedents and emotional consequents. Paper presented at Agent Construction and Emotions (ACE 2006): Modeling the Cognitive Antecedents and Consequences of Emotion Workshop, April, Vienna, Austria.
Ritter, F.E., and Avraamides, M.N. (2000). Steps towards including behavior moderators in human performance models in synthetic environments. (Technical Report #ACS-2000-1.) State College, PA: Pennsylvania State University.
Ritter, F., Avramides, M., and Councill, I. (2002). Validating changes to a cognitive architecture to more accurately model the effects of two example behavior moderators. In Proceedings of the 11th CGF Conference, Orlando, FL.
Ritter, F.E., Reifers, A.L., Klein, L.C., and Schoelles, M. (2007). Lessons from defining theories of stress. In W.D. Gray (Ed.), Integrated models of cognitive systems (IMoCS) (pp. 254–262). New York: Oxford University Press.
Ritter, F.E., Shadbolt, N.R., Elliman, D., Young, R.M., Gobet, F., and Baxter, G.D. (2003). Techniques for modeling human performance in synthetic environments: A supplementary review. Wright-Patterson Air Force Base, OH: Human Systems Information Analysis Center.
Sander, D., Grandjean, D., and Scherer, K.R. (2005). A systems approach to appraisal mechanisms in emotion. Neural Networks, 18(4), 317–352.
Scherer, K.R., Schorr, A., and Johnstone, T. (2001). Appraisal processes in emotion: Theory, methods, research. New York: Oxford University Press.
Scheutz, M. (2004). Useful roles of emotions in artificial agents: A case study from artificial life. In Proceedings of the Nineteenth National Conference on Artificial Intelligence, Sixteenth Conference on Innovative Applications of Artificial Intelligence. Cambridge, MA: AAAI Press/MIT Press.
Scheutz, M., and Schermerhorn, P. (2004). The more radical, the better: Investigating the utility of aggression in the competition among different agent kinds. Available: http://citeseer.ist.psu.edu/cache/papers/cs/30796/http:zSzzSzwww.nd.eduzSzzCz7EairolabzSzpublicationszSzscheutzschermerhorn04sab.pdf/the-more-radical-the.pdf [accessed Feb. 2008].
Scheutz, M., Schermerhorn, P., Kramer, J., and Middendorff, C. (2006). The utility of affect expression in natural language interactions in joint human-robot tasks. In Proceedings of the IEEE/ACM 1st Annual Conference on Human-Robot Interaction (HRI 2006) (pp. 226–233), Salt Lake City, UT.
Sheng, H.-M., Wang, J.-C., Huang, H.-H., and Yen, D.C. (2006). Fuzzy measure on vehicle routing problem of hospital materials. Expert Systems with Applications, 30(2), 367–377.
Sierhuis, M. (2001). Modeling and simulating work practice. BRAHMS: A multiagent modeling and simulation language for work system analysis and design (SIKS Dissertation Series No. 00-0). Unpublished doctoral dissertation, The University of Amsterdam, Amsterdam. Available: http://www.agentisolutions.com/documentation/papers/BrahmsWorkingPaper.pdf [accessed Feb. 2008].
Sierhuis, M., and Clancey, W.J. (1997). Knowledge, practice, activities and people. In Proceedings of the AAAI Spring Symposium on Artificial Intelligence in Knowledge Management, Stanford University, CA. Available: http://ksi.cpsc.ucalgary.ca/AIKM97/AIKM97Proc.html [accessed Feb. 2008].
Silverman, B.G., Bharathy, G., and Nye, B. (2007). Profiling as "politically correct" agent-based modeling of ethno-political conflict. Paper presented at the Interservice/Industry Training, Simulation and Education Conference, Orlando, FL.
Silverman, B., Johns, M., Cornwell, J., and O'Brien, K. (2006). Human behavior models for agents in simulators and games: Part I, enabling science with PMFserv. PRESENCE, 15(2), 139–162.
Simon, H.A. (1967). Motivational and emotional controls of cognition. Psychological Review, 74, 29–39.
Sloman, A. (2003). How many separately evolved emotional beasties live within us? Paper presented at the Workshop on Emotions in Humans and Artifacts, Vienna, Austria, August 1999, and to appear in R. Trappl and P. Petta (Eds.), Emotions in humans and artifacts. Cambridge, MA: MIT Press. Available: http://citeseer.ist.psu.edu/cache/papers/cs/21550/http:zSzzSzwww.cs.bham.ac.ukzSzresearchzSzcogaffzSzsloman.vienna99.pdf/sloman02how.pdf [accessed Feb. 2008].
Sloman, A., Chrisley, R., and Scheutz, M. (2005). The architectural basis of affective states and processes. In J.-M. Fellous and M.A. Arbib (Eds.), Who needs emotions?: The brain meets the robot (pp. 203–244). New York: Oxford University Press.
Smith, C.A., and Kirby, L.D. (2001). Toward delivering on the promise of appraisal theory. In K.R. Scherer, A. Schorr, and T. Johnstone (Eds.), Appraisal processes in emotion: Theory, methods, research (pp. 121–140). New York: Oxford University Press.
Stocco, A., and Fum, D. (2005). Somatic markers and memory for outcomes: Computational and experimental evidence. In Proceedings of the XIV Annual Conference of the European Society for Cognitive Psychology (ESCoP 2005), September, Leiden University.
Stylianou, A.C., Madey, G.R., and Smith, R.D. (1992). Selection criteria for expert system shells. Communications of the ACM, 35(10), 30–48.
Sun, R. (2003). Tutorial on the Clarion 5.0 architecture. Technical report, Cognitive Science Department, Rensselaer Polytechnic Institute.
Sun, R. (2005). Cognition and multi-agent interaction. New York: Cambridge University Press.
Tambe, M., Adibi, J., Al-Onaizan, Y., Erdem, A., Kaminka, G.A., Marsella, S.C., and Muslea, I. (1999). Building agent teams using an explicit teamwork model and learning. Artificial Intelligence, 110, 215–239.
Trappl, R., Petta, P., and Payr, S. (2003). Emotions in humans and artifacts. Cambridge, MA: MIT Press.
Trimble, E.G., Allwood, R.J., and Harris, F.C. (2002). Expert systems in contract management: A pilot study. Defense Technical Information OAI-PMH Repository (United States). (Accession #ADA149363.) Available: http://stinet.dtic.mil/oai/oai?verb=getRecord&metadataPrefix=html&identifier=ADA149363 [accessed April 2008].
Velásquez, J.D. (1999). An emotion-based approach to robotics. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, Kyongju, Korea.
Zacharias, G.L., Miao, A.X., Illgen, C., and Yara, J.M. (1995). SAMPLE: Situation awareness model for pilot-in-the-loop evaluation. In Proceedings of the 1st Conference on Situation Awareness in the Tactical Air Environment. Wright-Patterson Air Force Base, OH: CSERIAC.
Zachary, W., Cannon-Bowers, J., Bilazarian, P., Krecker, D., Lardieri, P., and Burns, J. (1999). The Advanced Embedded Training System (AETS): An intelligent embedded tutoring system for tactical team training. Journal of Artificial Intelligence in Education, 10, 257–277.
Zachary, W., Jones, R.M., and Taylor, G. (2002). How to communicate to users what is inside a cognitive model. In Proceedings of the Eleventh Conference on Computer-Generated Forces and Behavior Representation (pp. 375–382). Orlando, FL: UCF Institute for Simulation and Training.
Zachary, W., Santarelli, T., Ryder, J., Stokes, J., and Scolaro, D. (2001). Developing a multitasking cognitive agent using the COGNET/iGEN interactive architecture. In Proceedings of the 10th Conference on Computer Generated Forces and Behavioral Representation (pp. 79–90). Norfolk, VA: Simulation Interoperability Standards Organization (SISO).
Zadeh, L.A. (1965). Fuzzy sets. Information and Control, 8, 338–353.
Zajonc, R.B. (1984). On the primacy of affect. American Psychologist, 39(2), 117–123.
Zoll, C., Enz, S., Schaub, H., Paiva, A., and Aylett, R. (2006). Fighting bullying with the help of autonomous agents in a virtual school environment. Paper presented at the 7th International Conference on Cognitive Modeling, Trieste, Italy.