Appendix C

Candidate DIME/PMESII Modeling Paradigms

A variety of modeling formalisms could be considered for DIME/PMESII modeling efforts. We review some of them here. Table C-1 compares selected modeling techniques by tabulating them against key characteristics that ultimately determine modeling utility. In the remainder of this appendix, we define the characteristics and provide a very brief overview of each modeling formalism.

Expressivity of a modeling paradigm refers to its ability to capture and express an analyst's knowledge in terms of the constructs the paradigm offers. The expressivity of a concept graph is very high, as it keeps the phrases used by the analysts intact in the model. In contrast, a neural network model is only able to keep the input-output relationships in the model. More expressive models are better able to capture the richness of PMESII domains and are typically easier for the modeler to build, use, and understand.

The executable feature of a modeling technique refers to whether some useful information that is implicit in a model (e.g., the degree of influence of one variable on another) can be derived from the model via some kind of inferencing algorithm. A causal graph, for example, is an executable paradigm, as it offers propagation algorithms; so is a trained neural network. In contrast, the concept-mapping model does not have such an algorithm. Nonexecutable modeling techniques are useful for visualizing complex models for human understanding and analysis; executable models are useful for providing automated analysis of the models.

Reasoning of a modeling paradigm refers to the paradigm's ability to detect the direction of influence (not just connection) of one variable to
TABLE C-1  PMESII Modeling Paradigms and Their Characteristics

| Modeling Paradigm | Expressivity | Executable | Reasoning | Adaptability | Tools | Exemplary Products |
|---|---|---|---|---|---|---|
| Concept map | High | No | Forward, Backward | Medium | Free (Research); COTS | CmapTools (cmap.ihmc.us); Decision Explorer (www.banxia.com/demain.html) |
| Concept graph | Medium | No; Yes | Forward, Backward | Low | Free (Limited); GOTS | Graphviz (graphviz.org); OCCAM (www.cra.com/contract-r-d/cognitive-systems-occam.asp) |
| Social networks | Medium | Yes | Forward, Backward | Medium | GOTS | OCCAM (cra.com/contract-r-d/cognitive-systems-occam.asp) |
| Causal graph | Medium | Yes | Forward, Backward | Medium | COTS; Free (Limited) | BNet (cra.com/bnet); C4.5 (www.rulequest.com/Personal/) |
| System dynamics model | Medium | Yes | Forward, Backward | Low | Free (Research) | Ptolemy (ptolemy.berkeley.edu) |
| Neural network | Low | Yes | Forward | High | COTS; Free (Research) | NeuroSolutions (www.neurosolutions.com); Xerion (www.cs.toronto.edu/~xerion/) |
| Situation theory | Medium | Yes | Forward | Low | In-House | PRISM (www.cra.com) |
another. A belief network propagation algorithm, for example, incorporates both deductive and abductive reasoning and thus is able to detect both forward and backward influences. On the other hand, the standard back propagation neural-network modeling paradigm is limited to forward reasoning only. Different modeling tasks require different kinds of reasoning. It is sometimes useful to be able to look at a state and reason about likely future outcomes (forward reasoning). For instance, one might want to predict the likelihood of social unrest by evaluating the current social, political, and economic state of affairs. At other times it is useful to look at externally available information and diagnose the likely underlying causes (backward reasoning). For instance, one might want to reason from observed social unrest back to the likely underlying political, economic, and social causes in order to properly address the causes of the unrest. For these reasons, it is important to support both forms of reasoning with the modeling tools we provide.

Adaptability of a modeling paradigm refers to automatic adjustments by models, which are necessary to take new observations into account. It is hard to adjust the structure of a graphical model, as it is built in consultation with subject matter experts. But the strengths of relationships among a set of variables within a model (e.g., probabilities in a belief network model or activation levels within a neural model) can be adjusted based on observations without changing the structure. Models that can easily be adapted to represent new concepts and incorporate new data are generally preferable.

Tools of a modeling paradigm refers to the currently available software tools implementing the paradigm, that is, whether such a tool is commercial off the shelf (COTS), government off the shelf (GOTS), open source, or freely available for research/commercial purposes.
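The adaptability property described above, adjusting the strength of a relationship from new observations while leaving the model's structure fixed, can be sketched with a standard Beta-Bernoulli update. This is a generic Bayesian technique, not code from any tool named in Table C-1, and the function and variable names are illustrative.

```python
# Hypothetical sketch: adapting a single model parameter from observations
# while leaving the model's structure fixed. Standard Beta-Bernoulli update;
# names are illustrative, not from any cited tool.

def updated_probability(prior_alpha, prior_beta, observations):
    """Posterior mean of P(event) after a list of True/False observations."""
    successes = sum(1 for o in observations if o)
    failures = len(observations) - successes
    alpha = prior_alpha + successes
    beta = prior_beta + failures
    return alpha / (alpha + beta)

# Start from an expert-elicited uniform prior (alpha=1, beta=1), then fold
# in ten field observations, eight of which were positive.
p = updated_probability(1, 1, [True] * 8 + [False] * 2)
print(round(p, 2))  # 0.75
```

The structure (which variables influence which) stays untouched; only the numeric strength of one relationship moves toward the data, which is exactly the kind of adjustment described for belief network probabilities and neural activation levels.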
We now briefly describe the different modeling techniques shown in the table.

Concept Maps

Concept maps are a result of research into human learning and knowledge construction (Novak, 1998). In concept maps, the primary elements of knowledge are concepts, and relationships between concepts are propositions. Concept maps are a graphical two-dimensional display of concepts, connected by directed arcs encoding brief relationships (e.g., linking phrases) between pairs of concepts, forming propositions. Each concept node is labeled with a noun, adjective, or short phrase, and each edge is labeled with verbs or verb phrases describing the relation between the connected concepts. Concept maps are highly effective in quickly capturing domain knowledge along DIME/PMESII dimensions.
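The concept-map structure just described can be sketched as data: concepts as nodes, linking phrases on directed arcs, and each (concept, phrase, concept) triple read out as a proposition. The example content below is invented for illustration.

```python
# Minimal sketch of a concept map: directed arcs labeled with linking
# phrases, each triple forming a readable proposition. Content invented.

concept_map = [
    ("social unrest", "is driven by", "economic hardship"),
    ("economic hardship", "increases", "unemployment"),
]

def propositions(cmap):
    """Read each (concept, linking phrase, concept) arc as a proposition."""
    return [f"{src} {phrase} {dst}" for src, phrase, dst in cmap]

for p in propositions(concept_map):
    print(p)
# social unrest is driven by economic hardship
# economic hardship increases unemployment
```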
BEHAVIORAL MODELING AND SIMULATION

A popular tool for concept mapping is the CmapTools (Cañas et al., 2004) package developed at the Institute for Human and Machine Cognition (see http://www.ihmc.us). The package is freely available for both commercial and noncommercial use and has many advantages over using sticky notes or a more general diagramming tool (e.g., it can record the entire mapmaking process). There are also COTS tools that can be used, such as Banxia's Decision Explorer.

Concept Graphs

Concept graphs are a formal system of logic based on the existential graphs of C.S. Peirce and on semantic networks. Concept graphs explicitly represent entities/concepts and relationships between entities as nodes in a directed graph. They are mathematically precise and computationally tractable structures with a graphic representation that is humanly readable. For this reason, concept graphs have been used in a variety of applications in computational linguistics, knowledge representation, information retrieval, and database design. Their ease of use and generality make them immediately useful for modeling a wide variety of domains, including PMESII domains. Figure C-1 is an example concept graph encoding a generic behavioral model of a terrorist leader.

[FIGURE C-1  Concept graph model for terrorist leader behavior. Nodes include Leader X (attributes: Aggressive, Diplomatic, Quick to Anger), who leads Terrorist Group A and causes Imminent Attack (attributes: Use of Threatening Phrases, Calling for Jihad, Inviting Suicide Bombers).]

Social Networks

Social networks are similar to concept graphs, but they represent social structures. The nodes of the social network typically represent individuals
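A concept graph of the kind shown in Figure C-1 can be sketched as a set of labeled directed edges. A real system would layer formal semantics on top (Table C-1 names Graphviz and OCCAM as tools); here we show only the data structure and a simple neighborhood query, with edge content adapted from the figure.

```python
# Hedged sketch of a concept graph: entities and labeled relationships
# as a directed graph of (source, label, target) triples. Content adapted
# from Figure C-1; query function is illustrative.

edges = [
    ("Leader X", "leads", "Terrorist Group A"),
    ("Leader X", "attr", "Aggressive"),
    ("Leader X", "attr", "Quick to Anger"),
    ("Leader X", "causes", "Imminent Attack"),
    ("Imminent Attack", "attr", "Calling for Jihad"),
]

def related(graph, node, label):
    """All targets reachable from `node` via an edge with `label`."""
    return [dst for src, lbl, dst in graph if src == node and lbl == label]

print(related(edges, "Leader X", "attr"))
# ['Aggressive', 'Quick to Anger']
```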
and the links between them represent social relationships. Social network analysis (SNA) provides tools for reasoning about social networks, their strengths and weaknesses, the structural roles played by particular individuals, and their dynamics over time. Because of this focus on the analysis of social structures, SNA is directly applicable to a range of PMESII modeling tasks.

SNA tools can be extended in a number of directions. For example, one can build on traditional SNA functionality by providing additional representational and analytic power by having nodes represent not only individuals but also arbitrary entities, especially including groups. Links can be similarly extended to represent not only individual-to-individual relationships, but also individual-to-group relationships (e.g., member-of) and group-to-group relationships (e.g., rival political party). By providing built-in Bayesian and rule-based reasoning capabilities, one could enable automated analysis of the graph. For instance, a Bayesian network might represent that members of a group have a high probability of holding views that are promoted by that group, where the group, the individual, and the ideology are all represented in the network as nodes with appropriate links between them. In this case, an enhanced SNA tool could automatically create a new believes link between the individual and the ideology and annotate it with a particular probability.

Causal Graphs

A causal graph (e.g., a belief network) (Jensen, 1996) is a graphical, probabilistic knowledge representation of a collection of variables describing some domain. The strength of causal graphs is their ability to represent both the causal structure of a domain and the probabilistic elements of those causal relationships (X causes Y with some probability), thus enabling the modeling of both qualitative and quantitative details of the model.
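A minimal two-variable belief network makes the forward (causal) and backward (diagnostic) reasoning concrete. The probabilities below are invented for illustration, and real work would use a dedicated tool (e.g., the Bayesian packages discussed later) rather than inference by hand; this is only a sketch of the arithmetic.

```python
# Two-variable belief network: unrest (cause) -> observed protest (effect).
# All probabilities are invented for illustration.

p_unrest = 0.2                   # prior P(social unrest)
p_protest_given_unrest = 0.9     # P(protest observed | unrest)
p_protest_given_no_unrest = 0.1  # P(protest observed | no unrest)

# Forward (causal) reasoning: given the cause, how likely is the effect?
p_protest = (p_unrest * p_protest_given_unrest
             + (1 - p_unrest) * p_protest_given_no_unrest)

# Backward (diagnostic) reasoning via Bayes' rule: given an observed
# protest, how likely is the underlying unrest?
p_unrest_given_protest = p_unrest * p_protest_given_unrest / p_protest

print(round(p_protest, 2), round(p_unrest_given_protest, 2))  # 0.26 0.69
```

Note how the same two conditional probabilities support both directions of inference, which is exactly the property that distinguishes causal graphs from forward-only paradigms such as standard neural networks.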
In addition, the ability of causal graphs to handle both forward (causal) reasoning and backward (diagnostic or abductive) reasoning makes them especially well suited to domains with many sources of data, some of which are uncertain, unreliable, or potentially missing. Many PMESII modeling problems fall within such a scope.

Influence diagrams are a specialization of causal networks, augmented with decision variables and utility functions to solve decision problems. Decision trees are specialized influence diagrams that help to choose between options by projecting likely outcomes as utilities. Such extensions to causal graphs make it possible to also reason about the costs and benefits of possible decisions. This functionality can be used both to support intelligent decision making and to model likely decisions on the part of the entities being modeled.
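The decision-tree idea above, choosing between options by projecting likely outcomes as utilities, reduces to an expected-utility calculation. Action names, probabilities, and utilities below are all invented for illustration.

```python
# Sketch of decision-tree evaluation: pick the action whose probability-
# weighted outcomes yield the highest expected utility. Values invented.

actions = {
    # action: list of (probability, utility) outcome pairs
    "deploy aid": [(0.75, 40), (0.25, -8)],
    "do nothing": [(0.5, 0), (0.5, -30)],
}

def expected_utility(outcomes):
    """Probability-weighted sum of outcome utilities."""
    return sum(p * u for p, u in outcomes)

best = max(actions, key=lambda a: expected_utility(actions[a]))
print(best, expected_utility(actions[best]))  # deploy aid 28.0
```

An influence diagram tool automates exactly this fold over a much larger graph of chance, decision, and utility nodes.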
Bayesian reasoning tools, such as those provided by Microsoft (MSBN; see http://research.microsoft.com/research/dtg/#bayesian), Norsys (Netica; see http://www.norsys.com/index.html), and Charles River Analytics (BNet; see http://www.cra.com), can support construction of and reasoning with causal graphs. There are also other existing COTS solutions for modeling influence diagrams and decision trees, such as C4.5 (see http://www.rulequest.com/Personal/).

System Dynamics Models

As described in Chapter 4, system dynamics models, such as the Stabilization and Reconstruction Operations Model (SROM) (Robbins et al., 2005), can be used to analyze the organizational hierarchy, dependencies, interdependencies, exogenous drivers, strengths, and weaknesses of a country's PMESII systems to enable more efficient resource expenditure. SROM models PMESII systems at the national and regional levels, including the interactions between regions. It also takes into account demographic data, insurgent and coalition military, critical infrastructure, law enforcement, indigenous security institutions, and public opinion.

The SROM models developed by the AFRL/IF NOEM group were built using the Ptolemy heterogeneous modeling software (see http://ptolemy.berkeley.edu), which is developed and supported by the Electrical Engineering and Computer Science department of the University of California, Berkeley. While developed primarily for modeling of real-time embedded systems, its heterogeneous processing model makes it an effective tool for integrating a variety of data processing algorithms.

Neural Networks

A neural network is a nonlinear information-processing paradigm that models complex systems with a large number of highly interconnected processing elements (a.k.a. neurons or nodes), arranged in multiple layers, working in unison to solve specific problems.
Neural networks offer some of the most versatile ways of mapping or classifying a nonlinear process or relationship. Neural networks have been successfully used in diverse applications, such as recognition of speakers in communications, diagnosis of hepatitis, recovery of telecommunications from faulty software, interpretation of multi-meaning Chinese words, undersea mine detection, texture analysis, three-dimensional object recognition, handwritten word recognition, and facial recognition. Neural networks would be useful in building PMESII models for those domains that have highly complex nonlinear relationships between input and output variables.
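A toy feedforward pass makes the paradigm concrete: layers of simple units whose weighted sums pass through a nonlinearity. The weights here are fixed by hand purely for illustration; a real network would learn them from data (e.g., by back propagation) using a dedicated toolkit.

```python
# Toy feedforward neural network: two inputs -> two hidden units -> one
# output unit, with a sigmoid nonlinearity. Weights are hand-picked for
# illustration, not learned.

import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def layer(inputs, weights, biases):
    """One fully connected layer: weighted sum plus bias, then sigmoid."""
    return [sigmoid(sum(w * x for w, x in zip(ws, inputs)) + b)
            for ws, b in zip(weights, biases)]

hidden = layer([1.0, 0.0],
               weights=[[2.0, -1.0], [-1.5, 1.0]],
               biases=[0.0, 0.5])
output = layer(hidden, weights=[[1.0, -1.0]], biases=[0.0])
print(output)  # a single value between 0 and 1
```

Note that information flows only from inputs to outputs here, which is why the table lists standard neural networks as supporting forward reasoning only.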
A large number of neural network construction kits and runtime engines exist, including the Xerion tool from the University of Toronto (see http://www.cs.toronto.edu/~xerion/) and the NeuroSolutions tools from NeuroSolutions (see http://www.nd.com/products/nsv3.htm).

Situation Theory

Situation theory models information processing and flow, that is, how an agent extracts information from the world and how that information is subsequently transferred between agents. Situation theory provides a paradigm for describing the world, an ontology for representing it, and a suite of inferences for reasoning about it. Situation theory is unique in that it places situations alongside individuals, relations, and locations as first-class members of its ontology. Situations provide partial descriptions of the world in terms of the features individuated by some agent. They are defined in terms of the relationships they support; that is, they represent relationships between relationships. Situations provide a powerful representation of complex events spread over both space and time and therefore serve as a natural representation for a variety of PMESII models. Situation theory has been applied to a variety of fields, including natural language understanding (Barwise and Perry, 1983), information visualization (Lewis, 1991), cooperative social interaction (Devlin and Rosenberg, 1991), and both Level 2 (Steinberg and Bowman, 2004) and Level 3 (Steinberg, 2005) data fusion.

References

Barwise, J., and Perry, J. (1983). Situations and attitudes. Cambridge, MA: MIT Press.
Cañas, A., Hill, G., Carff, R., Suri, N., Lott, J., Eskridge, T.C., Gómez, G., Arroyo, M., and Carvajal, R. (2004). CmapTools: A knowledge modeling and sharing environment. In Proceedings of the First International Conference on Concept Mapping (pp. 125-133). Pamplona, Spain: Universidad Pública de Navarra.
Devlin, K., and Rosenberg, D. (1991). Situation theory and cooperative action.
In Proceedings of the Third Conference on Situation Theory and Its Applications, Oiso, Japan.
Jensen, F.V. (1996). Bayesian networks basics. AISB Quarterly, 94, 9-22.
Lewis, C.M. (1991). Visualization and situations. In J. Barwise, J.M. Gawron, G. Plotkin, and S. Tutiya (Eds.), Situation theory and its applications II (pp. 553-580). Stanford, CA: CSLI.
Novak, J.D. (1998). Learning, creating, and using knowledge: Concept maps as facilitative tools in schools and corporations. Mahwah, NJ: Lawrence Erlbaum Associates.
Robbins, M., Deckro, R.F., and Wiley, V.D. (2005). Stabilization and reconstruction operations model (SROM). Presented at the Center for Multisource Information Fusion Fourth Workshop on Critical Issues in Information Fusion: The Role of Higher Level Information Fusion Systems Across the Services, University of Buffalo. Available: http://www.infofusion.buffalo.edu/ [accessed Feb. 2008].
Steinberg, A. (2005). An approach to threat assessment. In Proceedings of the Eighth International Conference on Information Fusion, Volume 2, Philadelphia, PA. Available: http://ieeexplore.ieee.org/iel5/10604/33513/01592001.pdf?isNumber= [accessed Feb. 2008].
Steinberg, A., and Bowman, C. (2004). Rethinking the JDL data fusion levels. In Proceedings of the National Symposium on Sensor and Data Fusion. Available: http://www.infofusion.buffalo.edu/tm/Dr.Llinas'stuff/Rethinking%20JDL%20Data%20Fusion%20Levels_BowmanSteinberg.pdf [accessed Feb. 2008].