Defense Modeling, Simulation, and Analysis: Meeting the Challenge

B

Social Behavioral Modeling

Kathleen Carley

A behavioral model is a model of human activity in which individual or group behaviors are derived from the psychological or social aspects of humans. Behavioral models include a diversity of approaches. The computational approaches important from the DoD perspective are social network models and multiagent systems.

An important caveat has to do with cognitive models. Cognitive models focus on the way in which cognition works, including aspects such as information gathering, processing, and utilization. Developers often build these models using general cognitive frameworks such as Brahms, Soar, ACT-R, or various neural network platforms. Such models have traditionally focused on the individual in isolation and have rarely been used to model the socially dependent aspects of behavior that occur when multiple individuals are together. The key exceptions here are the cognitive multiagent systems (described later). Cognitive modeling has been used to address a number of DoD concerns, ranging from in-depth models of specific foreign leaders to detailed models of a human for use in evaluating various weapon systems or communication tools.

Social network models focus on the way in which relations among actors, such as who knows whom, constrain and enable access to information and behavior and serve as a basis for power and prestige. Multiagent systems focus on the way in which social behavior emerges from the actions of heterogeneous agents. Additional features, applications, and the state of the art will be described for each of these approaches in turn.

SOCIAL NETWORK MODELS

Social network analysis (SNA) is a computer-supported form of statistical analysis, derived from graph theory, that focuses on relational data (connections among nodes) rather than attribute data (features of nodes).
Social network models are simply models of social behavior that take such relations into account. They may be realized as computational multiagent models, mathematical models, regression models, or conceptual models. Rather than detailing these models, the focus here is on SNA and the way in which such analysis and such considerations influence models.

SNA has received a great deal of attention since 9/11. As a result, many companies and individuals are claiming that they have expertise in the area even when they have no formal training or background. Catch phrases for fighting terrorism—“Disconnect the dots” and “It takes a network to fight a network”—and for doing business—“It’s not who you know but who or what who you know knows” and “Are you networking?”—have appealed to our imagination and raised awareness of SNA. Further, there have been successful applications of this approach. For example, social network information was used to locate Saddam Hussein, and several of the tools have been used in various criminal investigations. Students use social network information in Friendster to vet their dates.

The Nature of SNA Models

“Social network analysis” is a common term of art used to capture the different types of analyses done in three areas: traditional social network analysis, link analysis, and dynamic network analysis. These areas vary in the number of node types (multimode) and link types (multilink) in the network and in the scientific traditions out of which they emerged. These and other differences are summarized in Table B.1.

Traditional social network analysis centers on relatively simple networks. A typical social network analysis works with a single network connecting people to people by some relationship (perhaps they work together). Analysts in this area primarily use computational techniques to statistically analyze these networks. This area has a long tradition predating World War II.
It emerged from the social sciences, particularly from anthropology and sociology, and has now spread to organization science, economics, physics, and computer science. Much of the work in this area has focused
TABLE B.1 Feature Comparison

Feature | Social Network | Link Analysis | Dynamic Network Analysis
Entity studied | The network | A set of links | Either the network or a set of links
Multilink | One or two links | Many links | One or many links
Multimode | One or two modes | Many modes | One or many modes
Focus | Identify key actors and groups | Anomaly detection | Identify key actors and groups
Networks evolve? | No | No | Yes
Locates network elite? | Yes | No | Yes
Locates patterns of behavior? | No | Yes | No
Locates patterns across networks? | No | Needs work | Needs work
What evolves? | Nothing | Nothing | Agents, groups, and networks
Predicts and assesses individual behavior | Few behaviors | Many behaviors | Many behaviors
Predicts and assesses group behavior | Few behaviors | No | Few behaviors
Handles missing information? | No | Needs work | Needs work
Optimized search? | No | Yes | Sometimes
Locates groups? | Yes | Yes | Yes
Analysis of change | Qualitative | Assumes the future is the same as the past | Quantitative
Handles streaming data | No | Needs work | Needs work

on characterizing the size and shape (topology) of the underlying networks, identifying who stands out (which individuals, because of their relations to others, occupy key positions in the network), and how the structure of the network or an individual’s position within it influences behavior. There are numerous SNA computational tools, ranging from network visualizers to packages for analyzing network data, and new ones appear daily.

Link analysis centers on discovering patterns by looking at the relations among entities. Analysts in this area use computational techniques to locate patterns and subgroups. This area has emerged largely from computer science, with particular attention to work in machine learning. Some of the roots in this area are in forensics. Extraction of links often requires massive data preprocessing or restructuring of databases (Goldberg and Wong, 1998).
Advanced data-processing techniques are combined with machine learning to enable rapid database transformation and pattern extraction. Much of the work in this area has focused on the identification and recognition of patterns, data mining, and node identification. There are a growing number of tools, many of which are available on the Web. Common tools exist for doing a variety of tasks, including extracting links from databases (Goldberg and Senator, 1998) and texts (Lee, 1998) and analyzing the extracted links (Chen and Lynch, 1992; Hauck et al., 2002).

Dynamic network analysis (DNA) is an emergent field centered on the collection, analysis, understanding, and prediction of dynamic relations (such as who talks to whom) and the impact of such dynamics on the behavior of individuals or collectives (Carley, 2003). Analysts combine computational techniques, such as machine learning and artificial intelligence, with traditional graph and social network theory and with empirical research on human behavior, groups, organizations, and societies to develop and test tools and theories of relationally enabled and constrained action. This area builds on social network analysis and link analysis and adds computer simulation to the mix to look at network evolution. There are a growing number of DNA tools, some of which embody most of the SNA techniques.
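To make the notion of identifying who stands out concrete, the sketch below computes two standard SNA metrics, normalized degree centrality and shortest-path betweenness centrality, on a small hypothetical network. The network, node names, and pure-Python implementation are illustrative only and are not drawn from the chapter:

```python
from collections import deque

# Hypothetical barbell network: two triangles joined through the broker H.
edges = [("A", "B"), ("A", "C"), ("B", "C"),
         ("C", "H"), ("H", "E"),
         ("E", "F"), ("E", "G"), ("F", "G")]

adj = {}
for u, v in edges:
    adj.setdefault(u, set()).add(v)
    adj.setdefault(v, set()).add(u)
n = len(adj)

# Normalized degree centrality: ties held, divided by the maximum possible (n - 1).
degree = {v: len(adj[v]) / (n - 1) for v in adj}

def betweenness(adj, n):
    """Brandes-style accumulation of shortest-path betweenness (normalized)."""
    bc = dict.fromkeys(adj, 0.0)
    for s in adj:
        sigma = dict.fromkeys(adj, 0)   # number of shortest s-v paths
        sigma[s] = 1
        dist = {s: 0}
        preds = {v: [] for v in adj}
        order = []
        q = deque([s])
        while q:                        # BFS from source s
            v = q.popleft()
            order.append(v)
            for w in adj[v]:
                if w not in dist:
                    dist[w] = dist[v] + 1
                    q.append(w)
                if dist[w] == dist[v] + 1:
                    sigma[w] += sigma[v]
                    preds[w].append(v)
        delta = dict.fromkeys(adj, 0.0)
        for w in reversed(order):       # back-propagate pair dependencies
            for v in preds[w]:
                delta[v] += sigma[v] / sigma[w] * (1 + delta[w])
            if w != s:
                bc[w] += delta[w]
    # Undirected graph: each pair is counted twice, so the maximum possible
    # accumulated value is (n - 1)(n - 2); dividing normalizes to [0, 1].
    return {v: b / ((n - 1) * (n - 2)) for v, b in bc.items()}

bc = betweenness(adj, n)
# H holds only two ties, yet every cross-triangle path runs through it,
# so it tops betweenness while C (and E) top degree.
print(max(degree, key=degree.get), max(bc, key=bc.get))  # C H
```

The contrast between the two rankings is the point: degree rewards local popularity, while betweenness rewards brokerage positions of the kind network-elite analyses look for.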
Application Areas

Essentially any problem in which there are relational data—data about whether entities of one type relate to other entities of the same or different type—can be addressed using network analysis. Social network analysis, link analysis, and dynamic network analysis all have their own strengths. Factors that determine the effectiveness of one or the other type of analysis include the scale of the underlying network, the completeness of the existing data, and the level or types of errors in the data.

Social network analysis is particularly useful for understanding the connections among political elites, identifying groups and cliques in organizations, understanding the flow of information, understanding disease propagation in a group, or identifying the elite or the isolates. Grouping algorithms are useful for breaking a large network into a set of subnetworks such that the members are either tightly connected to each other (e.g., they form a clique) or are similar (e.g., connect to the same others even if not to each other). Applications are generally done at the individual level—person to person. However, the same tools can be, and have been, applied to organizations. In general, for most measures, the assumption is that the user has complete or almost complete data, that the connections are all of the same type (e.g., who lends money to whom), and that the nodes are all of the same type (e.g., they are all people).

Link analysis is particularly useful for identifying anomalous patterns. Typical applications are locating money laundering profiles or other sequences of activities associated with specific crimes and locating groups of people who have special relations to each other.

Dynamic network analysis has a variety of applications over and above those afforded by social network analysis.
These derive from the fact that some networks can be co-analyzed, such as social and knowledge networks. As such, the tools can be used to assess organizational health and adaptability, to assess whether the movement of personnel between ships might reflect movement of information among owners, to locate emergent groups in terms of both who talks to whom and what they are talking about, to identify points of influence, and so on. Moreover, the simulation component facilitates assessing network evolution and evaluating courses of action designed to alter networks. Illustrative applications include disease spread, change in beliefs, assessment of various isolating or information-spreading courses of action, team design and assessment, identification of points of influence, and location of emergent subgroups.

State of the Art

There are a large number of computational tools within network analysis. These have been developed in the United States and Europe, at universities and by private companies. Few of these tools are interoperable. Some of the basic measures are available as open source, and algorithms for most measures have been published, although not collected into a single compendium. In general, network analysis tools are increasingly widely used. Most metrics have received some level of validation or verification. Many of the underlying algorithms have been optimized (although not all tools contain the optimized algorithms). In addition, there is a rapidly growing body of basic and applied research. From a defense standpoint it is important to note that there is a rapidly growing body of applications on both red and blue force assessment. There is a wealth of information on the interpretation limits of the various measures. However, this information has not been systematically collected and organized. There are technical references; however, they are out of date (e.g., Wasserman and Faust, 1994). There is no excellent undergraduate textbook.
Knowledge needs to be acquired through classes at universities, special short courses offered by key practitioners to industry, didactic seminars at major conferences, and articles in key journals. In addition, practitioners and researchers in each of the three areas tend to attend different core conferences and read different journals.

In general, in traditional social network analysis, the most advanced tools are for binary and summarized data. Across this area the visualization tools are in their infancy, with the most advanced such tools handling networks of fewer than 200 nodes. Many of the link analysis tools use hidden Markov models or Bayesian updating, both of which are well-understood techniques. The most advanced DNA tools build directly on SNA and/or link analysis. Across the board, most tools stand alone and are not Web-enabled. Only a few tools are relatively easy to integrate into other systems. Only a few of the existing tools have been tested and optimized for large-scale networks (at least 10^6 nodes). Issues of measure robustness and sensitivity to missing or erroneous data are currently being addressed by several active research programs.

Key Limitations

There are a number of technological and practical limitations. In many cases, however, there is ongoing research to overcome them. One difficulty in this area is that the basic measures are so easy and the promise so high that many people and many companies are claiming expertise despite having no training in the area. Basic metrics are now in use in a wide variety of applications; however, claims about applicability and interpretation of results are often inappropriate, there is a great deal of reinventing the wheel, and there are spurious claims of novelty for well-understood approaches.

One of the key limitations in all of these areas is visualization, which is in its infancy. There are many tools for representing graphs, each with its own unique features.
In general, most of the visualization techniques do not scale well for networks with more than 200-300 nodes. Visualization is often used for interpreting the results of the network analysis. The problem here is that exactly the same network laid out in two different ways is likely to be interpreted in two different ways by the person examining the picture. This is particularly true when the interpretation is done by a novice.

Interpretation of network metrics is also difficult. Even when metrics are normalized, there is little readily available information to tell how important a particular score is. What most analysts do is use the numbers in a relative fashion, comparing within the same data set whether a node is higher or lower than another node on some metric or comparing the metrics related to two data sets. However, there are no absolute guidelines. Nor are there compendiums of typical values and ranges for common social networks.

Most of the standard graph-theoretic metrics scale as N and a few as N^2, where N is the number of nodes. There is a normalized form for most metrics. Thus, most metrics can be used easily on graphs of varying sizes. There are a few metrics, however, that have been touted as critical, such as the “betweenness centrality” measure, that do not scale well. To achieve greater speeds for these metrics a special-purpose graph metric chip might be needed. Most clustering algorithms and pattern-location algorithms also scale at best as N^2 or N^3. These algorithms are still fairly new, and research is under way to improve the scalability of the algorithms.

In general, these tools are data-greedy. There are no standard techniques for estimating missing data or the size of the network or for dealing with erroneous data. Within link analysis, there are insufficient techniques for reducing the amount of data needed for robust learning in machine-learning pattern-location algorithms. Currently, however, there is ongoing research on the robustness and sensitivity of the underlying algorithms and metrics.
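The relative use of metrics described above can be as simple as standardizing scores within a single data set. The sketch below, with made-up node names and centrality values, ranks nodes and expresses each score as a distance from that network's own mean rather than against any absolute threshold:

```python
from statistics import mean, stdev

# Hypothetical normalized centrality scores for one data set; the names and
# numbers are illustrative only.
scores = {"alice": 0.82, "bob": 0.41, "carol": 0.44, "dave": 0.12, "eve": 0.31}

# With no absolute guidelines available, standardize within the data set:
# a z-score says how far a node sits from this network's own average.
mu, sigma = mean(scores.values()), stdev(scores.values())
z = {v: (s - mu) / sigma for v, s in scores.items()}

# Comparisons are then made in a relative fashion, node against node.
ranking = sorted(scores, key=scores.get, reverse=True)
print(ranking[0], "is", round(z[ranking[0]], 2), "standard deviations above the mean")
```

The same standardization applied to two data sets supports cross-network comparison of ranks, even though the raw scores themselves carry no agreed absolute meaning.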
A common problem with all of the group location algorithms is that they locate a set of discrete groups; that is, nodes can only be in one group. This is true for techniques using some form of clustering or blocking in social network analysis and for pattern-location algorithms in link analysis. Robust, scalable fuzzy group techniques that generate socially meaningful groups are needed.

Currently the tools assume that the data have been precollected and, in many cases, preprocessed into specialized forms. Analyses are typically done on historical temporal data. With the exception of some link analysis tools, very few of the computational tools can handle streaming data in an automated fashion. Ideally, these tools would be linked to live data streams and so provide updated information on networks as they change. However, for that to be feasible, extensive research is needed on (1) determination of the meaningful temporal chunks for presenting relational data, (2) algorithms for updating metrics based on new data, (3) automated tools for parsing streaming data, and (4) visualization of the dynamics.

Relational data can be collected in a variety of ways, from automated data capture of various relational data streams (e.g., transaction records), to direct observation, to questionnaires. There are two related issues. First, little work has been done on (1) network-based privacy, (2) node anonymization, and (3) de-anonymization in networks. This is likely to be a growing issue as more companies, such as search engines and e-mail vendors, increasingly provide social-network-based services. Second, little work has been done on estimating critical gaps in the data collected, which would allow data collectors to know where to focus.

Finally, from a defense perspective, network analysis tools have to take into account multimode, multilink data. The social network in isolation is of little value for evaluating courses of action.
More predictive power and more analyses are made possible as the multiplicity of modes and links increases. For example, estimates of an actor’s power require an understanding of how the individual is linked to others, to issues, to resources, and so on. This being said, key areas that need work are linking social network data (actor-to-actor) to tasks or events and locations. Neither the geotemporal aspects of networks nor their resource-task aspects are well understood.

MULTIAGENT MODELS

As previously noted, multiagent systems focus on the way in which social behavior emerges from the actions of heterogeneous agents. Multiagent systems (MASs) are computer-based simulation programs in which there are a set of actors (called agents), each of whom can take action. Overall results derived from such systems depend on the sequence of actions taken by the agents. The agents typically act in parallel but need not. The agents are typically heterogeneous, but need not be. The agents typically can learn but need not.

MASs are often described as bottom-up systems because the behavior of higher-order entities, e.g., groups or populations, is driven by actions at the agent level. This is in contrast to system dynamics models, which are often described as top-down and in which the behavior of lower-order entities, e.g., agents, is inferred from change at the top level. Both types of models fall under the rubric of complex adaptive systems—especially when learning or evolution is involved and three or more rules or equations result in nonlinear interactions among components. MASs go by a variety of names, often indicating the type of system that they are. Common names are multiagent-based systems, complex adaptive systems, agent-based systems, multiagent network systems, multiagent dynamic network systems, and cellular automata. In some cases, the name used is broad and applies to tools that are not agent-based as well.
For example, complex adaptive systems encompass a wide variety of techniques including, but not limited to, multiagent systems and system dynamics models.
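The bottom-up character of an MAS can be illustrated with a minimal sketch. The following hypothetical voter-style model (not a system described in this chapter) gives each agent a binary state that it copies from a randomly met agent; any population-level trend is observed from, rather than imposed on, the agents:

```python
import random

random.seed(7)  # fixed seed so the run is repeatable

N = 50
states = [random.randint(0, 1) for _ in range(N)]  # heterogeneous initial states

history = []
for step in range(5000):
    i, j = random.randrange(N), random.randrange(N)
    states[i] = states[j]            # local action: agent i adopts agent j's state
    total = sum(states)
    history.append(total)            # population-level outcome, merely observed
    if total in (0, N):              # absorbing consensus, if it happens to occur
        break

# No population-level equation produced this trajectory; it emerged from the
# sequence of individual actions, which is the defining bottom-up property.
print("final share of state 1:", sum(states) / N)
```

A top-down system dynamics treatment of the same question would instead write an equation for the share of state 1 directly and never represent individual agents at all.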
TABLE B.2 Common Differences Across Types of Multiagent Systems

Type of System | Number of Agents | Algorithm Type | Cognitive Sophistication | Social Sophistication | Grid Based
Multiagent cognitive | Few | Rules | High | Low | No
Multiagent dynamic network | Many | Equations + rules | Moderate | High | No
Cellular automata | Many | Equations or rules | Low | Low | Often
Multiagent rule-based system | Many | Rules | Low | Low | Yes

Types of Multiagent Systems

There are many types of MAS. It is possible to classify them in a number of ways—for example, by the number of agents, the basic type of algorithm, the cognitive sophistication of the agents, the social sophistication of the agents, and whether or not they are grid based. A few classes of systems illustrate these differences: multiagent cognitive models (such as a multiagent Brahms model); multiagent dynamic-network models (such as Construct); and cellular automata and multiagent rule-based systems (such as those in Swarm, Repast, or Mason). Table B.2 summarizes these differences.

One caveat is that within any class, the actual level of general realism depends on the degree to which the model is utilizing actual data and the detail inherent in the underlying algorithms. Another caveat is that in principle MASs are ubiquitously applicable to problems that involve two or more actors whose behavior depends, at least in part, on the behavior of the other or others. The exact area of applications depends on the type of MAS. As noted, these classification factors are the common dimensions along which MASs vary:

Number of agents. On the one hand, multiagent systems include models with between 2 and 10 very cognitively sophisticated agents performing very in-depth, knowledge-intensive tasks. In such models, interactions among agents are typically prescribed by protocols for interaction and hierarchical precedents for who does what.
Such models are more common in computer science and engineering; illustrative models are those involving Brahms or Soar. On the other hand, a MAS may be made up of thousands or millions of cognitively simpler agents doing relatively simpler tasks. In this case, interactions among agents are the result of the agents meeting and greeting each other, trying to occupy the same space, and/or exchanging or consuming resources. Such models are more common in the social and organizational sciences, biology, and physics; illustrative models are those coming out of the Santa Fe Institute or the Brookings Institution.

Algorithm type. In some MASs, the agents are systems of equations specifying the state of the agent and how it changes or learns as new information arrives. In such models, machine-learning and pattern-recognition software may be used to create adaptive agents. Some MASs of this type employ neural network technology or simulated annealing technology and so enable agents to act as heuristic-based optimizers. In other MASs the agents are a body of rules. In such models, expert-system and pattern-matching software may be used to enable dynamics. In these MASs the rules rarely change over the course of a simulation unless a heuristic optimizer controls the simulation and forces rule change through either automated subgoaling and rule construction procedures (as in Soar) or through the use of a heuristic-based optimization procedure such as a genetic algorithm.

Cognitive sophistication. In general, the cognitive sophistication of the agents is inversely proportional to the number of agents. Thus, you are likely to see an MAS with a few very sophisticated agents and an MAS with many cognitively trivial agents. When the agents are cognitively sophisticated, the cognitive model often includes features such as recognition, planning, memory, and decision-making modules.
Models are often built using a handful of actors, each built in one of the common cognitive modeling platforms such as Brahms, Soar, ACT-R, or neural nets. In such cases, the agents may have features that enable them to forget, make mistakes, and create new modes of behavior. The most cognitively sophisticated MASs are those that use an underlying cognitive modeling architecture such as Brahms or Soar. Models written directly in a high-level language such as C++ are likely to be moderately cognitively sophisticated, whereas those written in an MAS framework such as Swarm, Repast, or
Mason are typically cognitively extremely simplistic. Models that opt for large numbers of cognitively simplistic agents argue that social processes and complexity are an emergent property of interaction among large numbers of simple heterogeneous agents.

Social sophistication. Most multiagent models are extremely unsophisticated socially. That is, rarely do such models take into account the impact of sociodemographic characteristics, social networks, or interaction with social groups and organizations. Typically, in MASs with a few cognitively sophisticated agents, social factors are either ignored or prescribed in terms of a communication and command hierarchy; as such, social behavior is constant. In contrast, most MASs with millions of cognitively simple agents, particularly those built in the MAS frameworks, do not model real social networks or groups but may differentiate actors on anywhere from two to five sociodemographic dimensions. There is a new class of models, however, the multiagent dynamic-network model, in which networks such as social, knowledge, and resource networks enable and constrain interaction among agents and the networks coevolve as the agents interact.

Grid based. Most MASs with vast numbers of agents have the agents operate on a grid. There are two forms of grid-based models. In the first, each point in the grid is an agent, and the agent's health and action are a function of the health and actions of nearby agents. In the second, cells in the grid are locations through which agents move (right-left, up-down), where they consume or leave resources and interact with the agents they meet in the same or neighboring cells. The classic example of a grid-based MAS is the game Life. Today, many MASs based on grids are barely more complex than the original Life system, although modern systems use a toroid rather than a strict grid to avoid edge effects.
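A minimal version of the first kind of grid-based model, the game Life on a toroidal grid, can be sketched as follows; the grid size and the test pattern are arbitrary illustrative choices:

```python
from collections import Counter

SIZE = 8  # toroidal grid: coordinates wrap, so there are no edge effects

def step(live):
    """One synchronous Life update; `live` is the set of (row, col) live cells."""
    counts = Counter(
        ((r + dr) % SIZE, (c + dc) % SIZE)      # wrap-around neighborhood
        for (r, c) in live
        for dr in (-1, 0, 1)
        for dc in (-1, 0, 1)
        if (dr, dc) != (0, 0)
    )
    # A cell lives next step with exactly 3 live neighbors, or 2 if already alive.
    return {cell for cell, k in counts.items()
            if k == 3 or (k == 2 and cell in live)}

# A "blinker" oscillates with period 2: a vertical bar flips to horizontal and back.
blinker = {(1, 2), (2, 2), (3, 2)}
after_one = step(blinker)
after_two = step(after_one)
print(after_one == {(2, 1), (2, 2), (2, 3)}, after_two == blinker)  # True True
```

Each cell's next state depends only on its immediate neighbors, yet gliders, oscillators, and other higher-order patterns emerge, which is exactly the local-action, global-pattern property the text attributes to grid-based MASs.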
The MASs with a few cognitively sophisticated agents typically do not operate on a grid. Rather, if they need location they treat location as a variable in the rules that they use to operate. Grid-based MASs are in sharp contrast to dynamic-network MASs, in which the agents operate in an ever-changing social space where “nearness” is defined on the basis of social, cultural, political, knowledge, or task factors. In this case, if location is needed, it is often treated as proximity and is just one of many factors defining the nearness of two agents and their propensity to interact.

MAS Toolkits

There are a number of MAS toolkits currently available. These toolkits are a framework language in which to build an MAS. They facilitate system building because they already have built-in procedures for common functions such as displaying agent interaction, displaying change in variables over time, garbage collection, general input/output, and some statistical procedures. In some cases, the toolkits are made available along with sample agents. In general, from a learning perspective, such toolkits work well in the classroom because they reduce time spent on extraneous factors and let the novice quickly build a prototype system. However, from a deployment perspective, these frameworks have some drawbacks for anything but proof-of-concept systems. In general, MASs built in these tools are slow, and often better optimization can be achieved by writing in languages such as C++. The frameworks are best suited for many agents at the same level of granularity. Thus, if you want agents representing humans to interact with agents representing companies or with agents representing institutions, the basic communication, learning, and behavioral features are not available. Such multigranular models can be built in these frameworks; however, it is often more complex than building them directly in an object-oriented language.
Most of the toolkits do not have drill-down explanation facilities. Increasingly, these toolkits are making it possible to run the simulation in a Monte Carlo fashion and extract statistical properties; however, the toolkits rarely export data in a form readable by standard statistical packages, data-farming environments, or response-surface analysis tools. Some of the MAS toolkits have regular user groups, training seminars or courses, and online help. Some of the toolkits are open source, such as Repast; others are held by companies (e.g., Swarm) or universities (e.g., Mason). In general, translators from one toolkit to another or from one language (such as C++) to a toolkit do not exist. Consequently, it typically takes about 75 percent of the original development time to rebuild a system in a toolkit, assuming that the original system had moderate documentation. The concept of toolkits for MAS is a powerful one. Today, however, the extant toolkits are still in their infancy.

State-of-the-Art Applications and Limitations

The value of any simulation, including an MAS, is partly tied to the level of realism in the model. Any simulation system is a model and so should be less complex than the real world; however, oversimplification results in models that are so high level or so incorrect that the results can be overinterpreted or misinterpreted and so should not be used for policy setting and decision making. The rule of thumb is to make the model only as complicated as it needs to be to address the issue of concern at the necessary level of fidelity. In MASs, adding more rules or equations increases the realism of the system and its usefulness for decision making. Opponents often argue that the more equations or rules, the worse the model. Arguments include appeals to parsimony, Occam’s razor, or understandability.
A typical argument is that as the model increases in complexity (number of variables and rules/equations) it becomes increasingly likely that
the model can be made to fit any possible outcome. This argument derives from econometrics, where, as the number of variables approaches the number of cases, the underlying data can be completely and perfectly modeled. This argument, however, is not directly applicable to MASs. In an MAS, the addition of new rules and equations serves to increase the number of outcome, or dependent, variables that can be generated rather than, as in econometrics, the number of independent variables whose relation to dependent variables can be explained. Further, in an MAS, the rules and equations are effectively a multiple constraint set, which reduces rather than increases the number of outcomes an MAS can generate. A side product is that the addition of empirically based rules and equations often increases the plausibility of the results generated by the model by reducing the space of implausible results.

The validation of MASs is a complex issue worthy of several volumes. Rather than trying to review all aspects of validation, only a few high-level points will be made. First, most MASs are never, and probably never should be, validated. The simpler the model, the less likely that it can be meaningfully validated using techniques other than generic face validation. The level of validation required of such models depends on their purpose. If the purpose is to demonstrate a proof of concept, or that something is possible, then minimal, if any, validation is needed. Face validation typically suffices. Second, MASs are difficult to validate in full and are generally validated only within a small area of performance. A typical approach to validation is to run a virtual experiment using the MAS, take the generated data, statistically analyze the results to generate the response surface, and then contrast the response surface with real data.
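The virtual-experiment workflow just described can be sketched in miniature. Everything below, including the toy simulation, the design points, and the "observed" values, is hypothetical and stands in for a real MAS and real field data:

```python
import random
from statistics import mean

random.seed(0)

def simulate(p):
    """Stand-in for one MAS run: a noisy outcome driven by input parameter p."""
    return 2.0 * p + random.gauss(0, 0.1)

# A small slice of the input space, since only portions of the overall
# response surface can be estimated at once.
design_points = [0.1, 0.2, 0.3, 0.4]
replications = 200  # Monte Carlo replications at each design point

# Estimated response surface: mean outcome at each design point.
surface = {p: mean(simulate(p) for _ in range(replications)) for p in design_points}

# Hypothetical real-world observations over the same region.
observed = {0.1: 0.21, 0.2: 0.43, 0.3: 0.58, 0.4: 0.79}

# Contrast the simulated surface with the real data, here via root-mean-square error.
rmse = mean((surface[p] - observed[p]) ** 2 for p in design_points) ** 0.5
print("RMSE over the analyzed region:", round(rmse, 3))
```

In a real study the design points, replication count, and comparison statistic would be dictated by the policy questions of interest and by the storage and statistical-tool limits the text describes.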
Since it is easy to generate so much data that no existing statistics package can handle them, or so much that most desktops cannot store them, only small portions of the overall response surface can be estimated at once. The size of the analyzed response surface is often dictated by the user's interests, critical policy or decision-making questions, the storage capacity of the machine doing the analysis, the data capacity of the statistical tool, and the time it takes the simulation to produce the necessary data.

Third, MASs are difficult to tune and validate because changes in one part of the system have unforeseen effects on other parts. As noted, an MAS can be thought of as a set of mutually constraining and interacting forces; a change in one component often necessitates revalidating components that were validated or tuned earlier. For some systems, intelligent software is needed to carry out the validation of the MAS.

Finally, MASs are often difficult to validate because the necessary real-world data may not be available. Linking MASs to real data increases both their realism and their value to DoD. Most groups that build MASs have contrasted, at best, the results for one dependent variable with real data. Only a few systems, such as some recently created for the Defense Advanced Research Projects Agency or the BioWar system, use massive amounts of real data to set the input specifications of the models and other data to validate the system. In general, this means linking MASs to database systems. The key technical challenge is that as the ontology in the database changes, the MAS must be augmented; there are currently no tools to facilitate such changes. A second challenge is that, for validation, it is important to have the MAS produce data in the same form as the real data, that is, to create a comparable database.
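The last step above, contrasting a simulated database with a comparable real one, might be sketched as follows. The shared schema, the column being compared, and the hand-rolled two-sample Kolmogorov-Smirnov statistic are all invented for illustration:

```python
import random
import sqlite3

def ks_statistic(a, b):
    """Two-sample Kolmogorov-Smirnov statistic: the maximum gap between
    the two empirical distribution functions."""
    pts = sorted(set(a) | set(b))
    ecdf = lambda xs, t: sum(x <= t for x in xs) / len(xs)
    return max(abs(ecdf(a, t) - ecdf(b, t)) for t in pts)

def make_db(shift, rng):
    """Build an in-memory database with the shared, identical schema."""
    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE outcomes (performance REAL)")
    con.executemany("INSERT INTO outcomes VALUES (?)",
                    [(rng.gauss(0.5 + shift, 0.1),) for _ in range(500)])
    return con

rng = random.Random(1)
simulated, observed = make_db(0.0, rng), make_db(0.05, rng)
column = lambda con: [r[0] for r in
                      con.execute("SELECT performance FROM outcomes")]
d = ks_statistic(column(simulated), column(observed))
print("KS distance, simulated vs. real:", round(d, 3))
```

Because both tables share one schema, the same query drives both sides of the comparison, which is the property the text argues a standardized comparison tool would need.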
There are currently no standardized tools for statistical comparison of data in two identically structured databases.

MASs using cognitively sophisticated agents tend to require knowledge engineering techniques. Such models tend to be special purpose and offer minimal reuse. Their key value is to take the place of human teams in war-gaming and in equipment testing and design, and to evaluate processes that facilitate team behavior. In general, these models use various cognitive architectures with multiagent components added and so are often limited to only a small number of agents. Their strength is looking at detailed task-related behavior. As previously noted, such models tend to use predefined social interactions, which limits their use in war games: the MASs adapt not the interaction process but only the task-based communications and actions.

Typical grid-based MASs with millions of cognitively unsophisticated agents are generally useful only for high-level explorations of general concepts. They are valuable for getting groups to think outside the box and for provoking discussion, but they are rarely sophisticated enough to be used as an adaptive adversary in war gaming or to evaluate task-based behavior. The strength of these models is their ability to look at population-level trends resulting from local action. As such, they show promise in areas such as marketing, determining the impact of psychological operations, information diffusion studies, and disease transmission studies. Rarely do such models generate actionable intelligence.

Now consider multiagent dynamic-network systems when they are tied to empirical data. Such models use agents with moderate levels of cognitive sophistication and high levels of social sophistication, which makes them useful for war gaming against adaptive adversaries.
Given current technology, this combination results in models that can handle more agents than the cognitively sophisticated models but that run more slowly than the grid-based, cognitively simplistic models. As such, the strength of these models lies in representing and reasoning about reasonably large populations. The added cognitive and social sophistication of these models makes it possible to produce actionable results. However, getting a model to that point takes a multiperson, multiyear data collection effort on top of a multiyear model development effort.
One of the key factors limiting MAS models from a DoD perspective is the modeling of action. Currently, actions can be modeled at a very high level (pro-con, hostile, friendly, neutral) or at a very detailed level (fire a particular weapon). There is neither a middle ground nor a hierarchy relating actions at one level to actions at another. MASs that try to model actions tend to be either very generic or single-use. A basic ontology of actions is needed for the state of the art to advance.

Development of an MAS

It is important to note that, with MAS toolkits, even bad programmers and novice simulators can build interesting and seemingly powerful MASs. Such models can be built in the course of a few months. As a result, we are now seeing thousands of small systems being built by individuals or small teams. For example, individual soldiers with little or no training in simulation are now building MASs and using them to inform critical decision making and policy. Used well, MASs enable the analyst to systematically consider the interactions among more factors and so to base decisions on a more thorough analysis. However, when MASs are developed by those not trained in simulation, the results are often misinterpreted, and classic mistakes cause the results to reflect incorrect simulation practice rather than interactions among the factors modeled.

Very detailed, sophisticated models that produce actionable results often need to be developed by a team working collectively for 3 to 5 years. It makes sense to use separate teams for data gathering, validation, and usability testing, as each of these areas requires different scientific skills. In addition, the team building the model often needs to employ many of the same development techniques used in systems engineering.
THE WAY AHEAD

Key advances and applicability to defense modeling require that MASs and network analysis techniques be integrated into tool chains. For example, pattern-discovery techniques can be used to derive equations from historical data that can then be used in MASs to evolve future systems. MAS techniques can be used to evaluate courses of action and suggest areas for further SNA data collection. Combining these techniques will enable new types of problems to be solved; for instance, combining social network metrics with a pattern-discovery technique is key to building an understanding of how networks grow and evolve.

This is not to suggest that DoD should move to a large integrated behavioral model; quite the contrary. The development of MAS frameworks and the explosion of network analytic tools are making social behavioral modeling widely available and are leading to the development of many small, single-purpose tools. If these tools are to be fully exploited, they need to be made interoperable. It would not be feasible to require all tools to be written in a single language or to use a single framework; rather, the solution will be to integrate models not only from diverse domains but also in diverse languages. Multiple models and visualization tools should be available to address diverse problems, but in a way that lets data (real and virtual) be easily shared among the various tools. A variety of things are needed to support such interoperability, including standards for the interchange of relational data.
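One concrete shape such a relational-data interchange standard could take is an XML serialization of node and edge sets. The minimal schema below is invented for illustration, not an existing standard:

```python
import xml.etree.ElementTree as ET

def network_to_xml(nodes, edges):
    """Serialize relational data (nodes with attributes, weighted edges)
    to a minimal XML interchange document. (Schema invented here.)"""
    root = ET.Element("network")
    nodeset = ET.SubElement(root, "nodes")
    for node_id, attrs in nodes.items():
        ET.SubElement(nodeset, "node", id=node_id, **attrs)
    edgeset = ET.SubElement(root, "edges")
    for source, target, weight in edges:
        ET.SubElement(edgeset, "edge", source=source, target=target,
                      weight=str(weight))
    return ET.tostring(root, encoding="unicode")

doc = network_to_xml({"a1": {"type": "person"}, "a2": {"type": "person"}},
                     [("a1", "a2", "3")])
```

Any tool that can parse this document back into nodes and edges can exchange data with any other, regardless of the language either tool is written in, which is the point of the interoperability argument above.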
Behavioral modeling tools need to be Web-enabled, and XML I/O languages must be developed, along with a uniform vocabulary for describing relational data; the latter is particularly critical because the tools and metrics are coming out of at least 20 different scientific fields.1 For defense and intelligence applications, we need to further explore common platforms and data-sharing standards so that tools written in the unclassified realm can be rapidly moved, without complete redesign, to the classified realm. Interoperability and common platforms and ontologies will enable novel problems to be addressed more rapidly by recombining existing models, and they will enable various subject-matter experts to interact through their models, thereby allowing a broader approach to problems, reducing the likelihood of a biased solution, and facilitating rapid development and deployment.

Current tools are either data greedy or become more valuable as they are linked to real data; however, there is a dearth of relevant data currently available in clean, preprocessed form. To reduce the time analysts spend on data collection and increase the time spent on analysis, automated and semiautomated tools for data gathering, cleaning, and sharing are needed. Such tools should include natural language processing tools for extracting relational data from audio and text sources, Web-scraping tools, automatic ontology generators, and visual interpretation tools that extract network data from photographs and other images. Appropriate subtools for node identification, entity extraction, and thesaurus creation are also needed. The development and availability of these tools in an interoperable environment are critical for providing the masses of data needed for model tuning and validation.
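A toy version of the text-extraction tools called for above, turning raw text into weighted co-mention edges between known entities; the entity list, the example text, and the sentence-level co-occurrence window are all simplifying assumptions:

```python
import itertools
import re
from collections import Counter

def comention_network(text, entities):
    """Count pairs of entities mentioned in the same sentence; a crude
    stand-in for NLP relational-extraction tools."""
    edges = Counter()
    for sentence in re.split(r"[.!?]+", text):
        present = sorted({e for e in entities
                          if e.lower() in sentence.lower()})
        for a, b in itertools.combinations(present, 2):
            edges[(a, b)] += 1
    return edges

text = ("Ali met Omar in the market. Omar later phoned Hassan. "
        "Ali and Hassan attended the same meeting as Omar.")
edges = comention_network(text, ["Ali", "Omar", "Hassan"])
```

Real extraction tools would add node identification and entity resolution, the subtools the text lists, rather than relying on a fixed entity list and substring matching.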
More rapid data collection would also mean more data sets for meta-analyses, thereby strengthening the theoretical foundations of the field and improving our understanding of social behavior. Finally, these tools are essential to providing the wealth of data that social behavioral modeling tools need in order to make reasonable forecasts or provide reasonably accurate analyses of situations and organizations.

1. These fields include anthropology, sociology, psychology, organization science, marketing, physics, electrical engineering, ecology, biology, bioinformatics, health services, forensics, artificial intelligence, robotics, computer science, mathematics, statistics, information systems, medicine, civil engineering, and communications.
Improved speed for many of the algorithms could be provided by computer architectures designed for relational data or by special integrated circuits with embedded versions of the less scalable algorithms. This would enable speed savings beyond those afforded by current vector technology, facilitating faster processing and more real-time solutions, particularly for large-scale networks.

To reduce the qualitative aspect of interpretation in this field, a living archive of collected network data is needed, replete with information on metrics for the nodes in each data set. Such an archive could be used to set context. For example, it could be used to evaluate whether the density of a particular network is exceptionally high or exceptionally low, or whether values for the connectedness of individuals are out of range. Such an archive would also facilitate meta-analysis and comparative analysis, which are critical to strengthening the theoretical foundations of the field and improving our understanding of social behavior.

MASs designed for applied settings need to be placed in data-farming environments. These environments need to be augmented with special-purpose tools for running massive virtual experiments, improved visualization and analysis tools, and semiautomated response surface generators. Current data-farming tools are often cumbersome to use, require code modification of the MAS, or are limited by the processor speed and storage capabilities of the machines they run on. For MASs to be routinely run in data-farming environments, new, more flexible environments need to be developed and made easily available to analysts, and MASs need to be developed with wrappers so that they can be placed in these environments. Standardized input/output formats also need to be developed.
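The context-setting role of the living archive described above can be sketched as a simple out-of-range check; the archive values and the network under study are invented for illustration:

```python
import statistics

def density(n_nodes, n_edges):
    """Density of an undirected simple graph: edges observed divided by
    edges possible."""
    return n_edges / (n_nodes * (n_nodes - 1) / 2)

# Hypothetical archive: densities of comparable, previously collected
# networks (placeholder values, not real data).
archive = [0.04, 0.06, 0.05, 0.08, 0.07, 0.05, 0.06]
mu, sigma = statistics.mean(archive), statistics.stdev(archive)

observed = density(n_nodes=40, n_edges=150)   # the network under study
z = (observed - mu) / sigma
verdict = "exceptional" if abs(z) > 2 else "within normal range"
print(f"density={observed:.3f}, z={z:.1f}: {verdict}")
```

The same pattern extends to any archived metric: given a reference distribution, a newly collected network's values can be flagged automatically rather than interpreted qualitatively.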
By routinely placing an MAS in a data-farming environment, a better understanding of the range of possibilities forecast by the model can be derived, enabling the MAS to better support policy and decision making. Currently, when MASs are used to inform policy and critical decisions, the models are often run only a few times in carefully controlled virtual experiments. While this approach lets the analyst explore more possibilities more systematically than not using a simulation at all, errors can still be made if the results are interpreted beyond the scope of the experiment. By placing the models in a data-farming environment, the number of virtual experiments considered, the range of possibilities examined, and the scope conditions analyzed can be expanded, often by several orders of magnitude, providing a stronger basis for decision making. Further, once a model has been validated, its response surface equivalent can be used as a rapid model in training situations where users do not have time to wait for the MAS to finish running.

Another avenue likely to promote major breakthroughs is the linkage of social behavioral modeling to gaming environments, particularly online multiplayer games such as EverQuest and America's Army. Major research initiatives are needed to explore the link between social behavioral modeling and gaming tools. Possible research areas include the realism of the social behavior exhibited in these models; the use of an MAS to provide flexible opponents and/or to make the apparent number of players larger, forcing players to think about group-scale issues; the ability to track and analyze behavior using dynamic network analysis techniques; and the use of these games to generate data for testing tools. Key benefits would be improved training tools and visual what-if scenario evaluation.
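As noted above, once a model has been farmed and validated, a response-surface equivalent can stand in for the slow MAS. A minimal sketch, with an invented two-parameter model and a nearest-grid-point lookup as the rapid surrogate:

```python
import random
import statistics
import time

def slow_mas(morale, comms, seed=0):
    """Stand-in for an expensive MAS run (invented; sleep simulates cost)."""
    time.sleep(0.001)  # a real run might take hours
    rng = random.Random(hash((seed, morale, comms)))
    return 0.4 * morale + 0.5 * comms + rng.gauss(0, 0.01)

# Data farming: sweep both parameters once, offline, with replications.
grid = [i / 4 for i in range(5)]
table = {(m, c): statistics.mean(slow_mas(m, c, s) for s in range(5))
         for m in grid for c in grid}

def surrogate(morale, comms):
    """Rapid model: answer from the farmed table, no simulation needed."""
    snap = lambda v: min(grid, key=lambda g: abs(g - v))
    return table[(snap(morale), snap(comms))]
```

A trainee querying `surrogate(0.3, 0.6)` gets an answer instantly from the precomputed table; a fitted response-surface equation would serve the same role with smoother interpolation.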
As previously noted, additional development is needed in a number of areas, including the attachment of models to streaming data, improved visualization, and metric robustness studies. Progress will require linking social networks to other types of data, such as location and event information, and linking diffusion theory to other forms of theory, such as action and cultural theory. This will require funding both basic and applied research. It will also require increased recognition for, and acceptance of, applied social science research in universities.

Currently a number of funded research efforts in cultural modeling, geospatial link analysis, and adversarial modeling are supporting work along these lines, much of it directed at providing usable systems within several years. This is a positive development, particularly when such modeling efforts rest on strong empirical and theoretical foundations. However, although much basic research remains to be done, on a task ontology, a unified model of culture, or even a shared definition of culture, relatively little research funding is being directed to it. The key is not simply to invest in the social sciences but to invest in the mathematical and computational social sciences that will ultimately support defense needs. One benefit will be an improved understanding of basic social and cultural phenomena; another will be a decrease in misleading models that appear to be social but are not theoretically or empirically sound.

At the same time, most of the research community, particularly in the social sciences, is not focusing on applications. The mere idea of hard deliverables, common practice in engineering and computer science, is contrary to the culture of most social science departments. Thus, while there is a strong need for quantitative social science modeling on defense issues, there is a dearth of highly trained social scientists engaged in applied work.
Universities need to expand their undergraduate social science curricula to include more of the mathematical and computational social sciences. In particular, undergraduate courses should routinely include social network analysis, basic simulation, and multiagent systems. Universities also need to encourage and facilitate applied research and to adopt engineering-style curricula focused on social and policy applications. Master's programs that combine social and computational science need to be
developed. Military universities such as West Point and the Naval Postgraduate School should also offer social network and MAS courses. The development of such curricula and degree programs is vital to our national intellectual strength if we are to remain at the forefront in this area and to build a stronger workforce of computational social analysts capable of developing and using social behavioral models.

Analysts engaged in social behavioral modeling who are trained in computer science, engineering, or physics should work in teams with social scientists to avoid reinventing the wheel or making commonsense assumptions about social processes that have no empirical basis. Corporations need to provide time and resources for selected personnel to become jointly trained in computer science and social science, whether by sending more personnel to master's programs, by bringing in relevant faculty to teach short courses, or by engaging in more joint research with universities as equal partners, each side contributing the missing skill, social or computational. The key advantage of such teaming is that it will improve model development and will serve as a stopgap until more computational social analysts are trained.

Expected Outcomes

Success in the activities outlined above would facilitate the rapid development and deployment of social behavioral models that allow systematic reasoning about courses of action in a wide range of realms. More courses of action could be evaluated in less time, and more systematically, than with conventional tabletop war gaming or current non-computer-assisted analysis of relational data. Such models would also reduce the time spent in data processing and increase the time spent in analysis and interpretation. They would facilitate what-if analysis and could ultimately support near-real-time what-if analysis in the field. This would be a clear force multiplier.
These activities would increase the maturity of the field, improve scientific theory, facilitate the rapid linking of models to solve novel problems, and encourage new discoveries. They would promote the development of a new science combining computation and society, just as the earlier combination of computer science, design, and psychology led to the new science of human-computer interaction.