7
Modeling Sociocultural Behavior

The workshop’s fifth panel was devoted to a discussion of various methods and tools used to apply sociocultural knowledge and understanding to real-world situations and, in particular, to the issue of computational modeling of human behavior. As the panel’s moderator, Robert Albro of American University, noted, modeling phenomena from the human, sociocultural, and behavioral sciences is quite different from modeling phenomena from the physical sciences, and these differences lead to a variety of questions and issues that need to be addressed in order to develop useful models of human behavior.

This final panel had a different format from the four earlier ones. Each of the four panelists prepared a paper that was available online1 to the workshop attendees in advance (see Appendix B for an abstract of each paper). During the panel discussion, each paper author—Laura McNamara, Mark Bevir, Robert Sargent, and Jessica Glicken Turnley—gave a short overview of his or her paper; Albro, who had prepared a response paper in advance, then commented on the papers, summarizing and synthesizing their main points; and, finally, the workshop participants were given the opportunity to ask questions and to make comments.

As Albro noted in his response, one overarching theme to emerge from the papers and the presentations is that the most effective way to

1 The complete papers are available on the workshop web page http://www7.nationalacademies.org/dbasse/Committee%20on%20Unifying%20Social%20Frameworks.html [October 2010].








use models of sociocultural knowledge and behavior is not as “stand-alone problem-solving technologies” but rather as part of a broader effort to understand human behavior, in which the models are used to offer insights, trigger ideas, and generate new stories as a way of aiding decisions and judgments made by humans. The panelists offered a wide range of ideas and approaches to thinking about models, which, for the purposes of this chapter, are grouped into four broad categories: interpreting the outputs of modeling, how to make sense of data, meaning in models, and the limits of models.

INTERPRETING THE OUTPUTS OF MODELING

The first broad issue can be roughly described as how to interpret the outputs of models of sociocultural knowledge and behavior. In her paper, “Why Models Don’t Forecast,” Laura McNamara of Sandia National Laboratories noted that some people think of models and simulations as predictive technologies. “I’m not joshing when I say this: I’ve actually heard people talk about the importance of developing some kind of a computational crystal ball.” But models don’t forecast; people do. The reason is that any sort of modeling always involves human judgment in various areas, from the types of questions to address and what to include in the model to how to deal with data and how to interpret the output of the model.

Robert Sargent of Syracuse University noted that there are two major types of models: causal models and empirical models. Causal models require sufficient knowledge about the system being modeled, including how the system works, the relationships among the various components of the system, theories about the functioning of different components, and so on. Empirical models, by contrast, are constructed from data and do not depend on any knowledge of the system; the system is a “black box.” First, sufficient amounts of system data are collected; next, the data are analyzed to find relationships among them; and then an empirical model is constructed using those relationships. Sargent said that causal models are preferred over empirical models for a variety of reasons, including that they use causal relationships instead of data relationships.

One of the major challenges in building models, McNamara said, is their verification and validation. Verification refers to ensuring that the model is internally consistent, that is, that the software code is actually doing what it is supposed to be doing. Validation, or ensuring that a model actually corresponds to some external reality, is trickier. One problem is the issue of referents: What aspects of the natural world is the model going to be checked against? The number of choices is practically unlimited, and the best choices are not always clear. A second problem is conceptual model validation. As McNamara explained, “In the social sciences, different people from different perspectives can bring complementary, but still quite different, perspectives to the same problem, and there is no independent arbiter to assess which one is ‘right,’ so the issue of conceptual model validation is always a matter of negotiation. A final problem facing the validation of a model is how one deals with uncertainty.” None of these issues is well understood, McNamara said.

One of the clear themes that emerged from this panel was the difficulty—or impossibility—of separating a model of sociocultural knowledge and behavior from the people and organizations that have developed it. For example, Jessica Glicken Turnley of the Galisteo Consulting Group, Inc., and the Joint Special Operations University noted, in discussing her paper, “The Dangers of Rushing to Data: Constraints on Data Types and Targets in Computational Social Modeling and Simulation,” that a model is not a representation of the entire world. It is a selection of parts of the real world, and that selection will reflect the judgments, the goals, and the biases of the people making it.

One way to think about this process of selection is that it is much like choosing an analogy or metaphor when one wishes to explain something or capture the essence of something. “One of the interesting things about analogies,” Turnley said, “is that they allow you to see part of the world, but not another part of the world. They actually help constrain—or construct, depending on if you’re positive or negative—which part of the world you see.” As an example she offered the idea of the “human terrain,” which is an analogy with geographic terrain.
“We’re saying these two things are like each other in some way, and so you think of the human or the social dimension in the same way you think of geographic dimensions.” In particular, geographic terrain is an artifact—it exists independently of any human interactions, and it is pretty much unchanged by most contact with humans. Generally, she noted, people tend to think of human culture in terms of interactions between people, who respond to each other and together create the culture: “It exists in the production, or the interaction, of people. It sort of exists in the moment.” But if one thinks of human culture as “human terrain,” by analogy with geographic terrain, one arrives at a very different view of culture, one that is more like a fixed landscape. Turnley said: “I can sort of float above it [culture] and touch it and not change it, and it doesn’t change me.”

The implication, she said, is that whoever determines the logic of selection for a model plays a major role in determining what the model user will see about the world. “My bottom line here is that creating and developing and building a model is itself a creative process that then constrains what the model users subsequently see about the world.” “It’s a sense-making exercise, and so we need to think about it in very different kinds of ways than we think about more analytic types of exercise—more as a creative product than an analytic product.”

Given this view of models, Albro observed that models should not be seen as technological black boxes “into which data are plugged and out of which meaningful results are self-evidently generated.” This is particularly important, he continued, because of the way that modeling and simulation are often talked about in the context of decision making: “A given model’s potential value is evaluated in terms of how useful it is [in] facilitating high-consequence decision making. In fact, models are given a primary role in moving ‘from data to decision.’ A danger here is that computational models acquire too large a role in decision making, rather than being understood as merely one feature among many of complex interpretive environments.”

A better way to think about models of sociocultural knowledge and behavior, he said, is as part of a larger process in which models, modelers, and users interact. In particular, one should recognize that the key stakeholders in modeling are “meaning makers.” Models should not be thought of as “approximations of poorly understood sociocultural realities but as theory-driven, partial and selective representations” that can help decision makers “generate new scenarios and new stories, to become parts of the encompassing and dialogically interpretive scene of decision making.
Understood this way, models contribute to fluid frameworks for discussion rather than forecasting any particular sociocultural result.”

HOW TO MAKE SENSE OF DATA

In modeling physical phenomena, data are generally straightforward and concrete: place and time, mass, velocity, temperature, pressure, and so on. By contrast, Albro said, “sociocultural information is better understood as interpreted and interpretable ‘meanings’ rather than as objective data that matches in clear-cut fashion with some aspect of the world.” Thus in modeling sociocultural phenomena, the question of exactly how to define and interpret data is open to discussion and debate, and the four presenters offered different viewpoints and described a variety of difficulties that arise in dealing with sociocultural data.

Discussing his paper, “The Importance of Interpretation,” Mark Bevir of the University of California, Berkeley, described data about sociocultural phenomena as “data about the webs of meaning that inform peoples’ actions.” And, since meanings are always forming webs—interacting with other meanings and actions—they can never be properly isolated as individual qualities. This has various implications, he said. “The first is that all data is inherently actually debatable, and any attempt to say that some data isn’t debatable is merely a human position: ‘We’ll accept it if, say, the correlation is above this level, but not if it’s below that level.’ So when you hear something about wanting to have a fixed amount of data, that’s quite problematic, because there’s no point where you’ve got enough data or not got enough data. It’s always us who decides what that point is; there’s no absolute decision that makes that right.” Similarly, he said, there is no such thing as the right data or the wrong data. “We should grab all that we can. We should recognize we’re never going to have a sufficient amount to be absolutely certain. We should just get what we can.”

Robert Sargent, whose specialty is operations research, has a very different view of data. In the paper he prepared for the workshop, “A Perspective on Modeling, Data, and Knowledge,” he wrote, “Data generally refer to some collection of numbers, characters, images, or audios that are unprocessed. Knowledge is obtained from data by interpreting the data or through processing the data.” Structured data are used to build models. Unstructured data, such as videos, web pages, and texts of e-mail messages, must be processed in some way—counted, classified, compared, and so on—to become structured data before they can be used to build models.

In short, Albro commented, the presenters straddled an “epistemological divide” in their conceptualizations of data. “At issue across the panelists is whether, when referring to ‘data,’ we are referring to empirical sociocultural facts of some sort—as unstructured, raw, and connected to the world—or referring to always already interpreted meanings. This is not a trivial difference.” That difference has implications for how the data are affected by the modeling process. If data are, as Sargent sees them, empirical facts about the world, then processing those data does not necessarily cause them to lose any content and may actually add value by discerning various patterns. But if the data are the richer sociocultural data discussed by McNamara and Bevir, inserting them into a model may strip them of some or much of their meaningful content.

Turnley specifically addressed this issue in her paper, writing: “Computational models require quantitative data, or (to put it another way), data that can be manipulated quantitatively. Much of the data collected about sociocultural phenomena are in narrative form. Furthermore, many of the targets of interest are abstract phenomena, such as beliefs, motivations, and the affective dimensions of behavior. . . . What has happened

in practice with these computational models is that context-sensitive ethnographic data is being converted into computationally manipulable data through the use of surrogates which strip it of context.”

Albro commented that this poses a challenge for those who design and operate models of sociocultural knowledge and behavior, “how computational models can address the problem of richness, not just as a matter of adding layers of complexity, but, more importantly, so as not to efface meaningful context.”

MEANING IN MODELS

Assuming that interpretative meanings are the basic unit in sociocultural knowledge, Albro said in his response, then an important question for modeling is where these meanings are to be found and what their relationship is with the data. That is, are data—particularly the sorts of raw, objective, and unstructured data that Robert Sargent described—prior to and distinct from meanings, or are meanings the only sort of data that will or should appear in models of sociocultural knowledge and behavior? Albro illustrated this question by comparing the divergent perspectives of Bevir and Sargent concerning what constitutes a meaningful unit of analysis.
Bevir’s point of view is that any concept or proposition—as a datum—does not have “intrinsic properties and objective boundaries” and that explanations of sociocultural phenomena arise from tracing out and understanding the conceptual connections in “webs of belief.” This, Albro commented, makes the conceptual boundary between data and meaning hard to locate, which in turn “poses a challenge to any effort to organize information into comparable units or sets, as available for standardized measure, or as subject to some kind of operation or manipulation.”

Turnley, whose concept of sociocultural knowledge has a great deal of overlap with Bevir’s, spoke of analogies as ways in which people interpret the world and thus create meaning, rather than as bits of preexisting knowledge waiting to be discovered. “In such accounts,” Albro said, “we are invited to understand computational models as actively producing sociocultural knowledge rather than simply representing it.” Meaning is created by people and their models.

“Sargent, however, describes data much differently,” Albro continued. “He explains, for example, that quantitative variables are also qualitative, since they also contain all necessary qualitative information. In this scenario, variables are mutually exclusive and discrete vehicles from which information can be extracted. This sets up a very different state of affairs from that of Bevir and Turnley.” In Sargent’s view, data are understood as “vehicles of meaning” and “promise access to an objective reality divisible into standardized parts that already contain their significance and which it is the purpose of the modeling process to simply extract and represent.”

It is this view of data and meaning that is being implicitly accepted when people speak of applying data mining and data extraction as part of modeling work. In such efforts, Albro said, data are judged to be “good” or “complete” or “reliable” according to how easy it is to standardize them for comparison and to extract them uniformly. Similar goals are in play when people are interested in increasing the interoperability of models and making data fungible, so that one user’s model can easily become another user’s data.

For qualitative data, Albro observed, such an approach to dealing with the data—particularly the way in which meaningful contexts are stripped away—has major consequences. “Hard-to-classify ‘field notes’ must quickly take the form of more standardized ‘field reports,’ which need to rely upon a commonly used ‘code book’ of some sort, like the popular ASCOPE [Area, Structures, Capabilities, Organizations, Peoples, and Events] system for the classification of field data. Relatively ‘thin’ and more easily extractable data sources are given priority, such as journalism, national opinion surveys, or polling data.” When data are seen in this way, the job of models becomes to generate “significant information about a patchwork world of data points as checked-off cultural boxes representing quantifiable variables of cultural difference.” But the results generated from such an approach to modeling the world could well be meaningless, Albro suggested.
“There are, in short, epistemological consequences in assuming that cultures can be divided up into vehicles of extractable meaning.”

It is important that people using models of sociocultural knowledge and behavior grapple with these issues concerning the data used in the models, Albro said, and in particular to think about “the relative compatibility of such different epistemological departure points for data.”

Judging from some of the earlier presentations at the workshop, Albro said, it seems as though in practice the data used in sociocultural models end up being those data that are easiest and most convenient to collect and to put into the models. The “ground truth” ends up being replaced by data collected by web mining and data extraction programs, from online forums, blogs, and YouTube and other websites, which are convenient because the information is already formatted as HTML files, Word documents, PDF files, or PowerPoint slides, or is in the form of downloadable video, image, or audio files. “Too often the differences between virtual and nonvirtual realities get lost in the shuffle,” Albro said. “While social media web content has its values, we should not confuse this with in-theater collection of data on the ground [in military operating environments], which is rarely done with regard to computational social science applications.”
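The conversion that Sargent describes, and that Turnley and Albro warn about, can be made concrete with a small sketch: reducing a narrative field note to per-category counts under a code book. The ASCOPE labels follow the scheme named above, but the keyword lists and the sample note are invented for illustration; a real code book would be far richer and would require human judgment to apply.

```python
from collections import Counter

# Illustrative keyword lists for the ASCOPE categories. These keywords
# are invented for this sketch, not drawn from any actual code book.
ASCOPE_KEYWORDS = {
    "Area": ["village", "district", "valley"],
    "Structures": ["mosque", "school", "bridge"],
    "Capabilities": ["water", "electricity", "medical"],
    "Organizations": ["council", "police", "militia"],
    "People": ["elder", "farmer", "merchant"],
    "Events": ["harvest", "market", "meeting"],
}

def code_field_note(text: str) -> Counter:
    """Reduce a narrative field note to per-category keyword counts.

    This is the kind of 'surrogate' conversion discussed above: the
    output can be manipulated quantitatively, but the narrative context
    (who spoke, in what tone, under what circumstances) is stripped away.
    """
    words = [w.strip(".,;:!?").lower() for w in text.split()]
    counts = Counter()
    for category, keywords in ASCOPE_KEYWORDS.items():
        counts[category] = sum(words.count(k) for k in keywords)
    return counts

note = ("The village elder convened the council at the school "
        "to discuss water access before the harvest market.")
print(code_field_note(note))
```

Everything the counts omit, such as why the meeting was convened and what the elder actually said, is precisely the "meaningful context" whose loss Albro describes.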

In the workshop’s keynote address, MG Flynn said that the number of District Narrative Assessments will be increasing in the future, which will provide a great deal of additional data that can be used in models. However, Albro noted, modelers—including some in the workshop—complain that “unstructured” qualitative data cannot be used by their models. If the District Narrative Assessments are to be of use to the modelers, they will have to be created in a standard format with interchangeable categories. But fitting everything into standard formats and defined categories makes it unlikely that “information outside of established expectations would find its way into the data sets of such models,” Albro said.

THE LIMITS OF MODELING

Much of the panel’s discussion, particularly during the roundtable section, focused on the question of what models can and cannot do and what is reasonable to expect from them.

A key comment came when Turnley observed that sociocultural models will probably never be good at making exact predictions of what will happen. They can, however, be expected to provide information on the probabilities of various things happening—what she referred to as “possibility spaces.” Thus computational social models can be used at the strategic level and possibly at the operational level, but they are never likely to be useful at a tactical level, she said. “Somebody brought it up sort of facetiously the other day: Do I attack the village kinetically, or do I give them soccer balls? I don’t believe we will ever get a model to say, if you attack it kinetically then this will happen, but the model can say that there’s a possibility space that encompasses a range of futures.”

In response to a question about whether sociocultural models can be used to generate knowledge, Turnley answered that they absolutely can.
Studies have shown, for example, that using the creative power of models allows people to see the world in a new way, to see things that they might otherwise not have seen. “Think about the kind of knowledge that’s generated, for example, by reading literature or history.” In general, because of the difficulties in validating these models, it is not possible to use them for the same sort of theory exploration and testing that is possible in the physical and biological sciences, but they can be used to expand the horizons of one’s thinking.

McNamara offered a second example of using models to generate knowledge. A model of brain activity and memory formation was developed specifically for use in research. Its purpose was to serve as a test bed so that researchers could “begin to generate ideas about hypotheses and do sensitivity analyses in a virtual environment before they actually brought in human subjects.” It helped the researchers hone their hypotheses and get a sense of what data they were going to collect before the real experiments started. McNamara noted, however, that this use was not the type that the military is most interested in—high-consequence decision making that affects other human lives.

Sargent noted that an alternative to using only models to understand a situation is to use domain experts, with or without the models. They can be experts on the system, on the problem being addressed, or on other relevant aspects. As an example of this approach, Sargent pointed to David Kennedy’s work on reducing homicide rates. In this case the experts were local police officers who were able to give Kennedy the insights he needed to attack the problem.

Generally, Bevir said, he does not expect models to do much, but he did have one suggestion for how they might be useful. “What they might help us to do is to come up with stories . . . to transform the beliefs, desires, and intentionality of local actors. We do that through spreading narratives.” Coming up with such narratives can be a difficult task, he noted. In the case of Afghanistan, for example, Americans are facing narratives that already exist because of the American presence in that country. “We’re trying to spread narratives when most people’s day-to-day experience of the American presence is going to challenge the narratives we want to spread. And the narratives are not going to spread unless they’re plausible to the people we want them to spread among, which means they have to map onto their day-to-day experience of the American presence.” It is a phenomenally difficult problem, he said, but models are one tool that may help to figure out how to solve it.
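Turnley’s earlier notion of a “possibility space” can be illustrated with a deliberately toy stochastic model. Every probability, threshold, and outcome label below is invented for the sketch; the point is only the shape of the output, which is a distribution over futures with frequencies rather than a single forecast.

```python
import random
from collections import Counter

def one_future(rng: random.Random) -> str:
    """One run of a toy stochastic model of a community's response.

    The probabilities and thresholds are arbitrary illustrations,
    not estimates of anything real.
    """
    support = 0.5                  # baseline disposition, arbitrary scale
    if rng.random() < 0.4:         # chance of a disruptive incident
        support -= rng.uniform(0.1, 0.4)
    if rng.random() < 0.6:         # chance a local project succeeds
        support += rng.uniform(0.1, 0.3)
    if support > 0.6:
        return "cooperative"
    if support > 0.35:
        return "mixed"
    return "hostile"

def possibility_space(runs: int, seed: int = 0) -> Counter:
    """Tally the range of futures the toy model produces over many runs.

    The output answers 'what range of futures is plausible, and how
    often does each occur?', not 'what will happen?'.
    """
    rng = random.Random(seed)
    return Counter(one_future(rng) for _ in range(runs))

print(possibility_space(10_000))
```

Read this way, the model supports strategic deliberation about which futures deserve attention, in line with Turnley’s point that such models inform rather than predict.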
