Models of sociocultural knowledge and behavior are best used not as “stand-alone problem-solving technologies” but rather as part of a broader effort to understand human behavior, in which the models offer insights, trigger ideas, and generate new stories as a way of aiding the decisions and judgments made by humans. The panelists offered a wide range of ideas and approaches to thinking about models, which, for the purposes of this chapter, are grouped into four broad categories: interpreting the outputs of models, making sense of data, finding meaning in models, and recognizing the limits of models.
The first broad issue can be roughly described as how to interpret the outputs of models of sociocultural knowledge and behavior. In her paper, “Why Models Don’t Forecast,” Laura McNamara of Sandia National Laboratories noted that some people think of models and simulations as predictive technologies. “I’m not joshing when I say this: I’ve actually heard people talk about the importance of developing some kind of a computational crystal ball.” But models don’t forecast; people do. The reason is that modeling always involves human judgment at every stage: which questions to address, what to include in the model, how to handle the data, and how to interpret the model’s output.
Robert Sargent of Syracuse University noted that there are two major types of models: causal models and empirical models. Causal models require sufficient knowledge about the system being modeled, including how the system works, the relationships among its various components, and theories about the functioning of those components. Empirical models, by contrast, are constructed from data and do not depend on any knowledge of the system; the system is a “black box.” First, sufficient system data are collected; next, the data are analyzed to find relationships; and then an empirical model is constructed from those relationships. Sargent said that causal models are preferred over empirical models for a variety of reasons, including that they capture causal relationships rather than mere statistical relationships in the data.
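The distinction Sargent draws can be sketched in code. The system, data, and fitted relationship below are hypothetical illustrations (not anything presented at the workshop): a causal model encodes a mechanism known from theory, while an empirical model follows the three steps he lists, treating the system as a black box.

```python
import math

# Causal model: built from knowledge of how the system works. Suppose
# theory says a message's reach doubles with each round of contacts.
def causal_spread(initial, rounds):
    """Predict reach from a known mechanism: doubling per round."""
    return initial * 2 ** rounds

# Empirical model: the system is a black box; all we have are data.
# Step 1: collect sufficient system data (hypothetical observations).
observations = [(0, 1.0), (1, 2.1), (2, 3.9), (3, 8.2)]  # (rounds, reach)

# Step 2: analyze the data to find a relationship. Here we fit
# log-reach against rounds by ordinary least squares.
xs = [x for x, _ in observations]
ys = [math.log(y) for _, y in observations]
n = len(xs)
mean_x, mean_y = sum(xs) / n, sum(ys) / n
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
         / sum((x - mean_x) ** 2 for x in xs))
intercept = mean_y - slope * mean_x

# Step 3: construct the empirical model from the fitted relationship.
def empirical_spread(rounds):
    return math.exp(intercept + slope * rounds)
```

Note that the empirical model reproduces the data well without saying anything about *why* reach grows, which is one reason causal models are preferred when the requisite system knowledge exists.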
One of the major challenges in building models, McNamara said, is their verification and validation. Verification refers to ensuring that the model is internally consistent, that is, that the software code is actually doing what it is supposed to be doing. Validation, or ensuring that the model actually corresponds to some external reality, is trickier. One problem is the issue of referents: What aspects of the natural world is the model to be checked against? The number of choices is prac-
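The verification/validation distinction can be made concrete with a small sketch. The model, data, and error threshold below are hypothetical illustrations, not from the workshop: verification checks the code against its own specification, while validation compares the model against an external referent, a referent that a human had to choose.

```python
# Hypothetical toy model: expected contacts in a group of a given size.
def contact_model(population, contact_rate):
    """Model specification: expected contacts = population * contact_rate."""
    return population * contact_rate

# Verification: is the model internally consistent -- does the code
# compute what the specification says it should?
assert contact_model(100, 0.5) == 50.0  # matches the stated formula

# Validation: does the model correspond to some external reality?
# The referent here is a hypothetical set of observed contact counts.
observed = {100: 47.0, 200: 103.0, 400: 198.0}  # population -> observed
errors = [abs(contact_model(pop, 0.5) - seen) / seen
          for pop, seen in observed.items()]
mean_relative_error = sum(errors) / len(errors)

# Choosing this referent, and deciding what error level counts as
# "valid," are human judgments -- the code cannot make them.
acceptable = mean_relative_error < 0.10
```

A model can pass verification perfectly and still fail validation, which is why McNamara calls the latter the trickier of the two.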