Appendix B: Abstracts of Background Papers
THE IMPORTANCE OF INTERPRETATION
Mark Bevir
This briefing paper describes a broad consensus in current philosophy of social science and then considers the implications of this consensus for the ways one might think about data, knowledge, and policy making.
Since the late 20th century, philosophy has been dominated by meaning holism. Holists believe that the meaning of a sentence or belief depends on the wider language game or web of beliefs of which it is a part. This holism has given rise to comparative epistemology, constructivist ontology, and contextualizing historical explanations. Current philosophy thus supports a view of the social sciences as an attempt to interpret other people’s interpretations of the world.
Interpretive social science encourages certain views of data and knowledge. First, all kinds of techniques generate valid data, and ethnographic and historical studies are important supplements to other data. Second, models, frameworks, and correlations are reifications, so one should consider whether they need to be disaggregated. Third, correlations, models, and frameworks are just more data, not explanations; to explain such data, one has to tell stories.

An interpretive social science also suggests lessons for policy makers. First, practitioners should take an eclectic approach to data and remember that all data are partial and provisional. Second, practitioners should remain aware of the diversity of beliefs and actions as well as the historical and cultural contexts that influence them. Finally, practitioners should consider multiple stories that reveal new aspects of situations.
WHY MODELS DON’T FORECAST
Laura A. McNamara
The title of this paper, “Why Models Don’t Forecast,” has a deceptively simple answer: models don’t forecast because people forecast. Yet this statement has significant implications for computational social modeling and simulation in national security decision making. Specifically, it points to the need for robust approaches to the problem of how people and organizations develop, deploy, and use computational modeling and simulation technologies.
I argue that the challenge of evaluating computational social modeling and simulation technologies extends far beyond verification and validation to include the relationship between a simulation technology and the people and organizations using it. Evaluation is not merely a matter of a technology's usability and usefulness; it extends to assessing how new modeling and simulation technologies shape human and organizational judgment. The robust and systematic evaluation of organizational decision-making processes, and of the role computational modeling and simulation technologies play in them, is a critical problem for the organizations that promote, fund, develop, and seek to use computational social science tools, methods, and techniques in high-consequence decision making.
A PERSPECTIVE ON MODELING, DATA, AND KNOWLEDGE
Robert G. Sargent
This paper presents and discusses the problem-solving methodology used in operations research. The advantages of using this methodology include (1) the development of a problem statement, (2) the construction and use of a causal mathematical model based on system knowledge, and (3) data requirements that are determined from the steps of the methodology. Also discussed is how this methodology differs from the approach of first collecting significant amounts of data and then attempting to develop models from those data.
Two major types of models, causal and empirical, are compared and discussed; this includes the strengths and weaknesses of each type. This paper also discusses why causal models are preferred, the importance of understanding that causal models contain system relationships and empirical models contain data relationships, and the different kinds of graphical and mathematical models for each model type. Different
kinds of data and measurement scales for data are also described. System knowledge, needed for developing causal models, is discussed and depicted in a table containing different levels of system knowledge and types of system knowledge.
The modeling process and obstacles that may arise during this process are described. The importance of validation of models, model solutions, and model theories is stressed. Finally, the use of domain experts in problem solving is discussed, including why it is an important approach for solving social system problems.
THE DANGERS OF RUSHING TO DATA: CONSTRAINTS ON DATA TYPES AND TARGETS IN COMPUTATIONAL SOCIAL MODELING AND SIMULATION
Jessica Glicken Turnley
By the time most modeling projects address data, the project team has already made significant decisions that determine the type of data they need and constrain which part of a comprehensive picture those data can provide. I argue that it is not possible to create a comprehensive picture of an area of interest a priori from data alone.
A model is not all things and all relations in the target domain but a selection from them. That selection is made by the team that constructs the model. In exercising this selection, the team acts as a sort of prism, controlling which part of the target domain one sees and how one sees it. Once constructed, the model as artifact embodies this prism.
This gives great power to the people involved in the modeling process. I have parsed that process into distinct social roles, each of which contributes differently: the questioner, who poses the question that initiates the process and establishes the model’s purpose; the user, who exercises the model in a particular sociotechnical environment; the disciplinary or theoretical expert, who identifies the elements to include in the model and the relationships among them; the data provider; and the model builder, who captures the relevant theory and data in the chosen medium.
A model is much more than an artifact or a bucket into which data can be dumped. It is a process of creating a particular way of looking at the world. It is akin to Karl Weick’s sensemaking, a process that “structures the unknown,” using theory to choose the elements of the target domain that are relevant to a particular problem. Rushing too quickly to the data question is likely to lead the team to the dangerous and impossible request to collect everything, or to collect the wrong things. Finally, by definition, no model will provide a comprehensive picture of anything. Indeed, the creative power of models may cause people to revise the picture through the very act of constructing the analytic tool.