The use of science in policy is a human activity embedded in social processes and structures, a point made several times in this report. We have also emphasized that every field of science produces usable knowledge, but that explaining whether, how, and why that knowledge is used is a task of social science. This task leads us to ask what it means for science “to be of use” in policy. The relevant research literature on that question, summarized in Chapter 3, makes two central points:
1. Scientists are concerned with “improving use” by intensifying and strengthening research, specifically by developing stronger evidence of the effectiveness of social and technical interventions.
2. A scientific specialty on knowledge utilization is concerned with understanding precisely what “use” means and determining the relative weight of factors—timeliness, relevance, clarity and brevity of presentation, etc.—said to “increase the use” of science. It focuses on mechanisms for bridging the acknowledged gap between scientists and policy makers.
Both efforts have made major contributions to what we know about use. But we conclude that the inevitable indeterminacy and context-specific nature of use prevent these two efforts from providing a fully satisfactory understanding of the use of science, or a satisfactory guide to strengthening that use in policy making.
This chapter provides a research agenda that, if seriously pursued, holds promise of providing a more satisfactory explanation and guide. We take our cue from an observation made 35 years ago by a deeply informed scholar (Weiss, 1978, p. 26):
Social scientists tend to start out with the question: how can we increase the use of research in decision-making? They assume that greater use leads to improvement in decision-making. Decision makers might phrase it differently: how can we make wiser decisions, and to what extent, in what ways, and under what conditions, can social research help?
Weiss’s own answer to her question frames the issue in a way the committee finds helpful (Weiss, 1978, p. 78):
[H]ow to increase the use of social research in policy making is only one way to conceptualize the problem. An alternative view is: how can public policy making be improved, and what role can the social sciences play in that improvement? It may be that we have been concentrating too hard on the first formulation and not hard enough on the second.
Our proposed research framework is based on a view of policy makers as engaged in an interactive, social process: they assemble, interpret, and argue over science, asking whether it is relevant to the policy choice at hand and, if so, using that science as evidence supporting their policy arguments. Treating policy argument as a form of situated, practical reasoning leads directly to a concern with how evidence, in this specific sense, is used rather than how it is produced.
The research framework is presented under three headings: policy argumentation, psychological processes, and a systems perspective. Understanding science as evidence deployed in policy argument requires (1) investigating what makes good arguments in the policy domain—arguments that are accepted by policy makers as valid and sound—and the psychological processes influencing that acceptance; (2) investigating cognitive operations—mental models, schemata, prior knowledge, situated cognition, and related organizational circumstances—as well as institutional logics, practices, and cultural assumptions (Coburn et al., in press; Hutchinson and Huberman, 1993; Spillane et al., 2002); and (3) investigating policy making from a systems perspective.
Policies result from practical arguments that offer reasons for taking a specific policy action (Ball, 1995). These practical deliberations (also referred to as policy arguments; see, e.g., Dunn, 1990; Fischer, 1980; 2007; Manzer, 1984; Marston and Watts, 2003; Stone, 2001) often involve what science says about likely outcomes of different policy choices. As emphasized in Chapter 1, they also involve political considerations insofar as policy choices influence who has and retains power and normative considerations regarding the desirability (or undesirability) of a proposed action, value judgments, and considerations of legitimacy (Esterling, 2004; Gasper, 1996).
Policy arguments have identifiable characteristics. For example, they are based on “a process through which diverse assumptions, interpretations, and contentions are commonly deliberated through an extended critical debate about policy recommendations and other proposals for public action” (Dunn, 1990, p. 324). Policy arguments generally constitute a package of considerations backed by reasons presented to persuade particular audiences of the validity of and need for a given action (Majone, 1989). The arguments consider not just the policy choice at hand, but how that policy interacts over time with many other policies—does opening a charter school in the community decrease or increase housing prices; do housing prices affect the local labor supply; does the labor supply affect whether a chain store locates in the community?
Obviously, it is a complex undertaking to sort out how the multiple characteristics of policy argument function together to yield a coherent, valid, and persuasive argument (Gasper, 1996; Hambrick, 1974; Toulmin, 1969). Although such an appraisal of policy arguments is necessary to understanding how science is used, that exercise is outside the scope of our report. It serves our purposes simply to emphasize that scientific findings, warrants, inferences, data, and the qualifications attached to these features of science are assembled in policy arguments in more or less compelling, fair, and balanced ways. This raises familiar issues: is relevant science ignored; do the quality and strength of evidence support the policy claims made; is evidence (pro and con) fully presented; and so on.
More specifically, understanding how science is used in policy requires investigating what makes for reliable, valid, and compelling policy arguments from the perspective of policy makers and those they need to persuade. For example, arguments that certain consequences will follow from an intervention in a specific circumstance may involve a chain of reasoning
with multiple premises. Surfacing and examining those premises and the extent to which they are accepted is critical to understanding whether the argument is perceived as valid (Cartwright, 2011). For arguments that involve statistical or probabilistic reasoning, it is critical to understand how probabilities are perceived and interpreted (Kahneman, 2011). It is necessary to investigate the ways in which argumentative strategies can mislead by making unwarranted assumptions, relying on unwarranted premises, or relying on fallacies in reasoning (Thouless, 1990; Toulmin, 1979) and, in general, why flawed arguments can nonetheless be persuasive.
We can now more explicitly see that science—data, findings, theories, concepts, and so on—becomes evidence when it is used in a policy argument. Although the term “evidence” so used is frequently encountered as claims about predicted or actual consequences—effects, impacts, outcomes or costs—of a specific action, that is but part of the story. Science can be used as evidence for early warning of a problem to be addressed (species loss, cyberterrorism, racial tensions), for target setting (gender pay equity, reduced school dropout rates), for implementation assessment (is it working here as it worked there), and for evaluation (cost-effective, unexpected outcomes).
It should now be clear that when use is the goal, focusing on producing good science is necessary but not sufficient. Strengthening the use of good science needs to take the next step of understanding how science is embedded in policy argumentation, and how science can provide the kind of information likely to inform these arguments. This directs attention to research in two areas: situated cognition (see, e.g., Anderson, Reder, and Simon, 1996; Elsbach, Barr, and Hargadon, 2005; Greeno, 1998; Spillane et al., 2002) and learning organizations (see, e.g., Moynihan and Landuyt, 2009; Senge, 1990).
Situated cognition is concerned with the interactions between cognitive schemata and organizational context—in which context (organizational rules, norms, resources, and procedures) is not simply a backdrop for the way users make sense of science as evidence, but actively influences and shapes cognitive processes, including creativity, innovation, learning, and strategic thinking. Situated cognition is a science relevant to organizational design supportive of continuous learning, critical thinking, and learning from experience and experimentation. Situated cognition emphasizes that learning is inseparable from doing, and thus is needed in examining the way researchers and stakeholders involved in addressing a particular problem collectively engage in learning about and solving that problem (Van
Langenhove, 2004). Social science can investigate situated cognition in organizations, as well as help policy-making organizations and groups operate as learning organizations (Common, 2004; Easterby-Smith, 2000; Gilson et al., 2009; Leeuw et al., 1994; Moynihan and Landuyt, 2009; Olsen and Peters, 1996; Vince and Broussine, 2000).
Keeping in mind that attention to policy argument is the necessary first step in constructing a research agenda relevant to understanding the use of science in policy, we turn to the second of our three components.
There is an extensive literature in cognitive social psychology and behavioral decision theory on how people make judgments, decisions, and choices. Research is well developed in management sciences (e.g., Bazerman and Moore, 2008) and consumer behavior (e.g., Kivetz et al., 2008), and it has significant application in political science in the study of international relations and the making of foreign policy (e.g., Goldgeier and Tetlock, 2001; Jervis, 1976; Lau and Levy, 1998; Steinbruner, 1974).
These sciences have not, however, been applied to collective reasoning and group decision making in public policy settings at anything close to the level needed.1 Of primary interest here are the branches of behavioral sciences that deal with social judgment theory (Cooksey, 1996), heuristics and biases (Kahneman, 2011; Tversky and Kahneman, 1974), learning and judgment making in teams (National Research Council, 2011b), and naturalistic decision making (Kahneman and Klein, 2009; Klein, 1998; Klein et al., 1993).
Research has deepened knowledge about the fallibility of human decision making, particularly the many cognitive biases to which people are subject (Kahneman, 2011). People have a proclivity to ignore evidence that contradicts their preconceived notions (confirmation bias); they may assess the frequency of an event by the ease with which instances are brought to mind (availability bias); and they may be overly cautious (loss aversion) (Kahneman et al., 2011; Tversky and Kahneman, 1974). Hypotheses about types of biases have been experimentally tested and extended by neuropsychology and evolutionary psychology (see, e.g., Gazzaniga, 2008).
1However, there is a research literature on group dynamics that deals with jury deliberations and other small group decision making, which includes sociological studies on such factors as peer pressure, perceived consensus, status differentiation, and gender differences. It constitutes a different theoretical and research tradition than the literature discussed here, but it, too, could be brought to bear on public policy decision making.
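Loss aversion, one of the biases just listed, is often formalized with a prospect-theory value function in which losses are weighted more heavily than equivalent gains. The sketch below is purely illustrative; the parameter values (alpha near 0.88, lambda near 2.25) follow widely cited estimates from Tversky and Kahneman's later work and should be read as assumptions, not as findings reported in this chapter.

```python
# Illustrative prospect-theory value function for loss aversion.
# Parameter values (alpha, lam) are assumptions taken from commonly
# cited estimates, not from this report.

def prospect_value(x, alpha=0.88, lam=2.25):
    """Subjective value of a gain or loss x relative to a reference point."""
    if x >= 0:
        return x ** alpha          # diminishing sensitivity to gains
    return -lam * ((-x) ** alpha)  # losses scaled up by lambda

# A $100 loss weighs more than a $100 gain feels good:
gain = prospect_value(100)
loss = prospect_value(-100)
print(gain, loss)  # the loss is 2.25 times the gain in magnitude
```

Because the same lambda multiplies every loss, the ratio of felt loss to felt gain for equal stakes is exactly lambda, which is why "overly cautious" behavior emerges even from a simple functional form.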
How cognitive biases operate can be seen in an example from medical science. A medical practitioner explains why new research findings on the overuse and sometimes risky use of screenings for prostate cancer, colonoscopy for colon cancer, and mammogram testing will be ignored by most doctors (Bach, 2012, p. D5):
Against the gravitational pull of doctor-knows-best culture … [g]uidelines written by academic types only impact the fringes of our practices. And despite the apparent move toward evidence-based medicine and comparative effectiveness research, most of us still feel that our own experiences and insights are the most relevant factors in medical decision-making.
Policy makers also inhabit a culture that stresses the importance of experience and insight, and this culture is always at play when they decide how much to defer to “guidelines written by academic types.” The social science that is needed to understand the use of science is not research about the consequences of those decisions: it is research about the decision process itself. This is true whether the decision is made by an individual, as in the medical example, or, as is more often the case in policy, by a group.
A committee or agency making a policy decision may prematurely accept as true something that has been presented only as a possibility and then interpret existing data or seek out data confirming what has been decided (mindset or groupthink biases). A dramatic example occurred among the scientists who advised President Gerald Ford on a swine flu vaccine (Neustadt and Fineberg, 1978). Research also shows “how close-knit groups can become so homogeneous that they do not realize limits to their in-group perspectives” (National Research Council, 2011a, p. 17), sometimes labeled the false consensus bias. Both individuals and groups mistakenly generalize to populations—say, people on welfare—on the basis of information readily accessible to them, such as the situation in their immediate neighborhood or anecdotes about “welfare queens.”
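The hasty generalization just described, estimating a population from one's immediate surroundings, can be made concrete with a toy Monte Carlo comparison. Every rate below is hypothetical and chosen only to show the mechanism.

```python
# Toy illustration (all rates hypothetical) of generalizing from a
# non-representative local sample: an observer who sees only their own
# neighborhood systematically misestimates the population-wide rate.
import random

random.seed(0)

POPULATION_RATE = 0.05    # assumed true share of some trait, population-wide
NEIGHBORHOOD_RATE = 0.20  # assumed share in the observer's neighborhood

def estimate(rate, n=1000):
    """Estimate a rate from n Bernoulli draws at the given true rate."""
    return sum(random.random() < rate for _ in range(n)) / n

local_view = estimate(NEIGHBORHOOD_RATE)  # what the observer actually sees
broad_view = estimate(POPULATION_RATE)    # what a representative sample shows
print(local_view, broad_view)
```

The local estimate is accurate about the neighborhood yet badly wrong about the population, which is the point: the bias lies not in sloppy counting but in treating an accessible sample as representative.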
Decision making in organizations is influenced by structures that aggregate and report information. These structures, no less than individuals, can be biased: institutionalized racism and sexism are well-known examples. The 1986 Challenger space shuttle disaster was a consequence of organizational as well as technical deficiencies. That is, the disaster stemmed from “the inability of various subunits in the National Aeronautics and Space Administration to integrate what each knew and from their different methods for processing information” (Zegart, 2011; cited in National Research Council, 2011a, p. 16). There are many ways that organizational factors “impair information integration,” including “the need for secrecy, ‘ownership’ of information, everyday turf wars, intergroup rivalry, and differing skill sets.…” (National Research Council, 2011a, pp. 16-17).
Researchers who study cognitive biases do more than describe them. They study how biases can be overcome or circumvented (Kahneman, 2011; Kahneman et al., 2011). For example, the National Research Council (2011a, 2011b) has advised the Office of the Director of National Intelligence on how to improve intelligence assessments by recognizing group biases of intelligence analysts.
Bringing the insights of cognitive science to policy argument will present special challenges. In policy making, cognitive biases necessarily interact with values, norms, culture, and political power in ways unique to policy settings. Hammond (1996, pp. 264-265) describes the challenge in stark terms:
the policy maker’s task of integrating scientific information into the fabric of social values is an extraordinarily difficult task, for which there is no textbook, no handbook, no operating manual, no equipment, no set of heuristics, no theory, not even a tradition—unless a record of confusion can be called a tradition.
This challenge notwithstanding, behavioral decision theory and related fields can substantially increase understanding of policy argument and how science is used, misused, and ignored. Such understanding would be reason enough to recommend to cognitive scientists that they direct attention to “policy argumentation.” But there is a further reason for including these fields in our research framework: it is becoming clear that cognitive science and behavioral economics can directly address policy design.
An example is the automatic contribution arrangements of the Pension Protection Act of 2006. This legislation, informed by behavioral economics, allows employers to enroll employees in a retirement savings plan (at a default contribution rate and default asset allocation) unless they explicitly opt out. This approach is in direct contrast to the previous arrangements, in which employees were not enrolled unless they explicitly opted in. Introducing opt-out rules significantly increased employee participation in retirement savings plans (Beshears et al., 2010). For other examples of using knowledge about behavioral biases, see Congdon and Kling (2011), Orszag (2008), and Thaler and Sunstein (2008). Decision processes that increase stakeholders’ commitments and public participation have also met with some success (see National Research Council, 2008).
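The default effect behind the opt-out rules can be illustrated with a toy simulation. The preference and inertia parameters below are invented for illustration; they are not estimates from Beshears et al. or from the legislation itself.

```python
# Hypothetical sketch of why defaults matter: employees act against the
# default only when they overcome inertia. All parameters are illustrative
# assumptions, not empirical estimates.
import random

random.seed(1)

def participation_rate(default_enrolled, n=10_000, inertia=0.7):
    """Fraction enrolled when switching away from the default takes effort.

    Each simulated employee wants to save with probability 0.6 but
    overcomes inertia (and acts against the default) with probability
    1 - inertia.
    """
    enrolled = 0
    for _ in range(n):
        wants_to_save = random.random() < 0.6
        acts = random.random() < (1 - inertia)  # overcomes inertia
        if default_enrolled:
            # Enrolled unless the employee both prefers not to save and acts.
            enrolled += 0 if (not wants_to_save and acts) else 1
        else:
            # Enrolled only if the employee both prefers to save and acts.
            enrolled += 1 if (wants_to_save and acts) else 0
    return enrolled / n

opt_out = participation_rate(default_enrolled=True)
opt_in = participation_rate(default_enrolled=False)
print(opt_out, opt_in)  # opt-out participation far exceeds opt-in
```

The underlying preferences are identical in both conditions; only the default changes, yet participation diverges sharply, which is the behavioral-economics point the legislation exploited.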
The third component of our research agenda re-emphasizes that policy—and therefore the use of science in policy—unfolds in unusually complex settings. Greater emphasis must be placed on social science that takes this reality into account both in studying use and in researching solutions to social problems.
A report from the Institute of Medicine (2010a, pp. 5-6) noted:
The real world is a complex system … many influences … are all interacting simultaneously. A systems perspective helps decision makers and researchers think broadly about the whole picture rather than merely studying the component parts in isolation.… A systems perspective can enhance the ability to develop and use evidence effectively and suggest actions with the potential to effect change. It can allow the forecasting of potential consequences of not taking action, possible unintended effects of interventions, the likely magnitude of the effect of one or more interventions, conflicts between or complementarity of interventions, and priorities among interventions.
A “systems perspective” is not one thing. It includes a number of approaches—complex systems, critical systems thinking, activity systems, and soft systems—and it includes various methodologies—agent-based modeling, microsimulation, systems dynamics modeling, and network analysis (see, e.g., Berry et al., 2002; Carrington et al., 2005; Christakis and Fowler, 2009; Epstein, 2006; Meadows, 2008; Miller and Page, 2007; Mitchell, 2009; Watts, 2003). The broad goal is “to provide insights into the way in which people, programs, and organizations interact with each other, their histories, and their environments” (Rogers and Williams, 2006, p. 80).
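One of the methodologies just listed, agent-based modeling, can be sketched in a few lines: simple micro-level rules of interaction from which a macro-level pattern emerges. Every number in this sketch (population size, network wiring, contagion probability) is an illustrative assumption, not a calibrated model.

```python
# Minimal agent-based sketch: diffusion of a practice through a randomly
# wired population. All parameters are illustrative assumptions.
import random

random.seed(42)

N_AGENTS = 200
N_NEIGHBORS = 4  # each agent is linked to 4 randomly chosen others

# Build a random contact network.
neighbors = {
    a: random.sample([b for b in range(N_AGENTS) if b != a], N_NEIGHBORS)
    for a in range(N_AGENTS)
}

adopted = {a: False for a in range(N_AGENTS)}
for a in range(5):
    adopted[a] = True  # a handful of initial adopters

# Contagion rule: each adopted contact independently converts a susceptible
# agent with probability 0.3 per step; macro-level spread "emerges" from
# this micro-level rule rather than being imposed directly.
for _ in range(30):
    newly = []
    for a in range(N_AGENTS):
        if not adopted[a]:
            exposed = sum(adopted[b] for b in neighbors[a])
            if random.random() < 1 - (1 - 0.3) ** exposed:
                newly.append(a)
    for a in newly:  # synchronous update at the end of each step
        adopted[a] = True

print(sum(adopted.values()), "of", N_AGENTS, "agents adopted")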
A number of policy areas have been studied from a systems perspective. For security policy, Jervis (1997) concludes that systems cannot be understood through examining only the attributes and goals of their elements. There are systems effects on individual actors and on the system as a whole, including emergent, indirect, and delayed effects, as well as unintended and unpredictable consequences from the interactivity of the system’s elements. Concepts associated with studying complex systems—emergence, nonrecursive effects, adaptation—have been used to examine integration and innovation in primary health care organizations (North American Primary Care Research Group, 2009). A systems perspective has also been used to improve cooperative interaction in research communities and among researchers, policy makers, and public groups (see, e.g., Leischow et al., 2008; Midgley and Richardson, 2007). It has gained a strong foothold in evaluating complex social interventions (Eoyang and Berkas, 2007; Hargreaves, 2010; Midgley, 2007; Williams and Hummelbrunner, 2011). And it has been used in comparative cross-national studies of the use of science in regulatory policy making (Jasanoff, 2005). Menendian and Watt (2008) used concepts from systems theory to develop an understanding of contemporary racial conditions.
A recent white paper submitted to the National Science Foundation (Page, 2011) proposes that the social sciences develop methodologies for measuring and categorizing the complexity of social processes and structure interdisciplinary research to unpack how purposive actors respond to incentives, information, and cultural norms and how their psychological predispositions interact to produce social outcomes. The Office of Behavioral and Social Sciences Research at the National Institutes of Health (NIH) joined with 11 other NIH institutes in requesting research proposals to develop projects that use systems science methodologies relevant to understanding and explaining behavioral and social issues in health (described in Consortium of Social Science Associations, 2011). NIH also sponsored a mini symposium in July 2011 on how systems science can be used to inform public policy, using childhood obesity as an example.2 More recently, NIH announced a funding opportunity to develop theory and methods to better understand complex social behavior through a systems perspective (National Institutes of Health, 2012b). Along the same lines is the call of the James S. McDonnell Foundation, as part of its 21st century science initiative, to develop tools for the study of complex, adaptive, nonlinear systems in a variety of fields, including biology, biodiversity, climate, demography, epidemiology, technological change, economic development, governance, and computation. The 2008 Global Science Forum of the OECD focused on complexity science for public policy (OECD, 2009).
2A videocast of the symposium, “Harnessing Systems Science Methodologies to Inform Public Policy: Systems Dynamics Modeling for Obesity Policy in the Envision Network,” is available: http://videocast.nih.gov/summary.asp?file=16756 [February 2012].
“Perhaps the most important location” where systems thinking is called for “is in making decisions and crafting policies that help navigate the complex structures that populate the world in which we live” (Sterman, 2006, p. 513). Moreover, because there is a lack of “a meaningful systems thinking capability,” policies “often fail or worsen the problems they are intended to solve.” In a world that is interconnected, “Systems thinking is an iterative learning process in which we replace a reductionist, narrow, short-run, static view of the world with a holistic, broad, long-term dynamic view, reinventing our policies and institutions accordingly” (Sterman, 2006, p. 509). A systems perspective is compatible with many forms of scientific investigation, including the effort to produce knowledge about the efficacy and effectiveness of policy interventions. Moreover, particular methods, such as agent-based modeling, can be evaluated with experimental designs to determine whether the interventions operate as expected. The proponents of system-based approaches recognize that experiments needed for these evaluations may be quite complex and that data may be based on simulations rather than measurement, but they have concluded that studies of complex systems should be anchored in sound quantitative methods.
Systems thinking is often important to understand the consequences of policies. A former assistant director of the National Science Foundation (Bradburn, 2004, p. 39) wrote:
Governmental policies are blunt instruments to bring about social change. They almost never consider the dynamics put in motion by those changes. Thus, they inevitably suffer from unintended consequences. These unintended consequences are often large enough to nullify the positive effects of the policies or, even, to produce the opposite effect from that intended.… I approach [this issue] from the perspective of a social systems theorist and fault applications of social science analysis and research that fail to think through the dynamics of social systems and to pursue research that enables us to model more completely the effects of policy changes. I do not underestimate the difficulty of this task, but it is the direction that I think social sciences must be going.
We obviously strongly endorse social science continuing to improve its capacity to assess conditions and to help design and evaluate policies directed at those conditions. But this indispensable work provides little information about whether what is learned is used. Improving the scientific understanding of what occurs at the science-policy intersection requires going beyond the focus on what research “use” means and beyond the effort to produce better science.
Social science has methods and theories that can significantly expand our understanding of whether what is learned is used and can, in the process, add a new dimension to what science offers to policy. Our perspective urges broad social science attention to what happens during policy arguments, with a specific focus on whether, why, and how science is used as evidence in public policy.