This report is about using science as evidence in public policy. Science identifies problems—endangered species, obesity, unemployment, and vulnerability to natural disasters or bioterrorism or cyber attacks or bullying. It measures their magnitude and seriousness. Science offers solutions to problems, in some instances extending to policy design and implementation, from improved weapons systems to public health to school reform. Science also predicts the likely outcomes of particular policy actions and then evaluates those outcomes, intended and unintended, wanted and unwanted. In these multiple ways science is of value to policy, if used.
The report title—“using science as evidence in public policy”—takes on a specific meaning in this report. Policy makers offer reasons for their policy actions, reasons that bear on whether to take action at all, that address the interests and values at stake, and that claim the policy will work as intended, without unwanted consequences. These reasons are embedded in a policy argument; and a policy argument, to borrow a term from philosophy, is a form of practical reasoning. The term “argument” here has no pejorative implications. A policy argument is intended to persuade others to accept the reasons supporting or opposing a policy action.
In this report, a general term, “using science in public policy,” has a precise meaning: knowledge based in science is presented as evidence to support
reasons used in a policy argument. Knowledge based in science is broadly taken to mean data, information, concepts, research findings, and theories that are generally accepted by the relevant scientific discipline. Science is not the only source of knowledge used in policy argument—beliefs, experience, trial and error, reasoning by analogy, and personal or political values are also brought to bear. How science interacts with nonscientific reasons given for public policies is among the issues we address, especially the complicated but inevitable interaction of politics, values, and science.
“Use” is another key term in the report. We review how it is defined and studied in the research specialty known as knowledge utilization. We consider what is known about whether, when, and why use occurs, the various efforts to improve use, and how the current interest in evidence-based policy relates to use. The report focuses on what is poorly understood about use and what might be better understood if social science research shifted its focus from defining use to studying what occurs in policy arguments when relevant science is available.
“Policy” is broadly construed in this report. It is used to describe specific and detailed adjustments to established policies, such as modifying the rate at which capital gains are taxed. It is also used for more general topics, such as school reform or deficit reduction, each of which can encompass dozens of discrete policy choices and instruments. And it is used even more broadly to reference policy domains, such as welfare policy or security policy. We even stretch the term to include the broadest of national policy goals, such as strengthening the market economy or protecting the civil rights of all Americans, which involve hundreds of discrete policies adopted and modified over decades. The general principles laid out in this report would be applied differently depending on the level of policy specified, on the particular policy sector (e.g., social welfare or national security) and on whether the policy target is a current condition, such as stopping illegal immigration, or one anticipated years or even decades hence, such as future energy needs in a world of 9 billion people. These differences matter, but we do not take them up. We consider what it means for science “to be of use” in a framework that does not depend on a carefully formulated definition of policy.1
1We restrict attention to the use of science in government public policy. There are of course other arenas where policies with public consequence are made—business policies about product lines or investment strategies, university policies about diversity initiatives or tenure criteria, and advocacy group policies about pressure tactics or fundraising goals. Although points made in this report are applicable beyond the arena of government policy, this is not our topic.
It is also important to say what the report is not about. It is not about the impact of science on society or about the payoff of investing in social science. These issues are being actively discussed in leading scientific institutions and in funding agencies, and we discuss this heightened interest in Chapter 2. Clearly, unused science cannot have any impact, but use does not equal impact. Assessing impact and, beyond impact, return on investment requires analysis beyond the scope of this committee’s charge. Our focus is restricted to use.
This report is addressed to scientists in general and to social scientists in particular. The use of science as evidence in policy making—irrespective of its disciplinary source—is a social phenomenon, and therefore a proper object of analysis for the social sciences. We present a research framework that can improve the scientific understanding of the use of science in public policy. Although some argue that the improved use of science will lead to improved policy choices, that is not our claim here. The question of what “improved” policy or “better” policy making entails and on what criteria such improvements might be judged is beyond our scope. What science does, with lesser to greater certainty and confidence, is describe conditions of interest to policy makers (or that might come to interest them when they are described), probe into natural and social conditions that may give rise to the need for policy action, predict what is likely to happen if action is taken (or not taken) to address those conditions, and, once an action is taken, explain what did happen and why.
Scientists—when they are practicing science—do not tell policy makers what should interest them or what policy choices they should make. Scientists deal with accurate description of conditions and with explanations about the causes or consequences of those conditions. Physicists and mathematicians at Los Alamos estimated the destructive consequences of the atom bomb. Social scientists in the Office of Strategic Services (predecessor to the Central Intelligence Agency) estimated the bomb’s effect on Japan’s civilian morale. Scientists could say, with varying degrees of certainty, that, if an atomic bomb is dropped, the consequences are likely to be this rather than that. There was no scientific basis on which to say whether to drop the bomb. That decision fell to President Harry S. Truman and his political and military advisers, who had to weigh factors in addition to those based in science.
Science does, however, bring one special asset to the table. It is a process of producing knowledge directed by systematic and rule-governed efforts that guard against self-deception—against believing something is true because one wants it to be true. We are not claiming that scientists are immune to self-deception; we are claiming that correctly doing science results in disinterested knowledge. For this reason, when the question on the table is what are the “real” conditions or what will “probably” happen if we implement one policy instead of another, science is on balance a more dependable and defensible guide than informed hunches, analogies, or personal experience.
Dependable and defensible does not equal certainty. Science is always uncertain and can, over time, be wrong—19th century race science, for example. But, of course, no source of knowledge or mode of reasoning escapes uncertainty and error when it comes to assessing what policies do or fail to do. Scientific investigations—whether in geology, biochemistry, epidemiology, or sociology, and across the policy issues each addresses, from toxic waste disposal to bioterrorism, infectious diseases, and social violence—will, on balance, be a more dependable ground on which to argue that a policy action will or will not have certain effects than other sources of knowledge. Whether policy makers use the results of scientific investigations is an altogether different matter, and the subject of this report.
There are several social sciences and an even greater number of methods, approaches, theories, and research strategies in something as broad and indeterminate as understanding the human condition. What the social sciences share is their analytic focus on the behavior, attitudes, beliefs, and practices of people and their organizations, communities, and institutions. The social sciences study social phenomena, including social phenomena conditioned and caused by or responsive to matters that are investigated in the natural sciences—earthquakes, infectious diseases, ocean currents.
In applying the label science to specialties ranging from cultural anthropology to neuropsychology, we use the term differently from the way it is used in disciplines, such as physics or chemistry, which have a well-developed set of comprehensive, generative theories that both explain and predict phenomena. Social science may be understood by some of its practitioners in this way, but we favor what is indicated by the German term “Wissenschaft” and its linguistic equivalents that refer to any disciplined, systematic inquiry
with established methods and rules of evidence and inference that protect the investigator from self-deception.2
Many conditions at stake in a policy choice are not social—collapsing bridges, atmospheric pollution, species loss. Evidence from engineering, chemistry, and ecology describes those conditions and their causes. Yet even when the policy is about physical or biological conditions, the need to consider the human actor is seldom absent when considering policy options. Biochemistry and epidemiology show that smoking is dangerous to health; different social sciences assess policy options to reduce tobacco use: increasing the cigarette tax (economics), restricting where people can smoke (political science, social psychology), requiring warning messages (social psychology). Geology and physics assess the leakage risks of storing nuclear waste at a proposed repository, but safety also depends on a warning symbol that can communicate radiation danger for thousands of years, and, for that, linguistics, anthropology, and other social sciences involved in risk communication are needed. There is a large and growing list of policies guided by natural and social science. Topics in the disciplines of science, technology, engineering, and mathematics are matters for experts in these fields. What topics can be taught, at what levels, and how to teach the topics effectively are matters for educational psychologists and learning experts.
We begin to see that there are two ways in which social science matters to policy. First, social science contributes to understanding conditions and consequences of concern to policy makers; second, social science has methods and theories applicable to investigating the use of science in policy. Use, we have said, is itself a social phenomenon. Use occurs in specific kinds of social organizations—executive agencies, legislatures, or expert committees—each conditioned by organizational norms, cultures, and patterns of interaction that are studied in sociology, social psychology, and organizational specialties. Use involves political choices in a wide variety of policy settings and thus is a topic for researchers in political science and public administration who investigate policy networks, intermediaries, lobbyists, knowledge brokers, and institutional rule making. Use is a particular kind of decision making and is examined with concepts from philosophy, such as argumentation and practical reasoning, as well as psychological theories, such as behavioral decision theory. Use depends on users learning what sources of knowledge are dependable guides, and is investigated using cognitive theory at the individual level and sociocognitive theory at organizational levels. Use is highly contextual, conditioned by situated norms and habits, and is studied anthropologically and sociologically. Finally, use of science in policy can be seen as selecting among bodies of knowledge or expert opinion; it is then a topic in the sociology of knowledge, including science and technology studies.
2Science fraud is a deliberate effort to deceive others, to persuade them to believe what is known to be false. Fraud is not our concern in this report, except to make the obvious point that it can undermine the confidence of policy makers looking at scientific evidence and not knowing if it is responsibly or fraudulently produced.
In summary, the social sciences have two responsibilities. The first is to accurately describe human behavior and social conditions, including their causes and consequences, and, when policies are implemented to change those behaviors and conditions, to assess the consequences. This responsibility is most frequently discussed as social science investigation of behavior and social conditions. But we emphasize that the responsibility extends to many policies that address natural conditions, when the policy intends, anticipates, or will be affected by changes in human behavior and social structures.
The second responsibility of the social sciences is to focus their formidable array of methods and theories on understanding how social and natural scientific knowledge is used as evidence in the policy process. This responsibility is anticipated in the committee’s statement of task (see Box 1-1) and developed in detail in the report.
Statement of Task
The committee will develop a framework for further research that can improve the use of social science knowledge in policy making. The committee will review the knowledge utilization and other relevant literature to assess what is known about how social science knowledge is used in policy making. The framework will indicate the potential for new ways of understanding the use of social science knowledge in policy making. The framework will also have implications for the content and scope of training in schools and programs that prepare students for careers that use social science knowledge in policy making.
A familiar argument views science as a means of rescuing policy from short-sighted influence peddling and power politics (DeLeon, 1988; Dryzek and Bobrow, 1987; Majone, 1989; Stone, 2001). The view that science can be a counterweight to self-interestedness in politics and thereby ensure that policy reflects the public interest has a distinguished tradition, dating to the American progressive movement and famously voiced even earlier by Woodrow Wilson (1901) in his Ph.D. thesis, Congressional Government: A Study in American Politics. That view—which could be found as well in the early 20th century among English new liberals and European Christian and social democrats—held that modern knowledge of society, grounded in the new social sciences, could generate useful policy ideas based on putatively objective and factual bases. Henig (in press) has described the influence of this way of thinking on education policy:
The argument that politics is the enemy to be kept at bay has been influential in shaping America’s thinking and its actions, both historically and on the contemporary scene. It informed and justified structural changes successfully promoted by the Progressive Reformers of the early 20th century. “There is no Democratic or Republican way to pave a street,” was a slogan of the time, with the implication that there was, instead, an objectively correct way, best determined via technical and scientific expertise. Policies like teacher certification, civil service protections, and the formal assignment of education policy making to school boards independent from municipal governments and the political machines that often controlled them were portrayed as a way to empower the experts, who would both know and respect objective data, and explicitly buffer them from political interference, patronage politics, and faddish and emotion-driven popular whims.
This tradition has contemporary adherents. The Urban Institute, in making the case for evidence-based policy, states that a “question that figures into all public policy decisions—What political and social values do the proposed options reflect?—is largely outside the scope of evidence-based policy” (Dunworth et al., 2008, p. 1). The hope that science could be
separated from politics is summarized (although not endorsed) by Deborah Stone (2001, p. 376):
Inspired by a vague sense that reason is clean and politics is dirty, Americans yearn to replace politics with rational decision-making. Contemporary writings about politics, even those by political scientists, characterize it as “chaotic,” “the ultimate maze,” or “organized anarchy.” Politics is “messy,” “unpredictable,” an “obstacle course” for policy and a “hostile environment” for policy analysis.… Policy is potentially a sphere of rational analysis, objectivity, allegiance to truth, and the pursuit of the well being of society as a whole. Politics is the sphere of emotion and passion, irrationality, self-interest, shortsightedness, and raw power.
Holding to a sharp, a priori distinction between science and politics is nonsense if the goal is to develop an understanding of the use of science in public policy. Policy making, far from being a sphere in which science can be neatly separated from politics, is a sphere in which they necessarily come together (Jasanoff, 1990). As suggested in the Urban Institute quotation, “evidence-based policy” stops where politics and values start. Our position is that the use of that evidence or adoption of that policy cannot be studied without also considering politics and values.
For both descriptive and prescriptive reasons, then, evidence-influenced politics is a more informative formulation than evidence-based policy. It is descriptively informative in the sense that it occurs whenever scientific evidence enters into political deliberations about policy options, and this occurs much more regularly than the apolitical, narrowly focused activities characteristic of evidence-based policy. We support this assertion throughout this report, starting below in the section on democratic theory. Evidence-influenced politics is also prescriptively important. Policy routinely involves value and related considerations that are outside the expertise of science. Even when values are at stake, scientists can legitimately advocate for attending to knowledge that accurately describes the problem being addressed or that predicts probable consequences of proposed actions. It is our normative position that if policy makers take note of relevant science, they increase the chances of realizing the intended consequences of the policies they advance. This is evidence-influenced politics at work.
The relative weight in any policy choice of the three strong forces—political considerations, value preferences, scientific knowledge—shifts depending on many factors; a short list includes
• the accuracy and persuasiveness of the descriptive analysis of the targeted social condition;
• the reliability of instruments and data sets used to assess the magnitude, gravity, and trajectory of the condition;
• the level of certainty about the direction and strength of causal inferences linking intervention to desired outcome;
• whether the task is evaluating what has happened or estimating what will happen;
• the weight accorded to knowledge that comes from experience and practical expertise;
• the level of concern about unwanted or unplanned consequences;
• the social values at stake, and how widely they are shared; and
• the power base of organized political interests.
Some mixture of politics, values, and science will be present in any but the most trivial of policy choices. It follows that the use of science as evidence can never be a purely “scientific” matter; it also follows that investigating use cannot focus exclusively on the methods and organizational settings of knowledge production or on whether and how research findings are clearly communicated.
Rigorous investigation of how science is used in the United States has to start with the principles and realities of the nation’s democratic politics. Obviously our treatment of such a vast terrain is highly selective, commenting on only a few issues to illustrate a broader point: there is no way to examine “using science in public policy” apolitically. Our selective entry point is the theory of democratic accountability. This theory emphasizes electoral competition among ambitious people who want power and want to retain it after they get it. (See Schumpeter, 1942, for a representative treatment of this theory.) To realize their political ambitions, aspiring or incumbent leaders “count the votes.” This is critical to democratic accountability. When leaders are indifferent to the strength of their political support, the link between democratic accountability and elections is correspondingly weaker. Making policy choices based, even in part, on gaining or retaining majority support
is, for Schumpeter and others, a necessary feature of democratic accountability. Counting the votes, however, can lead to “ignoring the evidence” about policy consequences in favor of responding to voter preferences. The tension in choosing between being a trustee of the public good or a delegate responsive to one’s voting constituency—eloquently expressed by Edmund Burke in the 18th century—is inescapable in a democracy.
A similar logic holds for interest group politics. Politics enters the policy process through organized interests, which invest resources—estimated at $3.49 billion in 2010 (Center for Responsive Politics, 2011)—to directly influence policy.3 This process, like electoral politics, may ignore, downplay, distort, or vociferously contest scientific knowledge that fails to support a group’s desired policies. But the suppression of interest groups’ preferences is not an option in a functioning democracy. Institutional arrangements in democracies are, after all, designed around the assumption that policy choices are contested.
Democratic political theory also places values at the center of politics. Esterling (2004) contrasts normative and instrumental reasoning, making the point that arguments for why a policy is desirable or undesirable can be made independently of its immediate social consequences. Legislators might agree with science showing that mandating helmets for motorcyclists reduces highway fatalities, and yet disagree about whether to “use” the science. To accuse a libertarian who prefers minimal government and maximum individual choice of “ignoring the evidence” about fatality rates misses the point. Just as electoral calculations and interest considerations cannot be suppressed in a democracy, neither can value preferences. In fact, political principles, such as the First Amendment, are designed to promote forceful value expression.
The neoconservative critique of the social welfare state blended scientific and normative arguments. Wilson (1996, p. viii) described the law of unintended consequences as an “article of faith common to almost every adherent” of neoconservatism:
Things never work out quite as you hope; in particular, government programs often do not achieve their objectives or do achieve them with high or unexpected costs.… Neoconservatives, accordingly, place a lot of stock in applied social science research, especially the sort that evaluates old programs and tests new ones.
Other voices in the neoconservative movement, with a less scientific bent than Wilson, simply started from the premise that the market is superior to the state in producing solutions to social problems ranging from poverty to education. The Heritage Foundation writes that its mission is “to formulate and promote conservative public policies based on the principles of free enterprise, limited government, individual freedom, traditional American values, and a strong national defense.”4
If democratic politics invites competition for power, contesting interests, and the expression of diverse values—all of which interact in complicated and not always welcoming ways toward science at the policy table—another feature of democracy more clearly does open space for science. Democracy rests on the obligation of rulers to give reasons for policies. It is not acceptable to say “Fight this war or pay this tax because I am your ruler and I say so.” The obligation to provide reasons generally involves explaining that a given policy will prevent a social harm or advance a desired public welfare goal—such as why one public health intervention rather than another saves lives, why security practices are needed to protect against terrorism, or why increasing teacher salaries will improve educational outcomes. When there is a scientific basis for a proposed policy—about the effectiveness of a vaccine or the deterrent effect of airport security or the correlation between teacher pay and student performance—and the reason given for the policy is the effects it will produce, the use of science provides more dependable as well as more defensible reasons than does unsupported presumption or speculation.
Here, however, we again emphasize that a dependable and defensible reason will not necessarily be used just because it is available. Re-election concerns, interest group pressure, and political or moral values may be given more weight and may draw on reasons outside the sphere of what science has to say about likely consequences. A democracy as readily allows the conservative mission of the Heritage Foundation noted above as it does the liberal agenda of the Center for American Progress, which is “dedicated to improving the lives of Americans through progressive ideas and action.”5
We summarize this brief foray into democratic theory with a current policy debate: school choice. Nothing inherent in this issue required that it be framed as putting “market solutions” on one side of an ideological divide and “government’s responsibility for public welfare” on the other.
Charter schools, for example, were initially favored by educators and parents in order to escape “rigid and monotone bureaucracies, to be free to start schools employing innovative pedagogies, to allow families having a bad experience with their neighborhood school to look for a better fit for their child without having to exit the public system” (Henig, 2009, p. 148). Conservative foundations, which had been advocating for a universal school voucher system, turned to charter schools as a better test case for claiming that market choice was inherently superior to government provision of social services, including education. Advocates on the left, who might otherwise have defended charter schools as a progressive public-sector reform, opposed them in making “a tactical decision to fight the battle on this market versus public education ground” (Henig, 2009, p. 148). This tactical decision rested on the assumption that Americans had a deep allegiance to public education.
This was democratic politics at work. Partisan and ideological lines formed and hardened in ways that affected the role of science. Prospects “quickly faded that research could easily and simply unfold, methodologically and systematically driven by its own internal logic” (Henig, 2009, p. 148). Instead, research became enmeshed in the battle over clashing values and partisan interests.
Yet that is not the entire story. Researchers who sharply differ on whether charter schools yield positive effects, attacking each other’s methods in the process, nevertheless agree on an important and by now familiar finding: factors outside the school, most particularly the role of family and community, account for more of the variation in school outcomes than do a school’s characteristics, in this case whether it is a charter or a traditional public school.
[T]he core of the research enterprise has not been corrupted … below the radar screen the collective enterprise of research is performing more or less as we might hope it would.… Good studies, as they accumulate, are pushing weaker studies to the margins, and studies claiming large, uniform, and unambiguous results are in some instances revealed to be unreliable outliers. (Henig, 2009, p. 143)
In the charter school example, all three forces—politics, values, science—are in the mix. The use of science cannot but be affected by how a policy issue is framed, and that initial step is largely beyond the reach
of science. Yet science as it accumulates can reduce the range of political disagreement.
Commentary on the use of science in public policy frequently argues that its use will produce better policy or improve policy making. We offer a narrower but, we believe, more scientifically sound position, particularly with reference to the social sciences. Social science does not promise “better policy.” It is not social engineering, misguided accusations notwithstanding. It is, simply, a guide to understanding problems, the conditions that give rise to those problems, and the outcomes likely to occur when policy addresses those problems. In this very specific sense, the social as well as the natural sciences are a more reliable (“better”) guide than what is otherwise available to policy makers considering many issues.
The United States has established a loose but large network of institutions and practices focused on providing scientifically grounded descriptions and causal explanations of conditions that are or could become the object of policy attention. The next chapter uses the shorthand term “policy enterprise” to describe this network. Its workings, its funding, and its purposes are the proximate context for a fresh examination of the science-policy nexus generally and the issue of use in particular.
Chapter 3 moves to the substantive material of the report, reviewing how knowledge use has been studied over the last half-century, what has been learned from that research effort, and what remains poorly understood. Chapter 4 presents a research framework, briefly summarizing selected concepts and research fields—especially related to practical reasoning, cognitive and social psychology, and systems thinking—for their application to deepening understanding of how science interacts with policy. The final chapter explains who needs to do what to advance the research framework outlined in Chapter 4. Appendix A reviews selected research methods that are particularly appropriate for research related to public policy when the social science task is to describe causes and consequences of social conditions and to assess the outcomes when policy tries to change those conditions. Appendix B contains the biographical sketches of committee members and staff.