With the arrival of big social science and the growth of the policy enterprise, the federal investment in social science brought attention to whether the knowledge being produced was being used. Research on what was labeled “knowledge utilization” got under way. We address that research under three headings: decisionism and its critique, the metaphor of two communities (researchers and policy makers), and the evidence-based policy and practice initiative.
As an introduction to these issues, we take brief note of the characteristics of our three central topics—social science, policy, using science—that challenge any attempt at a comprehensive account of the when, how, and why of science use in policy.
Scholarship on what happens at the interface of science and policy has to contend with two phenomena—policy making and use—that are particularly difficult to define. To begin with, investigations of these phenomena are launched in different disciplines, including anthropology, political science, psychology, and sociology and their myriad subfields and cross-fields, from science and technology studies to political psychology, from behavioral economics to historical sociology. Each of these fields has its own established principles of evidence and inference. They use different methods—experimental, analytic, quantitative, and qualitative. They work
at different levels of analysis—from individual behavioral decision theory to systems theory. They focus on different processes: from structural determinism and constrained probabilities at one end of a continuum to willful effort and chance happenings at the other. They draw on epistemologies as varied as positivism, critical realism, and postmodernism. Individual social scientists bring different motivations to their work—from expansion of theoretical knowledge to practical problem solving, from mapping policy options to advocacy of particular policies. Social scientists bring their expertise to universities, think tanks, the media, advocacy groups, corporations, and government agencies. This range—across fields of study, individual motivations, and career lines—produces substantial variability, which in turn shapes the way the science-policy nexus is framed.
Complicating matters is the absence of a generally accepted explanatory model of policy making. Instead, multiple descriptive policy process models offer ways to understand how policy is made and how science might enter into that process. There are, for example, rational models—including linear, cycle or stage, incrementalism, and interactive. There are models that question rational model assumptions, including behavioral economics, path dependency, and bureaucratic inertia. There are political models, including policy networks, agenda setting, policy narratives, advocacy coalition frameworks, punctuated equilibrium theory, and deliberative analysis models (see Baumgartner and Jones, 1993; Hajer and Wagenaar, 2003; Kingdon, 1984; Lindblom, 1968; Neilson, 2001; Sabatier, 2007; Sabatier and Jenkins-Smith, 1993; Stone, Maxwell, and Keating, 2001).
There are models that focus on different stages of the policy process and thus on different ways that social science can contribute, including descriptive analyses that present conditions needing policy attention, such as a slowdown in small business start-ups; social indicators that document long-term trends, such as gender differences in pay scales; social experiments on alternative policy designs, such as school vouchers; and evaluation research on the effectiveness of a policy, such as neighborhood policing.1
Political science is the discipline that has devoted the most attention to the policy process. On the issue of use, it has reached a general conclusion (Henig, in press):
1For a careful discussion of how evidence is used at different stages of the policy process, see McDonnell and Weatherford (2012).
[T]he main thrust of the political science literature serves as a warning against idealized visions of pure data being applied in depoliticized arenas. Although generalizations about an entire discipline inevitably are oversimplifications, the center of gravity within the field encourages skepticism about proposals for a rational, comprehensive, science of public policy making and regards data and information as sources of power first and foremost.
It is difficult to assess how widely this characterization is accepted outside of political science, but it is clear that the various models and frameworks do not coalesce into anything remotely resembling a powerfully predictive, coherent theory of policy making. Lacking that, it is improbable and perhaps impossible to reach a widely agreed-upon understanding of the use of science in policy making. “Use” itself, consequently, is elusive, seen differently depending on the perspectives brought to it and the policy and institutional arenas in which it is investigated (Neilson, 2001; Webber, 1991; Weiss, 1991). A political psychologist at the Central Intelligence Agency concerned with what transforms an angry, unemployed teenager into a terrorist uses research evidence very differently from an economist at the RAND Corporation designing a randomized controlled field trial (RCFT) on classroom size and school performance. Many researchers underscore the conceptual confusion about use and conclude that different definitions of use are needed and appropriate for different purposes (e.g., Oh, 1997; Rich, 1997; Weiss, 1979).
This conclusion is consistent with the fact that policy choices are context dependent. A school district deciding whether to establish charter schools is less interested in a comparative study of charter and public schools across the country than in knowing how well a charter school will perform under its own conditions, which differ depending on whether the district is in the central city or a suburb, with a homogeneous or diverse population, with a historically competent or incompetent school administration. The usefulness of research is assessed not in terms of variance explained from a large sample of schools, but in terms of whether it is informative about a very specific choice.
Given the context-dependent nature of the use of science, typologies are a common way of mapping the landscape (for a summary, see Nutley et al., 2007; see also Bogenschneider and Corbett, 2010; Renn, 1995). A frequently cited typology is that of Weiss (1979, 1998; see also Weiss et al., 2005):
• Instrumental uses occur when research knowledge is directly applied to decision making to address particular problems.
• Conceptual uses occur when research influences or informs how policy makers and practitioners think about issues, problems, or potential solutions.
• Tactical uses involve strategic and symbolic actions, such as calling on research evidence to support or challenge a specific idea or program—for example, a legislative proposal or a reform effort.
• Imposed uses (which are perhaps a variant of instrumental uses) describe mandates to apply research knowledge, such as a requirement that government budgeting be based on whether agencies have adopted programs backed by evidence.
Other scholars add a fifth category, symbolic or ritualistic use—that is, the organizational practice of collecting information with no real intent to take it seriously, except to persuade others of a predetermined position or even to delay action (Leviton and Hughes, 1981; Shulha and Cousins, 1997). It is a frequent complaint among scientists that policy makers use scientific evidence as confirmation of prior beliefs. This complaint, however, overlooks the fact that, when policy makers argue on the basis of evidence, it is more difficult for their opponents to ignore that evidence, or to leave it unchallenged. “My science versus your science” has the merit of putting science in play, and over time opens more space for policy arguments that include scientific evidence.
Weiss emphasizes that each of the four uses—as well as the fifth noted above—can be found in particular situations, but that no one of them offers a complete picture. Scholars who debate typologies of use generally conclude that, although typologies are heuristically valuable, they are not easily applied empirically. Boundaries are blurred, and access to users’ cognitive processes is unattainable. In fact, it is unlikely that users themselves can make sharp distinctions in explaining how they use knowledge (Contandriopoulos et al., 2010). The empirical application of typologies in research is difficult because use is “a dynamic, complex and mediated process, which is shaped by formal and informal structures, by multiple actors and bodies of knowledge, and by the relationships and play of politics and power that run through the wider policy context” (Nutley et al., 2007, p. 111).
Typologies of use fail to meet the standard criteria of scientific typologies in which each category consists of an internally coherent set of variables,
with the value of each variable predictably correlating with the values of each of the other variables in that particular category. In the periodic table of chemical elements, for example, hydrogen is distinguished from other chemical elements by its atomic weight, its specific gravity, its bonding properties, the temperature at which it freezes and boils, and other traits. Each of these traits differs consistently and predictably from those same traits in helium or in any other chemical element (see Stinchcombe, 1987). In the social world it is impossible, in any practical sense, to construct typologies that meet this standard. Typologies of social conflict, ethnic or racial groups, or government corruption are never going to have categories with internally coherent variables whose values covary in completely predictable ways. It is unrealistic to expect a clear and unambiguous typology for a phenomenon as complex as the use of science in policy.
To address the charge given to this committee—to understand the use of science in policy—is thus to simultaneously deal with three elusive phenomena:
• Scientific findings that come from multiple sources and are at times contradictory;
• A policy-making process that is variable along many dimensions; and
• A phenomenon, “use,” that changes its meaning depending on the perspective brought to it and one’s location in the complex space where policy is made.
With this challenging landscape in mind, we turn to the recent scholarship on knowledge utilization.
Decisionism and Its Critique

The scholarship on knowledge utilization has, virtually from its beginnings, been skeptical of rational models of the relationship between research and policy. Rational models assume that decisions unfold through five stages (Nutley and Webb, 2000, p. 25):
1. A policy problem requiring action is identified and goals, values, and objectives are clearly set forth;
2. All significant ways of addressing the problem and achieving the goals or objectives are enumerated;
3. The consequences of each alternative are predicted;
4. The consequences are then compared with the goals and objectives; and
5. A strategy is selected in which consequences most closely match the goals and objectives.
Weiss and Bucuvalas (1980, p. 263) summarized the essence of this model: “a decision is pending, research provides information that is lacking, and with the information in hand the decision maker makes a decision.” Rational models have also been characterized as “decisionism”—“a limited number of political actors engaged in making calculated choices among clearly conceived alternatives” (Majone, 1989, p. 12; see also Rein and White, 1977; Rich, 1997).
Criticisms of this model have focused on several significant defects: for example, the assumption that decisions are optimal, that is, based on complete information and an examination of all possible alternative courses of action (see the work of Simon, who introduced satisficing as a replacement for maximizing); or the assumption that the model is a normative rather than a descriptive account of policy making (see the work of Braybrooke and Lindblom and of Lindblom, who substitute incrementalism for rational models). Other critics argue that rational models underemphasize or ignore the important role that value judgments play in policy arguments (Brewer and deLeon, 1983); or that linear problem solving is “wildly optimistic,” because it “takes an extraordinary concatenation of circumstances for research to influence policy decisions directly” (Weiss, 1979, p. 428).
More recent examinations of the relationship between research and policy making echo these concerns. For example, Gormley (2011, pp. 978-979) notes:
A hypodermic needle theory of scientific impact on policy, which anticipates direct, immediate, and powerful effects, is flawed for several reasons. First, scientific research is one of many inputs into the policy process.… Second, scientific knowledge accumulates through multiple studies, some of which reach different conclusions.… Third, the applicability of a given study to a particular policy choice is a matter of judgment.… Fourth, scientific research is translated, condensed, repackaged, and reinterpreted before it is used. Fifth, the use of scientific information by public officials, when it occurs, is more likely to involve justification (reinforcement of a prior opinion) than persuasion (conversion to a new opinion).
Although we share Gormley’s view, there are situations in which discrete decisions are directly triggered by the use of some specific scientific knowledge—for example, the direct, even formulaic translation of census results into congressional apportionment or formula-based fund allocations that are legislatively required. There also are situations in which a user is considered sovereign in her or his capacity to mobilize evidence and, consequently, to modify her or his behavior on the basis of that evidence—for example, the choice of a preferred clinical treatment (Contandriopoulos et al., 2010). But these examples are exceptions to the rule, and uncommon at that. It is estimated that evidence-based programs accounted for less than 0.2 percent of nonmilitary discretionary spending in fiscal 2011.2
In almost all decision-making situations, the use of science takes place in “systems characterized by high levels of interdependency and interconnectedness among participants” (Contandriopoulos et al., 2010, p. 447). No single decision maker has the independent power to translate and apply research knowledge. Rather, multiple decision makers are embedded in systemic relations in which use not only depends on the available information, but also involves coalition building, rhetoric and persuasion, accommodation of conflicting values, and others’ expectations.
In criticizing rational models and decisionist thinking, Weiss and others suggest that use is less a matter of straightforward application of scientific findings to discrete decisions and more a matter of framing issues or influencing debate (Weiss, 1978, p. 77):
Social science research does not so much solve problems as provide an intellectual setting of concepts, propositions, orientations and empirical generalizations.… Over a span of time and much research, ideas … filter into the consciousness of policy-making officials and attentive publics. They come to play a part in how policy makers define problems and the options they examine for coping with them.
2The George W. Bush administration piloted a program linking federal financing to clear demonstration of program effectiveness. These evidence-based programs “accounted for about $1.2 billion out of a $670 billion budget for nonmilitary discretionary programs in the 2011 fiscal year” (Lowrey, 2011).
Although Weiss suggested that this enlightenment model is perhaps the way science is most frequently used in policy making, she did not claim it was the way it ought to happen. “Many of the social science understandings that gain currency are partial, oversimplified, inadequate, or wrong.… The indirect diffusion process is vulnerable to oversimplification and distortion, and it may come to resemble ‘endarkenment’ as much as enlightenment” (Weiss, 1979, p. 430).
In sum, the research on knowledge utilization reflects a consensus about what should be ruled out: (1) that the science/policy nexus can be uniformly understood in terms of rational decision-making models; (2) the assumption of a specified single actor with freedom to achieve goals formulated through a careful process of rational analysis characterized by a complete, objective study of all relevant information and options; and (3) the definition of use as problem solving in the sense of a direct application of evidence from a specific set of studies to a pending decision. Although evidence may occasionally be used in such narrow ways, these depictions of “use” do not accurately reflect the full realities of policy making.
Knowledge utilization research, in agreement about what is ruled out, is less clear about what should be ruled in. It has, however, pointed to the importance of closing the distance between the “two communities” of scientists and policy makers.
The Metaphor of Two Communities

Viewing use from the perspective of two communities has been a recurring motif in knowledge utilization studies (see Caplan, 1979). The basic idea is refreshingly simple. Scientists and policy makers are separated by their languages, values, norms, reward systems, and social and professional affiliations. The primary goal of scientists is the systematic search for a reliable and accurate understanding of the world; the primary goal of policy makers is a practical response to a particular public policy issue.
Like any binary distinction, this one oversimplifies, though there is a crude truth to several distinctions rooted in the different tasks facing researchers and policy makers. They differ in the outcomes they value—knowledge about the world in all its complexities versus knowledge helpful in reaching feasible solutions to pressing problems—and in the incentives, rewards, and cultural assumptions associated with these different outcomes. They also differ in habits of expression—probabilistic versus certain statements about conditions or people. And they differ even in modes of
thought—deductive and general versus inductive and particular (Szanton, 2001, p. 64). This difference has been described as “research think” versus “political think”: the “culture of the researcher tends to add complexity and resist closure. The culture of the political actor tends to demand straightforward and easily communicated lessons that will lead to some kind of action” (Henig, 2009, p. 144).
Differences between the two communities are associated with a contrasting list of supply-side and demand-side problems (Bogenschneider and Corbett, 2010; Fuhrman, 1994; Nutley et al., 2007; Rosenblatt and Tseng, 2010). On the supply side are researchers who fail to focus on policy-relevant issues and problems, cannot deliver research in the time frame generally necessary for effective policy making, do not relate findings from specific studies to the broad context of a policy issue, ineffectively communicate their findings, depend on technical arguments that are inaccessible to policy makers, and lack credibility because of perceived career interests or even partisan biases. On the demand side are policy makers who fail to spell out objectives in researchable terms, have few incentives to use science, and do not take time to understand research findings relevant to pending policy choices.
This framing of the use problem offers little guidance as to which of the long list of factors, from either side, best explains variance in use, let alone how the factors interact and whether they apply only in specific settings or have general applicability (Bogenschneider and Corbett, 2010; Johnson et al., 2009). Although the two communities framework has been helpful in understanding the differing expectations of researchers and policy makers and problems of communication between them, it has not been able to offer a systematic explanation of use. Thinking about how best to bridge the gap between the two communities has, however, led to practices of translation and brokering and to more intensive interactions between researchers and policy makers.
Translation is a supply-side solution to the use problem. It was developed in clinical diagnostic, preventive, and therapeutic practices. The idea is simple: basic science is translated into clinical efficacy, efficacy is translated into clinical effectiveness, and effectiveness is translated into everyday health care delivery (Drolet and Lorenzi, 2011). The oft-invoked catchphrase is “bench to bedside.” One important sign of the seriousness with which
translation is taken is the U.S. Department of Health and Human Services initiative, the Translating Research into Practice (TRIP) Program, which focuses on implementation techniques and factors associated with successfully translating research findings into diverse applied health care settings (see Agency for Healthcare Research and Quality, 2012).
Translational strategies have now moved beyond health care, introducing additional and somewhat differently focused activities. One is evidence-based registries, compilations of scientifically proven interventions. They are considered tools to improve practice in various fields, including social services, criminal justice, and education. A different initiative is the Campbell Collaboration,3 an international organization conducting systematic reviews of the effects of social interventions.
The translation strategy is well institutionalized in education. The U.S. Department of Education’s Institute of Education Sciences (IES) was established in part to develop the science that could be translated into strategies to change education practice in public schools. The What Works Clearinghouse of the IES aims to provide educators, policy makers, and the public with an independent and trusted source of scientific knowledge relevant to education policies and practices.4 IES also supports 10 regional educational laboratories, the role of which is similar to that of extension agents in the agricultural field: taking research results and putting them into practice in school districts and classrooms (see U.S. Department of Education, 2012).
The movement toward evidence-based approaches in practice settings began more than 40 years ago in medical practice. Archibald Cochrane (1972) railed against ineffective and sometimes harmful therapies despite randomized clinical trials showing that better treatments were available. In response to his call for systematic reviews of such trials, the Cochrane Collaboration5 was established. Its rigorous model of research synthesis has been adopted in other fields, including the above-noted Campbell Collaboration and the What Works Clearinghouse.
Although translation strategies have largely been applied to practices, the logic of translation is applicable to questions of using science in policy. Begin with a dependable, valid scientific base that provides evidence about
4For example, see the IES guides in education, such as “Turning Around Chronically Low-Performing Schools” (May 2008): available: http://ies.ed.gov/ncee/wwc/practiceguide.aspx?sid=7 [July 2012].
what works so that policy makers can readily grasp its relevance to the decision or task at hand, and make that science available in the form of research summaries or lists of demonstrably effective social interventions. The research record, however, is far from clear on whether translation (of either social or medical science research) works and is an effective strategy for enhancing use (see, e.g., Glasgow and Emmons, 2007; Green and Seifert, 2005; Lavis, 2006; Slavin, 2006).
While translation is primarily a matter of repackaging technical findings in terms more readily consumable by policy makers, brokering is a two-way conversation aided or mediated by a third party. Brokering involves filtering, synthesizing, summarizing, and disseminating research findings in user-friendly packages. It is generally seen as the task of intermediary organizations, such as think tanks, evaluation firms, and policy-oriented organizations, including those focusing on specific target populations or specific social issues as well as those organized around particular political persuasions. These organizations (Bogenschneider and Corbett, 2010, p. 94):
do research and evaluation, but they also have one foot in the policy world. They see policymakers as their primary clients. In addition to producing knowledge, they also see their role as translating extant research and analysis in ways that enhance their utility for those doing public policy.… To greater and lesser degrees, these firms bridge the knowledge-producing and knowledge consuming worlds.
Science and technology studies describe brokering as occurring in boundary organizations occupying a territory between research and policy making (Guston, 2000).6 In contrast to translation strategies that generally are one-way efforts in dissemination, brokering involves interaction and two-way communication. Intermediary organizations and knowledge brokers are increasingly being viewed as critical in promoting the capacity for evidence-based, or evidence-informed, decision making (e.g., Dobbins et al., 2009a).
6In this view, the National Research Council can be viewed as a brokering organization, synthesizing research in a consensus-based process and then presenting it in a form intended to contribute to improved policy making.
If brokering occurs, use is not something that happens when experts “here” hand off research to policy makers “there.” A brokering model views use as emerging from multidirectional communication and ongoing negotiation among researchers, policy makers, planners, managers, service providers, and even the public. Often this interactive process will involve consideration of more than one stream of research as relevant to a given policy (e.g., Sudsawad, 2007).
To bridge the gap between the differing cultures of the producers and consumers of scientific knowledge will require, according to some scholars, cultural changes in each community. Bogenschneider and Corbett (2010, pp. 299 ff.) write that the culture of research should change, perhaps through education and training in how to do more policy-relevant research, incentives for doing such research, and opportunities to work with policy makers. The user or consumer culture should also change, perhaps through institutional innovations that improve policy makers’ access to research, help them communicate their policy needs to researchers, and provide forums to discuss research agendas. In more ambitious formulations, the research literacy of the general public should be improved through education (see also Carr et al., 2007; Gigerenzer et al., 2008).
An Interaction Model
Closing the distance between the two communities has taken an additional step in what is labeled the interaction model (Contandriopoulos et al., 2010; Greenhalgh et al., 2004). This model goes beyond transfer, diffusion, and dissemination and even beyond translation and brokering. The interaction label covers a family of ideas directed to systemic changes in the means and opportunities for relationships between researchers and policy makers (Bogenschneider and Corbett, 2010). It holds that the relation between researchers and users is not only nonlinear but iterative and even “disorderly” (Landry et al., 2001, p. 335).
One source for an interest in interaction is science and technology studies documenting the co-evolution of social and technological systems (Jasanoff, 2004; Jasanoff et al., 1995). Another source is the use of systems thinking to better understand the complex adaptive systems involved in diagnosing and solving public health problems and the interactions among the design of prevention interventions, testing their efficacy and effectiveness, and disseminating innovations in community practices. A third is the emphasis on practical reasoning, the argumentative turn in policy analysis discussed in
the next chapter (Fischer and Forester, 1993; Hajer and Wagenaar, 2003; Hoppe, 1999).
Research conducted in close proximity to practice settings illustrates the interaction framework. First noted in corporate research (Pelz and Andrews, 1976) and later in the life sciences (Louis et al., 1989), such research gained visibility with the publication of Pasteur’s Quadrant (Stokes, 1997), with its emphasis on use-inspired research. This work influenced how the National Academy of Education (1999) set research priorities and its interest in holding policy specialists, researchers, professional educators, program developers, and curriculum specialists collectively accountable for educational outcomes. Collaborations of this kind formed the basic design concept for the Strategic Education Research Partnership, which connected researchers to teachers and brought together research communities, school administrations, and educational policy makers (see National Research Council, 1999a; Smith and Smith, 2009). The Carnegie Foundation for the Advancement of Teaching and Learning is also promoting a framework for research and development labeled improvement research (Bryk et al., 2011), which synthesizes the work of researchers and practitioners.
In this spirit, the Institute of Medicine (IOM) created a Roundtable on Evidence-Based Medicine, which then became the Roundtable on Value & Science-Driven Health Care, to foster interaction among stakeholders interested in building a continuously learning health care system in which science, information technology, incentives, and culture are aligned to bring together evidence-based practice and practice-based evidence (see Green, 2006). This effort and its attendant workshops (Institute of Medicine, 2007, 2010b, 2011a, 2011b) stress the importance of rigorous science and applying the best evidence available. The goal is understanding how health care can be restructured to develop knowledge from science and from the health care process and to then apply it on many fronts: health care delivery and health improvement, patient and public engagement, health professional training, infrastructure development, measurement, costs and incentives, and policy. The IOM’s reports on these activities draw attention to active collaboration, exchange, and appraisal of research and policy and to what is known by researchers and users of research about practice—drawn from the life-cycle of therapies, their development, testing, introduction, and evaluation.
As attractive as these initiatives are, there are cautionary voices. There are differences across political time, policy time, and research time, and one should take care not to mistake one for another (Henig, 2009, p. 153):
The pressure for fast, simple, and confident conclusions, however, is generated by the needs of politicians—not necessarily the needs of the policy. Political time is defined by election cycles, scheduled reauthorization debates, and the need to respond to short-term crises or sudden shifts in public attention. But a consideration of the history of public policy suggests that societal learning about complex problems and large-scale policy responses takes place on a much more gradual curve.
Interaction models offer an insight into what the use of science means in practice. Evidence from science is not simply there for the taking. It emerges and is made sense of in the particular circumstances that give rise to a policy argument (see Chapter 4 for discussion of policy argument). “Making sense” is iterative. It involves negotiating what kind of situation-specific knowledge is relevant to a policy choice, whether it is firmly established and available under the constraints of time and budget, and what political consequences might follow from using it. In this framework, formal linkages and frequent exchanges among researchers, policy makers, and service providers occur at all steps between knowledge production and knowledge use (Huberman and Cox, 1990). What emerges is a social as well as a technical exercise. Conklin et al. (2008, p. 7) explain this framework:
Strategic interactions (between human actors within and between organizations) therefore address both sides of the research-policy interface. On the one hand, decision-makers highlight policy relevant research priorities; on the other hand, researchers can interpret research findings in local contexts. In so doing, a common understanding of a policy problem, and its possible solutions, is built between different actors in the two communities.…
Spillane and Miele (2007) underscore the point in observing that what information is noticed in a particular decision-making environment, whether it is understood as evidence pertaining to some problem, and how it is eventually used all depend on the cognitions of the individuals operating in that environment. Furthermore, what these actors notice and make sense of is determined in part by the circumstances of their practice environment. Examining use, then, also requires examining “the practice of sense making, viewing it as distributed across an interactive web of actors and key aspects of their situation—including tools and organizational routines”
(p. 49). It also introduces the idea that research might “be interpreted and reconstructed—alongside other forms of knowledge—in the process of its use” (Nutley et al., 2007, p. 304).
Focusing on understanding institutional arrangements—how the agencies, departments, and political institutions involved in policy making operate and relate to one another—may be what matters most in improving the connection between science and policy making. For example, a study of drug misuse in government agencies in Scotland and England (Nutley et al., 2002) suggests that three aspects of microinstitutional arrangements within and between the agencies mattered a great deal in understanding how research evidence was (or was not) used:
1. How different agencies integrated research with other forms of evidence,
2. How agencies collectively dealt with the fragmentation of research evidence resulting from different agencies producing different types of evidence given their respective research cultures, and
3. What mechanisms were in place to integrate evidence and policy making (co-location of research and policy staff, cross-government work groups, establishment of quasi-policy bodies that specialize in the substance of a policy domain, etc.).
Nutley et al. (2007, pp. 319-320) conclude:
[T]here is now at least some credible evidence to underpin [their view] … that interactive, social, and interpretive models of research use—models that acknowledge and engage with context, models that admit roles for other types of knowledge, and models that see research use being more than just about individual behavior—are more likely to help us when it comes to understanding how research actually gets used, and to assist us in intervening to get research used more.…
If this conclusion holds up, it is a step toward accumulating what the committee believes is lacking: understanding institutional arrangements that facilitate the use of science in policy.
There is an important cautionary observation about efforts to overcome the “two communities” challenge. There are tensions between scientific engagement with practical policy problems and the long-standing assumption that science maintains its authority by virtue of its independence from politics (Jasanoff, 1990; Jasanoff et al., 1995). Those working to bring scientists and policy makers closer together need to be mindful that this tension is never far from how scientists think about, and engage with, the policy uses of their work.
Current discussions about the use of research knowledge are heavily influenced by “evidence-based policy and practice.” The goal is realizing better and more defensible policy decisions by grounding them in the conscientious, explicit, and judicious use of the best available scientific evidence (Davies et al., 2000). The initiative explicitly rejects habit, tradition, ideology, and personal experience as a basis for policy choices: they are to be replaced with a more dependable foundation of “what works,” that is, what the evidence shows about the consequences of a proposed policy or practice. With access to an evidence base, argue the proponents, policy makers will make better decisions about the direction, adoption, continuation, modification, or termination of policies and practices. Dunworth et al. (2008, p. 7) note:
[W]hile scientific evidence cannot help solve every problem or fix every program, it can illuminate the path to more effective public policy.… [T]he costs and lost opportunities of running public programs without rigorous monitoring and disinterested evaluation are high … without objective measurements of reach, impact, cost effectiveness, and unplanned side effects, how can government know when it’s time to pull the plug, regroup, or, in business lingo, “ramp up?”
The use of science is, of course, not a logical or inevitable outcome of having the science. In fact, the normative claim that policy should be grounded in an evidence base “is itself based on surprisingly weak evidence” (Sutherland et al., 2012, p. 4).
The approach of evidence-based policy and practice assumes that policy makers and researchers agree on the desired ends of policy. “The main contribution of social science research is to help identify and select the appropriate means to reach the goal” (Weiss, 1979, p. 427). This, in turn, depends on the quality of the science providing evidence to the policy maker, and thus the evidence-based approach places a premium on improving policy-relevant research, often through the use of randomized controlled field trials (RCFTs).
In the settings in which they are carried out, RCFTs provide a strong, if not the strongest, form of scientific evidence of cause and effect. Circumstances sometimes permit such experiments in a desired setting, as when scarce resources are allocated by lottery: admission to magnet or charter schools, for example, or the allocation of health care resources. An example of the latter is the Oregon Health Insurance Experiment, in which names were drawn by lottery for the state’s Medicaid program for low-income, uninsured adults (Finkelstein et al., 2012).
Even when RCFTs are conducted in a single setting, inferences from them may be extended to other settings or contexts, provided information is also collected on variables or factors that differ across settings and may influence the results. So-called substitutes for randomized trials, however, such as “natural” experiments and “quasi-experiments,” are not actually experiments, as Sims (2010) argues. They are often invoked as a way to avoid confronting “the complexities and ambiguities that inevitably arise in nonexperimental inference.” For these situations, and even in conjunction with randomized experiments, there are nonexperimental methods for drawing causal inferences and model-based methods for adjusting experimental results for inherent biases. Appendix A reviews some of these research methods and sets them in the context of the varied statistical methods for research and evaluation.
The active debate over the appropriate methodology for a given research question focuses the policy community’s attention on the desirability of producing the best possible evidence under a given set of circumstances, especially the strongest evidence bearing on policy implementation and policy consequences. Drawing attention to the importance of strong evidence in policy making advances the goal of using science, even though the specific formulation of an evidence-based policy approach offers little insight into the conditions that bring about its use.
Despite their considerable value in other respects, studies of knowledge utilization have not advanced understanding of the use of evidence in the policy process much beyond the decades-old National Research Council (1978) report. The family of suggestive concepts, typologies, and frameworks has yet to show with any reasonable certainty what changes have occurred in the nature, scope, and magnitude of the use of science as a result of different communication strategies or different forms of researcher-user collaborations (Dobbins et al., 2009b; Mitton et al., 2007). There is little assessment of whether innovations said to increase the use of science in policy have had or are having their desired effects.
A recent study reporting the results of a collaborative procedure among 52 participants with a range of experience in both science and policy identified 40 (!) key unanswered questions on the relationship between science and policy—this despite nearly four decades of research on the question of “use” (Sutherland et al., 2012). One extensive review of the literature reaches the striking conclusion that knowledge use is “so deeply embedded in organizational, policy, and institutional contexts that externally valid evidence pertaining to the efficacy of specific knowledge exchange strategies is unlikely to be forthcoming” (Contandriopoulos et al., 2010, p. 468 [italics added]).
Our conclusion is not that pessimistic. If “use” is broadly understood to mean that science—or, more specifically, in the language of evidence-based policy and practice, scientific evidence of the effectiveness of interventions—is incorporated into policy arguments, we agree that there probably will never be a definitive explanation of what strategies best facilitate or ensure that incorporation. But this conclusion does not rule out the possibility that new approaches in the study of the science-policy nexus might reveal factors or conditions that have thus far been missed. Perhaps the preoccupation with defining use, identifying factors that influence it, and determining how to increase it has detracted from the search for alternative ways in which social science can contribute to understanding the use of science in policy. That possibility is the subject of Chapter 4.