Using science in public policy is on the nation’s agenda. One reason is the growing demand for performance measures and enhanced accountability in federal agencies and not-for-profit organizations. Another is the call for evidence-based policy and practice, part of a broader focus on data-driven decision making across government agencies.1 And in a period of fiscal restraint there is pressure to demonstrate that government-supported science offers benefit to taxpayers, a matter often discussed as “broader impacts.”
“Broader impact,” when associated with the social sciences, is typically understood as being useful for policy. There is an extensive network of intermediary organizations dedicated to bringing research knowledge to the world of policy making. A growing number of policy schools and programs are preparing students for careers in these many nongovernmental organizations, as well as government agencies and corporations, whose strategies depend on reliable knowledge of the society, economy, and polity. We summarize all of this as a policy enterprise, which is defined in more detail
1A memo from the U.S. Office of Management and Budget (May 18, 2012) to agency heads stresses that “Agencies should demonstrate the use of evidence throughout their Fiscal Year (FY) 2014 budget submissions. Budget submissions also should include a separate section on agencies’ most innovative uses of evidence and evaluation … the Budget is more likely to fund requests that demonstrate a commitment to developing and using evidence.” Available: http://www.whitehouse.gov/sites/default/files/omb/memoranda/2012/m-12-14.pdf [July 2012].
below. Actors in this policy enterprise have a stake in whether social science knowledge is useful to and used in policy making.
Although this report specifically addresses what is known and what needs to be studied about how social science is useful for policy making, the analysis bears on how evidence from all sciences is used. The stakes are high for every branch of science that claims to advance social welfare, contribute to economic growth, and enhance national security. The Introduction noted that “use,” being a social phenomenon, is investigated with social science theories and methods, not with the theories and methods of physics, chemistry, biology, or engineering—even though these sciences place an enormous range of issues on the policy agenda. The social sciences should approach their responsibility to study “use” alert to consequences for the physical, biological, and engineering sciences as well.
Sustained attention to the use of social science in policy making received a noticeable boost in the post–World War II period, when leaders saw in “big science” a path to economic growth and social betterment. The U.S. National Science Foundation (NSF), created in 1950, is the premier institutional expression of this vision. The social sciences were initially excluded from the NSF, but a new role for social science nevertheless emerged (by the 1960s the NSF was funding social science). A convenient marker of the new role is the highly influential study of public schools known as the Coleman report (Coleman, 1966), undertaken in response to a 1964 congressional instruction that the commissioner of education investigate “the lack of availability of equal education opportunities for individuals by reason of race, color, religion, or national origin.” By the standards of social science at the time, this study was big: 600,000 students and 60,000 teachers in 4,000 public schools.
Its size was only one of its distinctive characteristics. The study emphasized educational outcomes, breaking from a research tradition that had largely focused on inputs, such as expenditure per student. The Coleman study is best known for its controversial finding: student test results and educational aspirations, the outcomes measured, could be explained as much by family background variables as by school characteristics (such as classroom size). As noted in the discussion of charter schools in the previous chapter, researchers are still actively investigating the subtleties of
this key finding, as well as the report’s companion finding that minorities enter school burdened with accumulated educational disadvantages.2
The Coleman study signaled that large-scale social science projects could inform the nation on critical policy challenges. A nation that used science to design radar and make the atomic bomb for its war effort could also declare a “war on poverty” and a “war on drugs,” expecting the social sciences to help policy makers design programs and then evaluate whether programs were having their intended effect.
The government launched an ambitious agenda of nationally scaled social experiments based on randomized field trials. Examples included studies of a negative income tax, housing allowances, health insurance, and time-of-use electricity pricing. The government funded specialized institutes, such as the Institute for Research on Poverty at the University of Wisconsin. University-based survey capacities grew apace, notably the Institute for Social Research (ISR) at the University of Michigan and the National Opinion Research Center (NORC) at the University of Chicago. The federal government, the main provider of social and economic statistics, made its data easily available for analysis by university-based researchers, who in turn began to influence what statistical data were collected. A steady stream of studies based on secondary analysis of labor, health, income, crime, and related statistics from what has grown to nearly 90 federal programs and agencies underpinned debates about policy challenges and options. Hundreds of dissertations used federal statistics, and the writers of these dissertations became professors and researchers in university social science departments and interdisciplinary centers, where they produced the next generation of researchers trained to ask “big” questions about social welfare and economic trends, public health, and school reform. An early preoccupation in this research was whether the policies were having the expected outcomes.
Measuring outcomes quickly moved to the center of debate over the significant investments in the “Great Society” programs. The first Handbook of Evaluation Research was published in the mid-1970s (Guttentag and Struening, 1975). Policy and program evaluation focused attention on research at the intersection of what policy makers needed to know and what social science research offered. Is a program producing its intended
2See also Mosteller and Moynihan (1972), which arose from a Harvard faculty seminar to reassess Coleman’s research. Although some conclusions were contrary to Coleman’s findings, the reanalysis generally agreed with the relationship between educational achievement and equality of opportunity.
outcomes? Is it cost effective? Research in the 1970s challenged basic assumptions about the merits of the Great Society social programs, pointing out costly program failures and unintended negative consequences.
New ways of linking knowledge to policy making began to appear under formulations that are familiar today: evidence-based policy, performance metrics, impact assessment, and comparative effectiveness research. These formulations in turn led to institutional innovations, such as best practice guidelines and the philanthropically funded Coalition for Evidence-Based Policy.
The expanding influence of social science over the last half-century was aided by significant improvements in research methods. Advances in qualitative methods allowed researchers to examine complex problems with increasingly sophisticated case-study approaches, moving into research sites heretofore less carefully examined—including corporate decision making and laboratory science. There were significant advances in large-scale data collection, along with improved methods of analysis allowing policy analysts to handle census data and surveys with thousands of respondents and hundreds of variables. More recently, of course, there has been exponential growth in computing power and in analytic techniques. Further expanding the capacity of social science is the availability of administrative data and commercially collected digital data (Lazer et al., 2009). “Big data” is the recently coined term to describe, especially, the flood of data provided as a by-product of electronic media and transactions. Newly formed university centers and programs are actively exploring data visualization, data mining, and internet data in the new field of computational social science.
With huge amounts of accessible data, the technical knowledge to analyze the data, and hundreds of organizations seeking to link research to policy making, it is not surprising to find strong political interest in financial and performance audits, process monitoring, and impact evaluations, all of which are part of the broad interest in ways to hold officials and institutions accountable. Social indicators are pressed into service to describe what is going well, or not so well, in society, leading to such efforts as the “Measure of America” (Lewis and Burd-Sharps, 2010; Social Science Research Council, 2012), modeled on the United Nations’ Human Development Index (United Nations, 2012), and the Key National Indicators Initiative (State of the USA, 2012). Ranking schemes, from corporate corruption to happiness to university performance, are ubiquitous in the media, their sometimes-questionable assumptions and methodologies notwithstanding.
This increasing sophistication of measurement and quantification
does more than provide technical tools. It can have an independent effect on what kinds of policies reach the political agenda, and who is likely to be favored. When demographic analysis showed that racial minorities were undercounted at higher rates than the white population, the U.S. Census Bureau examined whether a statistical technique used in wildlife studies (dual-system estimation) could be used to adjust the count, to reduce, or even to eliminate this differential undercount. The research and methodological work led to an intense and highly partisan battle stretching across two decades, with litigation that reached the Supreme Court. It finally ended only when the Census Bureau leadership said that census adjustment was no longer being considered.3
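To make the census example concrete, dual-system estimation applies the capture-recapture logic borrowed from wildlife studies: the census and an independent post-enumeration survey serve as two "captures," and the overlap between them yields an estimate of the true population. The sketch below uses the classic Lincoln-Petersen form of the estimator; the counts are invented for illustration and do not come from any actual census evaluation.

```python
def dual_system_estimate(census_count, survey_count, matched):
    """Lincoln-Petersen dual-system estimator.

    Treating the census and an independent post-enumeration survey as two
    independent "captures," the estimated true population is
        N_hat = (census_count * survey_count) / matched,
    where `matched` is the number of survey persons also found in the census.
    """
    if matched <= 0:
        raise ValueError("estimator is undefined without matched cases")
    return census_count * survey_count / matched


# Hypothetical block group: the census counts 9,500 people, an independent
# survey counts 1,000, and 900 of the survey persons match census records.
estimate = dual_system_estimate(9_500, 1_000, 900)
undercount = estimate - 9_500  # roughly 1,056 people missed by the census
print(round(estimate))  # estimated true population, about 10,556
```

The differential undercount at issue in the litigation arises because match rates differ across demographic groups; applying the estimator separately to subgroups (post-strata) is what made adjustment statistically possible, and politically contested.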
School reform offers another example of policy options shaped by measurement decisions. The data-driven accountability movement in education challenges traditions—local control of schools, deference to professionalism, and the belief in public schools as a foundation for personal mobility and societal progress. Data-driven accountability “is not simply affected by this institutional upheaval … it is implicated in the upheaval itself” (Henig, in press). This occurs as accountability thinking and practice shifts power from local to state and federal levels, and undermines union control over teacher salaries and promotions.
Theories from political sociology explain these examples. In an influential essay, Identity and Representation, Bourdieu (1991, pp. 220-228) elaborates on how official measurement can constitute new policy realities. He develops his theory with reference to ethnic identity, described as “struggles over classifications, struggles over the monopoly of the power to make people see and believe, to get them to know and recognize, to impose the legitimate definition of the divisions of the social world and, thereby, to make and unmake groups” (p. 221 [italics in original]). Bourdieu adds that the power of statistical categories to move beyond being simply descriptive to being constitutive of social reality is proportional to the authority of agencies creating those categories—in his discussion, the government and the social sciences.
There are, then, many interlinked developments that establish the era
3The Supreme Court ruled that an adjusted count could not be used for apportionment, but it left open the question of whether an adjusted count could be used for other purposes, including redistricting and allocation of federal funds. Given the steadily increasing costs of the census and the persistence of errors—missed households and erroneous enumeration—it is likely that modifications or technical improvements to the basic mailout/mailback and nonresponse household follow-up design will continue to be a policy issue.
of big social science. Essentially, the country is replacing expensive trial-and-error policy making with more deliberately produced knowledge that can inform policy making (Ruttan, 1984, p. 552; cited in National Research Council, 1999b):
Throughout history, improvements in institutional performance have occurred primarily through the slow accumulation of successful precedent, or as a by-product of expertise and experience. Institutional change was traditionally generated through the process of trial and error much in the same manner that technical change was generated prior to the invention of the research university, the agricultural experiment station, or the industrial research laboratory. With the institutionalization of research in the social sciences it is becoming increasingly possible to substitute social science knowledge and analytical skill for the more expensive process of learning by trial and error.
In one important way, the last half-century of big science continued earlier understandings of what linked social science to policy. Even the immature disciplines of the early 20th century were mobilized to address national challenges in World War I, in fields as varied as propaganda analysis, competency testing, and economic planning. The monumental Recent Social Trends, commissioned by President Herbert Hoover in 1929 and financed by the Rockefeller Foundation, can be viewed as a precursor to big social science, though it did not have the impact on policy debate that later attended the explosion of large-scale research in the 1960s.
The national call on social science expertise was reinvigorated by the challenges of the depression economy in the 1930s, leading to major advances in micro- and macroeconomic analysis and in welfare policy initiatives, such as the Social Security system. Government’s need for a better understanding of how the economy was responding to policy interventions led to improved scientific capacity, exemplified by population sample survey methods. World War II repeated the World War I call on social science expertise, but on a much larger scale. The Office of Strategic Services (the forerunner of the Central Intelligence Agency), for example, recruited historians, anthropologists, political scientists, and economists to support war planning in unfamiliar parts of the world, and psychologists to help break Japanese and German codes.
The period that spanned the two world wars established a basic pattern that set the stage for the policy enterprise. The government did not establish a significant internal capacity in the social sciences or even in policy analysis. In an early version of outsourcing, it turned to expertise outside of government. In this way, the first half of the 20th century established the principle that universities, specialized research organizations, and think tanks were a source of independent and nonpartisan social science knowledge on social conditions and policy options.
In these early decades, there was little attention to “use” as it is discussed today. Use was largely taken for granted, certainly in settings such as the Social Science Research Council, the National Bureau of Economic Research, the Brookings Institution, and private foundations and funders. A widely discussed book, Knowledge for What? (Lynd, 1939), was subtitled The Place of the Social Sciences in American Culture. Today, of course, the subtitle would likely replace “in American culture” with “in policy making.” The debate initiated by Lynd’s book took place among social scientists and focused on the general public purposes for which research knowledge was generated, not on specific policy applications. A recurring theme in the debate was how to maintain scientific independence in order to “speak truth to power,” a theme that has not entirely disappeared (O’Connor, 2007).
However, and indicative of a major shift in the thinking of U.S. social scientists since the 1930s, few scholars today want to keep their distance from the policy process or believe that independence can be secured only if their work lacks immediate relevance. When researchers are asked which they prefer, “more links between the academic and policy communities” or “a higher wall of separation,” more than nine of ten opt for more links (Avey et al., 2012). This helps explain why the phrase “basic versus applied science,” so pervasive in the immediate postwar period, does not appear in our report. Whether researchers call what they do basic or applied, they want it used—and for social scientists that means used in policy making. Policy makers looking for answers do not care whether social scientists call what they do basic or applied or, for that matter, care whether the research is disciplinary or interdisciplinary. These distinctions might still be of interest in some social science settings. They are not in this report, because we assume that policy makers are interested in answers to their questions, not in the particulars of the scientific arrangements producing the answers.
The first half of the 20th century is also relevant to this report for what
it did not establish. The social sciences had nothing comparable to the National Institutes of Health, and there were no government laboratories akin to Lawrence Livermore or Los Alamos. The basic arrangement in the biological and physical sciences includes government-conducted and -managed research along with independent, university-based research and corporate research, especially in medicine and electronics. This did not develop for the social sciences. Rather, the model that emerged situated the production of social science knowledge primarily in the nonprofit sector, funded by a mixture of private and public sources. And though there are instances of industry-based advances in social science methods—survey research and psychological testing being the leading examples—these occur much less frequently in the social than in the natural sciences. The basic positioning of the social sciences in the nonprofit sector has implications for their use in policy making, notably in the number and workings of intermediary organizations—think tanks and advocacy organizations—and in the heavy presence of interested private funding.
Scope of the Policy Enterprise
The postwar era of big science introduced new challenges to understanding the usefulness of the social sciences. This is first evident in the sheer scope of the enterprise. In the 1920s, there were a handful of privately funded, nongovernmental social science organizations—the National Bureau of Economic Research, the Social Science Research Council, and the Brookings Institution are notable examples—established in part to offer expertise and advice to the government. There was growth in the World War II era and shortly thereafter—the RAND Corporation, the American Institutes for Research, ISR and NORC, for example—and, importantly, federal funds were added to what had largely been philanthropic funding. The request-for-proposal (RFP) mechanism, initially designed to purchase military hardware, was re-engineered to purchase scientific expertise.4 For-profit and nonprofit research contract houses were formed, and consulting firms got into the business of using social science to advise government and private clients. A recent survey estimates more than 1,800 think-tank-like
4For example, NORC, a midsize social science organization with an annual budget of approximately $160 million, annually screens about 10,000 government-issued RFPs, closely examines 10 to 15 percent of them, and prepares formal bids for several hundred.
organizations in the United States (McGann, 2010), approximately a quarter of which are based in Washington, DC.
Another big change occurred in higher education. When big social science got under way in the 1960s, public policy schools were few; today there are several hundred master’s level degree programs in public administration or public policy.5 Graduates from these programs fill the growing number of positions available in policy analysis and advocacy institutions.6 By any measure—absolute numbers, budgets, number of employees, available federal funds, research output, scope of policies addressed—the current nongovernmental organization of policy analysis, advice, and advocacy is vastly different from that which characterized the first half of the 20th century. It is no exaggeration to label what is now in place a policy enterprise, which we can now define as an interlocking array of institutions and practices that use (or claim to use) science to influence policy making. Funds come from private and public sources. Influence flows through informal briefings, publications, and media placement, as well as through more formal arrangements such as RFPs and consultancies. There is continual circulation of personnel among the institutions that make up this enterprise, as well as circulation in and out of government positions.
A New Focus on Use
One early feature of the policy enterprise was a new research specialty dedicated to studying how research knowledge is used. It began in earnest in the work of Cohen and Lindblom (1979) and Weiss (1977), and in the scholarship of Campbell and his colleagues on social experimentation and the role it should play in shaping public policy (e.g., Campbell, 1975; Cohen and Garet, 1975; Dehue, 2001; Floden and Weiner, 1978; Riecken and Boruch, 1975). In this literature—spanning the fields of sociology, organizational behavior, political science, psychology, education, and, more recently, science and technology studies—understanding the use of social
5This number includes accredited graduate degree programs and graduate schools in the field of public administration and policy at the master’s levels in the United States from all types of institutions except online degrees: see http://www.gradschools.com/programs/public-affairs-policy [July 2012].
6For example, the Research Triangle Institute has a staff of approximately 2,800 people (see http://www.rti.org [January 2012]); Westat, more than 2,000 (see http://www.westat.com [January 2012]); RAND Corporation, approximately 1,600 (see http://www.rand.org [January 2012]); and the American Institutes for Research, 1,500 (see http://www.air.org [January 2012]).
science in public policy moved to the foreground. An institutional expression of this interest was the Center for Research on Utilization of Scientific Knowledge (CRUSK), established in 1970 by the Institute for Social Research at the University of Michigan. Other ISR centers established in this period flourished and continue today to play major roles in social science. The center for research utilization, however, did not flourish. It was closed in 1985, an early indicator of how difficult it is to study the phenomenon of use (Frantilla, 1998).
A key document of the period was a 1978 National Research Council (NRC) report, Knowledge and Policy: The Uncertain Connection. More than three decades later we find that its major conclusion strikingly anticipates one we reach. The 1978 report made clear that the question of use had ceased to be primarily one debated within social science as was the case in the prewar decades. By the late 1970s, the use question engaged a broad community of potential users and intermediaries, as well as academic researchers. The report (National Research Council, 1978, p. 1) opened with a worry:
Although the need for large-scale federal support of social R&D [research and development] is widely accepted, questions concerning its relevance to the making of social policy have become more insistent in recent years. What are we learning? Who is making effective use of what we learn?
The report traced the questioning to two sources: “legislators distrustful of ‘social engineers’ who promote radical ideas or pursue irrelevant academic interests, and social scientists worried that dependence on government might compromise their objectivity” (p. 2). Although echoes of these points can be found in today’s discussions, they are not central to how the usefulness of social science is framed in this report.
Rather, we start with another observation in the NRC report: “the policy world now [1970s] takes it for granted that the social sciences have a contribution to make in government” (p. 4). The report listed numerous innovations that were designed to reduce the uncertainty in the connection between knowledge and policy. They included competitively awarded contracts, collaboration between funders and recipients of funds, and program evaluation.
But the report then reached a sobering conclusion (National Research Council, 1978, p. 5):
Unfortunately, we lack systematic evidence as to whether these steps are having the results their sponsors hope for.… [S]ocial R&D
continues to be criticized by members of Congress, executive branch officials, and social scientists because it is neither good nor well-managed research and has little potential for use.
The report continues in this vein, asking: “What knowledge do we possess that is relevant to the formulation of social R&D policy? Regrettably (and ironically), we possess little knowledge obtained through research that will help answer [this] question” (p. 6).
In the 35 years since that NRC report, the policy enterprise concerned with bringing social science to bear on policy making has steadily expanded, received more funding, and become more professional. However, the telling conclusion just cited—“we lack systematic evidence as to whether these steps [to connect scientific knowledge and policy] are having the results their sponsors hope for”—is one we reach today.
Our explanation for this is simple. Like our predecessor committee, we take for granted that using science contributes to policies that are more likely to result in consequences that policy makers intend. Today, however, we conclude that in the years since the 1978 report the focus on operationalizing “use” has not provided an adequate understanding of what happens between science and policy in policy making, a point developed in Chapter 3.
The nation invests in the social sciences at nontrivial levels, even if not in amounts characteristic of physics, biology, or chemistry. The National Science Foundation (2012a, Table 19) estimates that federal government obligations for social science research in fiscal 2011 were $1.3 billion and that its own expenditures for social, behavioral, and economic sciences in fiscal 2012 would be $254 million (National Science Foundation, 2012b). Many other agencies, such as the Environmental Protection Agency, the Food and Drug Administration, the Central Intelligence Agency, and the Departments of Defense, Education, and Health and Human Services (HHS),7 also fund social science research. In addition to these direct expenditures, there are federal data collection activities that generate
7The National Institutes of Health (2012a), a part of HHS, reports for fiscal 2013 that about 10 percent of its $30 billion budget is to fund behavioral and social science research, including projects that involve social scientists working with biological and medical scientists (Silver et al., 2012).
social and economic statistics used extensively by social scientists investigating a very wide range of policy questions. The fiscal 2012 budgets for the 13 principal statistical agencies designated by the U.S. Office of Management and Budget (OMB)8 exceeded $2.6 billion, not including $498 million in that year for the decennial census. Other federal statistical programs added $3.6 billion to the generation of social and economic data (U.S. Office of Management and Budget, 2011). Significant investments from the private sector (foundations, corporations, and individuals) join these public funds in supporting research universities and policy institutions.
From these public and private (generally tax-exempt) sources, there is a several billion dollar investment in social science research. That investment includes support for scientists working in universities, research institutions, and linkage institutions, as well as those in government agencies—on a large scale in intelligence, national security, and defense agencies and in statistical agencies (particularly the Census Bureau and the Bureau of Labor Statistics) and, on a numerically smaller though still influential scale, in legislative and executive offices, such as the Congressional Budget Office and the HHS Office of the Assistant Secretary for Planning and Evaluation. Most states and many local governments duplicate features of these programs and offices. The number of social science trained experts working in governments across the country reaches into the several thousands.
One standard justification for government-subsidized social science is that it produces a public good in the form of reliable and credible knowledge for policy making that would not otherwise be produced. But of course
8The 13 principal statistical agencies are the Bureau of Economic Analysis and the Bureau of the Census in the Department of Commerce; the Bureau of Justice Statistics of the Department of Justice; the Bureau of Labor Statistics of the Department of Labor; the Bureau of Transportation Statistics in the Research and Innovative Technology Administration of the Department of Transportation; the Economic Research Service and the National Agricultural Statistics Service of the Department of Agriculture; the Energy Information Administration of the Department of Energy; the National Center for Education Statistics in the Institute of Education Sciences of the Department of Education; the National Center for Health Statistics in the Centers for Disease Control and Prevention of the Department of Health and Human Services; the National Center for Science and Engineering Statistics in the Social, Behavioral and Economic Sciences Directorate of the National Science Foundation; the Office of Research, Evaluation, and Statistics of the Social Security Administration; and the Statistics of Income Division in the Internal Revenue Service Office of Research, Analysis, and Statistics of the Department of the Treasury. These agencies constitute 13 of the 14 members of the Interagency Council on Statistical Policy. The 14th member is the Office of Environmental Information in the Environmental Protection Agency, which is not a self-contained statistical agency (National Research Council, 2009).
the value of this effort depends on the science being used. A starting point, then, is asking what is known about use. The next chapter explains why scholarship on use to date is inadequate. Chapter 4 follows with a framework for research that can extend and deepen the understanding of use.