4

Context Matters

The production of high-quality economic evidence is necessary—but not sufficient—to improve the usefulness and use of this type of evidence in investment decisions related to children, youth, and families. Equally important is attention, before, during, and after economic evaluations are performed, to the context in which decisions are made. Consumers of the economic evidence produced by these evaluations will inevitably consider such factors as whether the evidence is relevant and accessible and whether meaningful guidance is provided on how to apply the evidence within existing organizational structures and given personnel and budget constraints. Consumers also will consider the influence on investment decisions of broader factors such as political pressures and value-based priorities. As discussed in Chapter 2, moreover, whether evidence (including economic evidence) is used varies significantly depending on the type of investment decision being made and the decision maker’s incentives, or lack thereof, for its use (Eddama and Coast, 2008; Elliott and Popay, 2000; Innvaer et al., 2002; National Research Council, 2012). In addition, a decision maker may be faced with the pressing need to act in the absence of available or relevant evidence (Anderson et al., 2005; Simoens, 2010).

Apart from economic evaluations, decision makers rely on many other sources to inform their decisions, including expert opinion, community preferences, and personal testimonies (Armstrong et al., 2014; Bowen and Zwi, 2005; Orton et al., 2011). Reliance on these sources rises when the empirical evidence does not clearly point the decision maker in one direction or when there are conflicting views on the topic at hand (Atkins et al., 2005). The influence of a given type of evidence also may differ by the stage of the decision making process (i.e., policy agenda setting, policy formulation, policy implementation) or its objective (e.g., effectiveness, appropriateness, implementation) (Bowen and Zwi, 2005; Dobrow et al., 2004, 2006; Hanney et al., 2003; National Research Council, 2012).

With some noteworthy exceptions, efforts to improve the use of evidence have focused on the use of research evidence in general rather than on the use of economic evidence in particular. Even with this broader focus, however, the research base on the factors that guide decisions and on reliable strategies for increasing the use of evidence is scant in the United States (Brownson et al., 2009; Jennings and Hall, 2011; National Research Council, 2012). The committee therefore based its conclusions and recommendations in this area on multiple sources: the emerging literature on processes for improving evidence-based decision making, relevant literature on the use of economic evidence from other countries, the expertise of the committee members, and two public information-gathering sessions (Appendix A contains agendas for both of these sessions). Many lessons learned from broader efforts to understand and improve the use of research evidence apply to the use of economic evidence in decision making.

This chapter organizes the committee’s review of contextual factors that influence the usefulness and use of evidence under three, sometimes overlapping, headings: (1) alignment of the evidence with the decision context, which includes the relevance of the evidence, organizational capacity to make use of the evidence, and the accessibility of reporting formats; (2) other factors in the use of evidence, which include the role of politics and values in the decision making process, budgetary considerations, and data availability; and (3) factors that facilitate the use of evidence, which include organizational culture, management practices, and collaborative relationships. The chapter then provides examples of efforts to improve the use of evidence, illustrating the role of the various factors discussed throughout the chapter. The final section presents the committee’s recommendations for improving the usefulness and use of evidence.

ALIGNMENT OF EVIDENCE WITH THE DECISION CONTEXT

Optimal use of evidence currently is not realized in part because the evidence is commonly generated independently of the investment decision it may inform (National Research Council, 2012). Economic evaluations are undertaken in highly controlled environments with resources and supports that are not available in most real-world settings. The results, therefore, may not be perceived as relevant to a particular decision context or feasible to implement in a setting different from that in which the evidence was derived. In addition, findings from economic evaluations may be reported in formats that are not accessible to consumers of the evidence.

Relevance of Evidence to the Decision Context

“Often there is not an evaluation that addresses the specific questions that are important at a given time. Usually what is used are evaluations that have already been done, internally or externally, that may or may not have answered the current questions.”

—Dan Rosenbaum, senior economist, Economic Policy Division, Office of Management and Budget, at the committee’s open session on March 23, 2015.

“We think a lot about the question of scalability. You may have something with strong evidence that works really well in New York City. That same approach may not work as well in a small border town in Texas where your work is shaped by a very different set of local factors.”

—Nadya Dabby, assistant deputy secretary for innovation and improvement, U.S. Department of Education, at the committee’s open session on March 23, 2015.

The perceived relevance of an evaluation to a specific decision influences whether the evidence is used or cast aside (Asen et al., 2013; Lorenc et al., 2014). Yet producers and consumers of evidence generally operate in distinct environments with differing terminology, incentives, norms, and professional affiliations. The two communities also differ in the outcomes they value (Elliott and Popay, 2000; Kemm, 2006; National Research Council, 2012; Oliver et al., 2014a; Tseng, 2012). As a result, the evidence produced and the evidence perceived to be relevant to a specific decision often differ as well.

Evidence is most likely to be used when the evaluation that produces it is conducted in the locale where the decision will be made and includes attention to contextual factors (Asen et al., 2013; Hanney et al., 2003; Hoyle et al., 2008; Merlo et al., 2015; Oliver et al., 2014a). Decision makers want to know whether a given intervention will work for their population, implementing body, and personnel. Each of these factors, however, often differs from the conditions under which the evaluation was conducted. Even methodologically strong studies that demonstrate positive effects under prescribed conditions can be and often are discounted in the absence of research indicating that these outcomes can be achieved under alternative conditions (DuMont, 2015; Nelson et al., 2009; Palinkas et al., 2014).

One way to enhance the relevance—and thus the use—of evidence is to gain a more thorough understanding of the decision chain, the specific decision to be made, when it will be made, where responsibility for making it lies, and what factors will influence that person or organization (National Research Council, 2012). It is also useful for producers of economic evidence and intermediaries (discussed later in this chapter in the section on collaborative relationships) to consider the intended purpose of an existing intervention; the details of its implementation and administration; the culture and history of the decision making organization, particularly with respect to its use of various types of evidence; and the community in which the intervention is set (Armstrong et al., 2014; Eddama and Coast, 2008; van Dongen et al., 2013).

Ideally, economic evaluation goes beyond rigorous impact studies and associated cost studies to examine impact variability, particularly whether there are impacts for different settings, contexts, and populations and whether and which adaptations can be effective; the systems-level supports required for effective implementation; and the cost of implementing the intervention at the required level of fidelity. Policy makers and practitioners attend not only to impacts but also to how to achieve them and to the extent to which externally generated evidence applies within their own context (Goldhaber-Fiebert et al., 2011).

CONCLUSION: Evidence often is produced without the end-user in mind. Therefore, the evidence available does not always align with the evidence needed.

CONCLUSION: Evidence is more likely to be used if it is perceived as relevant to the context in which the investment decision is being made.

Capacity to Acquire and Make Use of Evidence

A key factor in promoting the use of economic evidence is ensuring that end-users have the capacity to acquire, interpret, and act upon the evidence. That capacity falls into two categories: the capacity to engage with and understand the research, and the capacity to implement the practices, models, or programs that the research supports. In both cases, that capacity can be developed internally in an agency or implementing organization or it can be supported through intermediaries who help translate evidence for decision makers or offer support to those implementing interventions with an evidence base.

Organizational Capacity to Acquire and Interpret Evidence

“We hire consultants more often than not [to access and analyze evidence] because we don’t always have the capacity to do so, which means we have to also find a funding source to make that possible.”

—Uma Ahluwalia, director, Montgomery County, Maryland Department of Health and Human Services, at the committee’s open session on March 23, 2015.

“We’ve heard from some program administrators who would like to be able to have cost-benefit information, but lack the capacity to [access the necessary data]. In addition, agencies do not always have the expertise needed to conduct these kinds of data analyses.”

—Carlise King, executive director, Early Childhood Data Collaborative, Child Trends, at the committee’s open session of June 1, 2015.

As public pressure for accountability and efficiency grows, leaders in both public and nonprofit settings are increasingly called upon to collect, analyze, and interpret data on their agency’s effectiveness. Similarly, policy makers and funders are expected to make use of economic data in making decisions. Yet many of these stakeholders lack the capacity, time, or expertise to perform these tasks (Armstrong et al., 2013; Merlo et al., 2015). For example, Chaikledkaew and colleagues (2009) found that 50 percent of government researchers and 70 percent of policy makers in Thailand were unfamiliar with economic concepts such as discounting and sensitivity analysis.
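These two concepts lend themselves to a brief illustration. The sketch below (in Python, with purely illustrative figures not drawn from any study cited here) shows how a stream of future benefits is discounted to present value, and how a one-way sensitivity analysis reruns the calculation under alternative discount rates:

```python
# Minimal sketch of discounting and one-way sensitivity analysis.
# All figures are illustrative and are not drawn from any study cited here.

def present_value(cash_flows, rate):
    """Discount a stream of annual benefits (starting one year from now)
    back to today's dollars."""
    return sum(cf / (1 + rate) ** (t + 1) for t, cf in enumerate(cash_flows))

# Suppose an intervention yields $1,000 in benefits per year for 20 years.
benefits = [1000.0] * 20

# Sensitivity analysis: recompute the result under several plausible
# discount rates, since the "right" rate is contested.
for rate in (0.03, 0.05, 0.07):
    print(f"discount rate {rate:.0%}: present value = ${present_value(benefits, rate):,.0f}")
```

A higher discount rate shrinks the present value of distant benefits, which is why the choice of rate is routinely subjected to sensitivity analysis.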

Within public and private or nonprofit agencies across multiple sectors, decision makers may have had little training in research and evaluation methodology, which limits their ability to understand and assess the research base and use it to inform policy or practice (Brownson et al., 2009; Lessard et al., 2010). One of the few studies of its kind on the training needs of the public health workforce in the United States identified large gaps in these decision makers’ competence in the use of economic evaluation to improve their evidence-based decision making, as well as their ability to communicate research findings to policy makers (Jacob et al., 2014). Decision makers’ ability to review the entire research base in the area of interest also may be limited by constraints of time and access. As a result, decision makers are vulnerable to presentations of evidence from vested interest groups that offer a limited view of what the evidence does and does not show.

Clearinghouses of evidence-based practices, discussed in the section below on reporting, can make existing knowledge accessible to many users on a common platform. However, decision makers would have difficulty summarizing all the evidence relevant to a particular decision at hand. Thus, organizations developing and implementing interventions need to have the internal or external capacity to interpret the evidence and determine how it applies to their specific context and circumstances.

One approach to building greater capacity for the analysis and use of research evidence, including economic evidence, is to incorporate stronger training on those topics into undergraduate and graduate curricula, as well as into other learning opportunities, including on-the-job or work-based learning and fellowships for future leaders and those seeking to inform decision making (Jacob et al., 2014; National Research Council, 2012). Senior executive service training in the federal government, for example, could include training in the use of economic evidence for federal executive leaders. Fellowship programs sponsored by philanthropies, government organizations, or other institutions could include training or practicums focused on the use of economic evidence. Graduate programs for those pursuing careers in government or service organizations or those seeking to influence decision makers—such as programs leading to a master’s degree in public policy, public administration, public health, social work, law, journalism, or communications—could include coursework related to the acquisition, translation, and use of evidence of all types, including economic evidence. Finally, human resources agencies serving employees who work on interventions for children, youth, and families could provide training and opportunities for applied learning in the use of research evidence, including how to access and acquire the evidence, how to judge its quality, and how to apply it in decision making. An example of such capacity is provided in Box 4-1.

CONCLUSION: Capacity to access and analyze existing economic evidence is lacking. Leadership training needs to build the knowledge and skills to use such evidence effectively in organizational operations and decision making. Such competencies include being able to locate economic evidence, assess its quality, interpret it, understand its relevance, and apply it to the decision context at hand.

Capacity to Implement Evidence-Based Interventions

“The issue of implementation is huge. Our own research suggests that the quality and extent of implementation of any given program is at least as important in determining effects, or in many cases more important, than the actual variety of the program implemented locally. The question of whether or not one can reasonably expect the kinds of effects that the background evidence suggests is very much an open question and has a great deal to do with the quality of the monitoring systems, implementation fidelity, local resources, and a huge number of contextual factors that have to do with what is actually put on the ground under the label of one of these programs.”

—Mark W. Lipsey, director, Peabody Research Institute, Vanderbilt University, at the committee’s open session on June 1, 2015.

“The evidence conversation is tilted entirely toward the evidence of effectiveness and efficacy, and we need a better understanding of the use of evidence in implementation. There are good examples of those kinds of systems. As others have pointed out, they depend a lot upon the capacity of the people implementing.”

—John Q. Easton, distinguished senior fellow, Spencer Foundation, at the committee’s open session on June 1, 2015.

BOX 4-1
Building Capacity to Seek and Use Evidence: An Example

Kaufman and colleagues (2006) provided training and technical assistance in support of a community awarded a federal Safe Start demonstration grant for an integrated system of care designed to reduce young children’s exposure to violence. Their efforts represent an example of a university-community partnership that successfully improved the community’s capacity to seek and use scientific evidence in its local decision making. Although the objective was to increase the community’s acceptance of program evaluation data, the lessons learned could inform similar efforts to build stakeholders’ capacity to use economic evaluations as an additional tool to guide investment decisions.

The academic evaluators effectively educated policy makers, community leaders, and providers on the benefits of scientific evidence by engaging in a number of efforts, including (1) spending time outside of the university setting and participating actively in community meetings and forums to build relationships and trust, (2) delivering on research that the community identified as critical to its operations, (3) providing continual feedback on research findings to selected target audiences using strategies and mechanisms that reflected how those audiences consumed information, (4) embedding training and technical assistance in the use of evidence in all aspects of the initiative to promote the evidence’s broad utility, and (5) participating in project leadership meetings to ensure that the evidence was informing management decision making in real time. The investment of time and resources by the researchers led to an observable, sustained shift in the community’s capacity to incorporate evidence at multiple levels of program management and policy making.

Even if an organization has the capacity to access and analyze evaluation evidence, it may not have the infrastructure and capacity to support effective implementation of evidence-based interventions (Jacob et al., 2014; LaRocca et al., 2012). For instance, if an evidence-based intervention requires a level of professional development that no one can afford, or a workforce that is unavailable in most communities, or much lower caseloads than are found in existing systems, it will not be well implemented.

Implementation fidelity is critical to ensuring that economic benefits are realized. Funding is essential not only for the cost of the intervention but also for the cost of the supports required to implement it. As discussed in Chapter 3, economic evaluators can break those costs out explicitly, since they may need to be funded from different sources. For instance, practitioners’ time may be billable to Medicaid, but the cost of building a quality assurance system to monitor implementation may not be.

Incorporating economic evidence into conceptual frameworks and models of implementation may improve the dissemination and use of the evidence. These models have been developed to study some of the implementation issues discussed above, but little attention has been given to whether economic evidence should be incorporated into the models and, if so, how. In the development of the Consolidated Framework for Implementation Research, for example, Damschroder and colleagues (2009) found that intervention costs were considered in only 5 of 19 implementation theories they reviewed. Although they decided to include costs in their framework as one of several intervention characteristics that affect implementation, they note that “in many contexts, costs are difficult to capture and available resources may have a more direct influence on implementation” (p. 7), without recommending increased attention to cost assessment in the development of new interventions. Similarly, a conceptual model developed by Aarons and colleagues (2011) includes funding as a factor affecting all phases of the implementation process but fails to consider that intervention costs could also play an important role in implementation, especially given that funding must be commensurate with costs. In contrast, Ribisl and colleagues (2014) propose a much more prominent role for economic analysis in the design and implementation of new interventions. They argue that cost is an important barrier to the adoption of evidence-based practices and advocate for an approach in which intervention developers first assess what individuals and agencies are “willing to pay” for an intervention and then design interventions consistent with that cost range.

Two recent examples illustrate potential contributions of economic evaluation to implementation studies. Saldana and colleagues (2014) developed a tool for examining implementation activities and used it as a template for mapping implementation costs over and above the costs of the intervention; applying this tool to a foster care program, they found it valuable for comparing different implementation strategies. Holmes and colleagues (2014) describe the development of a unit costing estimation system (cost calculator) based on a conceptual model of core child welfare processes, and discuss how this tool can be used to determine optimal implementation approaches under different circumstances, as well as to estimate costs under hypothetical implementation scenarios.
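The logic of such unit-costing tools can be sketched compactly. The following is a rough illustration in the spirit of the approach described by Holmes and colleagues (2014), not a reproduction of their system; the process names, unit costs, and scenario volumes are hypothetical:

```python
# Rough sketch of a unit-costing ("cost calculator") approach in the spirit of
# Holmes and colleagues (2014): unit costs for core processes are combined with
# activity volumes to cost out alternative implementation scenarios.
# Process names, unit costs, and volumes below are hypothetical.

unit_costs = {  # estimated cost per occurrence of each core process
    "referral_and_screening": 120.0,
    "assessment": 850.0,
    "care_planning": 430.0,
    "review": 310.0,
}

def scenario_cost(volumes):
    """Total cost of a scenario given how often each core process occurs."""
    return sum(unit_costs[process] * n for process, n in volumes.items())

# Compare two hypothetical implementation scenarios.
status_quo = {"referral_and_screening": 500, "assessment": 300,
              "care_planning": 250, "review": 600}
early_help = {"referral_and_screening": 500, "assessment": 350,
              "care_planning": 300, "review": 450}

print(f"status quo: ${scenario_cost(status_quo):,.0f}")
print(f"early help: ${scenario_cost(early_help):,.0f}")
```

Once unit costs are established, any hypothetical scenario can be costed by changing only the volumes, which is what makes such tools useful for comparing implementation strategies.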

These points were reinforced by a number of panelists who spoke at the committee’s open sessions about their work in implementing evidence-based interventions at the federal, state, and local levels. Speakers noted that effective implementation depends on a number of factors, including data and monitoring systems, the workforce and its training, and resources that affect everything from provider compensation to the number of children or families seen by each provider. While knowledge of the effectiveness and cost-effectiveness of interventions is growing, there remains only limited information about what is required to support effective implementation of those interventions.

At the committee’s June open session, panelist Mark Lipsey, director of the Peabody Research Institute at Vanderbilt University, commented that most cost-effectiveness research focuses on brand-name programs. However, the cost and infrastructure associated with implementing those programs are not feasible in most real-world settings. Communities generally lack the capacity and resources to implement the brand-name, model programs, so generic versions of the programs are implemented instead. Whether the effects suggested in research on brand-name programs can be expected in other settings depends on the local resources available to implement the program, a large number of factors specific to the context where the program is implemented, and the quality of the implementation monitoring system. Lipsey described the traditional approach as feed-forward: highly controlled research on programs is conducted, synthesized, and placed in a clearinghouse, and efforts are then undertaken to implement those programs and replicate the findings in local settings. The context—population served, staff skills, resources, community, nature of the original problem—may differ from that of the programs in the original studies, so the results expected may not be realized in new settings. Lipsey suggested instead beginning with the monitoring and feedback systems currently in place in a particular setting and building incrementally toward evidence-based practice.

Gottfredson and colleagues (2015) also emphasize the importance of describing intervention implementation, although their focus is on prevention programs in health care. They note that the original research of economists and policy analysts “often generates conclusive answers to questions about what works under what conditions” (p. 895), but they give less attention to describing the intervention in subsequent trials in other settings and examining causes for variations in outcomes and costs. An example of the importance of implementation fidelity is described in Box 4-2.

In short, attention to the infrastructure and contextual aspects of effective implementation is often inadequate. Clearinghouses and registries have provided a systematic mechanism for synthesizing evidence of the effectiveness of interventions. Legislation has required the use of some of those evidence- or research-based models (Pew-MacArthur Results First Initiative, 2015) without necessarily addressing issues of fidelity or ensuring that resources are being devoted to effective implementation. Yet few model interventions have been demonstrated at scale, and it is not clear that those model interventions will produce the same or comparable outcomes when introduced into other settings and contexts with different resources available for implementation. Moving evidence-based practice and policy toward outcomes requires thinking in a holistic way about the range of evidence that is needed, its availability, and how the evidence aligns with existing systems and funding.

BOX 4-2
The Importance of Implementation Fidelity: An Example

The experience of Washington State’s implementation of Functional Family Therapy (FFT) illustrates the importance of implementation fidelity. In its 1997 Community Juvenile Accountability Act, the Washington State legislature required juvenile courts to implement “research-based” programs. To fulfill that mandate, the Washington State Institute for Public Policy (WSIPP) (which is described later in this chapter in the section on examples of efforts to improve the use of evaluation evidence) conducted a thorough review of the evidence base, and from that review, the state’s Juvenile Rehabilitation Agency identified four model programs from which courts could choose. The evidence base for those programs was not specific to Washington State, so the legislature also required that WSIPP evaluate the models’ effectiveness in Washington in “real-world” conditions. In its first evaluation of FFT, WSIPP estimated a $2,500 return on investment. However, that evaluation found that FFT was effective—and thus the returns were realized—only when therapists implemented the model with fidelity. In fact, WSIPP found that recidivism rates could actually increase relative to business as usual if the program was delivered by therapists who were not appropriately trained. Thus, WSIPP recommended that the state work with FFT Inc. to develop a mechanism for training and monitoring therapists to ensure effective implementation of the program (Barnoski, 2002).

CONCLUSION: Infrastructure for developing, accessing, analyzing, and disseminating research evidence often is lacking in public agencies and private organizations charged with developing and implementing interventions for children, youth, and families.

CONCLUSION: It is not sufficient to determine whether an investment is effective at achieving desired outcomes or provides a positive economic return. Absent investments in implementation in real-world settings, ongoing evaluation, and continuous quality improvement, the positive outcomes and economic returns expected may not be realized.

CONCLUSION: Conceptual frameworks developed in the field of implementation science may be relevant to improving the dissemination and use of economic evidence, but the implementation literature has not paid sufficient attention to the potential role of economic evidence in these models.

Reporting

“Research often uses language and terms that require a PhD in economics to recall what the report is saying.”

—Barry Anderson, deputy director, Office of the Executive Director, National Governors Association, at the committee’s open session on March 23, 2015.

“We have to figure out a way to communicate this information in ways that resonate with different perspectives, so benefit-cost means something to people other than those who are in the field. During the times when policy has changed, it is because we found ways of communicating the power of change to different communities. It has to mean something to people in different parts of the political dynamic that we work with.”

—Gary VanLandingham, director, Pew-MacArthur Results First Initiative, at the committee’s open session on March 23, 2015.

“There has been increased attention to local data dashboards. This entails the presentation of relevant, timely information to the right people at the right time so they can use data for continuous quality improvement and decision making. What are needed are both a data system and organizational documents with embedded agreements and expectations for [leaders’ and management teams’] timely use of local data on an ongoing basis. The administrative piece is just as important as the IT piece.”

—Will Aldridge, implementation specialist and investigator, FPG Child Development Institute, University of North Carolina at Chapel Hill, at the committee’s open session on June 1, 2015.

The reporting of evidence derived from economic evaluation influences whether the evidence is used in decision making (National Research Council, 2012; O’Reilly, 1982; Orton et al., 2011; Tseng, 2012; Williams and Bryan, 2007). Relevant, credible evidence is more likely to be used if reported in a clear and concise format with actionable recommendations (Bogenschneider et al., 2013; DuMont, 2015). Reporting formats designed to suit the information needs and characteristics of target audiences also may increase the use of economic evidence.

The distinct communities of producers and consumers of economic evidence, discussed above in the section on relevance, influence how this evidence is typically reported. For example, economists tend to expect confidence limits, sensitivity analysis on key parameters such as discount rates, and other estimates of the range of a possible return; sometimes they provide a range as their main finding. Legislators and top-level managers, however, like clear, crisp recommendations. Instead of estimates presented as ranges or in a table of estimates under different assumptions, they generally prefer a point estimate and a plain-English explanation without further numbers expressing the analysts’ confidence in the results (Institute of Medicine and National Research Council, 2014; National Research Council, 2012). Policy makers also tend to want results given up front, with methods described later and easy to skip without compromising comprehension. These preferences stand in marked contrast to the expectations of academic journals.

Similarly, when multiple economic analyses using different parameter choices are available for a single program, a plethora of inconsistent numbers can destroy the credibility of the results with decision makers. Instead, comparisons with prior estimates can be presented in a way that makes it clear at the outset which estimate is best, with why that estimate is better than prior ones then being explained.
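A brief illustration of this reporting pattern, with hypothetical numbers throughout: the same sensitivity analysis can yield both a plain-English headline for decision makers and a range preserved for technical readers.

```python
# Hypothetical benefit-cost results from a sensitivity analysis over
# discount rates (all numbers are illustrative).
estimates = {"3% discount rate": 4.10, "5% discount rate": 3.20, "7% discount rate": 2.60}
base_case = "5% discount rate"

# Headline for policy audiences: one point estimate, plain English, up front.
print(f"Every $1 invested returns an estimated ${estimates[base_case]:.2f} in benefits.")

# Material for technical readers: the full range under alternative assumptions.
low, high = min(estimates.values()), max(estimates.values())
print(f"(Sensitivity analysis: ${low:.2f} to ${high:.2f} per $1 across 3-7% discount rates.)")
```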

Systematic reviews of evaluations and clearinghouses can be used to help decision makers sort through evidence to determine its relevance and practical implications. Yet many of these resources currently do not incorporate economic evidence. The work of the Washington State Institute for Public Policy (see Box 4-2 and the section below on examples of efforts to improve the use of evaluation evidence) is one exception, providing independent systematic reviews of evidence that include economic evidence. The Tufts University Cost-Effectiveness Analysis Registry is another tool that makes economic evidence accessible to users. Clearinghouses can help consumers acquire and assess the full range of evidence in a given area, but they are not a panacea since most present only evidence of effectiveness and typically only for the fairly circumscribed brand-name, model programs discussed in the previous section.

CONCLUSION: Economic evidence is more likely to be used if it is reported in a form that is summarized and clear to target audiences and includes actionable recommendations.

CONCLUSION: Research summaries and publications often do not report contextual details that are relevant to whether positive impacts and economic returns should be expected in other settings.

OTHER FACTORS IN THE USE OF EVIDENCE

The results of economic evaluation are one type of evidence on which decision makers may rely. Even when economic evaluations are of high quality (see Chapter 3), relevant to the decision setting, and feasible to implement, other factors—including political climate, values, budgetary considerations, and data availability—may influence whether the evidence they produce is used.

Political Climate and Values

“A project that has some prospects for success is subsidizing long-acting, reversible contraception. We received a grant from a philanthropist to do this on a volunteer basis with low-income girls and women. The results were amazing. There was a 40 percent drop in unwanted pregnancies. You can translate how much that would have cost the Medicaid Program. Here, we had a program with extremely compelling evidence and the potential to be duplicated within our state, but also touching this program were all of the politics around contraception, so there is a bit of an uphill climb on this one.”

—Henry Sobanet and Erick Scheminske, director and deputy director, Governor’s Office of State Planning and Budgeting, Colorado, at the committee’s open session on March 23, 2015.

“Over half of our county’s budget is education costs. Education is a very important value in our county.”

—Uma Ahluwalia, director, Montgomery County, Maryland Department of Health and Human Services, at the committee’s open session on March 23, 2015.

“Data will never trump values by itself. But data that has a compelling [personal] story attached to it, and that also is linked to the ideology of the people we are trying to communicate with can trump an individual perspective.”

—Gary VanLandingham, director, Pew-MacArthur Results First Initiative, at the committee’s open session on March 23, 2015.

Economic evidence is but one of several factors that policy makers must weigh as they make decisions about choosing among competing priorities (Gordon, 2006). In a pluralistic society, diverse political views, cultural norms, and values help define the context within which individuals make investment decisions. Numerous external factors, such as stakeholder feedback, legal actions, and the media, affect the use of evidence in the policy-making process (Zardo and Collie, 2014). Existing political pressures and cultural belief systems influence not only decisions at the individual level, but also organizational practices and structures that may facilitate or hinder the use of scientific evidence in decision making (Armstrong et al., 2014; Flitcroft et al., 2011; Jennings and Hall, 2011; National Research Council, 2012; Nutbeam and Boxall, 2008).

Those working to increase the use of economic evidence will be more successful if they remain cognizant of the political environment within which an agency or institution is working. What are the external pressures? Are important external audiences open to diverse information, or are they only looking for confirmation for previously held views? Short-term budgetary concerns also may trump information about long-term efficiency. In addition, long-standing programs with little evidence of success often have strong, vocal allies in the form of providers and beneficiaries who exert pressure on agency leaders or local politicians who make resource allocation decisions.

Armstrong and colleagues (2014) state that “decision making is inherently political and even where research evidence is available, it needs to be tempered with a range of other sources of evidence including community views, financial constraints and policy priorities” (p. 14). In a study of the use of research by school boards, researchers found that school boards typically relied on a variety of information sources, including examples, experience, testimony, and local data (Asen et al., 2011, 2012). Research (defined as empirical findings, guided by a rigorous framework) was used infrequently compared with other types of evidence (Asen et al., 2013; Tseng, 2012). When research evidence was relied upon, it was cited in general rather than with reference to specific studies, and most commonly was used as a persuasive tool to support an existing position.

Studies of the use of economic evidence in local decision making across countries have found that political, cultural, and other contextual factors influence the application of such evidence, especially if it is found to contradict prevailing values or local priorities (Eddama and Coast, 2008). A European study found that the extent of knowledge about economic evaluation, the barriers to its use, the weight given to ethical considerations, and incentives promoting the integration of economic information into health care decision making varied by country. The authors suggest that if economic evidence is to have a stronger influence on policy making, the political and institutional settings within which decisions are made will require greater attention (Corbacho and Pinto-Prades, 2012; Hoffman and Von Der Schulenburg, 2000).

One area of contrast between the United States and some European countries is in the use of economic evidence in decisions on health policy (Eddama and Coast, 2008): the latter countries are more likely to rely on cost-effectiveness analysis (CEA) to shape their health policies (Neumann, 2004). In fact, language in the Patient Protection and Affordable Care Act (ACA) explicitly prohibits the application of CEA in the use of Patient-Centered Outcomes Research Institute (PCORI) funds that support the piloting of health care innovations (Neumann and Weinstein, 2010).

The use of economic evidence in policy making varies across U.S. policy-making enterprises. A number of federal agencies use benefit-cost analysis (BCA) or budgetary impact analysis to inform the legislative process (e.g., the Congressional Budget Office [CBO]) and the approval of regulatory actions (e.g., the Office of Management and Budget [OMB]). In some fields, methodological and ethical questions about the use of BCA—for example, whether to monetize certain outcomes, such as human life—can diminish the uptake of economic evidence (Bergin, 2013). The use of CEA to justify funding of preventive interventions but not treatment services under Medicare highlights the inconsistent and uneven use of economic evidence in U.S. policy making (Chambers et al., 2015).

Producers of economic evidence can consider contextual and organizational variables in their study design, analysis, and interpretation of findings so that research results better address the core issues decision makers face. Economic evaluations then are more likely to be seen as responsive, sensitive, and relevant to the local context and to increase the demand for and uptake of such work.

CONCLUSION: Political pressures, values, long-standing practices, expert opinions, and local experience all influence whether decision makers use economic evidence.

Budgetary Considerations

A budget process that takes into account only near-term costs and benefits—such as the 10-year window within which federal budget decisions are made, or the budget decisions of a foundation wishing to prove near-term success even with the use of economic evidence—will inherently entail a bias against investments in children, whose returns are long-term in nature. This observation creates an additional impetus for statistical entities such as the Census Bureau and the Internal Revenue Service (IRS) Statistics of Income program, as well as surveys supported by private foundations, to give significant budget weight to the development of longitudinal data on children.

Economic evaluation also tends to focus on the intervention, local community, or organization, comparing internal costs with internal benefits. Budget offices can mitigate the tendency to localize decision making by both providing information on gains (or costs) accruing outside of a local constituency or jurisdiction and suggesting policy options for maximizing all societal benefits in excess of costs. For instance, a federal program providing health care to children through states can account for net gains or losses nationwide, while budget analyses can inform policy makers of ways to design laws so as to avoid giving states incentives to discount gains outside their jurisdictions.

In formulating budgets, governments and private organizations ultimately decide how they will allocate their resources. Ideally, budget processes force governmental and private entities to make trade-offs at the broadest level, allocating monies to those interventions with the greatest benefits relative to costs. Under these ideal conditions, economic evaluations would be extensive and encourage decision making broadly across interventions while promoting negotiations among interventions, with multisector payoffs in mind. As has been made clear throughout this report, however, economic evaluations often are quite limited in both number and content. The total costs of an intervention frequently are excluded from the evaluations that are performed. Yet decisions will be made. The budget will be fully allocated one way or the other, even if the saving is deferred to another day or, in the case of government, returned to taxpayers. Bluntly, while one intervention’s expansion may await further economic evaluation, the budget will, regardless, fully allocate 100 percent of funds.

In practice, in many if not most cases, government budgetary decisions and the delivery of services take place within silos. Different departments and legislative committees separately oversee education, food, housing, and health programs for children without fully taking into account the impact in other program areas. Similar silos often characterize foundations and other private organizations engaged in making investment decisions for children.

In the practical world of budgets, therefore, the ideal is never fully met, often because of limitations of time and resources. Even with the best of economic evidence available, the evidence is never fully informative at every margin of how the next dollar should be spent (or returned to taxpayers). Given these limitations, there are nonetheless three dimensions in which budget processes could be improved to take better advantage of the evidence derived from economic evaluations: (1) reporting on the availability and absence of economic evidence; (2) allocating budgetary resources to take fuller account of the time dimension that economic evaluation needs to encompass, particularly for children, whose outcomes often extend well into adulthood; and (3) accounting for net benefits and gains beyond any particular intervention, constituency, or organization.

Reporting on the Availability and Absence of Economic Evidence

While Chapter 3 emphasizes the gains possible from the production of high-quality economic evidence, the focus here is on what budget offices can do with the evidence that is and is not available. Decision makers need to be as informed as possible; thus they need to know what economic evaluations are available, not available, planned, and not planned for programs falling within their budget.

For example, OMB could list annually which programs do and do not have economic evaluations planned as part of their ongoing assessment, where the evaluations exist, and what has been evaluated. Such programs could include those implemented through tax subsidies or regulation, not just direct spending, as in the case of earned income tax credits, which accrue largely to households with children. Similarly, CBO regularly reports on options for reducing the federal budget deficit. In so doing, it could both report on the extent to which these options make use of economic evidence and recommend use of the availability of economic evidence as one criterion for decision making.

Allocation of Budgetary Resources to Account for Outcomes over Time

Returns on investments take place over time. No one would invest in a corporate stock based solely on the expected earnings of that corporation over 5 or even 10 years; the company’s net value depends on its earnings over time. Similarly, the returns on interventions for children often accrue over a lifetime, and, as indicated in Chapter 3, often take the form of longer-term noncognitive gains such as decreased dropout rates, lower unemployment upon leaving high school, or lower rates of teen pregnancy.

Unfortunately, it is often easier to negotiate support for interventions with near-term gains since those gains may be both more visible and more likely to accrue to the benefit of public and private officials running for office or being promoted on the basis of their near-term successes. Likewise, a school board may more easily gain support for an intervention aimed at children ages 3 to 5 if it will improve performance in second grade 3 years later than if it will improve graduation rates 14 years later. Even CBO reports on the budgetary effects of proposed changes in the law cover only 10 years, with some exceptions for programs such as Social Security.

Since this is not a report on budget process reform, only two basic points are important to make here. First, decision making will be improved when decision makers are fully informed of these limitations. This is a particular issue when, as noted, program allocations are being made with and without economic evidence at hand. Particularly when it comes to investments in children, a short-term horizon biases those budgetary decisions in favor of interventions with short- but not long-term benefits, such as higher consumption levels for beneficiaries within a budget window and returns to existing voters but not those younger or not yet born. Economic evaluations that similarly focus on the short term add to those budgetary biases.
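A small worked example, with hypothetical figures, shows how large this bias can be: the same intervention can look like a net loss inside a 10-year budget window yet a clear net gain over a longer horizon.

```python
# Illustration of how a fixed budget window can understate long-term returns.
# All figures are hypothetical.

def npv(annual_benefit, years, rate=0.03):
    """Net present value of a constant annual benefit over a given horizon."""
    return sum(annual_benefit / (1 + rate) ** t for t in range(1, years + 1))

cost = 10_000.0          # up-front cost per child
annual_benefit = 800.0   # annual benefits (e.g., higher earnings), starting in year 1

for window in (10, 40):
    value = npv(annual_benefit, window)
    print(f"{window}-year horizon: benefits = ${value:,.0f}, net = ${value - cost:,.0f}")
```

Under these assumptions the 10-year window shows a net loss of roughly $3,200, while the 40-year horizon shows a net gain of roughly $8,500; the investment decision flips with the accounting horizon, not the intervention.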

Second, if returns on investments in children are long term, data are needed to follow those children over extended periods of time. Relatedly, the linkage of long-term data across systems and sectors is an important step toward improving their use. Although there are challenges to the systematic linkage of data (e.g., the outdated design of administrative structures and systems, data privacy, tracking of children and families),1 there is still significant potential in these efforts (Brown et al., 2015; Chetty et al., 2015; Cohodes et al., 2014; Lens, 2015). Establishing personal relationships between the collectors and users of the data, shadowing successful project designs (e.g., the Project on Human Development in Chicago Neighborhoods,2 the Three City Study3), or seeking guidance from other fields (e.g., criminal justice) could provide opportunities for continuing to address these challenges.4 (See Chapter 5 for additional discussion of data linkage.)

On the other hand, one could depend on developing new and expensive data sets with each new experiment or program adoption or extension, but that approach likely would be cumbersome and costly, even if worthwhile. Statistical entities, such as the Census Bureau and the IRS’s Statistics of Income program or those associated with state K-12 and early childhood education, could gain more from their limited budgets if they gave significant weight to the development of longitudinal data following individuals. Foundations interested in economic evaluation could similarly weigh the relative value of a new experiment requiring new data development against further investment in data that could inform multiple investments. Students and youth provide an ideal case in point: educational and early childhood reform efforts consistently try new experiments, many of which are amenable to economic evaluation. Well-developed data following young children and students over extended periods of time could allow multiple evaluations to make use of a common set of data, such as progress along various outcome scales, even if the separate evaluations still required additional input of data, say, on cost differences related to different experimental designs.

___________________

1 Observation made at the committee’s open session on June 1, 2015, Panel 2; see Appendix A.

2 For more information on this effort, see http://www.icpsr.umich.edu/icpsrweb/PHDCN/about.jsp# [June 2016].

3 For more information on this effort, see http://web.jhu.edu/threecitystudy [June 2016].

4 Observation made at the committee’s open session on June 1, 2015, Panel 2; see Appendix A.

Accounting for Net Benefits and Costs Across Interventions, Constituencies, and Organizations

Compartments, silos, and limited frameworks constantly affect budget decision making, and as a result, the economic savings from investing in effective strategies may not accrue to the intervention, constituency, or government entity making the investment. For instance, a community may invest in early childhood education, but given the mobility of families, the gains from that investment often will accrue to jurisdictions to which those families move. In technical BCA terms, when internal costs are compared with internal benefits, external costs and benefits are ignored. One study, for instance, found that the societal return needed to realize government savings on drug and crime prevention interventions varies widely among sectors, and even for government saving alone, depends on whether the calculation is made at the federal level or at the federal, state, and local levels combined (Miller and Hendrie, 2012).
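A stylized example with hypothetical numbers shows how the same program can fail a local benefit-cost test while passing a societal one:

```python
# Hypothetical illustration of internal vs. external benefits in benefit-cost
# analysis. A city weighs an early childhood program whose benefits partly
# accrue to other jurisdictions and levels of government.

cost = 1_000_000.0  # paid entirely by the city
benefits = {
    "city (reduced local service use)":     600_000.0,
    "state (reduced corrections spending)": 500_000.0,
    "federal (higher future tax revenue)":  400_000.0,
}

internal = benefits["city (reduced local service use)"]
societal = sum(benefits.values())

print(f"city-only benefit-cost ratio: {internal / cost:.2f}")  # 0.60: looks like a loss
print(f"societal benefit-cost ratio:  {societal / cost:.2f}")  # 1.50: clearly worthwhile
```

From the city's vantage point the program returns 60 cents on the dollar; counting all jurisdictions, it returns $1.50. An evaluation limited to internal benefits would recommend against an investment that is worthwhile for society as a whole.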

How can budget offices make a difference here? For one, budget decisions frequently are made at high levels at which gains across boundaries can be combined. For instance, OMB often guides final budget decisions for the President when reviewing particular agency requests. Even a particular agency, as long as its goal is the well-being of constituents, can mitigate its own tendency to localize decision making by reporting economic evaluations across program areas, even those not under its jurisdiction.

Budget offices also can identify for policy makers and administrators incentives that might offset built-in tendencies to account only for local costs and benefits. For example, many federal programs in areas affecting children are implemented on the ground through state and local officials, and many state programs are implemented through local officials, thus resulting in transfers of benefits and costs across jurisdictions. Additional features can be added to programs so that offsetting transfers are made to compensate jurisdictions bearing costs for benefits they do not receive. Economic evaluations can account for gains and losses across all jurisdictions.

OMB, for example, could list which programs do and do not have economic evaluations planned as part of their ongoing assessment. Such programs could include those implemented through tax subsidies or regulation, not just direct spending. In addition, in its annual review of options for reducing the deficit, CBO could recommend using the availability of economic evaluation as one criterion for decision making.

CONCLUSION: Budgets allocate resources one way or the other. Those decisions will be made regardless of whether the results of economic evaluation and other forms of evidence are at hand or the research is planned for the future. It is desirable to have access to as much information as reasonably possible, and economic evaluation can be influential in a world where decisions are made with incomplete information.

CONCLUSION: Budget choices often factor in only near-term cost avoidance and savings and, even when evidence from benefit-cost analysis is available, near-term benefits. Benefits from investments in children, youth, and families, however, often are measured most accurately over extended periods continuing into adulthood.

CONCLUSION: The economic savings that result from investing in effective strategies may accrue to constituencies or government entities other than those making the investments.

Data Availability

“There aren’t archives out there where researchers or administrators or anybody else can go to get linked administrative data at the local, state, or federal level to do what we need to do. It’s the access issue that is the concern here.”

—Robert M. Goerge, senior research fellow, Chapin Hall at the University of Chicago, at the committee’s open session of June 1, 2015.

“At times it can be difficult to get the federal government to share data across different agencies. It can be even harder to get state agencies to share data across its agencies or with the federal government.”

—Beth A. Virnig, director, Research Data Assistance Center, University of Minnesota, at the committee’s open session of June 1, 2015.

Opportunities exist to use administrative data to help meet the data needs of different types of economic evaluation.5 In particular, cost analysis (CA), CEA, cost-savings analysis, and BCA produce distinct types of evidence that can be used to answer different questions. They also use different types of administrative data and leverage those data in different ways. Figure 4-1 depicts the potential uses of administrative data in economic evaluations.

___________________

5 Big data, innovative data-sharing technologies, and the emerging field of data science are relevant to the discussion of the use of economic evidence; within this report, however, these topics are not explored in depth.

FIGURE 4-1 Opportunities for the use of administrative data in economic evaluations.
NOTE: SPED = special education.
SOURCE: Adapted from Crowley (2015).

CA benefits from accessing administrative data that are qualitatively different from those used for other types of economic evaluation. Particularly key is the use of site budgets, personnel records, and service delivery logs. Site budgets, often one of the main sources of administrative data used for CAs, help establish the quantity of resources used and provide the actual prices paid to operate an intervention. Personnel records are used to determine what labor resources were used for what intervention activities. And service delivery logs are used to determine the size of the population served. Programs that use coordinated data systems to track service delivery at the individual level produce administrative records that allow for individual cost estimates by apportioning total costs to specific individuals. This process can provide more precise estimates than average cost estimates with poorly understood variability. Finally, reports on in-kind contributions that may supplement parental grants or contracts also can be mined to estimate the total costs of an intervention. Ignoring such supplemental resources can result in underestimating infrastructure costs and can jeopardize future replication of an intervention.
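
As a rough illustration of the apportionment step just described, the following sketch (all names and numbers hypothetical) allocates a program's total cost to individuals in proportion to the service hours recorded in a delivery log, and contrasts the result with a simple average:

```python
# Minimal sketch with hypothetical data: apportioning total intervention cost
# to specific individuals using a service delivery log.

total_cost = 250_000.0  # from site budgets and personnel records (assumed)

# Hours of service delivered per participant, from a coordinated data system (assumed).
service_log = {"A101": 40, "A102": 10, "A103": 25, "A104": 5}

cost_per_hour = total_cost / sum(service_log.values())
individual_costs = {pid: hours * cost_per_hour for pid, hours in service_log.items()}
average_cost = total_cost / len(service_log)

for pid, cost in sorted(individual_costs.items()):
    print(f"{pid}: ${cost:,.0f}")
print(f"Simple average per participant: ${average_cost:,.0f}")  # hides the variation above
```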

CEA uses many of the same records considered in an effectiveness analysis. Record domains including health care, education, criminal justice, social services, and workplace participation all are relevant. In a CEA, impacts captured in administrative outcome records can be considered in the context of an intervention’s cost. Whether the intervention is considered cost-effective depends on the payer’s willingness to pay for the achieved change in outcomes.
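
The dependence on willingness to pay can be expressed in a few lines. The sketch below (hypothetical costs, effects, and threshold) computes an incremental cost-effectiveness ratio and compares it with a payer's threshold; it is a schematic of the logic, not a prescribed method:

```python
# Minimal sketch with hypothetical numbers: an incremental cost-effectiveness
# comparison between an intervention and a lower-cost alternative.

intervention_cost, comparison_cost = 180_000.0, 100_000.0  # per 100 youth served (assumed)
intervention_effect, comparison_effect = 22.0, 14.0        # e.g., additional graduates (assumed)

# Incremental cost-effectiveness ratio: extra dollars per extra unit of outcome.
icer = (intervention_cost - comparison_cost) / (intervention_effect - comparison_effect)

willingness_to_pay = 12_000.0  # payer's threshold per additional graduate (assumed)
print(f"ICER: ${icer:,.0f} per additional graduate")
print("Cost-effective at this threshold:", icer <= willingness_to_pay)
```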

Cost-savings analysis makes it possible to consider an intervention’s impact and efficiency in more absolute terms. Cost-savings analyses can leverage administrative data similar to those used in CEA, but often look to data that are linked to budgetary outlays. In health care these data include Medicaid and Medicare reimbursements, private insurer payments to providers, and uncompensated care costs at both the provider and government levels. In education, the focus is often on cost drivers such as special education and disciplinary costs, as well as areas linked to public spending, such as attendance. Criminal justice records for individuals often are combined with administrative data on law enforcement spending, as well as court and detention operating costs. Specifically, when criminal records indicate the quantity of criminal justice resources spent on individuals, data on local, state, and federal spending can be used to estimate the price of those resources. In a similar fashion, individual-level social services data can be used in combination with social services agency and programmatic budgets to estimate the quantity of resources consumed and the local prices of providing them. Importantly, within the social services domain, programmatic budgets alone are not sufficient for estimating prices. The infrastructure costs of the service providers also must be included in the price estimates, and often can be derived only from agency operating budgets. Lastly, evaluations of workforce participation in cost-savings analyses generally focus on impacts on wealth, income, and tax revenue, requiring access to tax and asset records.
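
The quantity-times-price logic described for the criminal justice domain can be sketched as follows (all unit prices and service counts are hypothetical):

```python
# Minimal sketch with hypothetical data: combining individual-level records
# (quantities of services consumed) with spending data (unit prices) to
# estimate cost savings in the criminal justice domain.

unit_price = {"arrest": 2_500.0, "court_day": 1_800.0, "detention_day": 450.0}  # assumed

baseline_use = {"arrest": 3, "court_day": 4, "detention_day": 60}  # per participant, before
post_use = {"arrest": 1, "court_day": 2, "detention_day": 20}      # per participant, after

def total_cost(use: dict) -> float:
    """Quantity of each resource multiplied by its unit price, summed."""
    return sum(unit_price[item] * quantity for item, quantity in use.items())

savings = total_cost(baseline_use) - total_cost(post_use)
print(f"Estimated cost savings per participant: ${savings:,.0f}")
```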

While a cost-savings analysis generally would consider only one of these domains at a time, a full BCA would leverage these records to assess impact across systems and arrive at a full net benefit of the intervention that accounted for savings in one system and increased costs in another. Outcome evaluation of interventions for children, youth, and families ideally requires longitudinal data on changes in disparate aspects of well-being: education, health, safety, housing, employment, happiness, and so on.

While administrative data sets contain much of the information needed for CA, CEA, cost-savings analysis, and BCA, these data may not be available. Administrative data often are not stored centrally. Local data systems tend to archive differing information, use varied and incompatible file formats, and sometimes overwrite data instead of archiving them. Some systems are not even automated. Even if data are centralized, local cost recovery objectives may preclude retrieving them from the central source. Centralized data also tend to be a snapshot in time and place, while local data may be updated.

Even automated data often are not readily accessible. Privacy rules differ between health care and education data, but they often preclude access to identifiable records. Even signed consent will not enable access to identifiable tax records or Social Security earnings records.

The problem becomes especially acute when data cut across silos. The department funding a trial usually will try to contribute the data it owns to an evaluation, but may lack the leverage to convince other departments to spend resources on providing data or on breaking down barriers to support an evaluation.

The private sector now has data that dwarf the amount of public data. Every credit card swipe goes into a commercial database that documents buying habits and often also into a vendor database with details of who purchased what, where, and when. Sensors in crash-involved vehicles provide driving and impact data in millisecond intervals. And medical records increasingly are electronic. Thus, the data needed to answer many policy questions are housed in private data systems. Increasingly, the same is true of data needed to answer questions about the long-term outcomes of randomized controlled trials. The pressing question is how those data can be accessed affordably and ethically.

Access to data from randomized controlled trials also may be limited in ways that hamper maximizing the lessons learned from the trials. Trial managers are protective of their data: they fear that confidentiality could be breached, they lack the resources to document and share deidentified data and to answer questions about the data posed by prospective users, and they worry that their data could be misanalyzed. Yet meta-analyses are more powerful and accurate if unit record data can be pooled. It is unclear where the proper balance lies here.

CONCLUSION: Federal agencies maintain large data sets, both government-collected and resulting from evaluations, that are not readily accessible. Privacy issues and silos compound the challenges of making these data available. Improving access to administrative data and evaluation results could provide opportunities to track people and outcomes over time and at low cost.

CONCLUSION: Without a commitment by government to the development of linkages across administrative data sets on education, health, crime, and other domains, both longitudinally and across systems, efforts to expand the evidence base on intervention impacts and evidence of economic returns will be limited.

FACTORS THAT CAN FACILITATE THE USE OF ECONOMIC EVIDENCE

Factors related both to organizational culture and management practices and to collaborative relationships can facilitate the use of economic evidence.

Organizational Culture and Management Practices

Organizational culture and management practices, including leadership, openness to learning, accountability, performance management, and learning forums, can promote more effective use of economic evaluation. The focus in this section is on the dynamics within decision-making bodies. Some of the factors that provide an impetus for an organization to conduct economic evaluation are briefly reviewed in Box 4-3.

Leadership and Openness to Learning

Some organizations have a culture or characteristics that are supportive of the use of evidence, including the results of economic evaluation, such as leaders and managers who value economic evaluations and have sufficient knowledge to understand and make use of them.

BOX 4-3
The Impetus for Economic Evaluation: Examples

Proposals internal to an organization, as well as legislation with a direct impact on the organization’s budget, will frequently generate cost analysis (CA). The Congressional Budget Office (CBO), for example, requires CA for passage of federal legislation with a budget impact. Because CAs provide important information about the economic impacts of legislation, they may be accompanied by cost-effectiveness analysis (although CBO is likely to include such information in separate, program-related studies). As a routine matter, however, legislation often is not accompanied by information relating costs to the benefits of particular interventions (benefit-cost analysis) or assessing an intervention’s cost-effectiveness relative to other options with the same goal.

As another example, the White House Office of Information and Regulatory Affairs administers Executive Order 12866 (58 FR 51735), which requires federal agencies to provide an analysis of the costs and benefits of proposed rules and their alternatives. In theory this requirement has led to improved decision making, although there is not a strong evidence base indicating that it has in fact resulted in more cost-effective rules (Harrington and Morgenstern, 2004).

Such characteristics have been known to influence the extent to which economic evaluation is used to make programmatic or budgetary decisions (Armstrong et al., 2014; Brownson et al., 2009; Jennings and Hall, 2011). Researchers, both internal and external, can promote the use of economic evaluation when they understand the organization, develop relationships with leaders and other potential users who become involved in joint decision making involving the evidence (Nutley et al., 2007; Palinkas et al., 2015; Williams and Bryan, 2007), and communicate results in ways that increase understanding and use of the evidence (National Research Council, 2012; Tseng, 2012).

Organizations open to discussion and learning are more receptive to the use of evidence, including rigorous economic evaluations, that may run counter to their experiences and beliefs (Cousins and Bourgeois, 2014). Changes in the organizational culture may therefore be required to make an organization receptive to the use of economic evidence. Such changes may entail not only leadership and support from the top of the organization, but also external support and access to the resources needed to achieve a shared vision for the acquisition and use of such evidence (Blau et al., 2015; Hoyle et al., 2008). Changes also may entail attention to future needs, including the data required for economic evaluation. (See the section above on budget considerations for discussion of budgeting for the development of data in advance of future economic evaluations.) Further discussion of the importance of a culture of learning is included in the section below on performance management.

Wholey and Newcomer (1997) argue that organizations and their cultures should be examined before a study is undertaken to determine whether the organization is, in fact, prepared to use the evidence produced by the study. Funders, both public and private, who want to promote the use of economic evidence might choose to place their resources in organizations that are more receptive to doing so—thereby also providing incentives for other organizations to perform more economic evaluation.

CONCLUSION: Economic evidence is more likely to be used if it has leadership support and if the organizational culture promotes learning.

Accountability

Accountability involves delegation of a task or responsibility to a person or organization, monitoring the delegate to observe performance, and delivering consequences based on that performance. It arises in such relationships as supervisors’ evaluation of employees’ performance, auditors’ concerns with fiscal accountability, shareholders’ interests in company performance, and funders’ concerns with the success of the projects they fund.

Accountability is a central tenet of representative democracy (Greiling and Spraul, 2010), as citizens want to know how well the government to which they have delegated power has performed, and then deliver consequences through elections or other feedback channels.

Veselý (2013) notes that “accountability” is one of the most frequently used terms in public administration, but has many different meanings. He identifies four current usages, including “good governance” and a means to ensure the quality and effectiveness of government. Lindberg (2013) found more than 100 different subtypes and usages of accountability in the scholarly literature; he sees the major subtypes as political, financial, legal, and bureaucratic.

As the accountability movement has continued, its drawbacks, or unintended side effects, have become more obvious. These efforts can take time and resources away from an organization’s primary goals; that is, accountability, too, involves a cost that must be weighed against its benefits. Sometimes standards are unrealistic, or criteria for judging are contradictory. Agencies may focus on success in the tasks being measured or on one type of benefit, and neglect other goals or the broader picture. Accountability typically involves a top-down approach, whereas economic evaluation should be seen as a tool for strengthening, not just threatening, decision makers. Those promoting the use of economic evidence would do well to understand why greater accountability can, but does not necessarily, promote the use of economic evidence or better performance (Halachmi, 2002; Veselý, 2013). The example in Box 4-4 illustrates the potential negative effects of an emphasis on accountability.

One bottom line is that economic evaluation first and foremost provides valuable information for constructive problem solving. If people think that results of evaluations and other data will be used against them (e.g., for budget cuts or other unfavorable consequences), they will react accordingly (Asen et al., 2013; Lorenc et al., 2014). They may aim to improve the specific outcomes being measured without actually improving anything—for instance, by serving only those who are most likely to achieve some outcome or by manipulating the data (Brooks and Wills, 2015).

Nevertheless, the interest in accountability will continue, and rightly so. The theory is largely incontrovertible: if people are accountable for their actions, they usually will respond to the incentives involved. It is in the application of accountability schemes that difficulties arise. The mixed experience with accountability frameworks described above implies a learning agenda. As Coule (2015) puts it, there is increasing recognition that the notion of accountability as “a somewhat benign and straightforward governance function” is, instead, “a challenging, complex choice” (p. 76).

BOX 4-4
Illustrative Example of Accountability: No Child Left Behind

The K-12 accountability provisions of the federal No Child Left Behind Act* illustrate how strong accountability systems can overtake the gains available from a good information and economic evaluation system and even result in unintended consequences. The act’s goal of academic proficiency for all students as measured by state standardized tests of math and English language arts has resulted in a system highly focused on improving test scores. As a result, untested subjects, such as foreign languages or social studies, may be given short shrift. Students well below or well above proficiency have received less attention than others since they are less likely to contribute to a school’s overall measures of progress. The National Research Council’s (2011) report suggests that test-based incentive systems have had little effect on student achievement and that high school exit exams “as currently implemented in the United States, decrease the rate of high school graduation without increasing achievement” (pp. 4-5).

At the same time, by promoting the use of measures of progress, No Child Left Behind holds considerable promise for leading to many types of economic evaluation of different approaches to teaching, learning, and use of school resources. A more modest and attainable accountability system might first emphasize obtaining better measures of individual student progress that are useful to teachers and principals (e.g., as early warning signals of a student’s no longer making progress), as well as for performing multiple levels of experimentation amenable to future economic evaluation.

*In December 2015, the No Child Left Behind Act was replaced by the Every Student Succeeds Act (Public Law 114-95).

Performance Management

Closely related to accountability systems are performance management and monitoring. The underlying logic is that monitoring performance yields greater accountability and, in turn, better performance. Government-wide reforms, such as the Government Performance and Results Act (GPRA) of 1993, the George W. Bush-era Program Assessment Rating Tool (PART), and the current GPRA Modernization Act of 2010 are prime examples of the creation of performance management systems aimed at making data more widely used in decision making. Policy-specific changes in such areas as safety-net programs (the Personal Responsibility and Work Opportunity Act of 1996) and education (the No Child Left Behind Act of 2002 and the Race to the Top initiative of 2009) provide further incentive for the use of performance measures within specific policy areas. Economic evaluation can play, and has played, an important role in performance management.

An area ripe for further research is the role of continuous improvement or continuous quality improvement both in supporting the implementation of evidence-based practices and in ensuring that the implementation of those practices is helping to improve outcomes. As part of the Maternal, Infant, and Early Childhood Home Visiting Program, for example, states are required to submit an implementation plan to the federal government. Among the items they must include is a plan for using data for continuous quality improvement. This requirement suggests that it is important not only to use data and evidence to identify which types of programs or practices can produce outcomes or savings that offset their costs, but also to have a system for continually monitoring the implementation of these efforts and ensuring that implementation and outcomes are both moving in the expected direction. As noted earlier in this report, the implementation of interventions can strongly influence whether they produce the expected outcomes. Other factors—including community-level factors, historical context, and the choice of a counterfactual—also can affect outcomes. Thus, it is important in promoting evidence-based practice to identify ways in which governments and providers can monitor their programs continuously to ensure that they are producing the desired benefits.

Moynihan (2008) argues that performance data (of which economic evaluation is one type) are never comprehensive. For any complex program or task, there are multiple ways of capturing performance, and performance data cannot reasonably be expected to replace politics or to erase information asymmetries in the policy process. This does not mean that these data are not useful if applied in a realistic system of improvement, rather than one focused on some final determination of merit. Moynihan also points out that performance data are more likely to be used purposefully in homogeneous settings, where individuals can agree on the basic goal of a program.

Techniques such as BCA certainly have an appeal in being less susceptible to subjectivity than the selection of a simple performance target. But even as the importance and sophistication of BCA have risen, the political process should not be expected to cede decision making to even the best technical analysis. Organizational learning remains the central management benefit of performance data, including economic evidence, for complex tasks. Learning requires a willingness to observe and correct error, which depends in turn on frank discussions about what is working and what is not, as well as the limitations of even the highest-quality analysis.

Learning Forums

A classic error governments have made in efforts to link data to decisions is to pay inadequate attention to creating routines for the use of data. Learning forums are structured routines that encourage actors to closely examine information, consider its significance, and decide how it will affect future action. The meaning of data is not always straightforward; even the answer to such basic questions as whether performance is good or bad may be unclear. Learning forums provide a realm where performance data are interpreted and given shared meaning. More complex questions, such as “What is performance at this level?” or “What should we do next?” cannot be answered simply by looking at the data, but require deeper insight and other types of knowledge that can be incorporated into learning forums (Moynihan, 2015).

Such routines are more successful when they include ground rules to structure dialogue, employ a nonconfrontational approach to avoid defensive reactions, feature collegiality and equality among participants, and include a diverse set of organizational actors responsible for producing the outcomes under review (Moynihan, 2008). Moynihan and Kroll (2015) note that although no learning forum will be perfect, following principles and routines—for example, focusing on important goals and on some of the factors discussed in this and the previous chapter, such as committed leadership, timely information, a staff well trained in analyzing data, and high-quality data—can make a forum successful. A learning forum also will be more effective if it incorporates different types of relevant information. Quantitative data are more useful when they can be interpreted by individuals with experiential knowledge of process and work conditions that explain successes, failures, and the possibility of innovation (Moynihan, 2008). The latter type of information also might be derived from some type of evaluation, ideally with treatments and controls, a BCA, or a CEA.

A Potential Role for Funders

How might the broad conclusions on organizational culture and a continuous learning process presented in this section influence public and private funders? In sponsoring economic evaluation, funders often explicitly or implicitly seek or rely on a theory of causality: How do particular activities in this particular analysis result in specific outcomes? That question raises the issue of how the evaluation and the theory itself should adapt in a process of continuous learning and improvement. Funders might consider granting funds to support the use of monitoring systems and feedback loops, thereby enabling nonprofits or government agencies to use economic and other data and evidence to learn, adapt, and incorporate new understandings into an ongoing cycle of improvement.

CONCLUSION: Economic evidence is most useful when it is one component of a continuous learning and improvement process.

Collaborative Relationships

“There is a process involved to get individuals who are not naturally researchers to think about how they should use this type of information. It is building relationships. It is building trust. It is not a one shot thing.”

—Dan Rosenbaum, senior economist, Economic Policy Division, Office of Management and Budget, at the committee’s open session on March 23, 2015.

Studies relating to the use of economic evidence in policy making suggest that the “disjuncture between researchers and decision-makers in terms of objective functions, institutional contexts, and professional value systems” (Williams and Bryan, 2007, p. 141) requires considering an interactive model of research utilization that would increase the acceptability of economic evidence (Nutley et al., 2007). Tseng (2012) argues that improving the quality of research itself is insufficient, noting that “relationships are emerging as key conduits for research, interpretation, and use. Policymakers and practitioners rely on trusted peers and intermediaries. Rather than pursuing broad-based dissemination efforts, there may be value in understanding the existing social system and capitalizing on it” (p. 13).

A systematic review of 145 articles on the use of evidence in policy making in 59 different countries found that the factor that most facilitated use was collaboration between researchers and policy makers, identified for two-thirds of the studies in which use was achieved (Oliver et al., 2014a). Other facilitating factors included frequent contact; relevant, reliable, and clear reports of findings; and access to high-quality, relevant research.

However, developing relationships takes time and effort. Studies of use conclude that research should be conducted with sustained personal contact, dialogue, and collaboration between researchers and decision makers to benefit both the policy-making and research development processes (Davies et al., 2008; Elliott and Popay, 2000; Mitton et al., 2007; National Research Council, 2012; Orton et al., 2011; Palinkas et al., 2015). The need for regular communication among researchers, practitioners, and policy makers is a lesson that has been learned among those involved in scaling up evidence-based programs (Supplee and Metz, 2015), as well as those advocating for evidence-based policy making (Innvaer et al., 2002; Kemm, 2006).

In addition to developing relationships with potential users, researchers can communicate with local decision makers. Mitton and colleagues (2007) recommend using a steering committee composed of local representatives from different sectors to help guide the research and recommend strategies for dissemination. The steering committee members then become important conduits to the community, informing others about the study, their trust in the research, the results, and their implications.

In some cases, researchers find potential users among community leaders or what those studying public policy term policy entrepreneurs (Oliver et al., 2014b; Orton et al., 2011; Tseng, 2012). Regular, open dialogues with consumers can alert researchers to how the evidence might be used—for example, instrumentally (directly, for a decision) or conceptually (to influence beliefs about the problem or the approach and to inform future planning).

It is also important to recognize that economic evaluations may be applied to questions and settings beyond the original purpose of the evaluation. Consumers of economic evidence may wish to generalize and translate results of a study when they consider implementing an intervention elsewhere. In such circumstances, translators or intermediaries can play a critical role in helping to bridge the divide that often exists between producers and consumers of economic evidence (Armstrong et al., 2013; Bogenschneider and Corbett, 2010; Tseng, 2012). These translators or intermediaries are often people who already have established relationships with leaders in the agency and the community and thus are familiar with the contexts in which the results of economic evaluation may be applied. Further, they are, or can become, a trusted source with the skills to identify and interpret relevant research results in an informed, unbiased manner. They can engage in ongoing dialogue with users and help them translate research results into action that is consistent with the results.

The function performed by these translators or intermediaries is often referred to as knowledge brokering, whose primary objective is to “link decision makers with researchers so they can understand each other’s goals, cultures, and constraints, and can thus collaborate on how best to use evidence in decision making” (Conklin et al., 2013, p. 2). A knowledge broker may, therefore, be someone who operates independently at the intersection between producers and consumers, or may be someone affiliated more strongly with one group, such as an evaluator involved in evaluation and program planning who can help link empirical evidence with decisions made in practice settings (Donnelly et al., 2014; Urban and Trochim, 2009). Furthermore, Dobbins and colleagues assert that knowledge brokering is not limited to single intermediaries but “can be carried out by individuals, groups and/or organizations, as well as entire countries” (Dobbins et al., 2009, p. 2; Ward et al., 2009). The important point to remember is that it is the function of brokering to facilitate the diffusion and uptake of knowledge, regardless of the entity that provides it, that is essential to reducing the divide often experienced between research and practice. Fostering relationships between producers and consumers of economic evidence also can enhance the credibility and perceived relevance of research among potential users, since trust in the evidence appears to be closely tied to trust in its source (Fielding and Briss, 2006; Tseng, 2014).

Examples exist of such intermediary relationships or partnerships. Armstrong and colleagues (2013) describe success in Australia in implementing an Evidence-Informed Decision Making model, designed in accordance with research on knowledge transfer, to help decision makers better utilize scientific evidence. Such models are intended to integrate the best available research evidence with local contextual factors, such as community norms, political preferences, and available resources, leading to decisions better tailored to the local context (Tseng, 2014).

A promising development in efforts to connect research with policy and practice is the increasing focus on research-practice partnerships. Coburn and colleagues (2013) provide an overview of such partnerships in the field of education, defining them as “long-term mutualistic collaborations between practitioners and researchers that are intentionally organized to investigate problems of practice and solutions for improving district outcomes” (p. 2). Several such research-practice partnerships already exist, particularly in education.

According to Coburn and colleagues (2013), research-practice partnerships have five characteristics: (1) they are long-term, operating over several years and sometimes decades, which allows the partnership to focus on complex issues that may not be resolved with one study or simple or rapid analyses; (2) they focus on the problems of practice that districts find most pressing and important; (3) they are committed to mutualism, with research agendas being developed together by the researchers and practitioners and continually revisited to ensure that they are meeting the needs of each; (4) they use intentional strategies to foster partnerships, such as formal data sharing agreements and structured processes for developing research and sharing evidence; and (5) they produce original analyses, so the relationship is not just about translating or sharing findings but also about developing new studies to answer pressing questions.

These partnerships have the potential to address some of the ongoing challenges entailed in connecting research, practice, and policy (Innvaer et al., 2002; Oliver et al., 2014a). They build ongoing communication between researchers and practitioners to ensure that the research being produced answers the questions of interest to the practitioners or policy makers. This ongoing dialogue and relationship also allows for multiple conversations between researchers and practitioners to help translate findings into action in a way that is consistent with the evidence. Because researcher-practitioner partnerships center around the needs of and data from the community, including the framing of research questions of interest, they also address concerns among practitioners and decision makers about the generalizability and relevance of evidence from other contexts to their communities (Orton et al., 2011).

There are a number of examples of success in the use of economic evidence in decision making. Both the National Institute for Health and Care Excellence (NICE) in the United Kingdom and the Health Intervention and Technology Assessment Program (HITAP) in Thailand have statutory authority to use economic evidence to inform decisions about health care coverage. In Canada and Australia, knowledge transfer models are used to integrate research findings and contextual factors, such as community preferences, resources, and other local issues, to foster evidence-based decision making in local public health settings (Armstrong et al., 2013; Lavis et al., 2003); one of these efforts is described in Box 4-5. Finally, Blau and colleagues (2015) report on the success of training for medical professionals in four low- and middle-income countries (Albania, Azerbaijan, Croatia, and Georgia) in the use of economic evidence to inform decisions on immunization. The methods learned were then used in all four countries to improve estimates of the burden of disease, and led to policy changes in each country. These cases illustrate that effective use of economic evidence can occur, but they are the exceptions.

CONCLUSION: Interactive, ongoing, collaborative relationships between decision makers and researchers and trusted knowledge brokers are a promising strategy for improving the use of economic evidence.

EXAMPLES OF EFFORTS TO IMPROVE THE USE OF EVALUATION EVIDENCE

This chapter has reviewed factors that influence the use of economic evidence by decision makers. This section describes examples of ongoing efforts to address these factors. The examples in this section are intended to be illustrative of the points discussed throughout the chapter; they do not represent the total range of innovative ways in which state and local governments across the country are partnering with practitioners and intermediaries to improve the use and usefulness of research evidence in general and economic evidence in particular.

BOX 4-5
Knowledge Translation Strategies

Knowledge translation, defined as a range of strategies that help translate research evidence into practice, holds promise for guiding efforts to improve the use of economic evidence in decision making. Knowledge translation strategies are informed by theories underlying diffusion, dissemination, and implementation sciences and are designed to improve the capacity of both individual users and organizations to access and use evidence (Armstrong et al., 2013).

The work of Armstrong and colleagues at the University of Melbourne in Australia is one of the few studies available to articulate and begin to test a theory of change around knowledge translation strategies. These researchers found that the knowledge base on the effectiveness of knowledge translation strategies in changing behaviors in the clinical medicine and allied health fields was more substantial than that on the effectiveness of these knowledge translation strategies in public health settings. Their formative research suggested that for their approach to be effective it would need to support change at the individual and organizational levels. As a result, they set out to develop the capacity of key personnel in local government agencies in Victoria to access research evidence, assess its trustworthiness, and apply it to the local context, as well as to implement strategies that could foster an organizational culture supporting evidence-informed decision making across these agencies.

A statewide survey and a series of individual interviews with members of the target audience helped shape the development of a multipronged, resource-intensive intervention that included tailored organizational support, group trainings, targeted communications, and the development of evidence summaries of relevant content, all of which contributed to both individual and organizational improvements in the use of evidence. Strategies for improving the use of evidence among local government leaders included the implementation of training sessions to build the skills of project officers and senior management in basic research methods and ways of identifying high-quality empirical evidence, as well as the utilization of networks to promote evidence sharing, particularly if network activities also served to strengthen relationships between local agency staff and researchers.

Washington State Institute for Public Policy (WSIPP)

WSIPP is an example of efforts to use economic evidence to guide investment decisions by state government. WSIPP was created in 1983 by the Washington State legislature to conduct “practical, nonpartisan research at the direction of the legislature or the institute’s board of directors” (Institute of Medicine and National Research Council, 2014, p. 9). The institute essentially functions as an advisor on spending decisions for the state. WSIPP has developed a three-step process for determining the economic impacts of decisions. First, it applies a meta-analytic approach to identify and summarize the results of all rigorous evaluations relevant to the policies being analyzed. Second, it uses a systematic analytical framework to calculate the benefits, costs, and risks to the state’s population of a policy change. Finally, it analyzes the expected economic impact of investing in portfolios of programs that address a particular policy goal (Institute of Medicine and National Research Council, 2014).
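
The portfolio step can be illustrated schematically. The sketch below is not WSIPP's actual model; it simply shows, with invented programs and figures, how per-program net benefits and the risk that benefits fail to materialize might be combined into an expected impact for a portfolio:

```python
# Schematic sketch, not WSIPP's model: expected net benefit of a portfolio,
# combining per-program net benefits with the probability that each program's
# benefits materialize. All names and figures are hypothetical.

portfolio = [
    # (program, net benefit per participant, probability of success, participants)
    ("home visiting", 4_200.0, 0.80, 1_000),
    ("tutoring",      1_500.0, 0.90, 2_500),
    ("mentoring",     2_800.0, 0.65, 1_200),
]

expected_total = sum(nb * p_success * n for _, nb, p_success, n in portfolio)
print(f"Expected net benefit of the portfolio: ${expected_total:,.0f}")
```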

WSIPP has become a valuable resource for state legislators by addressing several of the factors discussed in this chapter. First, it has strong relationships with lawmakers, which have helped both analysts and decision makers understand how to work effectively with each other. Second, WSIPP has managed to create and maintain the perception that its work is relevant to decision makers across various contexts by building a portfolio of work in many policy areas over many years. Third, it has developed a systematic process that it applies in all its BCAs, which includes the reporting of results in an easy-to-understand format, standardized across policy sectors. Finally, WSIPP has ensured that external conditions are conducive to the use of its work by remaining systematically nonpartisan, making recommendations that follow objectively from its work regardless of which political faction will identify most with them (Institute of Medicine and National Research Council, 2014; National Research Council and Institute of Medicine, 2009).

Pew-MacArthur Results First Initiative

A joint project of the Pew Charitable Trusts and the John D. and Catherine T. MacArthur Foundation, Results First works with states to implement WSIPP’s approach to conducting BCA. This initiative helps states develop capacity to both produce and use economic evidence by offering government agencies WSIPP’s analytical tools, training policy makers and their staff in how the model can help inform their decision making, and helping agencies and decision makers establish working groups to guide and implement the model (Institute of Medicine and National Research Council, 2014). Results First also ensures that analyses are relevant to the local context by replacing Washington State’s data with data specific to each jurisdiction and by helping to implement analyses requested by states. Finally, Results First creates an effective incentive structure by requiring demonstrated commitment from both executive and legislative bodies to implementing WSIPP’s model and considering the results in policy deliberations (Pew-MacArthur Results First Initiative, 2014).

At the committee’s March open session, panelist Gary VanLandingham, director of Results First, stated that one of the major barriers facing the initiative is that “it is difficult to bring information into the policy process to come in as an outsider and bring information into the relationships . . . because of all of the gatekeepers that exist and the [need to gain] the policymaker’s confidence.” Thus part of Results First’s strategy is to identify actors who already have those relationships—who are at “the nexus of influence nodes”—and work with them to build their capacity to do this type of analysis and bring it into the system.

University of Chicago Consortium on Chicago School Research (UChicago CCSR)

The example of the UChicago CCSR illustrates factors that facilitate the use of evidence and the perception that evidence is relevant to local decision making; lessons from this work apply to increasing the use of economic evidence. The UChicago CCSR was created in 1990 in a partnership among researchers from the University of Chicago, the Chicago public schools, and other organizations. The consortium’s initial objective was to study the impact of the decentralization of Chicago’s public school system. Since then, it has contributed to many of the city’s reform efforts, a number of which have informed efforts in other jurisdictions.

Several features distinguish the UChicago CCSR from other research organizations, many of them related to factors that influence the use of evidence. First, by focusing on one place—Chicago—the consortium builds a perception among its main target group of users that its work is relevant to the local context. Second, it builds strong relationships and trust with users of its work by actively engaging a diverse group of stakeholders in the design of research, communicating the results of its work, and asking for input on the interpretation of its findings. In addition, its multipartisan steering committee includes representatives from state and local agencies, the teachers’ union, civic leaders, education researchers, and community-based organizations. Finally, the UChicago CCSR is distinguished by its commitment to reporting its work to a diverse range of audiences, translating research findings into publicly accessible reports that are widely disseminated.

Investing in Innovation Fund

The example of the Investing in Innovation Fund (i3) does not focus on economic evidence per se, but does highlight how evidence of impact is directly tied to decisions about investments in interventions benefiting children, youth, and families. Established in 2009 under the American Recovery and Reinvestment Act, i3 provides competitive grants to applicants that have established evidence of improving student achievement and attainment. By using evidence as an entrance requirement, i3 creates an incentive structure that encourages local educational agencies (LEAs) to generate solid evidence of the impact of their programs. In addition, because nonprofit organizations are eligible for i3 funding only if they partner with one or more LEAs or a consortium of schools, i3 incentivizes the development of relationships and a political environment conducive to the effective use of the evidence generated by funded activities. At the committee’s March open session, panelist Nadya Dabby, assistant deputy secretary for innovation and improvement at the U.S. Department of Education, stated that i3 gives “the support and incentive to create evidence coming out of the program . . . public investments should benefit [public education] beyond the direct beneficiaries.”

EPISCenter

The example of EPISCenter illustrates efforts to shift investments toward proven practices through collaborative relationships and technical assistance aimed, in part, at building capacity to realize anticipated program and economic impacts. A partnership between the Pennsylvania Commission on Crime and Delinquency and Penn State University’s Prevention Research Center, EPISCenter aims to advance high-quality implementation, impact assessment, and sustainability for interventions that have been proven effective through rigorous evaluations. In addition to outreach and advocacy to promote the adoption of evidence-based interventions, EPISCenter provides technical assistance, educational opportunities, and resources to communities. Examples of the programs promoted by the center include the following:

  • Communities that Care (CTC)—a structured system designed at the University of Washington to help communities prevent adolescent problem behaviors and promote positive youth development. CTC communities collect local data on risk and protective factors associated with delinquency, violence, substance use, and educational attainment. Communities then identify specific risk and protective factors on which to focus, and seek out evidence-based programs and strategies for addressing those priorities. After a few years of implementing these strategies, communities reassess their risk and protective factors to measure impact and identify new and emerging priorities.
  • Standardized Program Evaluation Protocol (SPEP)—a data-driven scoring system, developed at Vanderbilt University, for evaluating the effectiveness of juvenile justice programs in reducing recidivism. To determine an SPEP score, services being implemented are compared with characteristics that have been shown to predict reduced youth recidivism. SPEP translates evidence into a technical assistance resource for analyzing current investments and their relationship to the evidence base.

Although EPISCenter’s activities do not focus specifically on economic evidence, much of the center’s approach to promoting the use of evidence in general can be adapted to promoting the effective use of economic evidence. In addition, the center disseminates estimates of the return on investment of some of the interventions it promotes.6

Pay for Success/Social Impact Bonds

“I would say that Pay for Success approaches have changed the conversation more rapidly and more dramatically than anything else I have been engaged in over my career. I have rarely seen key decision makers from counties, states, service providers, foundations, and other [stakeholders] come together in a shorter period of time and get so intensely interested in the outcomes of a program. What is the evidence? What would it take for us to get that evidence? People put their own money at risk on the proposition that some social program is actually going to achieve a set of outcomes. Having sufficient evidence that people are going to put their money down on it is very powerful.”

—Jerry Croan, senior fellow, Third Sector Capital, at the committee’s open session discussion on March 23, 2015.

In March 2010, the United Kingdom’s Ministry of Justice, together with Social Finance, a not-for-profit organization created in 2007, launched a pilot program aimed at reducing recidivism among prisoners released from the Peterborough prison. The key feature of this pilot was its financial arrangement: private parties, mainly charitable trusts and foundations, provided approximately £5 million to fund the program, while the ministry agreed to pay them up to £8 million after 7 years, according to observed recidivism among program participants. Furthermore, if the program failed to achieve a reduction in recidivism of at least 7.5 percent, investors would lose their money (Disley et al., 2011; Nicholls and Tomkinson, 2013).

The Peterborough pilot was the first financial arrangement of its kind. Such financing models—referred to here as pay for success (PFS) but also known by various other names, such as social impact bonds,

___________________

6 See, for example, a brief discussion of the return on investment of Functional Family Therapy at http://www.episcenter.psu.edu/sites/default/files/ebp/MST-Three-Year-Report-ROI.pdf [November 2015].

outcome-based financing, and payment-by-results models7—have attracted a significant amount of interest in recent years. Since the original pilot was conducted, PFS programs have been explored in several countries on at least three continents (Azemati et al., 2013).

In particular, PFS financing has garnered growing interest within the United Kingdom and the United States (Callanan and Law, 2013; Corporation for National and Community Service, 2015). This financing tool leverages private investment to support preventive services that lead to public savings (Liebman, 2013). Typically, when a PFS contract is considered to be successful, the private investors receive back the initial capital outlay that supported service delivery as well as a percentage return, while the public sector benefits from the remaining cost avoidance or savings (often in the form of reduced service utilization).8 This financing structure makes PFS contracts of particular interest for programs designed to intervene in developmental processes that otherwise lead to downstream costs (Finn and Hayward, 2013).
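
The contingent-payment structure can be captured in a few lines. The sketch below is loosely patterned on the Peterborough terms described above, but the return rate and the measured outcome are invented for illustration:

```python
# Minimal sketch with hypothetical terms: the payment logic of a PFS contract.
# Investors fund services up front; the government repays principal plus a
# return only if the measured outcome clears the agreed threshold.

def pfs_payment(principal: float, annual_return: float, years: int,
                measured_reduction: float, threshold: float) -> float:
    """Government payment to investors under a simple success condition."""
    if measured_reduction < threshold:
        return 0.0  # outcome missed: investors bear the loss
    return principal * (1 + annual_return) ** years

# Loosely modeled on the Peterborough structure; rate and outcome are assumed.
payment = pfs_payment(principal=5_000_000, annual_return=0.05, years=7,
                      measured_reduction=0.09, threshold=0.075)
print(f"Payment owed to investors: £{payment:,.0f}")
```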

The PFS model has a number of benefits highlighted by its proponents (Bridges Ventures, 2014; Costa and Shab, 2013; Crowley, 2014; Galloway, 2014; Greenblatt and Donovan, 2013). These benefits include the following:

  • PFS reduces governments’ financial risk from funding social programs. If the expected results are not achieved, the government’s financial losses are reduced or completely eliminated, as in the Peterborough pilot.
  • It allows private agents—individuals or organizations—to align their investments with their social values while creating an opportunity for positive return on their investment.
  • It incentivizes service providers to innovate because the focus on outcomes allows them to adapt their programs to improve results without having to worry about the up-front expenditures often required in traditional pay-for-performance arrangements.
  • It increases society’s trust in how tax revenues are spent. Unsuccessful programs are not funded at the taxpayers’ expense, and—because payments are determined by the economic value of positive outcomes, including government cost savings—successful programs provide societal value that exceeds their cost.
  • It increases the focus of public officials on economic evidence and the details of expected outcomes.

___________________

7 In the present context, these terms are used to refer to financing instruments, but in other contexts, they are used differently. In international development, for example, outcome-based financing and pay for performance (P4P) more commonly refer to incentive-based payment mechanisms.

8 Certain investors (e.g., private foundations) may be interested primarily in breaking even on their investment. Breaking even may allow the investors to reinvest the percentage return and sustain the program of interest.

Conversely, others have been more cautious in embracing PFS (Stid, 2013), citing the following challenges:

  • PFS contracts are complex and involve numerous parties—government agencies, service providers, investors, program evaluators, and intermediaries connecting all these parties. Governments may have difficulty adapting their procurement mechanisms to these arrangements.
  • Complete transfer of financial risk from government agencies to investors may be difficult to achieve in all cases. Indeed, recent contracts in the United States have required third parties to guarantee a cap on investors’ losses if programs do not achieve their outcomes.
  • To date, most PFS investors have been charities and foundations. To become a major model in social program financing, PFS will need to attract a wider—profit-seeking—range of investors.
  • To foster innovation, PFS financing will require funding untested programs, which carry inherently higher risk than models with rigorous evidence of effectiveness. It is still unclear whether investors will be willing to absorb these higher degrees of risk and whether governments will be willing to guarantee the correspondingly larger payments.
  • Although PFS arrangements theoretically remove government agencies from the management of program implementation, traditional principal-agent problems are not necessarily solved: the intermediaries who manage most aspects of PFS programs may face strong incentives to create situations in which positive outcomes are reported.

Since the 2010 Peterborough pilot, several U.S. municipalities and states have launched PFS arrangements to fund programs with an empirical record of preventing recidivism among juvenile offenders, reducing emergency care costs for children with asthma, and reducing utilization of special education among at-risk youth (Brush, 2013; Olson and Phillips, 2013). Interest in these arrangements is increasing at the federal and state levels—especially for early childhood programs, in which the return on investment may be the greatest (Heckman et al., 2010). In the United States, as of August 2015, PFS projects had been launched in 6 states and were being explored in 27 others.9



The Social Innovation Fund’s (SIF) Pay for Success program is a U.S. federal initiative aimed at supporting PFS projects. SIF itself is an initiative of the Corporation for National and Community Service, with a stated goal of finding what works and making it work for more people. SIF seeks to accomplish this goal by creating a learning network of organizations working to implement innovative and effective evidence-based solutions to local and national challenges. As part of the 2014 and 2015 congressional appropriations, SIF was authorized to use up to 20 percent of its grant funds to support PFS implementation. In 2014, eight grantees were selected to receive funding for up to 3 years to provide technical assistance and promote capacity building for state and local governments and nonprofit organizations interested in implementing PFS strategies. SIF’s grantees include university-based PFS initiatives, nonprofit organizations focused on specific policy areas (e.g., housing, crime and delinquency, children), and organizations that specialize in supporting PFS projects (Corporation for National and Community Service, 2015).10

Two key objectives of PFS financing are identifying programs that work and limiting government financing of those that do not. Although success in public policy is usually identified with programs that achieve their objectives, a recent example highlights how the PFS framework can yield a kind of success even when a program does not meet its targets. In 2012, New York City began implementing a 4-year PFS project designed to reduce recidivism among adolescents incarcerated at Rikers Island. Under the terms of the agreement, had the program reduced recidivism by 10 percent, the city would have repaid private funders their $9.6 million investment, and it would have paid more for a larger impact. In July 2015, however, an independent evaluator announced that the program had failed to show any decrease in recidivism (Vera Institute of Justice, 2015), and the program was canceled after only 3 years. Despite the evident disappointment in the program’s failure, its implementation under a PFS approach meant that New York City did not pay for an ineffective intervention,11 and the city can now turn its attention to alternative approaches for reducing recidivism among this population. Moreover, these findings may lead to further scrutiny of the limitations of the type of cognitive-behavioral therapy (CBT) implemented at Rikers, a significant development in itself because CBT had considerable evidence of prior success (MDRC, 2015).
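
The figures in footnote 11 below imply a simple piece of first-loss arithmetic, sketched here under the assumption that the Bloomberg Philanthropies guarantee operated as a first-loss backstop; the report gives only the before-and-after loss figures, from which the roughly $6.0 million guarantee amount is inferred.

```python
# First-loss arithmetic implied by footnote 11. Only the $7.2M and $1.2M
# figures come from the report; treating the guarantee as a simple
# first-loss backstop, and the $6.0M amount, are inferences.

def residual_loss(exposure: float, guarantee: float) -> float:
    """Investor loss remaining after a first-loss guarantee is applied."""
    return max(exposure - guarantee, 0.0)

exposure = 7.2e6     # private loss without the guarantee (footnote 11)
loss_borne = 1.2e6   # private loss actually borne (footnote 11)
implied_guarantee = exposure - loss_borne  # -> $6.0 million

assert residual_loss(exposure, implied_guarantee) == loss_borne
print(f"Implied guarantee: ${implied_guarantee:,.0f}")
```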

___________________

9 U.S. PFS activity as tallied by the Nonprofit Finance Fund; see http://payforsuccess.org/pay-success-deals-united-states [June 2016].

10 The full list of SIF’s 2014 PFS grantees, subgrantees, and subrecipients can be found at http://www.nationalservice.gov/programs/social-innovation-fund/our-programs/pay-success#grantees [November 2015].

11 Private funders did not lose the entirety of their investment. Bloomberg Philanthropies, a private foundation, provided a loss guarantee that reduced private losses from $7.2 million to $1.2 million.



Key to the future success of PFS agreements is government partners’ willingness to work with intermediaries, program providers, and investors. Contract development often reveals regulations or laws that prevent the successful use of economic evidence to structure performance-based financing. For instance, efforts to structure PFS agreements around reducing the need for special education have been hindered by federal restrictions on funding under Part B of the Individuals with Disabilities Education Act (IDEA-B): states that invest in effective pre-K efforts stand to lose IDEA-B funding downstream. This barrier relates to the “wrong pocket” issues discussed in Chapter 3; without greater flexibility and coordination between levels of government, putting economic evidence to use in such agreements will remain difficult.

CONCLUSION: Growing interest in performance-based financing efforts is likely to increase the importance of economic evidence in decisions on investments in children, youth, and families.

RECOMMENDATIONS

RECOMMENDATION 3: If aiming to inform decisions on interventions for children, youth, and families, public and private funders of applied research12 should assess the potential relevance of proposed research projects to end-users throughout the planning of research portfolios.

Strategies for implementing this recommendation might include the following:

  • Engage groups of end-users in assessing high-priority research questions, key populations and contexts of interest, and capacity and resources for implementation in real-world conditions. Funders should then emphasize those areas in funding announcements.
  • Review criteria in funding announcements to ensure that they encompass the extent to which the proposed research relates to the interests and needs of end-users.
  • Engage end-users in review panels to assess aspects of relevance in proposal scoring and funding decisions.
  • Fund long-term partnerships between researchers and practitioners and between researchers and policy makers, centered on the needs of the practitioners or policy makers.
  • Ensure that funding announcements require the publication of information of relevance to end-users, such as information on the context in which the study was implemented, the costs of both implementation and supports for implementation, and the population with which the study was conducted.
  • Make ongoing learning and improvement a priority alongside demonstrating outcomes in assessments of grantees’ performance.
  • Require the publication of research findings in different formats targeted toward different audiences.
  • Ensure that sufficient time and resources are provided to support a planning stage during which researchers can engage in such activities as developing relationships with organizations and key actors, refining research questions, building advisory groups, and learning the local context.

___________________

12 “Funders” here might include staff in public agencies (e.g., the National Institutes of Health, the Institute of Education Sciences, the Centers for Disease Control and Prevention), as well as staff in private philanthropic or other organizations.

RECOMMENDATION 4: To achieve anticipated economic benefits and optimize the likelihood of realizing the expected outcomes of evidence-based interventions, public and private funders13 should ensure that resources are available to support effective implementation of those interventions.

Strategies for implementing this recommendation might include the following:

  • Support intermediary organizations that can provide training and technical assistance in the implementation of evidence-based interventions and work collaboratively with implementing organizations to ensure effective implementation.
  • Ensure that resources and fidelity assurances are available for the implementation of evidence-based practices, including resources that support professional development, technical assistance, and monitoring of implementation.
  • Facilitate linkages between decision makers and researchers through convening, information-sharing, or grant-making initiatives.

___________________

13 “Funders” here might include elected officials at the local, state, or federal level; leadership of public grant-making agencies or regulatory bodies; and private funders of interventions for children, youth, and families.


RECOMMENDATION 5: Providers of postsecondary and graduate education, on-the-job training, and fellowship programs designed to develop the skills of those making or seeking to inform decisions related to children, youth, and families should incorporate training in the use of evidence, including economic evidence, in decision making.

RECOMMENDATION 6: Government agencies14 should report the extent to which their allocation of funds—both within and across programs—is supported by evidence, including economic evidence.

REFERENCES

Aarons, G.A., Hurlburt, M., and Horwitz, S.M. (2011). Advancing a conceptual model of evidence-based practice implementation in public service sectors. Administration and Policy in Mental Health and Mental Health Services Research, 38(1), 4-23.

Anderson, L.M., Brownson, R.C., Fullilove, M.T., Teutsch, S.M., Novick, L.F., Fielding, J., and Land, G.H. (2005). Evidence-based public health policy and practice: Promises and limits. American Journal of Preventive Medicine, 28(5), 226-230.

Armstrong, R., Waters, E., Dobbins, M., Anderson, L., Moore, L., Petticrew, M., Clark, R., Pettman, T.L., Burns, C., Moodie, M., Conning, R., and Swinburn, B. (2013). Knowledge translation strategies to improve the use of evidence in public health decision making in local government: Intervention design and implementation plan. Implementation Science, 8, 121.

Armstrong, R., Waters, E., Moore, L., Dobbins, M., Pettman, T., Burns, C., Swinburn, B., Anderson, L., and Petticrew, M. (2014). Understanding evidence: A statewide survey to explore evidence-informed public health decision-making in a local government setting. Implementation Science, 9(1), 188.

Asen, R., Gurke, D., Solomon, R., Conners, P., and Gumm, E. (2011). “The research says:” Definitions and uses of a key policy term in federal law and local school board deliberations. Argumentation and Advocacy, 47, 195-213.

Asen, R., Gurke, D., Conners, P., Solomon, R., and Gumm, E. (2012). Research evidence and school-board deliberations: Lessons from three Wisconsin school districts. Educational Policy, 26. doi:10.1177/0895904811429291.

Asen, R., Gurke, D., Conners, P., Solomon, R., and Gumm, E. (2013). Research evidence and school board deliberations: Lessons from three Wisconsin school districts. Educational Policy, 27(1), 33-63.

Atkins, D., Siegel, J., and Slutsky, J. (2005). Making policy when the evidence is in dispute: Good health policy making involves consideration of much more than clinical evidence. Health Affairs, 24(1), 102-113.

Azemati, H., Belinsky, M., Gillette, R., Liebman, J., Sellman, A., and Wyse, A. (2013). Social impact bonds: Lessons learned so far. Community Development Investment Review, 9(1), 23-33.

___________________

14 The key actors in “government agencies” here would include agency leadership, budget offices, and others with management and budget functions in executive and legislative branches at the federal, state, and local levels.


Barnoski, R.P. (2002). Washington State’s Implementation of Functional Family Therapy for Juvenile Offenders: Preliminary Findings. Olympia: Washington State Institute for Public Policy. Available: http://www.wsipp.wa.gov/ReportFile/803/Wsipp_Washington-StatesImplementation-of-Functional-Family-Therapy-for-Juvenile-Offenders-PreliminaryFindings_Full-Report.pdf [October 2015].

Bergin, T. (2013). Markets for suspicion: Assessing cost-benefit analysis in criminal justice. InterDisciplines, 4(2), 59-84.

Blau, J., Hoestlandt, C., Clark, A., Baxter, L., Felix Garcia, A.G., Mounaud, B., and Mosina, L. (2015). Strengthening national decision-making on immunization by building capacity for economic evaluation: Implementing ProVac in Europe. Vaccine, 33(Suppl. 1), A34-A39.

Bogenschneider, K., and Corbett, T.J. (2010). Evidence-Based Policymaking: Insights from Policy-Minded Researchers and Research-Minded Policymakers. New York: Taylor & Francis Group.

Bogenschneider, K., Little, O.M., and Johnson, K. (2013). Policymakers’ use of social science research: Looking within and across policy actors. Journal of Marriage and Family, 75(2), 263-275.

Bowen, S., and Zwi, A.B. (2005). Pathways to “evidence-informed” policy and practice: A framework for action. PLoS Medicine, 2(7), 0600-0605.

Bridges Ventures. (2014). Choosing Social Impact Bonds: A Practitioner’s Guide. London, UK: Bridges Ventures, LLP.

Brooks, J., and Wills, M. (2015). Core Principles. Washington, DC: National Governors Association Center for Best Practices.

Brown, D.W., Kowalski, A.E., and Lurie, I.Z. (2015). Medicaid as an Investment in Children: What Is the Long-Term Impact on Tax Receipts? Cambridge, MA: National Bureau of Economic Research.

Brownson, R.C., Fielding, J.E., and Maylahn, C.M. (2009). Evidence-based public health: A fundamental concept for public health practice. Annual Review of Public Health, 30, 175-201.

Brush, R. (2013). Can pay for success reduce asthma emergencies and reset a broken health care system? Community Development Investment Review. Available: http://www.frbsf.org/community-development/files/pay-for-success-reduce-asthma-emergencies-resetbroken-health-care-system.pdf [March 2016].

Callanan, L., and Law, J. (2013). Pay for success: Opportunities and risks for nonprofits. Community Development Investment Review. Available: http://www.frbsf.org/communitydevelopment/publications/community-development-investment-review/2013/april/pay-for-success-opportunities-risks-nonprofits [November 2015].

Chaikledkaew, U., Lertpitakpong, C., Teerawattananon, Y., Thavorncharoensap, M., and Tangcharoensathien, V. (2009). The current capacity and future development of economic evaluation for policy decision making: A survey among researchers and decision makers in Thailand. Value in Health, 12(Suppl. 3), S31-S35.

Chambers, J.D., Cangelosi, M.J., and Neumann, P.J. (2015). Medicare’s use of cost-effectiveness analysis for prevention (but not for treatment). Health Policy, 119(2), 156-163.

Chetty, R., Hendren, N., and Katz, L.F. (2015). The Effects of Exposure to Better Neighborhoods on Children: New Evidence from the Moving to Opportunity Experiment. Cambridge, MA: National Bureau of Economic Research.

Coburn, C.E., Penuel, W.R., and Geil, K.E. (2013). Research-Practice Partnerships: A Strategy for Leveraging Research for Educational Improvement in School Districts. New York: William T. Grant Foundation. Available: http://w.informalscience.org/images/research/Research-Practice-Partnerships-at-the-District-Level.pdf [October 2015].


Cohodes, S., Grossman, D., Kleiner, S., and Lovenheim, M.F. (2014). The Effect of Child Health Insurance Access on Schooling: Evidence from Public Insurance Expansions. Cambridge, MA: National Bureau of Economic Research.

Conklin, J., Lusk, E., Harris, M., and Stolee, P. (2013). Knowledge brokers in a knowledge network: The case of seniors health research transfer network knowledge brokers. Implementation Science, 8(7), 1-10.

Corbacho, B., and Pinto-Prades, J.L. (2012). Health economic decision-making: A comparison between UK and Spain. British Medical Bulletin, 103(1), 5-20.

Corporation for National and Community Service. (2015). State of the Pay for Success Field: Opportunities, Trends, and Recommendations. Available: http://www.nationalservice.gov/sites/default/files/documents/CNCS%20PFS%20State%20of%20the%20Field%20Document%20Final%204-17-15%20sxf.pdf [March 2016].

Costa, K., and Shah, S. (2013). Government’s role in pay for success. Community Development Investment Review, 9(1), 91-96.

Coule, T.M. (2015). Nonprofit governance and accountability: Broadening the theoretical perspective. Nonprofit and Voluntary Sector Quarterly, 44(1), 75-97.

Cousins, J.B., and Bourgeois, I. (Eds.). (2014). Organizational capacity to do and use evaluation. New Directions for Evaluation, 2014(141). doi:10.1002/ev.20075.

Crowley, M. (2014). The role of social impact bonds in pediatric health care. Pediatrics, 134(2), e331-e333.

Crowley, M. (2015). Opportunities for Administrative Data to Support the Use of Economic Estimates in Decision-Making. Presentation at Administration for Children & Families’ Office of Planning Research and Evaluation meeting on The Promises and Challenges of Administrative Data in Social Policy Research, October 1-2, Washington, DC.

Damschroder, L.J., Aron, D.C., Keith, R.E., Kirsh, S.R., Alexander, J.A., and Lowery, J.C. (2009). Fostering implementation of health services research findings into practice: A consolidated framework for advancing implementation science. Implementation Science, 4(1), 50.

Davies, H., Nutley, S., and Walter, I. (2008). Why “knowledge transfer” is misconceived for applied social research. Journal of Health Services Research & Policy, 13(3), 188-190.

Disley, E., Rubin, J., Scraggs, E., Burrowes, N., and Culley, D.M. (2011). Lessons Learned from the Planning and Early Implementation of the Social Impact Bond at HMP Peterborough. Santa Monica, CA: RAND Corporation.

Dobbins, M., Robeson, P., Ciliska, D., Hanna, S., Cameron, R., and O’Mara, L. (2009). A description of a knowledge broker role implemented as part of a randomized controlled trial evaluating three knowledge translation strategies. Implementation Science, 4(23), 1-16.

Dobrow, M.J., Goel, V., and Upshur, R.E.G. (2004). Evidence-based health policy: Context and utilisation. Social Science & Medicine, 58(1), 207-217.

Dobrow, M.J., Goel, V., Lemieux-Charles, L., and Black, N.A. (2006). The impact of context on evidence utilization: A framework for expert groups developing health policy recommendations. Social Science & Medicine, 63(7), 1811-1824.

Donnelly, C., Letts, L., Klinger, D., and Shulha, L. (2014). Supporting knowledge translation through evaluation: Evaluator as knowledge broker. Canadian Journal of Program Evaluation, 29(1).

DuMont, K. (2015). Leveraging Knowledge: Taking Stock of the William T. Grant Foundation’s Use of Research Evidence Grants Portfolio. New York: William T. Grant Foundation.

Eddama, O., and Coast, J. (2008). A systematic review of the use of economic evaluation in local decision-making. Health Policy, 86(2-3), 129-141.


Elliott, H., and Popay, J. (2000). How are policy makers using evidence? Models of research utilization and local NHS policy making. Journal of Epidemiology & Community Health, 54(6), 461-468.

Fielding, J.E., and Briss, P.A. (2006). Promoting evidence-based public health policy: Can we have better evidence and more action? Health Affairs (Millwood), 25(4), 969-978.

Finn, J., and Hayward, J. (2013). Bringing success to scale: Pay for success and housing homeless individuals in Massachusetts. Community Development Investment Review, 9(1). Available: http://www.frbsf.org/community-development/files/bringing-success-scale-payfor-success-housing-homeless-individuals-massachusetts.pdf [December 2015].

Flitcroft, K., Gillespie, J., Salkeld, G., Carter, S., and Trevena, L. (2011). Getting evidence into policy: The need for deliberative strategies? Social Science & Medicine, 72(7), 1039-1046.

Galloway, I. (2014). Using pay-for-success to increase investment in the nonmedical determinants of health. Health Affairs, 33(11), 1897-1904.

Goldhaber-Fiebert, J.D., Snowden, L.R., Wulczyn, F., Landsverk, J., and Horwitz, S.M. (2011). Economic evaluation research in the context of child welfare policy: A structured literature review and recommendations. Child Abuse & Neglect, 35(9), 722-740.

Gordon, E.J. (2006). The political contexts of evidence-based medicine: Policymaking for daily hemodialysis. Social Science & Medicine, 62(11), 2707-2719.

Gottfredson, D.C., Cook, T.D., Gardner, F.E.M., Gorman-Smith, D., Howe, G.W., Sandler, I.N., and Zafft, K.M. (2015). Standards of evidence for efficacy, effectiveness, and scale-up research in prevention science: Next generation. Prevention Science, 16(7), 893-926.

Greenblatt, J., and Donovan, A. (2013). The promise of pay for success. Community Development Investment Review, 9(1), 19-22.

Greiling, D., and Spraul, K. (2010). Accountability and the challenges of information disclosure. Public Administration Quarterly, 34(3), 338-377.

Halachmi, A. (2002). Performance measurement, accountability, and improved performance. Public Performance & Management Review, 25(4), 370-374.

Hanney, S.R., Gonzalez-Block, M.A., Buxton, M.J., and Kogan, M. (2003). The utilisation of health research in policy-making: Concepts, examples and methods of assessment. Health Research Policy and Systems, 1(1), 2.

Harrington, W., Morgenstern, R.D., and Sterner, T. (2004). Comparing Instruments and Outcomes in the United States and Europe. Washington, DC: Resources for the Future.

Heckman, J.J., Moon, S.H., Pinto, R.R., Savelyev, P.A., and Yavitz, A. (2010). A New Cost-Benefit and Rate of Return Analysis for the Perry Preschool Program: A Summary. NBER Working Paper 16180. Available: http://www.nber.org/papers/w16180.pdf [December 2015].

Hoffmann, C., and Von Der Schulenburg, J.-M. (2000). The influence of economic evaluation studies on decision making: A European survey. Health Policy (Amsterdam, Netherlands), 52(3), 179-192.

Holmes, L., Landsverk, J., Ward, H., Rolls-Reutz, J., Saldana, L., Wulczyn, F., and Chamberlain, P. (2014). Cost calculator methods for estimating casework time in child welfare services: A promising approach for use in implementation of evidence-based practices and other service innovations. Children and Youth Services Review, 39, 169-176.

Hoyle, T.B., Samek, B.B., and Valois, R.F. (2008). Building capacity for the continuous improvement of health-promoting schools. Journal of School Health, 78(1), 1-8.

Institute of Medicine and National Research Council. (2014). Considerations in Applying Benefit-Cost Analysis to Preventive Interventions for Children, Youth, and Families. S. Olson and K. Bogard (Rapporteurs). Board on Children, Youth, and Families. Washington, DC: The National Academies Press.


Innvaer, S., Vist, G., Trommald, M., and Oxman, A. (2002). Health policy-makers’ perceptions of their use of evidence: A systematic review. Journal of Health Services Research & Policy, 7(4), 239-244.

Jacob, R.R., Baker, E.A., Allen, P., Dodson, E.A., Duggan, K., Fields, R., Sequeira, S., and Brownson, R.C. (2014). Training needs and supports for evidence-based decision making among the public health workforce in the United States. BMC Health Services Research, 14(1), 564.

Jennings, E.T., Jr., and Hall, J.L. (2011). Evidence-based practice and the uses of information in state agency decision making. The Journal of Public Administration Research and Theory, 22, 245-255.

Kaufman, J.S., Crusto, C.A., Quan, M., Ross, E., Friedman, S.R., O’Rielly, K., and Call, S. (2006). Utilizing program evaluation as a strategy to promote community change: Evaluation of a comprehensive, community-based, family violence initiative. American Journal of Community Psychology, 38(3-4), 191-200.

Kemm, J. (2006). The limitations of “evidence-based” public health. Journal of Evaluation in Clinical Practice, 12(3), 319-324.

LaRocca, R., Yost, J., Dobbins, M., Ciliska, D., and Butt, M. (2012). The effectiveness of knowledge translation strategies used in public health: A systematic review. BMC Public Health, 12(751), 1-15.

Lavis, J.N., Robertson, D., Woodside, J.M., McLeod, C.B., and Abelson, J. (2003). How can research organizations more effectively transfer research knowledge to decision makers? Milbank Quarterly, 81(2), 221-248.

Lens, M.C. (2015). Measuring the geography of opportunity. Progress in Human Geography, 1-23. doi: 10.1177/0309132515618104. Available: http://phg.sagepub.com/content/early/2015/11/30/0309132515618104.full.pdf+html [June 2016].

Lessard, C., Contandriopoulos, A.P., and Beaulieu, M.D. (2010). The role (or not) of economic evaluation at the micro level: Can Bourdieu’s theory provide a way forward for clinical decision-making? Social Science & Medicine, 70(12), 1948-1956.

Liebman, J.B. (2013). Building on Recent Advances in Evidence-Based Policymaking. Washington, DC: The Hamilton Project, The Brookings Institution.

Lindberg, S.I. (2013). Mapping accountability: Core concept and subtypes. International Review of Administrative Sciences, 79(2), 202-226.

Lorenc, T., Tyner, E.F., Petticrew, M., Duffy, S., Martineau, F.P., Phillips, G., and Lock, K. (2014). Cultures of evidence across policy sectors: Systematic review of qualitative evidence. European Journal of Public Health, 24(6), 1041-1047.

MDRC. (2015). Statement on Rikers PFS Failure. Available: http://www.mdrc.org/news/announcement/mdrc-statement-vera-institute-s-study-adolescent-behavioral-learningexperience [December 2015].

Merlo, G., Page, K., Ratcliffe, J., Halton, K., and Graves, N. (2015). Bridging the gap: Exploring the barriers to using economic evidence in healthcare decision-making and strategies for improving uptake. Applied Health Economics and Health Policy, 13(3), 303-309.

Miller, T.R., and Hendrie, D.V. (2012). Economic evaluation of public health laws and their enforcement. Public Health Law Research. Available: http://publichealthlawresearch.org/sites/default/files/downloads/resource/EconomicEvaluationPHL-Monograph-MillerHendrie2012.pdf [March 2016].

Mitton, C., Adair, C.E., McKenzie, E., Patten, S.B., and Waye Perry, B. (2007). Knowledge transfer and exchange: Review and synthesis of the literature. The Milbank Quarterly, 85(4), 729-768.

Moynihan, D.P. (2008). The Dynamics of Performance Management. Washington, DC: Georgetown University Press.


Moynihan, D.P. (2015). Using Evidence to Make Decisions: The Experience of U.S. Performance Management Initiatives. Commissioned paper for the Committee on the Use of Economic Evidence to Inform Investments in Children, Youth, and Families. Available: http://sites.nationalacademies.org/cs/groups/dbassesite/documents/webpage/dbasse_171855.pdf [June 2016].

Moynihan, D.P., and Kroll, A. (2015). Performance management routines that work? An early assessment of the GPRA Modernization Act. Public Administration Review. doi: 10.1111/puar.12434.

National Research Council. (2011). Incentives and Test-Based Accountability in Education. Committee on Incentives and Test-Based Accountability in Public Education. M. Hout and S.W. Elliott (Eds.). Board on Testing and Assessment, Division of Behavioral and Social Sciences and Education. Washington, DC: The National Academies Press.

National Research Council. (2012). Using Science as Evidence in Public Policy. Committee on the Use of Social Science Knowledge in Public Policy. K. Prewitt, T.A. Schwandt, and M.L. Straf (Eds.). Division of Behavioral and Social Sciences and Education. Washington, DC: The National Academies Press.

National Research Council and Institute of Medicine. (2009). Strengthening Benefit-Cost Analysis for Early Childhood Interventions. A. Beatty (Rapporteur). Committee on Strengthening Benefit-Cost Methodology for the Evaluation of Early Childhood Interventions. Board on Children, Youth, and Families. Division of Behavioral and Social Sciences and Education. Washington, DC: The National Academies Press.

Nelson, S.R., Leffler, J.C., and Hansen, B.A. (2009). Toward a Research Agenda for Understanding and Improving the Use of Research Evidence. Portland, OR: Northwest Regional Educational Laboratory.

Neumann, P.J. (2004). Why don’t Americans use cost-effectiveness analysis? American Journal of Managed Care, 10(5), 308-312.

Neumann, P.J., and Weinstein, M.C. (2010). Legislation against use of cost-effectiveness information. New England Journal of Medicine, 363(16), 1495-1497.

Nicholls, A., and Tomkinson, E. (2013). The Peterborough Pilot Social Impact Bond. Oxford, UK: Saïd Business School, University of Oxford.

Nutbeam, D., and Boxall, A.-M. (2008). What influences the transfer of research into health policy and practice? Observations from England and Australia. Public Health, 122(8), 747-753.

Nutley, S.M., Walter, I., and Davies, H.T.O. (2007). Using Evidence: How Research Can Inform Public Services. Bristol, UK: The Policy Press.

Oliver, K., Innvar, S., Lorenc, T., Woodman, J., and Thomas, J. (2014a). A systematic review of barriers to and facilitators of the use of evidence by policymakers. BMC Health Services Research, 14, 2.

Oliver, K., Lorenc, T., and Innvaer, S. (2014b). New directions in evidence-based policy research: A critical analysis of the literature. Health Research Policy and Systems, 12, 34.

Olson, J., and Phillips, A. (2013). Rikers Island: The first social impact bond in the United States. Community Development Investment Review. Available: http://www.frbsf.org/community-development/files/rikers-island-first-social-impact-bond-united-states.pdf [December 2015].

O’Reilly, C.A., III. (1982). Variations in decision makers’ use of information sources: The impact of quality and accessibility of information. Academy of Management Journal, 25(4), 756-771.

Orton, L., Lloyd-Williams, F., Taylor-Robinson, D., O’Flaherty, M., and Capewell, S. (2011). The use of research evidence in public health decision making processes: Systematic review. PLoS One, 6(7), 1-10.


Palinkas, L.A., Garcia, A.R., Aarons, G.A., Finno-Velasquez, M., Holloway, I.W., Mackie, T.I., Leslie, L.K., and Chamberlain, P. (2014). Measuring use of research evidence: The Structured Interview for Evidence Use. Research on Social Work Practice, 1-15. Available: http://rsw.sagepub.com/content/early/2014/12/01/1049731514560413.full.pdf [June 2016].

Palinkas, L.A., Short, C., and Wong, M. (2015). Research-Practice Policy Partnerships for Implementation of Evidence-Based Practices in Child Welfare and Child Mental Health. New York: William T. Grant Foundation.

Pew-MacArthur Results First Initiative. (2014). Results First in Your State. Available: http://www.pewtrusts.org/~/media/assets/2013/results-first-in-your-state-brief.pdf [December 2015].

Pew-MacArthur Results First Initiative. (2015). Legislating Evidence-Based Policymaking: A Look at State Laws that Support Data-Driven Decision-Making. Available: http://www.pewtrusts.org/~/media/assets/2015/03/legislationresultsfirstbriefmarch2015.pdf?la=en [January 2016].

Ribisl, K.M., Leeman, J., and Glasser, A.M. (2014). Pricing health behavior interventions to promote adoption: Lessons from the marketing and business literature. American Journal of Preventive Medicine, 46(6), 653-659.

Saldana, L., Chamberlain, P., Bradford, W.D., Campbell, M., and Landsverk, J. (2014). The Cost of Implementing New Strategies (COINS): A method for mapping implementation resources using the stages of implementation completion. Children and Youth Services Review, 39(2), 177-182.

Simoens, S. (2010). Use of economic evaluation in decision making: Evidence and recommendations for improvement. Drugs, 70(15), 1917-1926.

Stid, D. (2013). Pay for success is not a panacea. Community Development Investment Review, 9(1), 13-18.

Supplee, L.H., and Metz, A. (2015). Opportunities and challenges in evidence-based social policy. Social Policy Report, 28(4). Available: http://www.srcd.org/sites/default/files/documents/spr_28_4.pdf [June 2016].

Tseng, V. (2012). Social Policy Report: The Uses of Research in Policy and Practice. New York: William T. Grant Foundation.

Tseng, V. (2014). Forging common ground: Fostering the conditions for evidence use. Journal of Leisure Research, 46(1), 6-12.

Urban, J.B., and Trochim, W. (2009). The role of evaluation in research-practice integration: Working toward the “golden spike.” American Journal of Evaluation, 30(4), 538-553.

van Dongen, J.M., Tompa, E., Clune, L., Sarnocinska-Hart, A., Bongers, P.M., van Tulder, M.W., van der Beek, A.J., and van Wier, M.F. (2013). Bridging the gap between the economic evaluation literature and daily practice in occupational health: A qualitative study among decision-makers in the healthcare sector. Implementation Science, 8(57), 1-12.

Vera Institute of Justice. (2015). Impact Evaluation of the Adolescent Behavioral Learning Experience (ABLE) Program at Rikers Island: Summary of Key Findings. Available: http://www.vera.org/sites/default/files/resources/downloads/adolescent-behavioral-learningexperience-evaluation-rikers-island-summary-2.pdf [January 2016].

Veselý, A. (2013). Accountability in Central and Eastern Europe: Concept and reality. International Review of Administrative Sciences, 79(2), 310-330.

Ward, V., House, A., and Hamer, S. (2009). Knowledge brokering: The missing link in the evidence to action chain? Evidence & Policy: A Journal of Research, Debate and Practice, 5(3), 267.

Wholey, J.S., and Newcomer, K.E. (1997). Clarifying goals, reporting results. New Directions for Program Evaluation, 76, 95-105.


Williams, I., and Bryan, S. (2007). Understanding the limited impact of economic evaluation in health care resource allocation: A conceptual framework. Health Policy, 80(1), 135-143.

Zardo, P., and Collie, A. (2014). Predicting research use in a public health policy environment: Results of a logistic regression analysis. Implementation Science, 9, 142.
