Societies, both domestic and international, invest substantially in interventions1 designed to support the well-being of children, youth, and families in such areas as education, health, and social welfare. Often, the success of these interventions varies widely, leading to calls for evidence on how to make more informed investment decisions. Economic evidence—information derived from economic principles and methods—can help meet this need.2 Economic evidence can be used to determine not just what works, but what works within budget constraints.
Economic evaluation is a particular means of producing economic evidence that can be used to calculate and compare the costs and outcomes of an intervention. Unfortunately, economic evaluation is not always executed or applied effectively. These shortcomings may not only weaken society’s ability to invest wisely but also reduce the demand for this and other types of evidence. On the other hand, economic evaluation that is of both high quality and high utility—timely, accessible, and relevant within the context or environment in which it can best be used—can significantly improve and increase the returns on investments targeted to children, youth, and families.
This report examines many of the factors that both weaken and strengthen the effective use of economic evidence. It proposes best practices and makes recommendations to producers and consumers of economic evidence, as well as to those who mediate between the two, for improving the use of such evidence to inform investments for children, youth, and families.
1 Throughout this report, the term intervention is used to represent the broad scope of programs, practices, and policies that are relevant to children, youth, and families.
2 In the context of this report, economic evidence refers to the information produced by cost and cost-outcome evaluations, including cost analysis, cost-effectiveness analysis, and benefit-cost analysis.
In recent years, significant efforts have been devoted to strengthening the use of evidence, as well as performance measurement, for decision making in both the public and private sectors. Building on various efforts to “reinvent government,” a movement given momentum by Osborne and Gaebler (1992), Congress passed the Government Performance and Results Act (GPRA) of 1993 to strengthen measures of government performance and use them to guide future actions. The Program Assessment Rating Tool (PART), a 2002 initiative of the George W. Bush administration, was introduced as a diagnostic tool designed to help assess and improve the performance of federal programs. The GPRA Modernization Act of 2010 continued the momentum of these efforts by building on lessons learned and providing examples of agencies that had made use of evidence in planning and assessing their programs and policies. A more recent legislative effort advocating the use of evidence in investment decisions is the Evidence-Based Policymaking Commission Act of 2015, first introduced in 2014 by U.S. Senator Patty Murray (D-WA) and Representative Paul Ryan (R-WI), which would establish a commission to determine how best to expand the use of data for evaluating the effectiveness of federal investments. One of the hopes for this bill is to increase the availability and use of data in support of program evaluation.3
Additional efforts are evident in a growing number of publicly and privately funded initiatives designed to help implement evidence-based programs and policies, support new and continuous evaluation, and target investments toward what works. Examples include Bloomberg Philanthropies’ What Works Cities initiative; the Results First Initiative of the Pew Charitable Trusts and the John D. and Catherine T. MacArthur Foundation; the U.S. Department of Education’s Investing in Innovation Fund (i3); Making Results-Based State Government Work, a joint project of the National Conference of State Legislatures and the Urban Institute; Results for America; Pay for Success initiatives; the International Initiative for Impact Evaluation (3ie); and Health Systems Evidence. Further examples include the William T. Grant Foundation, which recently introduced a new research focus to support studies aimed at identifying and testing actionable strategies for improving the production and use of “useful” research evidence, and the Laura and John Arnold Foundation, whose new Evidence-Based Policy and Innovation Division will develop and support initiatives that encourage policy makers to use evidence in their decision making.
3 Evidence-Based Policymaking Commission Act of 2015, 114th Congress, 1st Session, H.R. 1831 (2015).
Although these initiatives have made substantial progress in bringing evidence to the forefront of investment conversations, not all are concerned specifically with economic evidence; many focus on evidence more generally. For example, outcomes (e.g., graduation from high school) may be measured with little regard for intervention costs. Policy makers therefore are seeking more information (e.g., from economic evaluations) to determine what works in the most cost-effective manner so that resources can be allocated wisely.
Not surprisingly, evidence is not the only factor influencing decisions. Weiss (1983) notes that ideology, interests, and information are the three major influences on government decisions. A 2012 report of the National Research Council (NRC) titled Using Science as Evidence in Public Policy similarly notes that scientific evidence is only one of the many influences on policy decisions, and that in a democracy, the views and interests of citizens and interested groups must be taken into account in formulating policy (National Research Council, 2012). Indeed, democratic processes by their very nature provide a means of making decisions in the absence of certainty. Nevertheless, the report highlights what it terms the “unique voice” of research evidence: It is “governed by systematic and rule-governed efforts that guard against self-deception . . . science is designed to be disinterested” (p. 10). Its procedures also are carefully detailed and circumscribed to allow for replication so that evidence can continually be tested and retested (National Research Council, 2012).
Although decision making clearly is the result of a dynamic process influenced by emotions and values, not just empirical evidence, such evidence—particularly economic evidence—can be used more effectively in investment decisions. Obviously, if the quality of the economic evidence is weak or the context (e.g., timelines or access to relevant data) in which it might be utilized is not carefully considered, the evidence will have limited utility. Given the potential for economic evaluations to influence better investments for children, youth, and families, this report outlines promising strategies for strengthening the evaluations themselves and better incorporating the evidence they produce into the processes used by decision makers.
In fall 2014, with support from the MacArthur Foundation, the Robert Wood Johnson Foundation, and the Jacobs Foundation, the Institute of Medicine (IOM) and the NRC formed the Committee on the Use of Economic Evidence to Inform Investments in Children, Youth, and Families. The committee was charged with conducting a study of how to improve the use of evidence derived from economic evaluations of costs, benefits, and potential for return on investment to inform policy and funding decisions on investments for children, youth, and families. The committee’s statement of task is presented in Box 1-1. Topics related to methodological standards, principles, and practices are covered at length in Chapter 3. The other topics of the charge are discussed throughout the report.
The committee’s charge at its core was to formulate recommendations for better ensuring that decision-making processes for investments in children, youth, and families are as informed as possible by economic evidence and, closely related, that the processes used by elected officials, researchers, budgeters, agency managers, and individual practitioners can result in better use of such evidence. Other National Academies efforts related to the study charge are highlighted in Box 1-2.
An ad hoc committee under the auspices of the Institute of Medicine (IOM) and the National Research Council (NRC) will study how to improve the use of economic analysis of costs, benefits, and potential for return on investment to inform policy and funding decisions on investments for children, youth, and families. The committee will make recommendations to improve the quality, utility, and use of research, evaluation, and economic evidence about investments in children, youth, and families. The committee will take into consideration the perspectives of and actions that can be taken by prevention researchers, economic researchers, implementation researchers, evaluation scientists, implementers, and those engaged in making decisions about policies and investments. Throughout its information gathering and deliberations, the committee will consider lessons learned from similar economic analyses in other fields.
The committee will
- Review and investigate the current landscape of the design, methods, utility, and use of research and evaluation on effectiveness, costs, benefits, feasibility to implement on a large scale, and potential for return on investment to determine what is being learned about investments in children, youth, and families; who is using the knowledge; and how it is being used.
- Review existing standards or guides for the design, methods, and reporting of cost-effectiveness, benefit-cost, return-on-investment, and budgetary impact analyses.
- Identify areas where widespread adoption of common methodological approaches is needed to ensure both consistent quality (e.g., appropriate choice of methods, validity, rigor) and appropriate utility (e.g., match of research questions to policy and implementation needs, comparability of studies, consistency of reporting).
- Specify common methodological approaches to be adopted in areas where sufficient evidence is available to reach consensus, including articulating options where one fixed standard may not be appropriate or needed.
- Identify and propose principles and processes for arriving at and adopting common methodological approaches over time in areas where consensus is not currently achievable or appropriate.
- Identify and propose processes for ensuring that the research and evaluation community, implementers, and those engaged in decision making are mutually informed and involved in designing studies, evaluations, and economic analyses to answer both important research questions and critical policy questions (e.g., costs, implementation, and effects at scale; components of interventions; portfolios of interventions; timeframe of anticipated outcomes and returns; accrual of benefits and returns to sectors/budgets other than the original expenditure).
- Identify current efforts and propose potential opportunities to support sustained, ongoing use of research, evaluation, and economic evidence in the public, philanthropic, and private sectors to inform investment in children, youth, and families. This includes incorporating evidence into decision-making processes alongside other political and value considerations.
The committee will build on the information gathered in two prior workshops conducted by the IOM and NRC: the 2009 Workshop on Strengthening Benefit-Cost Analysis for Early Childhood Interventions and the 2013 Workshop on Standards for Benefit-Cost Analysis of Preventive Interventions for Children, Youth, and Families.
To address the study charge, the National Academies appointed a committee whose membership included experts in a variety of disciplines and fields, including public policy, public health, education, social welfare, economics, sociology, developmental psychology, prevention science, program evaluation, and decision science. Members also included practitioners with experience (e.g., legislative, agency) in making decisions on interventions for children, youth, and families. (See Appendix A for public session agendas and Appendix B for biographical sketches of the committee members and staff.) A Glossary of key terms appears at the end of this report.
- Strengthening Benefit-Cost Analysis for Early Childhood Interventions (National Research Council and Institute of Medicine, 2009)
- Considerations in Applying Benefit-Cost Analysis to Preventive Interventions for Children, Youth, and Families (Institute of Medicine and National Research Council, 2014)
- Bridging the Evidence Gap in Obesity Prevention (Institute of Medicine, 2010)
- Using Science as Evidence in Public Policy (National Research Council, 2012)
- The Communication and Use of Social and Behavioral Science Research (2015)
The committee conducted reviews of the peer-reviewed and gray literature (e.g., research reports, online publications) relevant to the topics outlined in the study’s statement of task. The committee used this evidence to formulate conclusions and actionable recommendations to inform the research, practice, and policy decisions of prevention researchers, economic researchers, implementation researchers, evaluation scientists, implementers, and budgetary decision makers. The committee’s search efforts revealed areas of weakness in economic evaluations, such as gaps in guidance related to cost analysis and other economic evaluation methods. As a result, the committee formulated recommendations for improving future economic evaluations, including some suggested best practices.
Given the many competing influences entailed in decision-making processes, the existing literature provided little explicit evidence on precisely how economic evidence in general—much less economic evaluation per se and, still less, high-quality economic evaluations of investments in children, youth, and families—can better be incorporated into those processes. Accordingly, to assess how better use might be made of the results of economic evaluations, the committee found it useful to look beyond the quality issue to studies on the use of research and economic evidence more broadly defined, deriving salient information from such fields as public administration. The committee supplemented its literature review with commissioned papers and open-session discussions with outside experts. Throughout the chapters of this report, the reader will find selected statements made during those open-session discussions that support the key messages of this report.4 Box 1-3 provides detail on the committee’s supplemental information-gathering process.
The committee used various sources to supplement its research efforts. The committee met in person four times and held one half-day virtual meeting. In addition to its closed-session meetings, the committee held three public information-gathering sessions and commissioned three research papers. Participants in these supplemental information-gathering processes included prominent experts such as budgetary decision makers, translators of economic evidence, researchers, statisticians, and implementers representing perspectives from the fields of state and federal government, academia, and foundations and other nonprofit institutions.
Public Information-Gathering Sessions—Discussion Panels
- The Use of Economic Evidence in Decisions
- Facilitating and Overcoming Barriers to the Use of Economic Evidence in Investments
- Bridging the Gap Between Producers and Consumers of Economic Evidence
- Barriers to and Advances in the Use of Administrative Data/Integrated Data Systems
- Drs. Jeff Valentine (University of Louisville) and Spyros Konstantopoulos (Michigan State University) addressed the technical issues surrounding the design, quality, and use of meta-analyses in certain economic analyses.
- Dr. Richard Cookson (University of York) addressed the state of the art and current efforts related to introducing equity issues and outcomes into economic evaluations of social-sector interventions related to children, youth, and families.
- Dr. Donald Moynihan (University of Wisconsin) addressed the nature and use of performance data to provide insight on how to improve the use of economic data.
4 The experts quoted throughout the report provided written permission to include their names and testimony.
The committee first worked to determine how to define and limit the scope of its work while achieving the study’s purpose. Specifically, the committee developed definitions of economic evidence and its use and identified the types of investments in children, youth, and families that were relevant to the study. Throughout this report, for example, the committee elected to use the term intervention to represent the broad scope of programs, practices, and policies that are relevant to children, youth, and families. The committee’s decisions about definitions and relevance are reviewed in the following sections; at the same time, the committee recognizes that alternative, or broader, definitions of these terms are possible.5 Illustrative examples are presented throughout the report, but the committee recognizes that these examples cannot cover the full range of possible types of investments in children, youth, and families.
Definitions Related to Economic Evidence
Economic evidence broadly defined refers to the various types of information collected using economic principles and methods—all of which are important for decision making. Based on both the statement of task for this study (Box 1-1) and the expertise of its appointed members, however, the committee focused on a particular class of methods for producing economic evidence. These methods formally fall under the rubric of cost and outcome evaluation or economic evaluation methods but are better known in terms of several specific types of analysis: cost analysis (CA), cost-effectiveness analysis (CEA), and benefit-cost analysis (BCA). For purposes of this report, cost is defined as the full economic value of the resources required to implement a given social intervention, and outcomes are defined as the causal impacts of an intervention on children, youth, and families (relevant outcome domains are detailed later in this section). Key definitions related to economic evidence are given in Box 1-4. A Glossary of key terms is at the end of this report.
It should be noted that cost analysis is sometimes used as a generic term for multiple economic evaluation methods. To avoid confusion, the abbreviation CA is used in this report when cost analysis is being discussed as a specific type of analysis. The objective of a CA is to measure the full economic value of the resources required to implement an intervention relative to the baseline condition (typically the status quo).
5 For example, the Centers for Disease Control and Prevention uses the term program to describe any organized public health action (e.g., research initiatives, infrastructure-building projects, training education services) (Koplan et al., 1999).
Benefit-cost (or cost-benefit) analysis—a method of economic evaluation in which both costs and outcomes of an intervention are valued in monetary terms, permitting a direct comparison of the benefits produced by the intervention with its costs.
Break-even analysis—a method of economic evaluation that can be used when the outcomes of an intervention are unknown; can be used to complement cost analysis as a way of anticipating potential economic returns.
Budgetary impact analysis—a special case of cost-savings analysis that examines the impact, year-by-year, of a health-related intervention on the government budget for aggregate or specific agencies.
Cost—the full economic value of the resources required to implement a given social intervention.
Cost analysis—a method of economic evaluation that provides a complete accounting of the economic costs of a given intervention over and above the baseline scenario.
Cost-effectiveness analysis—a method of economic evaluation in which outcomes of an intervention are measured in nonmonetary terms. The outcomes and costs are compared with both the outcomes and cost for competing interventions (or an established standard) to determine whether the outcomes are achieved at reasonable monetary cost.
Cost-savings analysis—a method of economic evaluation that entails performing a benefit-cost analysis, but only from the perspective of the government sector.
Cost-utility analysis—a method of economic evaluation that entails performing a cost-effectiveness analysis using quality-of-life measures.
Counterfactual—the condition used as the basis for comparison in evaluating an intervention. No treatment, the current situation, or the best proven treatment are common counterfactuals.
Discount rate—the rate used to convert future costs and benefits into their equivalent present values.
DALY (disability-adjusted life year)—a general measure of the burden of disease on the quantity of life lived.
Economic evidence—the information produced from cost and cost-outcome evaluations, including cost analysis, cost-effectiveness analysis, and benefit-cost analysis.
Impact—an effect of an intervention, or change in an outcome, that can be attributed to the intervention.
Intervention—in the context of this report, a term used to represent the broad scope of programs, practices, and policies that are relevant to children, youth, and families.
Logic model—a pictorial representation of an intervention’s theory of change. Logic models typically show the relationship among resources, or inputs needed to carry out an intervention; major activities involved in the intervention; and the results of the intervention, expressed as outputs, outcomes, and/or impacts.
Outcome—an attitude, action, skill, behavior, etc., that an intervention is intended to causally influence.
QALY (quality-adjusted life year)—a general measure of the burden of disease on the quality and quantity of life lived.
Return-on-investment analysis—a method of economic evaluation used in special cases in which benefit-cost analysis is conducted for a specific stakeholder group.
Shadow price—the estimated true value or cost of the results of a particular decision, as calculated when no market price is available or when market prices do not reflect the true value.
CA provides a foundation for both CEA and BCA.6 In CEA, outcomes of an intervention are measured in nonmonetary terms, while in BCA, both costs and outcomes of an intervention are valued in monetary terms. CEA and BCA both incorporate the impacts of the intervention, not just its costs. In the case of CEA, for any given outcome affected by an intervention, the cost of attaining a given impact, such as the cost for each additional high school graduate, is calculated. Alternatively, a CEA can identify the amount of an outcome achieved for each dollar of cost. BCA goes one step further to value (ideally) all of the outcomes of an intervention in dollar terms, so that the aggregate value of the outcomes can be compared with the full economic cost of attaining those impacts. A BCA demonstrates whether the value of an intervention’s outcomes exceeds the value of the intervention’s cost, or whether the ratio of benefits to costs exceeds 1. In principle and in idealized form, a BCA also allows for comparisons across interventions to determine which ones provide the highest ratio of benefits to costs when
6 Related evaluation methods that can be considered special cases of these three approaches include cost-savings analysis, break-even analysis, cost-utility analysis, and budgetary impact analysis. These methods are defined in Box 1-4 and described in greater detail in Chapter 2.
available resources are restricted. Illustrative examples of these methods can be found in boxes throughout Chapter 2.
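The arithmetic behind these three methods can be sketched with a small, purely hypothetical calculation. All of the figures below (spending amounts, the 3 percent discount rate, the graduation impact, and the benefit stream) are invented for exposition and are not drawn from any actual evaluation:

```python
# Hypothetical illustration of CA, CEA, and BCA arithmetic.
# All numbers are invented for exposition only.

def present_value(amounts, rate):
    """Discount a stream of amounts (year 0 first) to today's dollars."""
    return sum(a / (1 + rate) ** t for t, a in enumerate(amounts))

# Cost analysis (CA): full economic cost of the intervention vs. baseline.
cost = present_value([100_000, 50_000], rate=0.03)   # two years of spending

# Cost-effectiveness analysis (CEA): cost per unit of a nonmonetary outcome.
extra_graduates = 30                                 # impact vs. counterfactual
cost_per_graduate = cost / extra_graduates

# Benefit-cost analysis (BCA): all outcomes valued in dollars, discounted
# to the same point in time as costs so the two are directly comparable.
benefits = present_value([0, 40_000, 80_000, 120_000], rate=0.03)
bcr = benefits / cost          # a ratio above 1 means benefits exceed costs
net_benefit = benefits - cost

print(f"cost = {cost:,.0f}; cost per graduate = {cost_per_graduate:,.0f}")
print(f"benefit-cost ratio = {bcr:.2f}; net benefit = {net_benefit:,.0f}")
```

The sketch also shows why CA is foundational: both the cost-effectiveness ratio and the benefit-cost ratio divide by the same discounted cost figure, so an error in the cost analysis propagates into every downstream comparison.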
While economic evidence can be quite valuable and fulfill a unique role, it does not dominate other forms of evidence and concerns. In the case of public policy decisions that are highly value-dependent or involve substantial nonquantifiable outcomes because of uncertainty or other reasons (e.g., the death penalty), the extent to which any economic evaluation can or should determine the final decisions made may be limited.
Definitions Related to the Use of Economic Evidence
A key word in the committee’s statement of task is use. The committee was not asked to determine which method of economic evaluation is best, which studies are most informative, or how recent reports might suggest ways to allocate resources. Instead, the question posed was how best to improve the use of economic evidence, both existing and as it might be developed, to inform decision making on investments in children, youth, and families. In this context, economic evidence serves two main purposes: it provides a unique, disinterested voice and a rigor often unavailable elsewhere, and it supplies a broader framework for decision making in the presence of limited information. The ideal goal for the use of economic evidence, particularly that derived from BCA, is to maximize the return derived from each additional dollar, moment of time, or other resource expended at each margin. Since there never will be enough evidence to ensure that every dollar or other incremental resource could not be better spent elsewhere, the very framework of economic evaluation maintains a balanced focus by calling on those producing or using evidence to look at costs, not just benefits; at margins, not just averages; and at every margin, not just one. For instance, a simple evidence framework might reveal that a certain effort succeeds at a certain task, but the economic evidence framework asks additionally whether there might be some better use of the last $100 spent on that effort.
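The distinction between average and marginal returns drawn above can be made concrete with a toy calculation. The spending schedule below is entirely hypothetical; it exists only to show how a healthy average return can mask a poor return on the last increment of spending:

```python
# Toy illustration of average vs. marginal returns, using invented figures.
# Each tuple is (cumulative dollars spent, cumulative benefit produced).
schedule = [(0, 0), (100, 400), (200, 650), (300, 750), (400, 800)]

spent, benefit = schedule[-1]
average_return = benefit / spent            # 2.0: looks strongly worthwhile

# Marginal return on the last $100 increment:
prev_spent, prev_benefit = schedule[-2]
marginal_return = (benefit - prev_benefit) / (spent - prev_spent)  # 0.5

# The average return is well above 1, yet the last increment produced only
# 50 cents of benefit per dollar -- exactly the case the economic framework
# is designed to flag when it asks about the last $100 spent.
print(average_return, marginal_return)
```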
To understand potential drivers of the use of economic evidence, it is important to acknowledge the complexity and variability entailed in decision making, as well as the differing typologies of use that may have distinct influencers (Innvaer et al., 2002; National Research Council, 2012; Pirog, 2009). The following is a nonexhaustive list of the typologies of use:
- Instrumental use—Study results are used to make concrete decisions. The decisions may be formative or summative and may be made by many different stakeholders.
- Imposed use—Requirements set by government offices or funders mandate that scientific information be collected or that evidence-based interventions be implemented as a condition for funding.
- Conceptual or enlightenment use—This category of use refers to broad changes in how policy makers or stakeholders view an issue over time. Such use can occur when evaluation findings over time affect the conceptual framework that policy makers use to address policy issues.
- Tactical use—A decision is made to draw on research evidence to support or refute a specific idea, program, legislative action, or reform effort.
- Process use—In practice, evaluations often influence how people in an organization think. Such process use is particularly common when those in the organization are directly involved in a study. As a result of their involvement, they may, for example, begin making more requests for data, increase their use of logic models, or ask for more evidence in planning or making other types of decisions.
- Symbolic use—Evaluation results are used for strategic or persuasive purposes, to justify pre-existing positions. Views or decisions of the user do not truly change, but the user’s goal is to change the views of others.
Those seeking to make economic evidence more useful should consider these typologies of use. Illustrative examples of instrumental use, imposed use, and conceptual use of economic evidence are described in Chapter 2.
Relevant Investments for Children, Youth, and Families
The committee’s charge focused specifically on investments in children, youth, and families, not on spending on families more broadly defined. Not all spending, public or private, represents investment. The term investment normally implies less consumption today in order to promote more well-being and consumption tomorrow. The corresponding notion that an investment produces a rate of return is incorporated explicitly into economic evaluations through the way in which they sum and compare costs and benefits over time. A classic case is education, which yields returns tomorrow for investments made today.
As a result, although many types of spending or other efforts may be beneficial to children, youth, and families, they do not necessarily provide the types of return on investment that would be identified by evidence from economic evaluation. Table 1-1 summarizes the types of interventions that fall within the relevant policy domains of this study. Of course, such categorizations are somewhat arbitrary. Many social interventions may fall
| Domains | Examples of Interventions |
| --- | --- |
| Early Childhood | Home visiting, parent education, early learning, and other interventions that serve parents or children from the prenatal period to kindergarten entry through services delivered in the home, in centers, or through other providers (e.g., pediatricians) |
| K–12 Education | School-based interventions that serve children during the K–12 years, as well as school- or community-based out-of-school-time learning programs (i.e., before- or after-school or summer learning) |
| General Youth Development | School- or community-based programs that serve adolescents to promote positive youth development (e.g., mentoring, sports) |
| Child Welfare | Home- or community-based programs designed to prevent abuse or neglect or to intervene with children and families in the child welfare system |
| Health Promotion or Prevention | School- or community-based programs for parents or children designed to promote general physical and mental health or to prevent specific health problems, including disease |
| Safety Promotion and Injury Prevention | Home-, school-, or community-based programs designed to promote safety or prevent injury |
| Wellness | Home-, school-, or community-based programs for children and families focused specifically on nutrition and physical activity |
| Mental Health | School- or community-based programs for parents, youth, and children designed to diagnose and treat mental health or substance use conditions |
| Teen Pregnancy Prevention | School- and community-based programs for children and youth designed to promote sexual health and prevent teenage childbearing |
| Crime/Violence Prevention | School- and community-based programs for children and youth designed to prevent delinquency, crime, and violence |
| Homelessness Prevention and Supports | Programs providing housing support to prevent homelessness or providing supportive services for those who are homeless |
| Employment, Training, and Welfare | Training, employment, and income or in-kind assistance programs for families with children |
SOURCE: Adapted from http://www.rand.org/content/dam/rand/pubs/technical_reports/2008/RAND_TR643.pdf [March 2016].
into more than one category, as many interventions have multiple objectives. For example, some early childhood interventions could be regarded either as child welfare interventions when they focus on prevention of abuse and neglect or as crime/violence prevention interventions when that is an eventual outcome. Likewise, interventions designed as general youth development programs may also be considered substance abuse prevention programs or teen pregnancy prevention programs if those are domains of behavior in which they have impacts. In general, the committee classifies interventions based on the primary outcomes they are designed to effect, even when there is overlap with other domains. Further, although many of the examples in Table 1-1 come from the U.S. experience, the committee also reviewed international practices and provides examples of economic evaluation of major development topics, such as disease prevention interventions.
The intervention types listed in Table 1-1 share a focus on children and youth, either directly or through interventions with their parents. Interventions of interest for this study include those that start as early as the prenatal period and extend through early childhood, the elementary school ages, and adolescence toward adulthood (given that returns on investment can extend throughout a child’s life). Interventions may center on the parent-child dyad or adopt a two-generation focus, with differentiated services centered on the child(ren) and the parents. Interventions of interest include those that deliver services in the home, as well as those that reach children and parents through a childcare or early learning program, a medical provider, a school, or some other community-based setting. The interventions listed in Table 1-1 may be made available universally or on a targeted basis according to specific indicators of risk, such as low income, health risks, or developmental delays.
The findings and recommendations presented in this report concern a broad range of interventions addressing children, youth, and families. The interventions may be at the federal, state, regional, or local level and in the United States or other countries. They may vary in intensity, frequency, duration, and cost and be financed by governmental or nonprofit entities or other actors. Finally, in addition to the types of interventions listed in Table 1-1, of relevance are regulatory policies, whether established by legislatures or agencies, designed to affect child and family well-being. For instance, safety promotion and injury prevention often are accomplished through specific legislation or regulations regarding products in the marketplace or safety practices, such as the use of safety belts or child seats in vehicles.
Relevant Outcome Domains
Although some interventions, such as those within the early childhood education field, may focus on the child, others, such as those related to
housing, focus on the family; moreover, benefits to one family member often redound to others. Table 1-2 lists some of the outcome domains that the interventions in Table 1-1 would be expected to affect and for which economic evaluations need to account. As organized here, the outcome domains are divided into one set measured for children and youth and another measured for adults. Table 1-2 reveals a wide-ranging set of outcome areas that need to be considered, many of which may not typically be examined in economic evaluations, often because data are inadequate. Broadly speaking, these areas cover all aspects of child development—cognitive, social, emotional, behavioral, and physical—as well as similar domains of well-being
|Child and Youth Outcome Areas|
|Behavioral/Emotional||Behavior problems, school discipline (suspension, expulsion), social-emotional functioning (e.g., emotional regulation, executive function), mental health (e.g., depression, anxiety, stress, other psychological dysfunction)|
|Cognitive||IQ, other cognitive development measures|
|Education||Achievement tests and other education outcomes through high school (e.g., attendance, grades, grade repetition, special education use, high school graduation/dropping out)|
|Physical Health||Physical health, reproductive health (e.g., teen pregnancy, contraceptive use), health care utilization|
|Antisocial/Risky Behavior||Delinquency, crime, substance abuse, sexual activity, bullying|
|Adult Outcome Areas|
|Family Functioning||Parent-child relationships, child abuse and neglect, healthy relationships|
|Education||Postsecondary education outcomes|
|Economic||Labor market outcomes, use of social welfare programs|
|Health||Physical health, mental health, reproductive health (e.g., childbearing, contraceptive use), health care utilization|
|Crime and Substance Abuse||Criminal activity; arrests and convictions; recidivism; use of tobacco, alcohol, and drugs|
SOURCE: Adapted from http://www.rand.org/content/dam/rand/pubs/technical_reports/2008/RAND_TR643.pdf [March 2016].
in adulthood. The adult outcomes also encompass economic outcomes, including those related to labor and the use of welfare interventions, both for parents and for children as they age.
Although many conclusions about the use of economic evidence might well apply to any intervention, why place special focus on children, youth, and families? First, much spending on children and youth is aimed at improving their long-term well-being into adulthood and for decades to come (Hahn, 2014). Those concerned with long-term advancement for both individuals and society often turn their attention to the early stages of life—a critical period in cognitive, social, behavioral, and neurological development (Doyle et al., 2009). Second, relative to many other forms of spending, such as retirement support, much of government spending on children does take the form of investment, as evidenced by the share of spending on children going to education (Hahn, 2014). Of note, investment is especially amenable to economic analysis both in framing interventions and in providing evidence of success. Finally, it simply turns out that a good deal of recent effort devoted to providing economic evidence (see examples above) has focused on ways of providing services that would help children advance (Hahn, 2014).
To guide its deliberations, the committee considered the audiences for this report—those who produce evidence (including economic evidence) and those whose decisions ultimately affect whether and how well the evidence is used. These audiences can be divided broadly into the producers and consumers of evidence, with intermediaries often called upon to bridge the gap between them. Stakeholder groups for which this report may be relevant include prevention, economic, and implementation researchers; evaluators; implementers; knowledge translators; technical assistance providers; federal, state, and local agencies and policy makers; budget offices; and public, private, and nonprofit funding agencies. Additionally, many of the key messages of this report were designed to be applicable to stakeholders in both domestic and international contexts.
Researchers generally produce or provide economic evaluations, although those who gather the data used in such studies often determine what studies can be performed and what costs and benefits can be assessed. Both government entities (legislative and executive branches at the local, state, and federal levels) and philanthropies can finance economic evaluations and in that role affect what is studied and how results are disseminated. Of course, they simultaneously determine much of what is demanded and naturally serve as primary consumers of the evidence they demand. Within those institutions are individuals who provide, consume, and mediate the
evidence resulting from economic evaluations, including elected officials and agency leaders with the power both to allocate resources to produce the evidence and to make use of the evidence to reallocate resources. Such power varies widely from person to person and field to field. For instance, a welfare budget may be determined by a legislature, but vaccination decisions may be made by an administrator or, more likely, a service provider (e.g., a nurse or physician in concert with parents).
Advocates for children, youth, and families or for particular subgroups, such as children with diabetes or in charter schools, are among those who exercise influence over public policy choices. At the broadest level, of course, society in general and, more specifically, families themselves serve as consumers of economic evidence, often deciding through private spending and voting just how such evidence will be used in allocating societal resources.
To support efforts to make better investment decisions for children, youth, and families through the use of economic evidence, it is critical that stakeholders, across sectors and systems, recognize the complexities surrounding the production, use, and utility of the evidence. For those stakeholders seeking ways to improve the use of economic evidence in decision making on investments in children, youth, and families, the committee proposes that they give substantial attention to two simple guiding principles: (1) quality counts, and (2) context matters. That is, it is not enough simply to require the use of evidence without also addressing issues of the quality and utility of the evidence. Failure to address quality issues weakens not just individual reports but also the acceptance of evidence more broadly by decision makers. In turn, context always matters: failure by researchers conducting economic evaluations to address the values and concerns of decision makers practically ensures that the resulting evidence will be weakly used, if at all. These two principles provide the framework for this report and support the committee’s overall conclusion that the greatest promise for improving the use of economic evidence lies in producing high-quality and high-utility economic evidence that fits well within the context in which decisions will be made.
The remainder of this report details the key messages framed by the above two guiding principles. Chapter 2 provides background for the succeeding chapters, summarizing the methods commonly used in economic evaluation. It identifies the relevant stakeholders in the production, use, and
support of economic evidence; highlights current uses of economic evidence and common challenges to its use; and examines the untapped potential of economic evidence to inform investments in children, youth, and families. Chapter 3 describes the procedural steps in producing high-quality economic evidence. It also provides recommendations, including best practices, for the production and reporting of the results of high-quality economic evaluations of interventions for children, youth, and families, while also describing some related emerging issues. Chapter 4 acknowledges the vital importance of context in considering how quality economic evidence can best be used, highlights the broader influential factors that affect investment decisions, and offers related recommendations. Chapter 5 consolidates the key messages conveyed within Chapters 3 and 4, presenting a roadmap with a multipronged strategy for promoting improvements in the use of high-quality economic evidence. The recommendations offered in Chapter 5 present potential opportunities for stakeholders in economic evidence to partner to resolve some of the most pressing, cross-cutting issues (e.g., infrastructure, incentives, and funding) that must be addressed to promote more optimal use of economic evidence to inform investment decisions.
The committee’s hope is that the recommendations and best practices presented in this report will help bridge the gap between the needs and interests of the consumers and producers of economic evidence and stimulate improvements, such as long-term partnerships and advanced development of data, in the ways in which economic evidence informs investments in children, youth, and families.
Doyle, O., Harmon, C.P., Heckman, J.J., and Tremblay, R.E. (2009). Investing in early human development: Timing and economic efficiency. Economics & Human Biology, 7(1), 1-6.
Hahn, H. (2014). Kids’ Share 2014: Report on Federal Expenditures on Children Through 2013. Washington, DC: Urban Institute.
Innvaer, S., Vist, G., Trommald, M., and Oxman, A. (2002). Health policy-makers’ perceptions of their use of evidence: A systematic review. Journal of Health Services Research & Policy, 7(4), 239-244.
Institute of Medicine. (2010). Bridging the Evidence Gap in Obesity Prevention: A Framework to Inform Decision Making. S.K. Kumanyika, L. Parker, and L.J. Sim (Eds). Committee on an Evidence Framework for Obesity Prevention Decision Making. Food and Nutrition Board. Washington, DC: The National Academies Press.
Institute of Medicine and National Research Council. (2014). Considerations in Applying Benefit-Cost Analysis to Preventive Interventions for Children, Youth and Families: Workshop Summary. S. Olson and K. Bogard (Rapporteurs). Board on Children, Youth, and Families. Washington, DC: The National Academies Press.
Koplan, J.P., Milstein, R., and Wetterhall, S. (1999). Framework for program evaluation in public health. Morbidity and Mortality Weekly Report: Recommendations and Reports, 48(RR-11), 1-40.
National Research Council. (2012). Using Science as Evidence in Public Policy. Committee on the Use of Social Science Knowledge in Public Policy. K. Prewitt, T.A. Schwandt, and M.L. Straf (Eds.). Division of Behavioral and Social Sciences and Education. Washington, DC: The National Academies Press.
National Research Council and Institute of Medicine. (2009). Strengthening Benefit-Cost Analysis for Early Childhood Interventions: Workshop Summary. A. Beatty (Rapporteur). Committee on Strengthening Benefit-Cost Methodology for the Evaluation of Early Childhood Interventions. Board on Children, Youth, and Families. Division of Behavioral and Social Sciences and Education. Washington, DC: The National Academies Press.
Osborne, D., and Gaebler, T. (1992). Reinventing Government: How the Entrepreneurial Spirit is Transforming the Public Sector. Reading, MA: Addison-Wesley.
Pirog, M. (Ed.). (2009). Social Experimentation, Program Evaluation, and Public Policy (Vol. 1). Chicago, IL: John Wiley & Sons.
Weiss, C.H. (1983). Ideology, interests, and information: The basis of policy positions. In D. Callahan and B. Jennings (Eds.), Ethics, the Social Sciences, and Policy Analysis (pp. 213-245). New York: Plenum.