This chapter provides a foundation for the remainder of the report. It begins by reviewing common methods used for economic evaluation, including the types of questions that can be answered by using these methods and the methods’ limitations. It then identifies the stakeholders who produce and consume the economic evidence resulting from these evaluations, as well as those who serve as intermediaries in the economic evaluation process. The next two sections provide selected examples of the current uses of economic evidence to inform investments in children, youth, and families and highlight the challenges involved in these efforts, particularly with respect to the quality, usefulness, and use of the evidence. The final section describes the important role of economic evidence within the broader evidence ecosystem. Many of the topics summarized in this chapter are discussed in greater depth in subsequent chapters.
For purposes of this study, the discussion here focuses on several types of economic evaluation that are classified collectively as cost and outcome analysis methods (Boardman et al., 2001; Gramlich, 1997; Karoly et al., 2001; Zerbe and Bellas, 2006). (See Chapter 1 for definitions of key terms used in this discussion.)
Questions Economic Evaluation Methods Can Answer
As shown in Table 2-1, there are three main methods that can be applied for the economic evaluation of social interventions: cost analysis (CA); cost-effectiveness analysis (CEA), along with the related method of cost-utility analysis; and benefit-cost analysis (BCA) (also known as cost-benefit analysis [CBA]), along with several related methods, including return-on-investment (ROI) analysis (also known as cost-savings analysis in the case of government stakeholders), budgetary impact analysis (BIA) (a special case of cost-savings analysis), and break-even analysis. Each of these methods addresses a somewhat different question. All of them require a comprehensive measure of the full economic cost of the intervention of interest, but they differ as to whether they require measurement of intervention outcomes or impacts and whether those impacts are monetized. In all cases, when costs and outcomes are measured, the measurement is always in reference to a baseline condition (or counterfactual), which may be the status quo or some other scenario.

In addition, all of the methods can be used to conduct an economic evaluation of an intervention that has been implemented (and evaluated), often referred to as an ex post or retrospective analysis. In such instances, the analysis is based on measured results for program cost and, in the case of CEA and BCA, program outcomes. These methods can also be applied to an intervention that has yet to be implemented but for which the resources required and the expected impacts can be estimated (perhaps based on a similar program or one implemented at a smaller scale); the potential cost, cost-effectiveness, or benefit-cost results can then be calculated in what is typically called a prospective or ex ante analysis.
Ultimately, which economic evaluation method is most appropriate depends on the question being addressed and the information on costs and outcomes that is available; what is feasible also depends on the resources available to support the research. The following subsections briefly describe and illustrate these methods. Further detail on these methods and their use is provided in Chapter 3.
Cost Analysis (CA)
CA is quite simple conceptually, although potentially complex in practice. It is used to address the question: What is the full economic value of the resources used to implement the intervention of interest over and above the baseline scenario? In effect, CA captures the “cost” of a program serving children, youth, and families. When a stand-alone CA is performed, it is not necessary to have measures of program impact; when a CA is part of a CEA or BCA, measures of program impact are required to capture the return on the resources invested. The output of a CA is straightforward: a comprehensive measure of the program costs. Box 2-1 describes an illustrative CA for the PROSPER (PROmoting School-Community-University Partnerships to Enhance Resilience) program. This example illustrates how CA can inform the implementation of an intervention, support planning for its replication, and provide the foundation for a CEA or BCA. The issues involved in CA are taken up in more depth in Chapter 3.
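The arithmetic behind a CA is simple enough to sketch. The short Python example below uses entirely hypothetical resource categories and dollar figures to illustrate the distinction between an intervention's financial cost and its full economic cost, which also values donated and in-kind resources:

```python
# Illustrative cost analysis (CA) sketch; all figures are hypothetical.
# Full economic cost values ALL resources used, including donated or
# in-kind resources; financial cost counts only the implementer's outlays.

resources = {
    # resource: (financial outlay, value of donated/in-kind share)
    "program staff":  (150_000.0, 0.0),
    "facilities":     (20_000.0, 30_000.0),   # partly donated space
    "volunteer time": (0.0, 40_000.0),        # valued at market wages
    "materials":      (10_000.0, 0.0),
}

financial_cost = sum(outlay for outlay, _ in resources.values())
economic_cost = sum(outlay + in_kind for outlay, in_kind in resources.values())

print(f"Financial cost: ${financial_cost:,.0f}")  # $180,000
print(f"Economic cost:  ${economic_cost:,.0f}")   # $250,000

# Per-youth figure, assuming 500 youth served
youth_served = 500
print(f"Economic cost per youth: ${economic_cost / youth_served:,.0f}")  # $500
```

In practice, each resource would be identified and valued through a systematic inventory; the sketch compresses that step into a single table of hypothetical line items.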
Both CEA and BCA and their related methods build on the results of a CA and incorporate intervention impacts, thereby capturing the return on the investment. All of these methods require an estimate of the causal impact of the intervention on its intended outcome or outcomes. Issues involved in deriving such an estimate as an input to economic evaluation are reviewed in Chapter 3; for now, it is assumed that a rigorous measure of intervention impacts is available as input to a CEA or BCA. The difference between CEA and BCA lies in the way they measure program impact: CEA uses natural units1 (or another nonmonetary unit), while BCA converts outcomes into a monetary value. The methods related to BCA (see Table 2-1) examine investment and return from different perspectives, such as the government in the case of a cost-savings analysis or BIA, or the private sector in the case of ROI analysis. Investments and returns can also be examined from the perspective of participants or of others who are not participants but are affected by the intervention in some way. Break-even analysis focuses on the time period over which the return occurs.
Cost-Effectiveness Analysis (CEA) and Related Methods
In CEA, selected intervention impacts are measured in their natural units. Given a measure of the full economic cost of an intervention, CEA is used to determine the cost (possibly net of impacts on market costs) to achieve one more unit of the outcome, such as one more year of schooling. Alternatively, when meaningful, one can calculate the reverse ratio to determine the amount of a given outcome that is generated per dollar of cost, such as a gain of a certain number of scale points on an achievement test per dollar spent. As illustrated in Box 2-2, CEA can be a powerful tool for demonstrating the economic benefit of investing in an intervention and for comparing the relative cost-effectiveness of different interventions. The example in Box 2-2 further illustrates that CEA can be informative for investments in children, youth, and families in less developed, not just more developed, countries.
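The two ratios just described reduce to simple division. The following Python sketch uses hypothetical figures (a $400,000 program cost and 800 additional years of schooling) solely to illustrate the computation:

```python
# Cost-effectiveness ratios; illustrative figures only.
program_cost = 400_000.0           # full economic cost relative to baseline
years_of_schooling_gained = 800.0  # total impact in natural units

# Cost per additional unit of outcome (here, per year of schooling)
cost_per_unit = program_cost / years_of_schooling_gained
print(f"Cost per additional year of schooling: ${cost_per_unit:,.0f}")  # $500

# Reverse ratio: outcome gained per dollar (here, per $100) of cost
years_per_100_dollars = years_of_schooling_gained / program_cost * 100
print(f"Additional years of schooling per $100: {years_per_100_dollars:.2f}")  # 0.20
```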
One issue that arises with CEA relates to its use of natural units to measure outcomes. Social interventions typically have multiple outcomes,
1 Natural units are nonmonetary measures, such as a change in an achievement test scale score or in the number of years of schooling.
| Type of Evaluation | Questions Addressed | Requirements for Cost | Requirements for Outcomes | Outputs of Analysis |
|---|---|---|---|---|
| Cost analysis (CA) | What is the full economic value of the resources used to implement the intervention relative to the baseline? | Comprehensive measure of the full economic cost | None required | Comprehensive measure of program costs |
| **Cost-Effectiveness Analysis and Related Methods** | | | | |
| Cost-effectiveness analysis (CEA) | What is the cost to achieve one more unit of a given outcome (or how much of the outcome is gained per dollar spent)? | Comprehensive measure of the full economic cost | Impact on one outcome, measured in natural units | Cost-effectiveness ratio (or its inverse) |
| CEA using a quality-of-life measure (also known as cost-utility analysis) | What is the net cost per QALY or DALY gained? | Comprehensive measure of the full economic cost | Impacts converted to a quality-of-life metric (QALYs or DALYs) | Cost per QALY or DALY gained (or its inverse) |
| **Benefit-Cost Analysis and Related Methods** | | | | |
| Benefit-cost (or cost-benefit) analysis (BCA or CBA) | Do the benefits of the intervention to society as a whole^a,c exceed its costs? | Comprehensive measure of the full economic cost | All impacts monetized | Net present-value benefits,^b benefit-cost ratio, or internal rate of return |
| Return-on-investment (ROI) analysis (also known as cost-savings analysis in the case of government stakeholders) | Do the benefits to a specific stakeholder exceed the costs borne by that stakeholder? | Costs borne by the specific stakeholder | Impacts monetized from the stakeholder's perspective | Net present-value savings^b or return-on-investment ratio |
| Budgetary impact analysis (BIA) (a special case of cost-savings analysis) | What is the year-by-year net impact of the intervention on the government budget? | Costs borne by the government | Impacts on government revenues and expenditures | Year-by-year net budgetary impact |
NOTES: All measures of intervention cost and impact are relative to a baseline condition.
^a Although societal perspective is a desirable goal, other perspectives need to be considered as well, including the perspectives of program participants, other nonparticipants affected by the program, and/or the government or public sector. BCA results can be particularly informative when the societal perspective is disaggregated to show these sub-perspectives. See the section in Chapter 3 on Defining the Scope for more discussion on this topic.
^b When costs and/or benefits accrue over multiple time periods, the dollar streams are discounted to a given point in time to reflect the time value of money. Thus, the relevant outcome is net present-value savings or benefits or the ratio of present values.
^c The broadest stakeholder for BCA is society as a whole. Society as a whole may be subdivided into specific stakeholders, typically defined as the government sector (or individuals as taxpayers), program participants (as private individuals), and the rest of society (program nonparticipants as private individuals).
SOURCE: Adapted from http://www.rand.org/content/dam/rand/pubs/technical_reports/2008/RAND_TR643.pdf [March 2016].
Rigorous CA can provide important information about the resources required to deliver an evidence-based intervention serving children, youth, and families. The comprehensive CA conducted by Crowley and colleagues (2012) for the PROSPER (PROmoting School-Community-University Partnerships to Enhance Resilience) community-based prevention delivery system illustrates the value of investigating the full economic cost of an intervention as part of a larger program evaluation. In particular, the study employed the Cost-Procedure-Process-Outcome Analysis model developed by Yates (1996, 2009) and differentiated between total economic costs (accounting for the value of all resources used) and financial costs (the value of the resources used by the implementing organization, in this case the PROSPER system). As seen in the table below, the full economic costs, whether measured in the aggregate or per youth served, exceeded the financial costs by about 50 percent. In addition, the analysis demonstrated that it is essential to recognize the system-level or infrastructure costs associated with the implementation of specific school-based, evidence-based prevention interventions.
The study further examined how costs evolved over time as the model was developed and implemented in the community. Separate identification of the costs of adoption, implementation, and sustainability demonstrated the differential time path of activities required to deliver PROSPER in a given community and the associated resource requirements over the 5-year demonstration project. Such information is valuable for interpreting intervention impacts derived from an outcome evaluation, but also for planning for replication in other communities.
Cost Estimates for PROSPER Delivery System
| | Low Estimate | High Estimate |
|---|---|---|
| **Total Cost (in millions of $)** | | |
| Total economic cost | $4.34 | $5.21 |
| Total financial cost | $2.66 | $3.53 |
| **Average Cost per Youth Served ($)** | | |
| Total economic cost | $486 | $580 |
| Total financial cost | $311 | $405 |
SOURCE: Adapted from Crowley et al. (2012, Table 2).
and except for those outcomes that are naturally measured in the same unit or can be converted to a common unit (e.g., a monetary value), it is not possible to aggregate them.2 Thus, CEA typically focuses on a single unmonetized outcome, such as achievement score gains, years of schooling attained, or number of crimes averted. If a program that reduces crime also increases schooling, the latter will not be taken into account in a CEA. This is one drawback of CEA in the context of social interventions, which often have impacts on multiple outcomes, typically measured in different units. The ability of interventions or agencies to work together on the total well-being of children and youth is limited when each measures cost-effectiveness along a single dimension. Another major limitation of CEA is that it does not allow resources used to directly enhance the well-being of children and youth to be compared with alternative uses of those resources (e.g., infrastructure investments).
In the health policy field, the issue of multiple outcomes with CEAs has been somewhat mitigated by the development of several measures of quality of life, such as quality-adjusted life years (QALYs) or disability-adjusted life years (DALYs).3 These indices often combine two outcome measures: health-related quality of life and length of life (or survival). With this common metric, researchers can use the CEA methodology to measure the net cost of an intervention per QALY or DALY or the gain in QALYs or DALYs per dollar of cost. Current guidance for the field in the conduct of cost-utility analysis was provided almost two decades ago by the consensus Panel on Cost-Effectiveness in Health and Medicine (Gold et al., 1996; Weinstein et al., 1996) and is currently being updated by the 2nd Panel on Cost-Effectiveness in Health and Medicine.4
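As a purely illustrative sketch of the cost-utility calculation (the net cost, life-year gains, and quality weights below are hypothetical, not drawn from the literature cited above):

```python
# Cost-utility sketch: cost per QALY gained; all figures hypothetical.
# A QALY combines added length of life with a 0-1 quality-of-life weight.

net_cost = 50_000.0  # intervention cost net of any averted market costs

# Each tuple: (additional life-years, quality weight during those years)
gains = [
    (2.0, 0.9),  # two added years in good health
    (3.0, 0.6),  # three further years at reduced quality of life
]
qalys_gained = sum(years * weight for years, weight in gains)

print(f"QALYs gained: {qalys_gained:.1f}")                    # 3.6
print(f"Net cost per QALY: ${net_cost / qalys_gained:,.0f}")  # $13,889
```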
Although medical and health-services interventions cover much that is beyond the scope of this report, QALYs and DALYs have been used for economic evaluation of social and behavioral interventions. In other fields, such as education, attempts have been made to combine multiple outcomes using value weights on outcomes derived from key stakeholders, loosely following the tenets of the utility theory underlying the use of QALYs and
2 For a CEA, it may be possible to aggregate the impacts across more than one outcome if the different outcomes are measured in the same natural unit, such as impacts on subdomains of an achievement test where a scale point on each test has the same meaning. Aggregation may also be possible when impact estimates can be converted to another common metric other than a monetary unit. This is the case, as discussed next, with the quality-of-life measures used in the health policy field.
3 The validity of the QALY and DALY measures is based on utility theory developed by von Neumann and Morgenstern (with application to health and QALYs; see Pliskin and colleagues). Hence, the application of CEA using QALYs or DALYs is also known as cost-utility analysis.
The evidence-based policy movement extends to developing countries’ efforts to promote child and family well-being. The growing number of randomized controlled trial evaluations provides a basis for conducting economic evaluation; CEA in particular has been a useful approach for comparing the gains in key outcomes per dollar spent.
For example, Dhaliwal and colleagues (2013) from the Jameel Poverty Action Lab (J-PAL) assembled the evidence for the educational impact of 12 intervention models implemented and evaluated in Africa, Asia, and Latin America that were designed to increase students’ school attendance. With the addition of information about program cost and using a standardized approach, the researchers estimated the additional years of schooling obtained per $100 spent. As shown in the table below, the results indicated that the greatest educational gain per dollar spent was associated with an intervention implemented in Madagascar to provide information to parents about the returns to education, with the aim of influencing their children’s educational investment. The next most cost-effective strategy was an intervention evaluated in Kenya to deworm students through their primary schools. The other programs examined had considerably smaller cost-effectiveness ratios. The table in this box illustrates that each program generated a range of cost-effectiveness ratios given the uncertainty in the estimates of program impact.
The authors examined the sensitivity of their relative rankings of the cost-effectiveness of the alternative education interventions to several methodological choices, such as accounting for the time costs of program participants, the treatment of transfers in measuring intervention cost, and the choice of the discount rate. Regardless of these choices, the top two interventions listed in the table continued to dominate in terms of their cost-effectiveness.
Comparative Cost-Effectiveness Estimates for 12 Interventions
(Values are additional years of education per $100 spent.)

| Intervention Model (Country) | Lower Bound | Point Estimate | Upper Bound |
|---|---|---|---|
| Information Session for Parents on Returns to Education (Madagascar) | 1.0 | 20.6 | 40.2 |
| Deworming Through Primary Schools (Kenya) | 5.1 | 12.5 | 19.9 |
| Free Primary School Uniforms (Kenya) | 0.33 | 0.71 | 1.09 |
| Merit Scholarships for Girls (Kenya) | 0.07 | 0.16 | 0.24 |
| Conditional Cash Transfers for Girls—Average Transfer Amount (Malawi) | 0.03 | 0.07 | 0.12 |
| Unconditional Cash Transfers for Girls (Malawi) | 0.00 | 0.02 | 0.04 |
| Iron Fortification and Deworming in Preschools (India) | 0.10 | 2.7 | 5.4 |
| Building Village-Based Schools (Afghanistan) | 1.0 | 1.5 | 3.0 |
| Camera Monitoring of Teachers’ Attendance (India) | — | — | — |
| Computer-Assisted Learning Curriculum (India) | — | — | — |
| Remedial Tutoring by Community Volunteers (India) | — | — | — |
| Menstrual Cups for Teenage Girls (Nepal) | — | — | — |
| Information Session for Boys on Return to Education (Dominican Republic) | 0.08 | 0.24 | 0.40 |
NOTE: — = no significant impact of the intervention.
SOURCE: Adapted from Dhaliwal et al. (2013), reprinted with permission.
DALYs (Levin and McEwan, 2001). For example, Levin and McEwan (2001) discuss a cost-utility framework whereby multiple measures of effectiveness for an education intervention are weighted by their importance to parents, administrators, or some other audience. Weights are estimated subjectively or more rigorously using techniques similar to those applied in the field of health. See Chapter 3 for additional discussion of this topic.
When multiple interventions have multiple outcomes, another alternative is to conduct CEAs for each outcome of interest and compare their results to determine whether one intervention dominates the others across the outcomes examined. For example, Levin and colleagues (1987) compared the costs and impacts of four education interventions: cross-age tutoring, computer-assisted instruction, reduced class size, and increased instructional time. They found that for achievement in math, peer tutoring was most cost-effective, followed by class size reduction; for achievement in reading, they found that peer tutoring was most cost-effective, followed by computer-assisted instruction. As in this example, the rank ordering of the cost-effectiveness of interventions may depend on which outcome is being considered.
Benefit-Cost Analysis (BCA) and Related Methods
With BCA, all outcomes, in theory, can be accounted for in the economic analysis because an economic value is assigned to each outcome, so they are all measured in the same monetary unit (e.g., dollars). Thus all outcomes can be aggregated into their total monetary value to society, which can then be compared with the monetary value of the intervention’s costs to society. This approach is particularly useful in the context of social interventions, which as noted above often affect multiple outcomes. As illustrated in the BCA example in Box 2-3, early childhood interventions, for example, may have effects on the child in terms of school readiness, health, or service utilization (e.g., emergency room visits) while at the same time affecting the mother or father in terms of their employment or use of social welfare programs. In a BCA, the economic values attached to each of these outcomes can be aggregated as a total measure of the benefit of an intervention to compare against its cost. The results of the BCA are then expressed in terms of net benefits (typically net present-value benefits when costs and benefits occur over time), a benefit-cost ratio, or a measure of the internal rate of return. With BCA, of course, the challenge is assigning a monetary value to each outcome. This and other methodological issues associated with BCA (e.g., the use of discount rates, accounting for uncertainty, the appropriate summary metrics) are addressed in greater detail in Chapter 3.
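The BCA summary metrics mentioned above can be illustrated with a small Python sketch. The cost and benefit streams and the 3 percent discount rate are hypothetical, chosen only to show how discounting feeds into net present-value benefits and the benefit-cost ratio:

```python
# Benefit-cost analysis summary metrics; illustrative figures only.
# Costs and benefits accruing over time are discounted to present value.

def present_value(stream, rate):
    """Discount a yearly dollar stream (year 0 first) to present value."""
    return sum(x / (1 + rate) ** t for t, x in enumerate(stream))

rate = 0.03                  # hypothetical 3% annual discount rate
costs = [10_000.0, 10_000.0] # two program years of cost
benefits = [0.0, 2_000.0, 8_000.0, 8_000.0, 8_000.0]  # benefits emerge later

pv_costs = present_value(costs, rate)
pv_benefits = present_value(benefits, rate)

print(f"Net present-value benefits: ${pv_benefits - pv_costs:,.0f}")  # about $4,203
print(f"Benefit-cost ratio: {pv_benefits / pv_costs:.2f}")            # about 1.21
```

Note that the undiscounted benefit total ($26,000) overstates the discounted comparison; discounting matters precisely because social interventions often incur costs early and yield benefits years later.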
BCA is conducted from a societal perspective, although its results can be disaggregated to portray costs and benefits from the perspective of specific stakeholders, such as the government, program participants, and other members of society who are not participants. This additional detail can be very useful to decision makers, as it can show whether all stakeholders gain from an intervention or whether costs and benefits are distributed quite differently across stakeholder groups. For example, an intervention that is cost-beneficial overall but leads to losses or only small gains to participants may be less appealing to funders than an intervention that shares net benefits more equitably across stakeholders.
As noted above and in Table 2-1, a number of methods can be considered special cases of BCA.5 In ROI analysis, the BCA is conducted from a specific perspective, such as the funding agency of the provider. In cost-savings analysis, the BCA is conducted from the perspective of government—the federal government, a particular state or local government, or potentially all levels of government combined. Thus in an ROI or cost-savings analysis, costs are limited to those that are paid for by the specific stakeholder(s) targeted, and the values attached to outcomes are those that apply to the targeted stakeholder(s) as well (e.g., the effect of outcomes in terms of government revenues or expenditures). For some outcomes, the only economic values included are private values that apply to the individuals participating in the intervention or to other members of society who experience private gains. But many such outcomes have a public-sector component. Examples include interventions that increase individual earnings, a portion of which will be paid to the government in taxes; that reduce the need for special education, thereby also lowering the cost of providing public education; or that reduce crime, effectively lowering the costs of the criminal justice system.
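A minimal Python sketch of this stakeholder accounting follows. The tax rate, savings amounts, and program cost are all hypothetical; the point is only how a single monetized outcome splits between the government and participant perspectives:

```python
# Disaggregating one monetized outcome (an earnings gain) by stakeholder,
# as in a cost-savings or ROI analysis; all figures hypothetical.

earnings_gain = 10_000.0  # participant's increased earnings
tax_rate = 0.25           # share of earnings paid to government as taxes

government_benefit = earnings_gain * tax_rate         # taxes received
participant_benefit = earnings_gain * (1 - tax_rate)  # after-tax earnings

# Savings that accrue only to the public sector
special_ed_savings = 3_000.0
criminal_justice_savings = 2_000.0
government_benefit += special_ed_savings + criminal_justice_savings

government_cost = 6_000.0  # program cost borne by the government

print(f"Government net benefit: ${government_benefit - government_cost:,.0f}")  # $1,500
print(f"Participant benefit:    ${participant_benefit:,.0f}")                   # $7,500
```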
BIA can likewise be viewed as a special case of cost-savings analysis that examines the impact, year by year, of a health-related intervention on the government budget, both revenues and expenditures, in the aggregate or for specific agencies. Since BIA takes the government perspective, costs are measured specifically for the relevant government sector, and outcomes are also valued in terms of the impact on government revenues and expenditures, and ultimately the net budgetary impact. While the typical cost-savings analysis may entail calculating summary metrics such as a cost-savings ratio, the primary objective of BIA is to present the net program impact, year by year, on the government budget.
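The year-by-year orientation of BIA can be sketched as follows, again with hypothetical figures for government outlays and returns:

```python
# Budgetary impact analysis (BIA) sketch: year-by-year net impact on the
# government budget, undiscounted; all figures hypothetical.

# Per-year government outlays and returns (added revenues + avoided spending)
outlays = [5_000.0, 5_000.0, 0.0, 0.0, 0.0]
returns = [0.0, 1_000.0, 3_000.0, 4_000.0, 4_000.0]

for year, (out, ret) in enumerate(zip(outlays, returns), start=1):
    print(f"Year {year}: net budgetary impact = ${ret - out:,.0f}")

cumulative = sum(returns) - sum(outlays)
print(f"Cumulative net impact over 5 years: ${cumulative:,.0f}")  # $2,000
```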
Finally, break-even analysis is an option when intervention outcomes are unknown—for example, because an evaluation has not yet been conducted. If the cost of the intervention can be assessed and its potential outcomes identified and valued in dollar terms, one can then infer how large the impacts would have to be for the intervention to pay back its costs. This can be done considering either a single outcome or multiple outcomes. A break-even analysis can be a useful complement to a stand-alone cost analysis, prior to an impact evaluation, as a way of anticipating whether an intervention is likely to show a favorable economic return.

5 These special cases are sought frequently. However, when they are used, it is important to state that the results do not reflect a comprehensive assessment of costs and benefits to all stakeholders, and conclusions therefore may differ from what would be found using a societal perspective. For example, an intervention that is cost-beneficial from a societal perspective may not yield favorable ROI or BIA results; the opposite can also be true, with an intervention yielding favorable ROI or BIA results while not being cost-beneficial from a societal perspective.

In the field of early childhood interventions, the series of BCAs for the Perry Preschool Program has been highly influential in making an economic argument for investing in high-quality early learning interventions. Perry Preschool was a 1- or 2-year part-day center-based preschool program that served a small number of children with low income and low IQ scores in Ypsilanti, Michigan. The program was evaluated using a randomized controlled trial (RCT) for several cohorts of children from 1962 to 1965, with a total of 123 children in the treatment and control groups. After showing favorable effects on school readiness, the children in the evaluation were followed to assess educational and other life-course outcomes through the school-age years and again at ages 19, 27, and 40 (see Schweinhart et al., 2005, for the findings as of age 40, as well as earlier years).

The table in this box summarizes the benefit-cost ratios from the series of BCAs conducted for the Perry Preschool Program, starting with the follow-up data available through age 19 and continuing through the age 40 follow-up. One series of studies, those marked with an asterisk, was conducted by the High Scope team that implemented Perry Preschool and their collaborators. These studies showed an initial estimated return of $3.56 for every dollar of cost based on the age 19 follow-up impact estimates, a return that reached $16.14 for every dollar invested based on the results as of the age 40 follow-up. Two other BCAs were conducted by independent research teams. For the most part, the High Scope-sponsored studies used a similar methodology over time, so the increasing estimated returns with each successive follow-up study are attributable to greater precision in the estimated benefits in terms of the observed improvements in labor market earnings, levels of crime, and other areas of social gain. For any given follow-up age, the independent studies showed somewhat different results largely because of different choices regarding the outcomes to value and the economic values assigned to each given outcome. For example, Karoly and colleagues (1998) did not include the intangible costs of crime (e.g., pain and suffering of victims), in contrast to the High Scope analysis, also conducted with follow-up data as of age 27 (Barnett, 1993, 1996; Schweinhart et al., 1993). Studies also varied in the discount rate applied. Despite these differences, the series of BCAs for Perry Preschool consistently shows a benefit-cost ratio that is substantially larger than 1.
Benefit-Cost Ratios from BCAs of the Perry Preschool Program
| BCA Study | Follow-up Age | Benefit-Cost Ratio |
|---|---|---|
| Berrueta-Clement et al. (1984) | Age 19 follow-up | 3.56 |
| Karoly et al. (1998) | Age 27 follow-up | 4.11^a |
| Barnett (1993, 1996), *Schweinhart et al. (1993) | Age 27 follow-up | 8.74^b |
| Belfield et al. (2005), *Nores et al. (2005), *Belfield et al. (2006) | Age 40 follow-up | 16.14^b |
| Heckman et al. (2010) | Age 40 follow-up | 7.1–12.2^b,c |
NOTES: The benefit-cost ratios are the ratio of the present discounted value of total benefits to society as a whole (participants and the rest of society) divided by the present discounted value of program costs. The discount rate is 3 percent unless otherwise noted. The value of reducing intangible crime victim costs is excluded unless otherwise noted.
^a Discount rate is 4 percent.
^b Includes value of reduced intangible crime victim costs.
^c Reported range of estimates under alternative assumptions regarding the economic cost of crime.
Additional Principles and Values That Drive Investment Decisions
“The most important factor that influences people in their decision making is their existing belief system.”
—Jerry Croan, senior fellow, Third Sector Capital, in the committee’s open session discussion on March 23, 2015.
The economic evaluation methods summarized in Table 2-1 have the potential to play an important role in helping decision makers understand the economic value of the resources required to implement an intervention, the cost to achieve a given impact, or the economic value of outcomes from the intervention relative to the costs of implementation. However, the questions shown in the table are not the only ones of interest to decision makers considering investments in children, youth, and families. Rather, these methods contribute important information to be considered along with other factors in making such decisions. This section highlights several such factors that influence investment decisions.
Among the most pervasive concerns in both public and private policy making is equity. Although equity is not the focus of this study, it remains a consideration in all policy choices, and issues of equity accordingly enter the discussion throughout this report.
Equity principles described in Box 2-4 and examined in detail in a paper commissioned for this study (Cookson, 2015) can be divided roughly into three categories: (1) equal justice or equal treatment of equals, or horizontal equity; (2) progressivity, or vertical equity; and (3) individual equity (the right to the rewards from one’s own efforts and, consequently, to ownership of property properly acquired). Each of these principles has a long tradition and is considered meritorious to some degree by philosophers and citizens alike. Seldom is any policy considered without attention to each of these three equity principles, which, along with administrative considerations and the efficiency considerations inherent in economic evaluation, form much of the landscape for decision making on investments in children, youth, and families.
Equity plays a role in allocating scarce resources across interventions. Within each intervention for children, youth, and families, the question arises of whether resources should be distributed equally, distributed progressively, directed to where they produce the highest return, or allocated according to some combination of these principles. For instance, should investments in children be increased not simply because they are good investments but as a way of trying to achieve equality of opportunity? Should each child have equal access to quality early childhood education? And do the increased costs justify imposing additional taxes on the earnings of others? As discussed further in Chapter 3, economic evaluation methods are tools to support efficiency in the allocation of scarce resources; they typically are not employed to address concerns about equity. Nonetheless, as discussed in Chapter 4, the relevance of evidence derived from economic evaluation is likely to be enhanced when equity considerations are incorporated in the analysis or when implications for equity are discussed.
The United States is a pluralistic society with diverse political views, cultural norms, and values. Evidence from economic evaluations is one of several factors for policy makers to weigh as they make difficult choices among competing priorities. In addition to the equity considerations just referenced, decision makers may take into account other values or moral judgments when weighing policy options. Political considerations also may enter into the decision-making process. Thus even when evidence from economic evaluations is of the highest quality and is made available to those making decisions about resource allocation, a range of factors in addition to or even instead of economic considerations will likely influence those decisions.

Equal Justice or Horizontal Equity

In many ways the queen of principles, equal justice applies to almost all policies: equal punishment for equal crimes, equal taxes for those with equal net income or other measure of a tax base, equal right to vote, and so forth. The challenge here is often determining just who are “equals.” For instance, does income define those who deserve some equal level of child benefit, or should one also take into account family size, large medical expenses, or other factors?

Progressivity or Vertical Equity

Progressivity usually requires that those with greater means pay more and those with greater needs receive more. Although highly controversial in application, at some level the principle almost follows from some concept of natural law. That is, one does not expect children in a family to pay their fair share of costs, nor does one hold such expectations in larger society for those too severely disabled to work. Those with no income or assets cannot pay income tax, which automatically makes the system progressive at that level. Attempts to means test interventions often are favored by conservatives as well as liberals, since means testing costs less than more universal programs while progressively distributing benefits. But means testing has its own consequences, such as high effective tax rates when benefits are phased out as income rises. Foundations, in turn, typically do not make grants that subsidize the rich as much as or more than the poor. One complication here and the source of much controversy is that no principled standard exists for just how progressive any system should be.

Individual Equity

The right to the returns from one’s own labor and to ownership of property derived from one’s saving effectively restricts the extent to which government can tax or engage in “takings” without due compensation. This principle arises most commonly with respect to taxation, although it also leads to requirements for due compensation when government exercises, say, the power of eminent domain. In taxation, adherence to this principle aligns closely with the notion of benefit taxation (taxation according to benefit received) versus ability-to-pay taxation (taxation according to ability to pay, which aligns more closely with progressivity).

NOTE: For the purposes of this report, poverty alleviation is considered a special method of improving vertical equity.

SOURCE: Cookson (2015).
In most markets, producers devote considerable resources to understanding the needs of consumers. They conduct research to attain an in-depth understanding of consumers’ desires, preferences, and constraints and design their products accordingly. In the realm of interventions for children, youth, and families, by contrast, the research and development systems and incentives needed to ensure that research results are supplied to policy makers and practitioners (the consumers) in a way that addresses their needs and constraints are lacking. Stakeholders tend to talk to their peers (researchers to other researchers, practitioners to other practitioners), with few individuals bridging the divide. Before considering how this divide can be addressed, it is important to understand the different groups involved and how their needs and incentives vary.
Producers of Economic Evidence
The economic evaluation methods listed in Table 2-1 have long been used by economists to examine costs and outcomes in a wide array of policy arenas. As their use has grown, however, the community of producers has expanded as well. In the context of investments in children, youth, and families, producers of economic evidence may be researchers and policy analysts affiliated with the following types of institutions:
- university academic departments or affiliated research centers;
- think tanks, foundations, and other nonacademic organizations;
- executive or legislative branch agencies at the federal, state, or local level;
- international agencies, development banks, and country-specific government agencies; and
- advocacy, consumer rights, and victim support organizations.
This group of stakeholders also includes consultants who may conduct economic evaluations alone or in collaboration with other analysts.
When it comes to economic evidence for a particular intervention, producers of the evidence may or may not be independent of the intervention evaluators. In the example provided earlier of the BCAs of the Perry Preschool Program (Box 2-3), the analyses have been conducted by individuals associated with the team that implemented and evaluated the program, as well as by research teams at universities and think tanks that were not previously affiliated with the program.
The results of economic evaluations may appear in peer-reviewed outlets such as journals or research reports. They also may be released as studies under the imprint of a particular organization, such as a foundation, think tank, or advocacy group.
Consumers of Economic Evidence
A wide array of individuals might be considered consumers or users of evidence from economic evaluations. Under the broadest definitions, users can include the researchers themselves, their funders, and others who may use the evidence to apply for new funding or inform research agendas. For purposes of this report, the committee focused primarily on the use of research for applied purposes—that is, to develop or improve policy and practice. Thus, the focus here is on users who are in a position to translate research results into policy or practice.
Even within this more restricted category of users, however, many different types of actors may benefit from economic evidence. Elected officials and their staff may decide what policies and strategies should have priority and be included in budgets. Civil servants may use economic evidence to guide a range of decisions within a specific public program, such as how to emphasize more effective strategies in their budgets, grant making, regulations, and technical assistance. At the practice level, individuals in nongovernmental organizations may use economic evidence in selecting interventions to implement and in structuring their organizations for effective implementation.
In addition to those who make policy and practice decisions, a large array of intermediary organizations may use economic evidence to enhance policy and practice through advocacy, technical assistance, and other avenues. For instance, advocacy organizations may use economic evidence to argue for increased funding in a given area, such as early childhood (Christeson et al., 2013; Committee for Economic Development, 2012). Think tanks and research organizations may use economic evidence to highlight particular strategies for decision makers at the federal, state, or local level (Karoly et al., 2005). And technical assistance providers may use economic evidence to determine how best to inform and support individual organizations or groups of organizations, public or private, in selecting and implementing evidence-based strategies.
Finally, a growing number of organizations have an explicit mission of helping to translate evidence (including but not specific to economic evidence) into policy or practice. For instance, several relatively young organizations have emerged with a mission of scaling up evidence-based practices and interventions (for example, Bridgespan Group, the Coalition for Evidence-Based Policy [now part of the Laura and John Arnold Foundation], Results First, and Results for America). In addition, philanthropic organizations play a large role in connecting evidence to practice, and many have specific portfolios (or a more general mission) to aid in that process.
In addressing how better use of economic evidence can be supported, the committee considered the needs of these many different types of users and the many different decisions for which they might bring economic evidence to bear. The key question here is how relevant the evidence is to the type of decision being made. For instance, evidence about the cost-effectiveness of a broad area of policy—such as early education for young children or teenage pregnancy prevention—is most relevant to decisions about how to allocate public or private resources across different areas of policy and practice. On the other hand, information on effective implementation of a given intervention is most relevant to those charged with the intervention’s implementation (such as a nongovernmental organization or other implementing organization) or those charged with supporting and overseeing implementation (such as a government agency or technical assistance provider). Relevance applies both to the types of decisions for which the evidence is most suited and to the ways that evidence can be used to influence those decisions (Neuhoff et al., 2015).
The set of stakeholders with interest in the production and use of evidence from economic evaluations extends to other intermediaries that play various roles, often in combination with their roles as producers and/or consumers. Such intermediaries include organizations in the public and private sectors that fund the evaluation research underlying economic evaluations or fund the economic evaluations themselves. Such organizations typically serve as consumers of economic evidence as well. Another type of intermediary sets standards for best practices in the application of economic evaluation, a role often played by professional associations, government agencies or research arms, and foundations, among others. Professional associations, along with institutions of higher education and other independent groups, also contribute to the field through involvement in capacity building and training, typically focused on producers, but in some cases appropriate for consumers as well. Other organizations perform the critical function of aggregating and translating the findings from economic evaluations, whether through centralized repositories, syntheses, or other strategies for dissemination. Chapter 4 provides additional discussion of the roles of intermediaries.
As discussed above and in greater detail in Chapter 3, high-quality economic evaluations are based on credible evidence of intervention impact. The importance of this point cannot be overstated in the context of how economic evidence is used. For example, when intervention impacts are credible and economic savings are identified, legislators may appropriate funds for specific interventions based on that evidence, pass legislation requiring practitioners to use a particular intervention, or reduce or eliminate funding for existing interventions. As both the quantity and quality of economic evidence have expanded, professionalization of the field has grown, and interest in and funding support for economic evaluation have increased. In this context, the discussion now turns to the variety of ways in which economic evidence is currently used and the implications for decision making.
Chapter 1 introduces a typology for characterizing different uses of evidence more generally, which is applied here to the use of economic evidence. The discussion focuses in particular on the ways in which economic evidence has been used for three of the use categories: instrumental use, imposed use, and conceptual use. The discussion is not intended to be exhaustive of all the ways in which economic evidence has been and is being used to inform decision making. Rather, specific examples are cited to illustrate the point that economic evidence may be used in various ways—both productive and unproductive—in policy debates regarding investments in children, youth, and families.
In the face of growing public pressure for accountability and efficiency, employees in public and nonprofit settings increasingly are being called upon to collect, analyze, and interpret data on the effectiveness of specific interventions. Similarly, policy makers and funders are expected to make use of economic evidence in making decisions. In particular, evidence of cost-effectiveness from a CEA or of a positive economic return from a BCA can make an even stronger case for investing resources in an intervention that has demonstrated favorable impacts. The addition of economic evidence to existing evaluation evidence can elevate an intervention from being just “evidence-based” to being a “good investment,” thereby attracting resources and other support needed to keep the intervention operating or extend it into other localities.
An excellent example of the complementarity between program evaluation and economic evaluation is the Nurse-Family Partnership (NFP), a home visiting program that has been evaluated extensively through a series of randomized controlled trials (RCTs) and has also been subjected to several economic evaluations. As noted in Box 2-5, on the strength of the evaluation evidence, this program’s reach has been significantly expanded across the United States. At the same time, the absence of evidence of effectiveness or cost-effectiveness is not always the end of the line for publicly funded interventions. For example, federally funded abstinence-only programs, designed to delay sexual activity until marriage as a way of reducing teenage pregnancy, at one time received strong federal support (doubling from slightly less than $100 million in 2000 to $200 million in 2009), although there was virtually no evidence of such programs’ effectiveness, and in fact some indication that they could contribute to higher levels of teenage pregnancy (Stanger-Hall and Hall, 2011). In such cases, values and moral judgment may simply trump the evidence from program and economic evaluations; however, the research also may compel supporters to amend their approaches, re-examine the details behind their theory of change, and essentially attempt different intervention designs.

The 2010 Patient Protection and Affordable Care Act included $1.5 billion in new funds to allow states to experiment with and adopt evidence-based models for home visiting for families with pregnant women and children ages 0-5. The Maternal, Infant, and Early Childhood Home Visiting (MIECHV) Program requires that 75 percent of grant funding be spent on proven home visiting models. Arguably the most visible program on the list of 17 approved “evidence-based models” is the Nurse-Family Partnership (NFP) Program.

The NFP began as a demonstration program in Elmira, New York, known then as the Prenatal/Early Infancy Project (PEIP). David Olds and colleagues designed the program to provide economically disadvantaged first-time mothers with a series of home visits by registered nurses who were trained in and delivered a structured curriculum designed to promote healthy maternal behaviors during pregnancy and postpartum, parental caregiving, and maternal life-course development. An average of 9 home visits occurred during pregnancy, and another 23 visits on average took place during the next 2 years until the child turned age 2. To evaluate the program, eligible pregnant women were recruited starting in 1977 and randomly assigned to the treatment group (N = 116) or a control group (N = 184). Mothers and children were followed during pregnancy and every 4-6 months for 4 years. A later follow-up occurred when the children reached ages 15 and 19. Published findings from the experimental evaluation showed favorable effects on maternal and child outcomes in multiple domains, including pregnancy outcomes, health-related behaviors, utilization of health services, welfare use, and criminal activity, particularly for a higher-risk sample of unmarried mothers of low socioeconomic status (Eckenrode et al., 2010; Olds, 1996; Olds et al., 1986a, 1986b, 1988, 1994, 1997a, 1997b, 2002, 2004a, 2004b).

A cost-savings analysis by the evaluation team, based on findings 2 years after the program ended when the children were age 4, showed that government savings just exceeded program costs for low-income families, but net savings to government were negative for the sample as a whole (Olds et al., 1993). With the additional follow-up data through age 15, when other outcomes such as reduced crime and delinquency for mothers and children were measured, a BCA by researchers at RAND estimated net benefits to society per higher-risk family of $30,766 (in 1996 dollars using a 4 percent discount rate), or a benefit-cost ratio of about $5 to $1 (Karoly et al., 1998). For lower-risk families, the benefits to society just exceeded the costs of the program. Likewise, net savings to government were estimated to be positive for the higher-risk population served but negative for the lower-risk group. Given that the NFP focused on higher-risk first-time mothers, the evidence of a favorable economic return to society and to government was taken to indicate both that the program was a worthwhile investment and where such resources were likely to produce the highest returns. Subsequent BCAs of the program, based either on the Elmira results or including the findings from replication trials in Memphis, Tennessee (in 1988), and Denver, Colorado (in 1994), have also concluded that net benefits to society are positive, with benefit-cost ratios ranging from 2.89 to 6.20 (Karoly et al., 2005; Miller, 2013; Washington State Institute for Public Policy, 2015).

On the basis of both the evaluation of impacts and the evidence of economic returns, the NFP began to expand its reach, first with replications in Ohio and Wyoming in 1996 and with additional sites soon thereafter in California, Florida, Missouri, and Oklahoma, funded by the U.S. Department of Justice. Pennsylvania was one of the first states with statewide implementation. The opportunity for public funding was greatly expanded with the advent of the 2010 MIECHV program. As a result, NFP programs are now found in 43 states, as well as the U.S. Virgin Islands and six tribal communities (Nurse-Family Partnership, 2015). Estimates provided by Miller (2015) indicate that the nearly 180,000 pregnant women enrolled in NFP programs from 1996 to 2013 will generate $3.0 billion (present-value 2010 dollars) in government savings from reduced expenditures on Medicaid, Temporary Assistance for Needy Families, and the Supplemental Nutrition Assistance Program, well in excess of the program’s $1.6 billion cost. This budgetary impact analysis does not fully incorporate the benefits to society from the array of improved maternal and child outcomes projected by Miller (2015), including 500 fewer infant deaths, 10,000 fewer preterm births, 15,000 fewer childhood injuries, 42,000 fewer cases of child maltreatment, and 90,000 fewer violent crimes by youth.
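The net-benefit and benefit-cost figures cited in Box 2-5 rest on standard present-value arithmetic: future cost and benefit streams are discounted back to the program's start (e.g., at a 4 percent rate) before being compared. A minimal sketch of that calculation follows; all dollar amounts and time horizons here are purely illustrative and are not taken from the NFP studies.

```python
# Illustrative sketch of the arithmetic behind a benefit-cost analysis (BCA).
# The flows below are hypothetical per-family figures, not NFP estimates.

def present_value(flows, rate):
    """Discount a list of annual flows (index 0 = program start year)."""
    return sum(f / (1 + rate) ** t for t, f in enumerate(flows))

# Hypothetical program costs incurred in years 0-2 (pregnancy through age 2).
cost_pv = present_value([4000, 2000, 2000], rate=0.04)

# Hypothetical benefit stream (reduced public expenditures, higher earnings,
# reduced crime costs) emerging in later years.
benefit_pv = present_value([0, 0, 500, 1000] + [2000] * 11, rate=0.04)

net_benefits = benefit_pv - cost_pv  # positive => benefits exceed costs
bc_ratio = benefit_pv / cost_pv      # reported as, e.g., "$X per $1 invested"

print(f"PV of costs:    ${cost_pv:,.0f}")
print(f"PV of benefits: ${benefit_pv:,.0f}")
print(f"Net benefits:   ${net_benefits:,.0f}; benefit-cost ratio {bc_ratio:.2f}")
```

Note that the choice of discount rate matters: a higher rate shrinks benefits that arrive far in the future relative to up-front costs, which is why studies such as Karoly et al. (1998) report the rate used alongside the results.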
While the NFP illustrates the use of economic evidence to support investments in a single intervention, the Washington State Institute for Public Policy (WSIPP) (a research institution described in more detail in Chapter 4) has developed a BCA model that supports the use of economic evaluation to assess the costs, benefits, and net benefits of multiple interventions within a domain (e.g., early childhood or youth development interventions), and potentially across domains. Based on a survey of the 50 states as part of the Pew-MacArthur Results First Initiative, however, the impressive WSIPP model is the exception rather than the rule (Pew Charitable Trust and MacArthur Foundation, 2013). While state policy makers recognize the potential value of CEAs and BCAs, and there is some forward momentum toward increased production and use of such economic evidence, states vary considerably in the production and use of evidence from high-quality economic evaluations. Although all states conducted at least one BCA between 2008 and 2011, the majority of the nearly 350 analyses identified were carried out in just 12 states. Slightly more than half (29) of the states reported that BCAs had informed one or more decisions on the part of the legislative or executive branch to fund or eliminate interventions (Pew Charitable Trust and MacArthur Foundation, 2013).
Economic evidence also has contributed to the rapidly evolving pay for success (PFS) movement—also known by various other names, such as social impact bonds, outcome-based financing, and pay-for-performance and payment-by-results models.7 The PFS financing tool leverages private investment to support preventive services that lead to public savings (Liebman and Sellman, 2013). In essence, the underlying premise of PFS and related financing mechanisms is that there is potential for a positive economic return from investing in an effective government intervention. When a PFS contract is successful, the private investors receive back their initial capital outlay that supported service delivery, as well as a percentage return, while the public sector benefits from the remaining cost aversion or savings (often in the form of reduced service utilization). This financing structure makes PFS contracts of particular interest for interventions that target developmental processes that otherwise lead to downstream costs (Finn and Hayward, 2013; Golden, 2013).

7 In this context, these terms are used to refer to financing instruments. The terms are used differently in some contexts. In international development, for example, outcome-based financing and pay-for-performance more often refer to incentive-based payment mechanisms.
Since a 2010 pilot program was launched in the United Kingdom,8 several U.S. municipalities and states have initiated PFS arrangements to fund interventions with an empirical record of reducing recidivism among juvenile offenders, emergency care costs for children with asthma, and special education utilization for at-risk youth (Brush, 2013; Olson and Phillips, 2013). Interest in PFS interventions is increasing at the federal and state levels—especially for those interventions targeting early childhood, whose return on investment may be the greatest (Currie and Widom, 2010; Heckman et al., 2010; Office of Management and Budget, 2011; Walters, 2014). As of August 2015, PFS projects had been launched in 6 states and were being explored in 27 others (Nonprofit Finance Fund, 2015). The benefits and challenges of the PFS model are discussed in Chapter 4.
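The payoff logic common to these PFS arrangements, in which investors are repaid with a return only if the intervention meets its outcome target and otherwise bear the loss, can be sketched as follows. The amounts, return rate, and settlement rule here are hypothetical and are not drawn from any contract described above.

```python
# Hedged sketch of pay-for-success (PFS) settlement mechanics. All parameters
# are illustrative; real contracts vary in thresholds, tiers, and terms.

def pfs_settlement(investment, realized_savings, outcome_met,
                   investor_return=0.05):
    """Split realized public-sector savings between investors and government.

    Returns (payment_to_investors, net_savings_to_government).
    """
    if not outcome_met:
        # Outcome target missed: investors lose their capital, and the
        # government pays nothing while keeping whatever savings occurred.
        return 0.0, realized_savings
    # Outcome target met: repay principal plus the agreed percentage return;
    # the government keeps the remaining savings.
    payment = investment * (1 + investor_return)
    return payment, realized_savings - payment

# Successful project: $1.0M invested, $1.5M in realized public savings.
pay, gov = pfs_settlement(1_000_000, 1_500_000, outcome_met=True)

# Unsuccessful project: investors are not repaid.
pay_fail, gov_fail = pfs_settlement(1_000_000, 200_000, outcome_met=False)
```

The design choice this illustrates is the transfer of performance risk: the public sector pays only out of savings that materialize, which is why PFS is attractive for preventive interventions whose benefits are otherwise hard to budget for up front.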
The federal government has a long history of requiring the use of economic evaluation to justify action in some policy domains. The 1992 Office of Management and Budget (OMB) Circular No. A-94, for example, establishes guidelines for the use of economic evaluation “to promote efficient resource allocation through well-informed decision making by the Federal Government” (Office of Management and Budget, 1992). While the circular includes in its scope “benefit-cost or cost-effectiveness analysis of Federal programs or policies,” in practice federally funded programs serving children, youth, and families typically are not subjected to CEA or BCA, nor are the many “programs” that operate through the tax code, such as the earned income tax credit, the child tax credit, or the tax exclusion for employer-provided health insurance. At the same time, under the Obama administration, there has been a push to expand the use of evidence of effectiveness in making resource allocation decisions. As catalogued in Haskins and Margolis’ (2014) Show Me the Evidence, the federal government had six evidence-based social policy initiatives under way as of 2015 to allocate resources in such areas as early care and education, home visiting, K-12 education, teen pregnancy prevention, employment and training, and community-based programs. Notably, while these evidence-based initiatives require at least a preliminary level of evidence of impact and a commitment to further evaluation to add to the evidence base, they have not required economic evaluation as part of the justification for new or expanded funding. Several state-specific initiatives summarized in Table 2-2 share this feature of emphasizing “research-based” or “evidence-based” programs but not requiring evidence from economic evaluation to support funding decisions.

8 In March 2010, the United Kingdom’s Ministry of Justice (MOJ) and Social Finance, a not-for-profit organization created in 2007, launched a pilot program aimed at reducing recidivism among prisoners released from the Peterborough prison. The key feature of this pilot was its financial arrangement: private parties, mainly charitable trusts and foundations, provided approximately £5 million to fund the program, while MOJ agreed to pay them up to £8 million after 7 years, according to observed recidivism among program participants. Furthermore, if the program failed to achieve a reduction in recidivism of 7.5 percent, investors would lose their money (Disley et al., 2011; Nicholls and Tomkinson, 2013).
With the growing emphasis on results-based accountability, evidence from economic evaluation has served to provide a larger framework within which to view policy choices concerning interventions serving children, youth, and families. Perhaps the best example of this type of conceptual use has occurred in the framing of investments in early childhood interventions. In particular, results of BCAs for specific early childhood interventions such as the Perry Preschool Program (Box 2-3) and the Nurse-Family Partnership (Box 2-5) have been used to frame such interventions as investments: these interventions require an up-front investment in return for a stream of future dividends in the form of lower public-sector costs, higher levels of economic and social well-being for participants, and gains for the rest of society from reduced crime and other social ills. The investment framework has been used to appeal to the business community, in which such concepts as ROI resonate strongly (Christeson et al., 2013; Committee for Economic Development, 2012; Institute for a Competitive Workforce, 2010; Pepper, 2014). Investments in early childhood interventions also have been framed as an economic development strategy, one with an even higher rate of return than such traditional community investment strategies as building a sports arena or attracting businesses to relocate to a new community (Bartik, 2011; Rolnick and Grunewald, 2003).
At the same time that economic evidence is contributing to a conceptualization of early childhood programs as an investment with a high rate of return, the evidence has sometimes been simplified and misused. For example, the Perry Preschool finding of a return as high as $16 for every dollar invested (Box 2-3) applies to a small-scale demonstration preschool program implemented in the 1960s in one Midwestern city, considered to be of high quality, and serving a highly disadvantaged population of African-American children and families. Yet that result often is cited to suggest that any preschool program—including a universal program that would be available to both low- and high-income children—would generate such favorable returns. This application of the evidence reflects little recognition of the context within which Perry Preschool was implemented and how that context affects the generalizability of the findings for that one program to the range of early childhood programs being implemented today. For example, returns may be lower if programs are not delivered with the same level of quality and intensity of services as the Perry Preschool Program.
In his 2013 State of the Union address, President Obama referenced a somewhat smaller $7 return for every dollar invested in high-quality preschool in making the case for expanding access to high-quality preschool to every 4-year-old (Obama, 2013). That estimate is closer to the benefit-cost ratio estimated for the Chicago Child-Parent Centers Program, operated by the Chicago Public Schools and targeted to low-income children (Karoly, 2012). While the 7-to-1 ratio may be more realistic for a scaled-up program operating in real-world conditions, it is not clear that this return would apply to a universal program. A more universal program, depending on its design, might not only include more children with fewer needs but also lead to greater shifts from privately to publicly financed education. Notably, the WSIPP model shows a benefit-cost ratio of about 4-to-1 for publicly funded district and state preschool programs for 4-year-olds, based on a meta-analysis of the evaluation literature (Washington State Institute for Public Policy, 2015).
The Perry Preschool example is a reminder that all economic evaluations of existing interventions provide the most information about the economics of that intervention in relation to the alternative, which may be no intervention or some alternative program. The Perry Preschool example is also a reminder about the importance of context. Whether the economic evidence can be applied to decisions to fund other interventions or even the same intervention in a new setting requires careful consideration of the contextual factors of the interventions. The greater the similarity between the context of the new intervention and the context in which the evidence was generated, the more likely economic estimates are to apply. The importance of conveying information about the context of an intervention in an economic evaluation is discussed in Chapter 3, and issues pertaining to valid use of existing evidence are taken up in Chapter 4.
Interest in improving the use of different types of evidence (e.g., scientific, economic) in the social and medical sciences has increased, but its impact on public policy making and decision making has remained limited (National Research Council, 2012). Research has shown that decision makers across sectors and levels of government do not consistently utilize scientific evidence, and that economic evidence is even less likely to inform decisions about the allocation or prioritization of resources (Eddama and Coast, 2008; National Research Council, 2012; Nutbeam and Boxall, 2008; Orton et al., 2011; van Dongen et al., 2013). Numerous factors—methodological, individual, organizational, and contextual—affect why and how certain types of evidence are brought to bear in determining the value of investments in children, youth, and families (Bowen and Zwi, 2005; Lessard et al., 2010).

|Type of Action|Example Legislation/Programs|Impact|State|
|---|---|---|---|
|Require the use of evidence- or research-based programs|The Public Safety and Offender Accountability Act (2011) mandates use of evidence-based programs for supervision, treatment, and intervention for the pretrial population, inmates, and those on probation and parole.| | |
|Dedicated funding for evidence- or research-based programs|State agencies are required to demonstrate that an increased percentage of funds is dedicated to evidence-based substance use and mental health treatment and to adult recidivism and juvenile crime prevention programs.| | |
|Financial incentives for evidence-based interventions|The Treatment Alternatives and Diversion Program provides counties with grants to fund alternatives to incarceration for nonviolent offenders with substance use disorders.| | |
|Categorization of funded programs by effectiveness|A bill requires the creation of standards for program effectiveness, including evidence of cost-effectiveness where possible, and an inventory of programs meeting these standards in child welfare, mental health, and juvenile justice.| | |

SOURCE: Data from Pew Charitable Trust and MacArthur Foundation (2015).
The committee’s information-gathering processes led to the two guiding principles articulated in Chapter 1: quality counts and context matters. Box 2-6 highlights the key issues identified with regard to the use of economic evidence in decision making—issues identified in the literature, as well as through the committee’s information-gathering sessions with key stakeholders and other informants. Several of the specific issues pertain to aspects of quality: the quality of the inputs that go into producing economic evidence and the quality of the resulting output. Other issues are more germane to aspects of the context in which evidence from economic evaluation is used (or not used) to inform decision making. Some issues are most relevant for the producers of economic evidence identified earlier, while others are more closely aligned with the consumer side of the equation. Notably, there also are cross-cutting issues pertaining to incentives, capacity, and infrastructure that are faced by both producers and consumers, along with the intermediaries who transfer and interpret information among them.

Quality of Inputs

- Cost data are not collected prospectively; cost estimates are incomplete.
- Rigorous program evaluation evidence is not available; results are based on research designs that do not provide causal evidence.
- Available evaluation results often are for demonstration programs that may be less effective when scaled up.
- Many outcomes relevant for children, youth, and families do not have economic values for use in benefit-cost analysis.
- Limitations on access to data (particularly administrative data) preclude evaluation and valuation of outputs.

Quality of Outputs

- In the absence of standards, producers apply methods differently, and at times inappropriately or not comprehensively.
- No standards exist for reporting results of economic evaluation.

Usefulness of Economic Analyses

- Results are based on different methods, making alternatives difficult to compare and weigh against one another.
- Results are presented in a manner that obscures their relevance to investment decisions.
- Questions of interest to policy makers may not be understood or incorporated in research studies.
- Economic analyses may not be available in the time frame relevant for decision making.
Use of Economic Analyses
- Information needs to come from a known and trustworthy source, and such relationships may be lacking.
- Economic evidence is not available to inform decisions (poor data availability).
- Methods are complex and can be difficult to understand (reporting and communication issues).
- Results may be misinterpreted or misapplied by advocates.
- A lack of understanding of the policy and funding worlds may impede the ability to harness findings to meet decision-makers’ needs.
- The organizational culture around the use of economic evidence may be weak.
Incentives, Capacity, and Infrastructure
- Funding for quality economic evaluations is lacking or insufficient.
- The cadre of trained professionals needed to conduct economic evaluations is lacking.
- Government agencies often lack the capacity to conduct economic evaluations or the expertise to use them well.
- There is a shortage of professionals trained to translate and use the evidence from economic evaluation and move the science forward.
- Systematic processes for assessing the quality of economic evidence, including specific guidance in funding announcements, are lacking.
- There are few incentives to use economic evidence in decision making.
- Incentives for researchers to conduct economic evaluations are insufficient.
Challenges Related to Quality
“The purpose of economic evidence is to be an input in the process. The better we can make the input, the better off the outcomes will be.”
—Jerry Croan, senior fellow, Third Sector Capital, in the committee’s open session discussion on March 23, 2015.
Box 2-6 enumerates several issues related to the quality of the available economic evidence pertaining to investments in children, youth, and families.9 Some of these issues can be categorized as affecting the quality of the inputs to economic evaluations. As noted earlier, CA is a tool that provides valuable information on its own and also provides the foundation for CEA and BCA. While program administrators regularly estimate the costs of services for budgeting purposes, evaluation-oriented CAs remain rare for many interventions that impact children, youth, and families (Goldhaber-Fiebert et al., 2011). The committee reviewed a convenience sample of 1,294 articles relating to RCTs of interventions for children, youth, and families published in 2012-2015. Only 36 reported the cost of the intervention.10 The committee’s literature review also revealed that almost no articles address the factors that need to be taken into account when one is attempting to estimate the costs of interventions operating at scale compared with their costs in trials. This gap in information on intervention costs as part of program evaluation means that those conducting a CA, CEA, or BCA often try to reconstruct the required information retrospectively and may miss key cost components altogether or derive biased estimates.
Other issues are more relevant to the quality of the inputs required for CEA and BCA. In particular, both methods require evidence of intervention impact, preferably from a rigorous evaluation design such as an RCT or a quasi-experimental method that supports causal inference.11 However, many interventions serving children, youth, and families may not have been evaluated at all, or may have been evaluated only with weaker designs—reflecting in part the costs in terms of time and other resources of conducting high-quality evaluations. While there has been support for implementing lower-cost RCTs and other designs using administrative data (Coalition for Evidence-Based Policy, 2015), the simple lack of data because of previous inattention to what might be required for later evaluations of new or existing interventions, as well as the lack of access to existing data, precludes wider application of such designs. When evaluations are conducted, another challenge is the absence of economic values (shadow prices) for many of the relevant outcomes, especially outcomes for young children, such as measures of school readiness, academic performance, and social and emotional development (Karoly, 2008). Here, too, better development of and access to administrative data could play a role in helping to calculate valid shadow prices. And even where the required evaluation evidence is strong and shadow prices exist, those who conduct economic evaluations may follow different practices with respect to key methods, which limits the comparability of results (Karoly, 2012).
10 SCOPUS database search using key terms: “child,” “children,” “youth,” “families,” “randomized trial,” “program,” “intervention,” “cost-benefit,” “benefit-cost,” “cost-effectiveness,” “cost analysis,” published after 2012.
11 Causal inference is one of many factors relevant to the validity of a study or set of studies for any given decision. Factors beyond causal inference can be challenging to address because they often depend on conditions the researcher cannot reasonably foresee or control (e.g., the generalizability of the study context).
One potential reason for these shortfalls in the quality of the inputs to economic evaluation and the resulting outputs is that the literature provides little guidance on best practices in general or specific to interventions for children, youth, and families. With a few notable exceptions (Children’s Bureau et al., 2013; Gorsky, 1996; Yates, 1996), most guidance for CA comes from texts, primers, or government documents on how to conduct BCA and CEA, with chapters or sections devoted to assessing program costs (Drummond et al., 2005). Recognizing this issue, the 2013 Institute of Medicine (IOM)/National Research Council (NRC) workshop on Considerations in Applying Benefit-Cost Analysis to Preventive Interventions for Children, Youth, and Families (Institute of Medicine and National Research Council, 2014) identified four areas in which cost analyses could benefit from standardization: (1) identifying essential cost categories that all cost analyses should strive to include; (2) developing guidelines for appropriate handling of costs that are not reflected in program budgets; (3) establishing minimum levels of sensitivity analysis to explore uncertainty in cost estimates; and (4) ensuring consistent reporting of cost estimates to enhance transparency and utility. There is a similar lack of guidance regarding the conduct and reporting of BCAs for interventions serving children, youth, and families, although some resources exist for specific policy areas (see, for example, Karoly, 2012, for early childhood programs). Just as expert guidance exists for standardized methodology pertaining to CEA (Gold et al., 1996), CA and BCA could benefit from greater standardization in the field. Based on its review of the literature and expert guidance, the committee recommends in Chapter 3 a set of best practices that would enhance the production of, availability of, or opportunity to conduct high-quality economic evaluations of interventions for children, youth, and families.
Challenges Related to Usefulness and Use
“Talking about uncertainty and 95-percent confidence intervals can be difficult to communicate to legislators.”
—Stephanie Lee, senior research associate, Washington State Institute for Public Policy, in the committee’s open session discussion on March 23, 2015.
“How and what you evaluate really matters. We have to pick the right tools to generate the best evidence about a very particular set of issues that we are trying to solve. Contextualizing your tools for the problems you are solving is extraordinarily important.”
—Nadya Dabby, assistant deputy secretary for innovation and improvement, U.S. Department of Education, in the committee’s open session discussion on March 23, 2015.
Numerous factors beyond the quality of the evidence for the effectiveness of or economic return on an intervention can drive choices about what investments in children, youth, and families will be made in a community.12 Research on the usefulness and use of evidence in general has helped illuminate some pressing issues, many of which also help explain why economic evidence is not well utilized. These issues include the timeliness and relevance of the evidence (Innvaer et al., 2002; Oliver et al., 2014), access to the evidence and sufficient time to review it (Merlo et al., 2015; O’Reilly, 1982), and the perceived credibility of the evidence (Jennings and Hall, 2011; Lorenc et al., 2014), all of which have impacted the extent to which leaders have relied on scientific evidence in the past. Jennings and Hall (2011) suggest that knowing the degree of conflict within an agency (e.g., competing pressures and demands, scientific capacity) also helps in understanding why some agencies are more or less likely to use evidence-based approaches.
A lack of connection between researchers and policy makers breeds a mistrust that can undermine the success of both parties (Innvaer et al., 2002; Oliver et al., 2014). Administrators have been known to blend related funding streams so as to maximize the reach or depth of available services or to design comprehensive approaches to complex social problems. The drive to meet the needs of the greatest number of eligible residents often is at odds with an administrator’s desire to best serve each individual child. This tension often limits the analysis conducted and the conclusions that can be drawn from it.13
Whether the impact or cost-effectiveness of an intervention can be expected to match what previous research suggests depends on the quality of the intervention’s implementation, including the depth of the monitoring performed, the number of local resources committed, and other important contextual factors.14 Historically, researchers have placed greater emphasis on the internal than on the external validity of studies (Brownson et al., 2009; Kemm, 2006), yet the fit between an intervention and its context is a powerful indicator of long-term adoption and investment. Influential factors external to the individual decision maker include the organizational culture around the use of evidence, the role of leadership, the prevailing political ideology and budgetary context, and the strength of advocacy agendas (Armstrong et al., 2014; Brownson et al., 2009). A number of studies have found that government leaders may perceive local data, public opinion, and organizational capacity as more influential and useful than scientific evidence (Armstrong et al., 2013, 2014; Atkins et al., 2005; McGill et al., 2015). In addition, values and belief systems play a large role in determining which interventions garner public support and are funded.
The ways in which data are collected, measured, and regulated vary greatly across states and localities. Furthermore, limited communications between and within agencies, as well as the significant challenges faced in transferring data across agencies, undermine confidence in the available data and affect how well they can be used in establishing an evidence base or determining best practices in the implementation of interventions.15
Cross-cutting Issues Regarding Incentives, Capacity, and Infrastructure
“The more evidence that is made available, the more informed the end decision maker will be. We have heard several agencies tell us that they’ve done internal evaluations, but that these data are available to those inside their county or state, and not necessarily to other potentially applicable audiences. I think that appropriate infrastructure needs to be in place to support the sharing of data and evaluations. That is definitely something that could actually be done in the short term.”
—Danielle Berfond, consultant, The Bridgespan Group, in the committee’s open session discussion, June 1, 2015.
Box 2-6 lists a remaining set of cross-cutting issues that affect both producers and consumers of economic evidence, as well as other intermediaries involved in supporting the production and dissemination of the evidence.
Many of these issues were cited in the committee’s information-gathering sessions, while others have been identified in the literature.
In terms of incentives, one challenge is that funding often is not available to support quality economic evaluations.16 Funders may wish to see evidence of favorable impact before deciding to support an economic evaluation such as a CEA or BCA. This “wait-and-see” approach is one reason why the data required for cost analysis are not collected routinely as part of program evaluation: study teams simply were not given the resources to collect the required data. Funding gaps mean that program evaluators have little incentive to integrate economic evaluation into the program evaluation’s agenda and design. If economic analyses are not called for in funding announcements and economists are not included on review panels, the incentives for conducting economic evaluation are further diminished. Even a lack of interest on the part of publishers, such as those that produce peer-reviewed field-specific journals (e.g., in the areas of child development, youth development, and prevention science), could signal that economic evidence is not valued as part of building the evidence base. On the consumer side, in the absence of imposed use of economic evidence (such as the initiatives discussed earlier), there may be little incentive to use the evidence from economic evaluation to support resource allocation decisions, especially if the evaluation results are not presented in an accessible way and the analysis is not provided by a trusted source.
Capacity issues also affect producers, consumers, and other intermediaries. On the producer side, no well-established cadre of trained professionals is available to conduct economic evaluations—a precondition for more widespread use of such evidence. While economists may have a working knowledge of economic evaluation issues as part of their academic training, few specialize in such analyses. Such training is not always routine in policy programs or in fields outside of economics. The establishment of the Society for Benefit-Cost Analysis (SBCA) in the last 5 years has helped raise the visibility of economic evaluation and provide a forum for developing interest in the area, encouraging new researchers to enter the field, and sharing the latest developments in methods and findings.
On the consumer side, many agencies at the federal, state, or local level lack the capacity to produce economic evaluations, and their staff do not necessarily possess the expertise required to be knowledgeable users of the evidence that is available (Pew Charitable Trust and MacArthur Foundation, 2013). This expertise has two facets: users ideally would know both what economic evaluation can do and what it cannot. In addition, there is a shortage of professionals available to serve as intermediaries between the producers and consumers of economic evidence—individuals who could help assemble the available evidence and translate it in ways that are useful for decision makers.
Finally, infrastructure gaps affect many of the intermediaries identified earlier. At present, for example, there are no centralized repositories of economic evidence for interventions serving children, youth, and families (Neumann, 2009). Those seeking to fund economic evaluations lack access to guidelines that could be used to establish requirements or standards for high-quality economic evaluations.
Despite some of the high-profile examples cited earlier, a great deal of unrealized potential remains for incorporating evidence from economic evaluation into decision making regarding investments in children, youth, and families. The factors relating to the quality of economic evidence outlined above are elaborated in Chapter 3, while those relating to the context in which this evidence is developed and used—and hence to its usefulness and use—are discussed in detail in Chapter 4.
The ultimate objective of this study was to determine what steps can be taken to ensure that evidence from economic evaluations contributes—along with results of program evaluations and other information—to decisions about investments in children, youth, and families. Ideally, consideration of economic evidence is incorporated into an overall evaluation framework addressing important questions at each stage of planning, documenting, and testing an intervention. At the earliest stages of a process or implementation study, the information required for CA can be collected as part of understanding the intervention model and how to implement it with fidelity, a point at which there are opportunities for quality improvement. When evaluations turn to assessing the impact of an intervention, the choice of which outcomes to measure can be guided both by the underlying logic model or theory of change and by a delineation of which outcomes are most amenable to evaluation using CEA, BCA, or related methods. Once information needs have been met, the goal is to ensure the production of high-quality economic evidence that is accessible, relevant, and used appropriately by decision makers who understand both the value of the economic evaluation methods employed and their limitations.
Armstrong, R., Waters, E., Dobbins, M., Anderson, L., Moore, L., Petticrew, M., Clark, R., Pettman, T.L., Burns, C., Moodie, M., Conning, R., and Swinburn, B. (2013). Knowledge translation strategies to improve the use of evidence in public health decision making in local government: Intervention design and implementation plan. Implementation Science, 8, 121.
Armstrong, R., Waters, E., Moore, L., Dobbins, M., Pettman, T., Burns, C., Swinburn, B., Anderson, L., and Petticrew, M. (2014). Understanding evidence: A statewide survey to explore evidence-informed public health decision-making in a local government setting. Implementation Science, 9(1), 188.
Atkins, D., Siegel, J., and Slutsky, J. (2005). Making policy when the evidence is in dispute: Good health policy making involves consideration of much more than clinical evidence. Health Affairs, 24(1), 102-113.
Barnett, W.S. (1993). Benefit-cost analysis of preschool education: Findings from a 25-year follow-up. American Journal of Orthopsychiatry, 63(4), 500-508.
Barnett, W.S. (1996). Lives in the Balance: Benefit-Cost Analysis of the Perry Preschool Program through Age 27. Monographs of the High/Scope Educational Research Foundation. Ypsilanti, MI: High/Scope Press.
Bartik, T.J. (2011). Investing in Kids: Early Childhood Programs and Local Economic Development. Kalamazoo, MI: W.E. Upjohn Institute.
Belfield, C., Nores, M., Barnett, W., and Schweinhart, L. (2005). Updating the benefit-cost analysis of the High/Scope Perry Preschool Programme through age 40. Educational Evaluation and Policy Analysis, 27(3), 245-262.
Belfield, C.R., Nores, M., Barnett, S., and Schweinhart, L. (2006). The High/Scope Perry Preschool Program. Journal of Human Resources, XLI(1), 162-190.
Berrueta-Clement, J.R., Schweinhart, L.J., Barnett, W.S., Epstein, A.S., and Weikart, D.P. (1984). Changed Lives: The Effects of the Perry Preschool Program on Youths through Age 19. Monographs of the High/Scope Educational Research Foundation, No. 8. Ypsilanti, MI: High/Scope Press.
Boardman, A.E., Greenberg, D.H., Vining, A.R., and Weimer, D.L. (2001). Cost-Benefit Analysis: Concepts and Practice (2nd Edition). Upper Saddle River, NJ: Prentice Hall.
Bowen, S., and Zwi, A.B. (2005). Pathways to “evidence-informed” policy and practice: A framework for action. PLoS Medicine, 2(7), 0600-0605.
Brownson, R.C., Fielding, J.E., and Maylahn, C.M. (2009). Evidence-based public health: A fundamental concept for public health practice. Annual Review of Public Health, 30, 175-201.
Brush, R. (2013). Can pay for success reduce asthma emergencies and reset a broken health care system? Community Development Investment Review, 115-125. Available: http://www.frbsf.org/community-development/files/pay-for-success-reduce-asthma-emergenciesreset-broken-health-care-system.pdf [March 2016].
Children’s Bureau, Administration for Children and Families, U.S. Department of Health and Human Services. (2013). Cost Analysis in Program Evaluation: A Guide for Child Welfare Researchers and Service Providers. Washington, DC: Calculating the Costs of Child Welfare Services Workgroup.
Christeson, W., Bishop-Josef, S.J., Taggart, A.D., and Beakey, C. (2013). Georgia Report: A Commitment to Pre-Kindergarten is a Commitment to National Security: High-Quality Early Childhood Education Saves Billions While Strengthening Our Military and Our Nation. Washington, DC: Mission: Readiness.
Coalition for Evidence-Based Policy. (2015). Low-Cost RCT Competition. Available: http://coalition4evidence.org/low-cost-rct-competition [March 2016].
Committee for Economic Development. (2012). Unfinished Business: Continued Investment in Child Care and Early Education is Critical to Business and America’s Future. Washington, DC: Committee for Economic Development.
Cookson, R. (2015). The Use of Economic Evidence to Inform Investments in Children, Youth, and Families. Commissioned Paper on the Methods for Incorporating Equity into Economic Evaluation of Social Investments. Available: http://iom.nationalacademies.org/Activities/Children/EconomicEvidence.aspx [May 2016].
Crowley, D.M., Jones, D.E., Greenberg, M.T., Feinberg, M.E., and Spoth, R.L. (2012). Resource consumption of a diffusion model for prevention programs: The PROSPER delivery system. Journal of Adolescent Health, 50(3), 256-263.
Currie, J., and Widom, C.S. (2010). Long-term consequences of child abuse and neglect on adult economic well-being. Child Maltreatment, 15(2), 111-120.
Dhaliwal, I., Duflo, E., Glennerster, R., and Tulloch, C. (2013). Comparative cost-effectiveness analysis to inform policy in developing countries: A general framework with applications for education. In P. Glewwe (Ed.), Education Policy in Developing Countries (Ch. 8) (pp. 285-338). Chicago, IL: University of Chicago Press.
Disley, E., Rubin, J., Scraggs, E., Burrowes, N., and Culley, D.M. (2011). Lessons Learned from the Planning and Early Implementation of the Social Impact Bond at HMP Peterborough. Santa Monica, CA: RAND.
Drummond, M.F., Sculpher, M.J., Claxton, K., Stoddart, G.L., and Torrance, G.W. (2005). Methods for the Economic Evaluation of Health Care Programmes. New York: Oxford University Press.
Eckenrode, J., Campa, M., Luckey, D.W., Henderson, C.R., Cole, R., Kitzman, H., Anson, E., Sidora-Arcoleo, K., Powers, J., and Olds, D. (2010). Long-term effects of prenatal and infancy nurse home visitation on the life course of youths: 19-year follow-up of a randomized trial. Archives of Pediatrics & Adolescent Medicine, 164(1), 9-15.
Eddama, O., and Coast, J. (2008). A systematic review of the use of economic evaluation in local decision-making. Health Policy, 86(2-3), 129-141.
Finn, J., and Hayward, J. (2013). Bringing success to scale: Pay for success and housing homeless individuals in Massachusetts. Community Development Investment Review, 9(1). Available: http://www.frbsf.org/community-development/files/bringing-success-scale-payfor-success-housing-homeless-individuals-massachusetts.pdf [December 2015].
Gold, M.R., Siegel, J.E., Russell, L.B., and Weinstein, M.C. (Eds.). (1996). Cost-Effectiveness in Health and Medicine. New York: Oxford University Press.
Golden, M. (2013). Using Pay for Success Financing to Improve Outcomes for South Carolina’s Children. Greenville, SC: Institute for Child Success.
Goldhaber-Fiebert, J.D., Snowden, L.R., Wulczyn, F., Landsverk, J., and Horwitz, S.M. (2011). Economic evaluation research in the context of child welfare policy: A structured literature review and recommendations. Child Abuse & Neglect, 35(9), 722-740.
Gorsky, R.D. (1996). A method to measure the costs of counseling for HIV prevention. Public Health Reports, 111(Suppl. 1), 115-122.
Gramlich, E.M. (1997). A Guide to Benefit-Cost Analysis (2nd Edition). Long Grove, IL: Waveland Press.
Haskins, R., and Margolis, G. (2014). Show Me the Evidence: Obama’s Fight for Rigor and Results in Social Policy. Washington, DC: Brookings Institution Press.
Heckman, J.J., Moon, S.H., Pinto, R., Savelyev, P.A., and Yavitz, A. (2010). The rate of return to the HighScope Perry Preschool Program. Journal of Public Economics, 94(1-2), 114-128.
Innvaer, S., Vist, G., Trommald, M., and Oxman, A. (2002). Health policy-makers’ perceptions of their use of evidence: A systematic review. Journal of Health Services Research & Policy, 7(4), 239-244.
Institute for a Competitive Workforce. (2010). Why Business Should Support Early Childhood Education. Washington, DC: U.S. Chamber of Commerce.
Institute of Medicine and National Research Council. (2014). Considerations in Applying Benefit-Cost Analysis to Preventive Interventions for Children, Youth, and Families: Workshop Summary. S. Olson and K. Bogard (Rapporteurs). Board on Children, Youth, and Families. Washington, DC: The National Academies Press.
Jennings, E.T., Jr., and Hall, J.L. (2011). Evidence-based practice and the uses of information in state agency decision making. The Journal of Public Administration Research and Theory, 22, 245-255.
Karoly, L.A. (2008). Valuing Benefits in Benefit-Cost Studies of Social Programs. Technical Report. Santa Monica, CA: RAND.
Karoly, L.A. (2012). Toward standardization of benefit-cost analysis of early childhood interventions. Journal of Benefit-Cost Analysis, 3(1).
Karoly, L.A., Greenwood, P.W., Everingham, S.S., Hoube, J., Kilburn, M.R., Rydell, C.P., Sanders, M., and Chiesa., J. (1998). Investing in Our Children: What We Know and Don’t Know about the Costs and Benefits of Early Childhood Interventions. Santa Monica, CA: RAND.
Karoly, L.A., Kilburn, M.R., Bigelow, J.H., Caulkins, J.P., and Cannon, J.S. (2001). Assessing Costs and Benefits of Early Childhood Intervention Programs: Overview and Application to the Starting Early Starting Smart Program. Seattle, WA: Casey Family Programs and Santa Monica, CA: RAND.
Karoly, L.A., Kilburn, M.R., and Cannon, J.S. (2005). Early Childhood Interventions: Proven Results, Future Promise. Santa Monica, CA: RAND.
Kemm, J. (2006). The limitations of “evidence-based” public health. Journal of Evaluation in Clinical Practice, 12(3), 319-324.
Lessard, C., Contandriopoulos, A.P., and Beaulieu, M.D. (2010). The role (or not) of economic evaluation at the micro level: Can Bourdieu’s theory provide a way forward for clinical decision-making? Social Science & Medicine, 70(12), 1948-1956.
Levin, H.M., and McEwan, P.J. (2001). Cost-Effectiveness Analysis: Methods and Applications (Vol. 4). Thousand Oaks, CA: Sage.
Levin, H.M., Glass, G.V., and Meister, G.R. (1987). Cost-effectiveness of computer-assisted instruction. Evaluation Review, 11(1), 50-72.
Liebman, J., and Sellman, A. (2013). Social Impact Bonds: A Guide for State and Local Governments. Cambridge, MA: Harvard Kennedy School Social Impact Bond Technical Assistance Lab.
Lorenc, T., Tyner, E.F., Petticrew, M., Duffy, S., Martineau, F.P., Phillips, G., and Lock, K. (2014). Cultures of evidence across policy sectors: Systematic review of qualitative evidence. European Journal of Public Health, 24(6), 1041-1047.
McGill, E., Egan, M., Petticrew, M., Mountford, L., Milton, S., Whitehead, M., and Lock, K. (2015). Trading quality for relevance: Non-health decision-makers’ use of evidence on the social determinants of health. BMJ Open, 5(4), e007053.
Merlo, G., Page, K., Ratcliffe, J., Halton, K., and Graves, N. (2015). Bridging the gap: Exploring the barriers to using economic evidence in healthcare decision-making and strategies for improving uptake. Applied Health Economics and Health Policy, 13(3), 303-309.
Miller, T.R. (2013). Nurse-Family Partnership Home Visitation: Costs, Outcomes, and Return on Investment. Beltsville, MD: Pacific Institute for Research and Evaluation.
Miller, T.R. (2015). Projected outcomes of nurse-family partnership home visitation during 1996-2013, USA. Prevention Science, 16(6), 765-777.
National Research Council. (2012). Using Science as Evidence in Public Policy. Committee on the Use of Social Science Knowledge in Public Policy. K. Prewitt, T.A. Schwandt, and M.L. Straf (Eds.). Division of Behavioral and Social Sciences and Education. Washington, DC: The National Academies Press.
Neuhoff, A., Axworthy, S., Glazer, S., and Berfond, D. (2015). The What Works Marketplace: Helping Leaders Use Evidence to Make Smarter Choices. Boston, MA: The Bridgespan Group.
Neumann, P.J. (2009). Costing and perspective in published cost-effectiveness analysis. Medical Care, 47(Suppl. 1), S28-S32.
Nicholls, A., and Tomkinson, E. (2013). The Peterborough Pilot Social Impact Bond. Oxford, UK: Oxford University, Saïd Business School.
Nonprofit Finance Fund. (2015). Pay for Success U.S. Activity. Available: http://payforsuccess.org/pay-success-deals-united-states [March 2016].
Nores, M., Belfield, C.R., Barnett, W.S., and Schweinhart, L. (2005). Updating the economic impacts of the High/Scope Perry Preschool Program. Educational Evaluation and Policy Analysis, 27(3), 245-261.
Nurse-Family Partnership. (2015). Nurse-Family Partnership: Program History. Available: http://www.nursefamilypartnership.org/about/program-history [March 2016].
Nutbeam, D., and Boxall, A.-M. (2008). What influences the transfer of research into health policy and practice? Observations from England and Australia. Public Health, 122(8), 747-753.
Obama, B. (2013). Remarks by the President in the State of the Union Address. Presented at the United States Capitol, February, Washington, DC.
Office of Management and Budget. (1992). Circular No. A-94 Revised: Guidelines and Discount Rates for Benefit-Cost Analysis of Federal Programs. Available: https://www.whitehouse.gov/omb/circulars_a094#1 [March 2016].
Office of Management and Budget. (2011). OMB Circular A-133 Compliance Supplement 2011. Available: https://www.whitehouse.gov/omb/circulars/a133_compliance_supplement_2011 [March 2016].
Olds, D.L. (1996). Reducing Risks for Childhood-Onset Conduct Disorder with Prenatal and Early Childhood Home Visitation. Paper presented at the APHA Pre-Conference Workshop, Prevention Science and Families, Mental Health Research and Public Policy Implications, November, New York.
Olds, D.L., Henderson, C.R., Tatelbaum, R., and Chamberlin, R. (1986a). Improving the delivery of prenatal care and outcomes of pregnancy: A randomized trial of nurse home visitation. Pediatrics, 77(1), 16-28.
Olds, D.L., Henderson, C.R., Chamberlin, R., and Tatelbaum, R. (1986b). Preventing child abuse and neglect: A randomized trial of nurse home visitation. Pediatrics, 78(1), 65-78.
Olds, D.L., Henderson, C.R., Tatelbaum, R., and Chamberlin, R. (1988). Improving the life-course development of socially disadvantaged mothers: A randomized trial of nurse home visitation. American Journal of Public Health, 78(11), 1436-1445.
Olds, D.L., Henderson, C.R., Phelps, C., Kitzman, H., and Hanks, C. (1993). Effect of prenatal and infancy nurse home visitation on government spending. Medical Care, 31(2), 155-174.
Olds, D.L., Henderson, C.R., and Kitzman, H. (1994). Does prenatal and infancy nurse home visitation have enduring effects on qualities of parental caregiving and child health at 25 to 50 months of life? Pediatrics, 93(1), 89-98.
Olds, D.L., Eckenrode, J., Henderson, C.R., Kitzman, H., Powers, J., Cole, R., Sidora, K., Morris, P., Pettitt, L.M., and Luckey, D. (1997a). Long-term effects of home visitation on maternal life course and child abuse and neglect: Fifteen-year follow-up of a randomized trial. Journal of the American Medical Association, 278(8), 637-643.
Olds, D.L., Kitzman, H., Cole, R., and Robinson, J. (1997b). Theoretical foundations of a program of home visitation for pregnant women and parents of young children. Journal of Community Psychology, 25(1), 9-25.
Olds, D.L., Robinson, J., O’Brien, R., Luckey, D.W., Pettitt, L.M., Henderson, C.R., Ng, R.K., Sheff, K.L., Korfmacher, J., Hiatt, S., and Talmi, A. (2002). Home visiting by paraprofessionals and by nurses: A randomized, controlled trial. Pediatrics, 110(3), 486-496.
Olds, D.L., Robinson, J., Pettitt, L., Luckey, D.W., Holmberg, J., Ng, R.K., Isacks, K., Sheff, K., and Henderson, C.R. (2004a). Effects of home visits by paraprofessionals and by nurses: Age 4 follow-up results of a randomized trial. Pediatrics, 114(6), 1560-1568.
Olds, D.L., Kitzman, H., Cole, R., Robinson, J., Sidora, K., Luckey, D.W., Henderson, C.R., Hanks, C., Bondy, J., and Holmberg, J. (2004b). Effects of nurse home-visiting on maternal life course and child development: Age 6 follow-up results of a randomized trial. Pediatrics, 114(6), 1550-1559.
Oliver, K., Innvar, S., Lorenc, T., Woodman, J., and Thomas, J. (2014). A systematic review of barriers to and facilitators of the use of evidence by policymakers. BMC Health Services Research, 14, 2.
Olson, J., and Phillips, A. (2013). Rikers Island: The first social impact bond in the United States. Community Development Investment Review, 97-101. Available: http://www.frbsf.org/community-development/files/rikers-island-first-social-impact-bond-unitedstates.pdf [December 2015].
O’Reilly III, C.A. (1982). Variations in decision makers’ use of information sources: The impact of quality and accessibility of information. Academy of Management Journal, 25(4), 756-771.
Orton, L., Lloyd-Williams, F., Taylor-Robinson, D., O’Flaherty, M., and Capewell, S. (2011). The use of research evidence in public health decision making processes: Systematic review. PLoS One, 6(7), 1-10.
Pepper, J. (2014). Business case for early childhood investments. Chamber Executive, Fall 2014. Available: http://www.acce.org/magazine-archive/fall-2014/the-business-case-forearly-childhood-investments [March 2016].
Pew Charitable Trusts and MacArthur Foundation. (2013). States’ Use of Cost-Benefit Analysis: Improving Results for Taxpayers. Philadelphia, PA: Pew Charitable Trusts.
Pew Charitable Trusts and MacArthur Foundation. (2015). Legislating Evidence-Based Policymaking: A Look at State Laws that Support Data-Driven Decision-Making. Available: http://www.pewtrusts.org/~/media/assets/2015/03/legislationresultsfirstbriefmarch2015.pdf [March 2016].
Pliskin, J.S., Shepard, D.S., and Weinstein, M.C. (1980). Utility functions for life years and health status. Operations Research, 28(1), 206-224.
Rolnick, A., and Grunewald, R. (2003). Early childhood development: Economic development with a high public return. The Region, 17(4), 6-12.
Schweinhart, L.J., Barnes, H.V., and Weikart, D.P. (1993). Significant Benefits: The High/Scope Perry Preschool Study through Age 27. Monographs of the High/Scope Educational Research Foundation, No. 10. Ypsilanti, MI: High/Scope Press.
Schweinhart, L.J., Montie, J., Xiang, Z., Barnett, W.S., Belfield, C.R., and Nores, M. (2005). Lifetime Effects: The High/Scope Perry Preschool Study through Age 40. Monographs of the High/Scope Educational Research Foundation, No. 14. Ypsilanti, MI: High/Scope Press.
Stanger-Hall, K.F., and Hall, D.W. (2011). Abstinence-only education and teen pregnancy rates: Why we need comprehensive sex education in the U.S. PLoS One, 6(10), e24658.
van Dongen, J.M., Tompa, E., Clune, L., Sarnocinska-Hart, A., Bongers, P.M., van Tulder, M.W., van der Beek, A.J., and van Wier, M.F. (2013). Bridging the gap between the economic evaluation literature and daily practice in occupational health: A qualitative study among decision-makers in the healthcare sector. Implementation Science, 8(57), 1-12.
Walters, C. (2014). Inputs in the Production of Early Childhood Human Capital: Evidence from Head Start. Cambridge, MA: National Bureau of Economic Research.
Washington State Institute for Public Policy. (2015). Nurse Family Partnership for Low-Income Families: Benefit-Cost Estimates Updated July 2015. Available: http://www.wsipp.wa.gov/BenefitCost/Program/35 [March 2016].
Weinstein, M.C., Siegel, J.E., Gold, M.R., Kamlet, M.S., and Russell, L.B. (1996). Recommendations of the panel on cost-effectiveness in health and medicine. Journal of the American Medical Association, 276(15), 1253-1258.
Yates, B.T. (1996). Analyzing Costs, Procedures, Processes, and Outcomes in Human Services: An Introduction (Vol. 42). Thousand Oaks, CA: Sage.
Yates, B.T. (2009). Cost-inclusive evaluation: A banquet of approaches for including costs, benefits, and cost-effectiveness and cost–benefit analyses in your next evaluation. Evaluation and Program Planning, 32(1), 52-54.
Zerbe, R.O., and Bellas, A.S. (2006). A Primer for Benefit-Cost Analysis. Northampton, MA: Edward Elgar.