Benefit-cost analyses hold great promise for influencing policies related to children, youth, and families. By comparing the costs of preventive interventions with the long-term benefits of those interventions, benefit-cost analysis provides a tool for determining what kinds of investments have the greatest potential to reduce the physical, mental, and behavioral health problems of young people (NRC and IOM, 2009). More generally, the growth of benefit-cost analysis as a field of research and practice represents an exciting and promising trend in the development and implementation of public policies.
The application of benefit-cost analyses to the field of prevention has been expanding rapidly. Preventive interventions occur prior to the onset of a disorder and are intended to prevent or reduce risk for the disorder (NRC and IOM, 2009). Benefits from investing in certain early childhood programs for economically disadvantaged children are among the best documented (Karoly, 2012), but benefit-cost analysis has been applied in many other areas as well (Lee et al., 2012; Pew-MacArthur Results First Initiative, 2012). For example, a 2006 benefit-cost analysis performed by the Washington State Institute for Public Policy demonstrated that by investing in a portfolio of evidence-based crime prevention programs, the Washington State legislature could reduce crime rates, avoid the need to construct a new prison, and save taxpayers $2 billion (Aos et al., 2006). In 2007, the legislature used these findings as the basis for expanding investments in evidence-based crime prevention, which resulted in a lowering of the state's long-term prison forecast such that a new 2,000-bed prison was no longer needed (Drake, 2010).
1 The planning committee’s role was limited to planning the workshop, and the workshop summary has been prepared by the workshop rapporteurs as a factual summary of what occurred at the workshop. Statements, recommendations, and opinions expressed are those of individual presenters and participants, and are not necessarily endorsed or verified by the Institute of Medicine or the National Research Council, and they should not be construed as reflecting any group consensus.
However, the utility of benefit-cost analyses has been limited by a lack of uniformity in the methods and assumptions underlying these studies. Researchers use a variety of techniques to calculate the costs of a program and the benefits it produces. They apply different discount rates to assign value to future costs and benefits. They report their results using different formats, with different levels of cost and benefit disaggregation and different levels of detail about underlying assumptions and uncertainties. For years, those who perform and those who use benefit-cost analyses have argued that the development and use of theoretical, technical, and reporting standards for benefit-cost analyses would enhance the validity of results, increase comparability across studies, and accelerate the progress of the field.
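The sensitivity of results to discount-rate choice, one of the sources of non-comparability described above, can be illustrated with a small sketch. All figures here are hypothetical, chosen only to show the mechanics, not drawn from any actual study:

```python
# Present value of a hypothetical stream of annual program benefits,
# computed under two different discount rates. The benefit amounts and
# rates below are illustrative assumptions only.

def present_value(benefits, rate):
    """Discount a list of annual benefits (years 1, 2, ...) to present value."""
    return sum(b / (1 + rate) ** t for t, b in enumerate(benefits, start=1))

# Suppose an intervention yields $1,000 in benefits per participant
# each year for 20 years.
benefits = [1000] * 20

pv_low = present_value(benefits, 0.03)   # 3% discount rate
pv_high = present_value(benefits, 0.07)  # 7% discount rate

print(round(pv_low))   # -> 14877
print(round(pv_high))  # -> 10594
```

With an identical benefit stream, moving from a 3 percent to a 7 percent rate cuts the estimated present value by roughly 30 percent, which is why two studies of the same program can report very different benefit-cost ratios simply because they discounted differently.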
To explore this issue, the Board on Children, Youth, and Families of the Institute of Medicine (IOM) and the National Research Council (NRC) held a workshop on November 18–19, 2013, in Washington, DC, titled “Standards for Benefit-Cost Analysis of Preventive Interventions for Children, Youth, and Families.” The workshop constituted the first phase of a possible two-part effort directed toward guiding future benefit-cost studies and enhancing the relevance of benefit-cost analysis to governments and other organizations wanting to make sound prevention decisions. The workshop brought together leading practitioners in the field, researchers who study the methodological and analytic dimensions of benefit-cost analysis, and representatives of organizations that use the results of benefit-cost analyses to shape and implement public policies. Box 1-1 provides a list of questions that guided the development of the workshop’s agenda. A webcast of the workshop is available at http://www.iom.edu/Activities/Children/AnalysisofPreventiveInterventions/2013-NOV-18.aspx. This report summarizes the presentations and discussions at the workshop for researchers, practitioners, and policy makers and as input for a possible follow-on consensus study.
Statement of Task
An ad hoc committee will plan and conduct a 2-day public workshop to highlight issues related to reaching consensus on standards for benefit-cost analysis of preventive interventions for children, youth, and families. An individually authored workshop summary will be prepared based on the information gathered and the discussions held during the workshop sessions. The workshop will feature invited presentations and discussions that address the following questions:
• What level of research rigor should be met before results from an evaluation are used to estimate or predict outcomes in a benefit-cost analysis?
• What are best practices and methodologies for costing prevention interventions, including the assessment of full economic/opportunity costs?
• What prevention outcomes currently lend themselves to monetization? Are shadow prices available for those outcomes that are not typically monetized?
• What processes and methodologies should be used when theoretically and empirically linking prevention outcomes to avoided costs or increased revenues?
• Over what time period should the economic benefits of prevention interventions be projected and what discount rates should be used?
• What outcome domains are appropriate for a benefit-cost analysis of early childhood programs to consider?
• What are the best methods for handling risk and uncertainty in estimates (e.g., what are the strengths and limitations of Monte Carlo simulations)?
• What information needs to be included in benefit-cost analysis summaries and reports?
• What issues arise when the results of benefit-cost analyses are applied to prevention efforts at scale? Do benefit-cost results from efficacy trials need to be adjusted when prevention is taken to scale?
• How should we account for heterogeneity in program effects in benefit-cost analyses?
• Can we define standards all studies should meet before they can be used to inform policy and budget decisions?
• How could research be used to create policy models that can help inform policy and budget decisions, analogous to the benefit-cost model developed by the Washington State Institute for Public Policy?
• What is the role of meta-analysis in the application of benefit-cost analysis to prevention programs?
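One question above asks about the strengths and limitations of Monte Carlo simulation for handling uncertainty. A minimal sketch of the idea is shown below; the input distributions are entirely hypothetical assumptions, standing in for what a real analysis would derive from evaluation data:

```python
# Monte Carlo sketch of uncertainty in a benefit-cost ratio.
# The per-participant cost and benefit distributions are illustrative
# assumptions only, not estimates from any actual program.
import random

random.seed(0)  # fixed seed so the sketch is reproducible

def simulate_ratio():
    cost = random.gauss(5000, 500)       # hypothetical per-participant cost
    benefit = random.gauss(12000, 4000)  # hypothetical per-participant benefit
    return benefit / cost

draws = sorted(simulate_ratio() for _ in range(10_000))

median = draws[len(draws) // 2]
lo = draws[int(0.025 * len(draws))]
hi = draws[int(0.975 * len(draws))]
share_below_one = sum(d < 1 for d in draws) / len(draws)

print(f"median ratio {median:.2f}, 95% interval [{lo:.2f}, {hi:.2f}]")
print(f"probability benefits fall short of costs: {share_below_one:.1%}")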
In the final session of the workshop, the moderators of the preceding sessions summarized the major messages that emerged from the presentations and discussions that occurred during those sessions. Those messages are compiled in this section as an introduction to the themes of the workshop. These observations should not be seen as consensus recommendations of the workshop. Rather, they are points made by individual speakers that structured their presentations and the subsequent discussions.
The session moderators who participated in the final panel discussion were Jeanne Brooks-Gunn, the Virginia and Leonard Marx Professor of Child Development at Teachers College and the College of Physicians and Surgeons at Columbia University, who also chaired the planning committee for the workshop; Janet Currie, the Henry Putnam Professor of Economics and Public Affairs at Princeton University; Jorge Delva, professor of social work and associate dean for research in the School of Social Work at the University of Michigan; Roseanne Flores, associate professor in the Department of Psychology at Hunter College of the City University of New York; J. David Hawkins, endowed professor of prevention and founding director of the Social Development Research Group at the University of Washington School of Social Work; Melanie Lutenbacher, associate professor of nursing and medicine at Vanderbilt University; and Gary VanLandingham, director of the Results First Initiative, a joint project of the Pew Charitable Trusts and the John D. and Catherine T. MacArthur Foundation. Additional comments were made in the final session by Max Crowley, a research fellow at Duke University; Lynn Karoly, senior economist at the RAND Corporation; and Rebecca Maynard, University Trustee Chair Professor of Education and Social Policy at the University of Pennsylvania.
Standards in the Field
There has been significant progress in the field of benefit-cost analysis in recent decades. As Hawkins noted, even in 1980 very few such analyses had been done, and their effect on policy was negligible. But an increasing range of topics and program areas has been subjected to benefit-cost analysis, including preschool education, substance use prevention, foster care, crime prevention, and health care (Barnett, 1985; Miller and Hendrie, 2008; Zerbe et al., 2009; Jacobsen, 2013).
The field of benefit-cost analysis could benefit from the development of standards in many different areas, said both Brooks-Gunn and Currie, including assessing the costs of interventions, assigning values to outcomes, using randomized controlled trials and other experimental designs, applying discount rates, incorporating uncertainty into results, reconciling approaches across clearinghouses, and translating research results into a format usable by policy makers.
As Flores pointed out, a particular need is for better ways of monetizing the outcomes of interventions. For example, social and emotional development and other noncognitive skills can be important outcomes of an intervention, but what do those things mean and what is their economic value? As another example, standards on valuing the use of volunteer time could greatly facilitate comparisons across studies.
Reporting only statistically significant findings can leave out benefits and may have an effect on benefit-cost ratios, Hawkins observed. In addition, serendipitous secondary findings can lead to new research avenues that advance the field. One way to help standardize the reporting of outcomes would be to provide baseline information along with the uncertainties associated with that information.
Several of the panelists made the point that the costs and benefits of interventions vary across groups, locations, and times, which complicates both the analysis and replication of interventions. However, Delva added, new electronic technologies can capture data more quickly and accurately than in the past and may be a way to overcome these obstacles.
Randomized controlled experiments can provide valuable information about preventive interventions, several panelists said, but such trials are not always feasible, and standards for randomized controlled trials would be very helpful. VanLandingham added that a thorough description of the control groups in randomized controlled trials can help people understand whether they will be able to replicate a program in their setting and what the key elements of an intervention are.
Research designs other than randomized controlled trials can be appropriate and useful, Currie said. Different designs will have different standards, but all research studies can be well or poorly designed. Criteria could be established that different kinds of designs need to meet, Brooks-Gunn suggested. Many of these issues are being considered in the area of public health and health care, and cross-fertilization among the fields could yield progress. Karoly pointed to the value of administrative data both in short-term evaluations and in learning about long-term impacts.
Incorporating an ethnographic component into benefit-cost analyses could increase understanding of what a program is doing and how it differs from other programs, Delva pointed out. The application of standards to qualitative research also could help increase comparability across studies, added Flores.
Clearinghouses to Disseminate the Results of Benefit-Cost Analyses
Clearinghouses can play a critical role in collecting and disseminating information, but, as VanLandingham observed, greater uniformity in the formats used to gather, analyze, and report data could make results more useable. Clearinghouses also could serve a useful function by providing all the information that may be of value to policy makers, not just the positive results.
Though the clearinghouses were set up for purposes other than standardization, they could be adapted to a common framework, said Maynard—a step currently being considered by a federal interagency group. In addition, existing clearinghouses can be used to examine and deliberate over standards of rigor. In particular, experimental designs other than randomized controlled trials could be considered so policy makers can take advantage of the full range of information that is available. At the same time, clearinghouses could be given the latitude to address other questions and perspectives and to be flexible in their use of data.
The Development of Standards
Standardization is a process, Currie emphasized. Consensus may exist in some areas, while in other areas consensus will need to be developed for standardization. Transparency and consistency in the development of standards can help obtain their acceptance.
Even where it is not possible to agree on standards, it may be possible to agree on principles that can guide decisions and that may, with time, lead to standards, said Lutenbacher. In addition, establishing standards for benefit-cost analyses provides an opportunity to educate people about the field and about how best to use the results of this research.
Eagerness for Results
With tight budgets and demands for accountability, policy makers and others are eager for information about which policies work, which policies do not work, and which interventions are cost beneficial. Policy makers also can enable benefit-cost analyses when they are developing and authorizing programs, noted Flores, especially if they are engaged in communication with researchers.
The continued development of benefit-cost analysis provides an opportunity to think differently about how government operates, Hawkins observed. Most government social programs have been organized around responding to a problem instead of preventing that problem. But the potential to study the effectiveness of preventive programs creates hope about avoiding health and behavior problems before they occur.
As VanLandingham said, policy makers will always have to make compromises in funding and setting up these programs, so they need to know which aspects of a program are critical. Programs have different components, and these components can be treated separately to build the most efficient and effective program possible, just as resources can be allocated to portfolios of programs in the most efficient manner. As Crowley added, costs then can be linked to program components to understand what resources need to be invested in what strategies and how those strategies lead to specific outcomes (Crowley et al., 2012).
Finally, whether a program is implemented with fidelity can have a major influence on whether it produces benefits and on the extent of those benefits, Hawkins reminded the group. Support for high-quality implementation may therefore be necessary to replicate a program’s success. Otherwise, good programs may fail, leading policy makers to shun programs that have the potential to be successful.
The next chapter of this workshop summary describes several benefit-cost analyses of programs that have had significant effects on public policy. Chapter 3 turns to the costing of interventions and the economic assessment of a program’s effects. Chapter 4 looks at several technical issues that arise in benefit-cost analyses, including the validity of research designs other than randomized controlled trials, the treatment of uncertainty, and discount rates. Chapter 5 considers benefit-cost analyses from the perspectives of several users of the results of those analyses. Finally, Appendix A includes a glossary of terms used in this summary, and Appendix B is the agenda from the workshop.