The conclusions and recommendations in this report reflect the unrealized potential of economic evidence to increase the returns to society from investments in children, youth, and families. Suggestions for improving current practices in the production and use of economic evidence are offered throughout the preceding chapters. After providing a brief overview of those chapters, this chapter consolidates these suggestions into a roadmap for achieving this goal. The roadmap encompasses fostering multi-stakeholder partnerships to address cross-cutting issues related to the use of economic evidence, understanding what consumers and producers of economic evidence want each other to know, building a coordinated infrastructure to support the development and use of economic evidence, and providing stronger incentives for the production and use of better evidence, all grounded in the recognition that attending to the needs of end-users is a critical prerequisite for success.
The committee’s charge was to study how to improve the use of economic evidence to inform policy and funding decisions concerning investments in children, youth, and families. Chapters 3 and 4 detail two vital ways in which this goal can be advanced: by developing higher-quality economic evidence (Chapter 3) and by devoting significant attention to the context in which investment decisions are made (Chapter 4), captured by the report’s guiding principles: (1) quality counts and (2) context matters. If researchers, funders, and policy and administrative officials dealing with
investments in children, youth, and families make concerted efforts to apply both of these principles, the committee believes it will have carried out its charge successfully. Even allowing for significant resource constraints, too many studies lack sufficient quality, and too many efforts to integrate economic evidence into decision making neglect the context in which the evidence will be used.
To provide a foundation for the report’s key messages, Chapter 2 offers an overview of the common methods used in economic evaluation; describes categories of stakeholders in the use of evidence resulting from such evaluation; provides selected examples of current uses of economic evidence related to children, youth, and families; highlights current challenges in the use of such evidence; and describes the role of this evidence within the broader evidence ecosystem. Chapter 2 also provides a list of many of the current issues related to the quality, use, and utility of economic evidence (Box 2-6). Many of these issues are addressed throughout this report, particularly within the conclusions and recommendations of Chapters 3 and 4.
Chapter 3 addresses issues related to the report’s first principle, quality counts. The committee offers guidelines and best practices for the design, conduct, and reporting of high-quality economic evaluations so that the evidence produced is useful to its intended end-user(s). The committee recommends that, to the extent possible, stakeholders implement these guidelines and best practices to increase the overall use and utility of economic evidence. At the same time, the committee considers it vital that decision makers are aware of the limitations of the evidence as well (see Box 5-1).
In Chapter 4, the committee maintains that even if the highest-quality economic evidence is made available, the use and utility of this evidence depend largely on the context in which end-users consider it—that is, context matters. This chapter reviews an array of context-related issues that affect the use of evidence in decision-making processes, including the relevance of the evidence, the capacity to acquire and make use of the evidence, and such factors as politics and values. Many of the recommendations in this chapter are targeted to public and private funders (see Box 5-2).
Fostering Multi-Stakeholder Partnerships to Address Cross-Cutting Issues Related to the Use of Economic Evidence
The conclusions offered in Chapters 3 and 4 make clear that many of the challenges to the quality, use, and utility of economic evidence affect multiple stakeholder groups. For example, poor methodological quality (as discussed in Chapter 3) affects all stakeholders in economic evidence: it undermines not only the particular project at hand but also the demand for future economic evaluations. The best practices outlined in Chapter 3 are relevant to both the producers (e.g., researchers) who perform and report on economic evaluations and the consumers (e.g., funders, policy makers) who interpret and apply the resulting evidence in their investment decisions. Additional examples of cross-cutting issues are given throughout Chapters 2 through 4. Arguably, the issues that have the highest shared relevance among consumers and producers of economic evidence, as well as intermediaries, are those related to inadequate incentives to use high-quality and high-utility economic evidence; development of the necessary capacity and infrastructure for access to, analysis of, and dissemination of the evidence; and reporting of and access to evidence that is clear, credible, and generalizable. These failures often disconnect the production of economic evidence from its real-world application.

RECOMMENDATION 1: In support of high-quality economic evaluations, producers1 of economic evidence should follow the best practices delineated in the checklist below for conducting cost analyses (CAs), cost-effectiveness analyses (CEAs), benefit-cost analyses (BCAs), and related methods. Producers should follow the core practices listed and, where feasible and applicable, the advancing practices as well. Consumers of economic evidence should use these recommended best practices to assess the quality of the economic evidence available to inform the investment decisions they are seeking to make.

RECOMMENDATION 2: In support of high-quality and useful economic evaluations of interventions for children, youth, and families, producers of economic evidence should follow the best practices delineated in the checklist below for reporting the results of cost analyses, cost-effectiveness analyses, benefit-cost analyses, and related methods.
Such cross-cutting issues suggest a shared responsibility among stakeholders for improving the status quo. Based on its research and on information gathered in its open sessions, the committee identified a significant need for the formation and support of multi-stakeholder collaborative efforts to help address these issues (August et al., 2006; Brown et al., 2010; Crowley et al., 2014). At a minimum, such efforts could encourage researchers and those engaged in decisions about policies and investments related to children, youth, and families to engage in open dialogue about their unique perspectives, interests, and incentives. Such dialogue could serve as the first step in bridging the gaps among stakeholders. These efforts might build on existing effective multi-stakeholder organizations that seek to support the production of high-quality economic evidence and its use for policy-making and intervention decisions, including the Jameel Poverty Action Lab (supported by the Hewlett, MacArthur, and Nike Foundations), the Prevention Economics Planning and Research (PEPR) Network (supported by the National Institutes of Health), and Human Capital and Economic Opportunity (supported by the Institute for New Economic Thinking). (See Chapter 4 for additional discussions of partnerships.)

RECOMMENDATION 3: If aiming to inform decisions on interventions for children, youth, and families, public and private funders of applied researcha should assess the potential relevance of proposed research projects to end-users throughout the planning of research portfolios.

RECOMMENDATION 4: To achieve anticipated economic benefits and optimize the likelihood of deriving the anticipated outcomes from evidence-based interventions, public and private fundersb should ensure that resources are available to support effective implementation of those interventions.

RECOMMENDATION 5: Providers of postsecondary and graduate education, on-the-job training, and fellowship programs designed to develop the skills of those making or seeking to inform decisions related to children, youth, and families should incorporate training in the use of evidence, including economic evidence, in decision making.

RECOMMENDATION 6: Government agenciesc should report the extent to which their allocation of funds—both within and across programs—is supported by evidence, including economic evidence.

a “Funders” here might include staff in public agencies (e.g., the National Institutes of Health, the Institute of Education Sciences, the Centers for Disease Control and Prevention), as well as staff in private philanthropic or other organizations.

b “Funders” here might include elected officials at the local, state, or federal level; leadership of public grant-making agencies or regulatory bodies; and private funders of interventions for children, youth, and families.

c The key actors in “government agencies” here would include agency leadership, budget offices, and others with management and budget functions in executive and legislative branches at the federal, state, and local levels.
Lasting, effective change in the quality, use, and utility of economic evidence will require support for the efforts of such intermediaries in collaborative partnerships between the producers and consumers of economic evidence (Pew-MacArthur Results First Initiative, 2012, 2013; The Pew Charitable Trusts, 2012). As emphasized in observations made at the committee’s open sessions, here lies some of the greatest potential for the improved use of economic evidence (Aos et al., 2004; Crowley, 2013; National Research Council and Institute of Medicine, 2009).
A number of ongoing efforts have achieved advances in bridging the gap between consumers and producers of evidence through the establishment of multi-stakeholder collaborations. For instance, the Pew-MacArthur Results First Initiative is working to build infrastructure that provides tailored estimates of the fiscal impact of investing in different interventions for children, youth, and families. The Arnold Foundation’s Evidence-Based Policy and Innovation initiative seeks to advance efforts—including support for evidence-based decision making and deployment of rigorous evaluations—that both build the evidence base for social interventions and make use of that evidence. These efforts (as well as those discussed as examples in Chapter 4) can serve as a guide for future collaborative endeavors aimed at unifying the often siloed interests and efforts of producers and consumers of economic evidence.
CONCLUSION: Long-term, multi-stakeholder collaborations that include producers, consumers, and intermediaries can provide vital support for the improved use of economic evidence to inform decisions on investments in interventions for children, youth, and families.
Understanding What Consumers and Producers of Economic Evidence Want Each Other to Know
Recognizing the inherent difficulties faced by both consumers and producers of economic evidence and their often limited time for developing the types of relationships that a collaborative can help foster, the committee generated the listing in Box 5-3 of what consumers and producers of economic evidence want each other to know, regardless of the setting.
Five Things Consumers of Economic Evidence Want Producers to Know
- Many factors other than economic evidence (including political pressures and capacity) influence the decision-making process.
- The time frames for research outcomes and investment decisions can be very different and affect the value of the evidence.
- Seldom do all the benefits realized from investment decisions accrue to those who make the decisions or their community.
- Existing evidence is not always aligned with the evidence needed by the decision maker.
- Real-world constraints that affect the implementation fidelity and scale-up of an intervention need to be identified before further investments are made.
Five Things Producers of Economic Evidence Want Consumers to Know
- Better investment decisions can be made with a foundational understanding of precisely what economic evidence is, the ways it can be used, its limitations, and considerations of causality and external validity.
- Either directly or through intermediaries, consumers need to be able to distinguish between higher- and lower-quality economic evaluations.
- Clearinghouses reveal only which interventions have attained success, usually relative to some alternative and according to certain specified criteria; accordingly, they cannot and generally should not be considered adequate to indicate which programs are best suited to a particular organization, context, or goal.
- To support sound investments in children and facilitate high-quality program implementation, investment is required in the infrastructure needed to collect, analyze, and disseminate high-quality economic evidence; crucial here are data tracking children’s well-being over time so that future, often not-yet-specified, evaluations can be conducted.
- Investing in education, training, technical assistance, and capacity building often leads to successful development, analysis, and implementation of interventions.
Building a Coordinated Infrastructure to Support the Development and Use of Economic Evidence
“Users of the data need to become more statistically literate. This is not easy data to use. It is not easy. You have to think differently. It is achievable.”
—Fritz Scheuren, senior fellow and vice president, NORC at the University of Chicago, at the committee’s open session on June 1, 2015.
The committee’s review of the literature and consultations with experts in the use of economic evidence revealed the importance of developing new infrastructure for the production and use of such evidence. The establishment of a coordinated infrastructure, directed by the needs and interests of diverse stakeholder groups, could significantly increase the quality, utility, and use of economic evidence. Such an infrastructure to support the production and use of high-quality economic evidence would include (1) improved access to administrative data, (2) a database of estimates of outcome values, (3) improved efforts to track children and families over time, (4) a data archive to support the development of shadow prices, (5) training of future producers and consumers of economic evidence, and (6) tools for tracking nonbudgetary resource consumption.
As discussed in Chapter 4, administrative data often provide the most efficient and least costly way of developing high-quality estimates that are relevant to consumers. Integrated, longitudinal administrative data systems could be used to generate proxy estimates of the economic value of certain outcomes (i.e., shadow prices; see Chapter 3) and could be leveraged for assessment of the fiscal and economic impacts of interventions (English et al., 2000; Garnier and Poertner, 2000). Multi-stakeholder groups could support access to these data sets by developing trusting relationships with those who own and manage the data. Ideally, these groups would invest in core facilities available to researchers for linking research subjects and trial participants to administrative records (Drake and Jonson-Reid, 1999).
One current effort aimed at supporting access to administrative data is PEPR’s Administrative Data for Accelerating Prevention Trials (ADAPT) initiative (Crowley and Jones, 2015). This effort is deploying a flexible data infrastructure and robust security architecture to increase researchers’ access to key federal and state administrative data systems. The goal is to reduce the research and time costs of gaining access to these systems so as to increase their use in trials of preventive interventions. If educational, health, and other outcomes for children reflected in administrative data over time can be accessed, it will become much easier and more cost-effective to develop experiments to test interventions aimed at improving those outcomes. Such access includes the ability to link trial participants through direct or indirect matching with the administrative records. In this manner, the ADAPT initiative is an example of how estimation of interventions’ fiscal and economic impacts can be greatly facilitated.
Key to supporting the production of high-quality economic evidence—particularly in cost savings or benefit-cost analyses—is the availability of monetary conversion factors or shadow prices for valuing different outcomes (see Chapter 3). Examples of such estimates include the public and private value of graduating from high school, the cost of an aggravated assault, and the average cost of an emergency department visit (Boardman et al., 1997; Cohen and Piquero, 2008; Karoly, 2008). Historically, these estimates have been dispersed throughout the literature, crossing disciplinary and methodological boundaries. The field needs a central resource for finding these estimates, developing reasonable ranges of estimates for such costs, and comparing estimates from different studies. One example of such a resource is the RAND Valuing Outcomes of Social Programs (VOSP) database, a centralized web-based repository for use by researchers and policy makers (Karoly, 2015). This database includes estimates of economic or fiscal value in the areas of child welfare, crime, education, health, labor market, means-tested benefits, cognitive and noncognitive skills, and substance abuse. Such efforts could be augmented with increased stakeholder participation.
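As a purely illustrative sketch of how such monetary conversion factors are applied, the arithmetic can be expressed as follows. Every shadow price, outcome count, and cost figure below is a hypothetical placeholder, not an estimate drawn from the VOSP database or from any study.

```python
# Illustrative only: every shadow price, outcome count, and cost figure is a
# hypothetical placeholder, not an estimate from the literature.

# Hypothetical monetary conversion factors (dollars per unit of outcome)
SHADOW_PRICES = {
    "high_school_graduation": 250_000,      # lifetime public + private value
    "emergency_dept_visit_averted": 1_200,
    "aggravated_assault_averted": 90_000,
}

# Hypothetical outcomes attributed to an intervention cohort
outcomes = {
    "high_school_graduation": 12,
    "emergency_dept_visit_averted": 40,
    "aggravated_assault_averted": 3,
}

def monetized_benefits(outcomes, prices):
    """Sum outcome counts weighted by their shadow prices."""
    return sum(prices[key] * count for key, count in outcomes.items())

total = monetized_benefits(outcomes, SHADOW_PRICES)
program_cost = 1_500_000  # hypothetical total intervention cost
print(f"Monetized benefits: ${total:,}")
print(f"Benefit-cost ratio: {total / program_cost:.2f}")
```

The arithmetic itself is trivial; the substantive work lies in choosing defensible shadow prices and ranges, which is precisely what a central repository such as VOSP would make comparable across studies.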
Additional infrastructure efforts are also needed to develop new and more robust economic values for key outcomes of interventions for children, youth, and families (Cohen, 2005; Karoly, 2008; Levin et al., 2006). In particular, despite their growing importance in programming for children, youth, and families, estimates of the economic value of noncognitive or socioemotional skills are limited (Duckworth, 2011; Jones et al., 2015; Moffitt et al., 2011). Longitudinal data sets measuring these skills could be leveraged to develop estimates of their economic value whenever possible. More generally, these data sets could be archived and made available to other researchers—with appropriate data protections—to accelerate the creation of new shadow prices or monetary conversion factors. Such a data repository would be an international resource that could greatly accelerate the production of better estimates of the economic value of investing in children, youth, and families.
Multi-stakeholder groups also could augment the capacity of the field of economic evaluation by training more producers of economic evidence, as well as helping to develop more informed consumers. To this end, these groups could engage in strategic trainings focused on best practices and methodologies for generating high-quality economic evidence. Recommendations offered in Chapter 3 provide one starting point for such curricula. These trainings could occur within both formal academic and professional settings. The Centers for Disease Control and Prevention (CDC), for example, offers the Steven M. Teutsch Prevention Effectiveness Fellowship Program, which provides postdoctoral training in cost-effectiveness analysis. This program has trained a number of researchers who produce new economic estimates. Other government agencies could benefit from offering similar fellowships. The recent National Institutes of Health (NIH) notice clarifying priorities for health economic research (NOT-OD-16-025) explicitly identifies areas of highest priority for research funding, which will necessitate a workforce trained to carry out these priorities. Unfortunately, this notice does not include priority areas for training (National Institutes of Health, 2015a). Professional societies such as the Society for Prevention Research, the American Society of Health Economists, and the Society for Benefit-Cost Analysis periodically offer trainings in economic evaluation of prevention and health promotion programs within workshops and preconferences (Hay, 2010; Kuklinski and Crowley, 2014). Importantly, to ensure the production of high-utility estimates, future trainings will have to ensure that consumer needs discussed in Chapter 4 are well reflected. Further, trainings for consumers, such as legislative education, courses within master of public policy programs, or concise briefings to decision makers, would advance consumer literacy and the use of high-quality economic evidence.
Improvements in the estimates of the costs of interventions for children, youth, and families could be supported and accelerated through new and better tools for conducting cost analyses. One such tool, recently released by the Center for Benefit-Cost Studies in Education, is the Cost Toolkit,1 which provides spreadsheets for collecting cost information. Innovative tools that increase the automation of cost estimates could effect an important change in the field.
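The kind of spreadsheet-based cost accounting such tools automate can be sketched in a few lines. The ingredient names, quantities, and unit prices below are hypothetical and are not drawn from the Cost Toolkit or any actual program.

```python
# Minimal sketch of an ingredients-style cost analysis; all ingredient names,
# quantities, and unit prices are hypothetical.
ingredients = [
    # (ingredient, quantity, unit price in dollars)
    ("teacher time (hours)",        400,  45.0),
    ("coach time (hours)",           80,  60.0),
    ("curriculum materials (sets)",  25, 120.0),
    ("facility use (days)",          30, 150.0),
]

total_cost = sum(qty * price for _, qty, price in ingredients)
participants = 75  # hypothetical cohort size
print(f"Total program cost: ${total_cost:,.0f}")
print(f"Cost per participant: ${total_cost / participants:,.0f}")
```

Listing each resource consumed, whether or not it appears in a budget, is what distinguishes a full cost analysis from a simple accounting of expenditures.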
CONCLUSION: Multi-stakeholder groups can play a larger and more impactful role in building coordinated infrastructure to support the development and use of high-quality economic evidence.
The committee identified an important role for foundations and government funders not simply in sponsoring or even requiring economic evaluation, but also in building up the infrastructure that supports its use at various levels. At the same time, the committee recognizes that such an infrastructure is still in its formative stages, and given both the strengths and limitations of economic evaluations, believes that a competition among ideas from multiple sources would be healthy for its development. Among the many possibilities are to (1) strengthen and support appropriate efforts of professional associations, such as the Association for Public Policy Analysis and Management, the American Evaluation Association, and the
Society for Prevention Research (or an international affiliate), where a sufficient body of researchers and policy analysts in this field would be expected to convene and compare ideas; (2) examine opportunities within curricula (including those in medicine, law, and public policy) at various graduate schools where future users of such information are being taught; (3) attend to progressive data requirements, including structured processes for data sharing2 and longitudinal tracking of individuals to enable costs and benefits to be tracked over time; and (4) provide resources, where appropriate, for better integrating evaluations into budget processes to accommodate the needs of decision makers at various levels of government.3 While the committee recognizes the value of existing foundation and government efforts made to date, it believes that multiple approaches to developing such an infrastructure would further advance the quality and utility of economic evidence.
CONCLUSION: Investments are needed to help build an infrastructure that will support the most effective production and use of high-quality economic evidence.
Providing Stronger Stakeholder Incentives for the Production and Use of Better Evidence
“There has to be an incentive within the way federal funds are dispersed and state funds are dispersed, incentives for the use of evidence, for the uptake of evidence, and for the building of evidence.”
—Jon Baron, president, Coalition for Evidence-Based Policy, at the committee’s open session on March 23, 2015.
Multiple stakeholder groups—including funders, policy makers, program developers, program evaluators, and publishers engaged in science communication—contribute to the production of economic evidence. Each of these groups can either facilitate or impede the production and use of high-quality, high-utility economic evidence.
Funding priorities shape the advance of science, cultivating new fields or weakening areas of inquiry. Funders—whether research- or policy-centered, public or private, academic or nonacademic, contract or grant—have an opportunity to increase the pace of the production and use of high-quality economic evidence. For instance, the NIH Common Fund Health Economics Program recently showcased research on the economics of prevention supported by the program. A key message in the final report is the current need to “increase support for methods development” (National Institutes of Health, 2015b, p. 15).

2 Also relevant are the legal and ethical issues related to data-sharing efforts.
Public and private funding agencies alike have expressed significant interest in understanding the economic impact of interventions for children, youth, and families (Eddama and Coast, 2008; Finkelstein and Corso, 2003; Hoffmann et al., 2002; Pew-MacArthur Results First Initiative, 2013). In fact, NIH’s recent notice (NOT-OD-16-025) affirms the importance of health economic work for all institutes’ funding portfolios, and states that one of the highest-priority areas identified by NIH includes research that seeks to “understand behavioral, financial, and other factors that influence the implementation, adherence, dissemination, and adoption of medical discoveries into health care [emphasis added]” (National Institutes of Health, 2015a). This research would include cost analyses of program resource consumption (i.e., understanding financial factors influencing implementation and adherence), as well as benefit-cost work (i.e., understanding financial factors influencing dissemination and adoption). Thus, increasing large-scale funding incentives for the use of economic evidence is one important means of adding to the knowledge base in this area.
One way to incentivize the production and use of high-quality economic evidence would be to designate a portion of the budget for many or most interventions for such purposes. An example of this approach is the Nurse-Family Partnership Program (see Chapter 2). Federal tiered funding initiatives are another incentive structure that supports the use of evidence (McNeil, 2010; Thompson et al., 2011). Although not specifically focused on economic evidence, these initiatives are designed to support the use of evidence-based evaluations in funded interventions. An example from the education sector is the Investing in Innovation Fund (i3), which, as described in Chapter 4, provides competitive grants to applicants who have established evidence of improving student achievement and attainment. By using evidence as an entrance requirement, i3 creates an incentive structure that encourages local education agencies to generate solid evidence of the impact of their programs. Building on these examples, funders could craft analogous incentive structures around the goal of producing high-quality, high-utility economic evidence for many types of interventions for children, youth, and families.
Policy makers could create incentive structures making economic evidence a priority and ensuring its use in decision making. Under federal budget rules, for example, legislation that would have a direct effect on the federal budget must be accompanied by a cost analysis developed by the nonpartisan Congressional Budget Office (CBO).4 These cost analyses provide important information about the economic effects of legislative programs that are moving through the congressional process. Greater visibility of the impact of a given program at the time of decision making could improve the effective use of this information. As another example, from the executive branch, the White House Office of Information and Regulatory Affairs administers Executive Order 12866,5 which requires federal agencies to consider alternatives to rulemakings through a specific requirement for an analysis of the costs and benefits of these alternatives. In principle, this requirement should lead to improved decision making, although the evidence that it has in fact produced more cost-effective rules is not strong (Harrington and Morgenstern, 2004). Within and across federal programs, a growing movement to focus on what works has placed a spotlight on the need to incentivize the production of stronger evidence before new programs are supported and scaled.
Programs developed within a local setting often face limited resources. As a consequence, only a few such programs are evaluated to determine their full impact (Crowley et al., 2014; Foster et al., 2003). Conversely, programs developed within a broader scientific setting may not reflect sufficient attention to the limited resources of implementing agencies (e.g., constrained capacity, time, and personnel; participant costs) or to the need to reach more participants with a given level of funding (Gruen et al., 2008; Israel et al., 1998). The result is the development of programs that are too costly for many practice or policy contexts. Program development could benefit from cost analyses conducted prospectively to assist decision makers in understanding how the structure, setting, or scope of an intervention will ultimately influence downstream costs, such as the costs of implementation (Gorsky and Teutsch, 1995; Haddix et al., 2003). This broader set of information would better support the development of cost-effective, impactful interventions. Once implemented, moreover, interventions could continue to be optimized through the use of estimates from economic evaluation.

5 Executive Order 12866. Regulatory Planning and Review, 3 CFR, 1993 Comp., p. 638.
Requirements for program evaluation of interventions for children, youth, and families often lead to opportunities for increased economic evaluation. Of particular relevance here is evaluation of resource needs, program impacts, and economic value.
When preparing to undertake an economic evaluation of a program, evaluators can often go beyond a simple review of the literature on existing evaluations of the same or similar programs; they can identify end-users and consider opportunities to assess their needs for economic evidence (e.g., Yates, 1996). The particular form and structure of the evidence matter, and the relevant perspectives extend beyond that of society at large to those of the participant, the agency, the level of government, and the private sector, among others. Also crucial are the time horizon and scope of a program, the representativeness of the population served by the program, and the generalizability of estimates. Deeper needs assessments may reveal key assumptions that should be tested within the analysis (i.e., through sensitivity analyses).
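A one-way sensitivity analysis of the kind alluded to above can be sketched as follows. The program cost, benefit stream, time horizon, and discount rates are all hypothetical placeholders chosen only to illustrate the mechanics.

```python
# Illustrative one-way sensitivity analysis; all figures are hypothetical.
def present_value(annual_benefit, years, rate):
    """Discounted sum of a constant annual benefit stream."""
    return sum(annual_benefit / (1 + rate) ** t for t in range(1, years + 1))

program_cost = 100_000    # hypothetical up-front cost
annual_benefit = 12_000   # hypothetical benefit per year
horizon = 15              # years of follow-up

# Recompute the benefit-cost ratio across a plausible range of discount rates
for rate in (0.03, 0.05, 0.07):
    bcr = present_value(annual_benefit, horizon, rate) / program_cost
    print(f"discount rate {rate:.0%}: benefit-cost ratio {bcr:.2f}")
```

Varying one input at a time in this way makes explicit which assumptions the headline benefit-cost ratio is most sensitive to, and therefore which assumptions a needs assessment should probe hardest.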
Further, evaluators could play a key role in advocating for prospective economic evaluations. Specifically, as discussed in Chapter 3, evaluators could facilitate early planning for capturing those program costs and outcome data that foster higher-quality economic evaluations (Crowley et al., 2012; Drummond, 2005; Levin and Belfield, 2013). Otherwise, evaluators must use secondary records to glean limited cost information. Evaluators could incentivize support for such prospective work in a number of ways—some as simple as expressing a preference for high-quality economic evaluation during consultations, on professional websites, or when working with professional organizations to advertise and publish papers on program evaluation that promote prospective analyses.
Program evaluators also could support the production and use of higher-quality and more useful economic evidence by working to access administrative data systems that may house key information on program costs or outcomes (see Chapter 4). In particular, when evaluators gain access to a new administrative system, they could work to help owners of those data (government agencies, health care providers, school systems, nonprofits) develop a sustainable approach to making the data accessible to other evaluators (Drake and Jonson-Reid, 1999).
Publishers could increase incentives for the production of high-quality economic evidence. Chapter 3 outlines the committee’s conclusions and recommendations on reporting standards for economic evaluations that publishers could adopt and promote during the review process. Whenever possible, publishers could also engage reviewers with methodological expertise in economic evaluation. Additionally, publishers could consider opportunities to promote more extensive use of economic evidence for those substantive topics for which such evidence could prove most productive. Importantly, publishers could subject economic evidence to the same rigorous scrutiny as other types of evidence. Issues of replicability and rigor are of great importance to the integrity of economic evidence because such evidence feeds directly into policy and budget making.
In raising the standard for the quality of economic evidence produced, it is also important to attend to the issue of publication biases. These biases, noted across diverse fields of study, often derive from the policies and practices of journal editors and peer reviewers who favor positive and statistically significant findings (Ioannidis, 2005; Ncayiyana, 2010). To support the use of high-quality economic evidence, publication biases are among the many incentive problems that need direct attention. For certain substantive topics, stakeholders may find it useful to review results of well-designed research studies that have varying degrees of significance, are historical or new, or are pooled across multiple studies. By exploring these opportunities, publishers could help advance the production and use of high-quality economic evidence.
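One common way to pool results across multiple studies, regardless of each study's individual statistical significance, is fixed-effect inverse-variance weighting. The sketch below is illustrative only; the effect estimates and standard errors are hypothetical, and in practice an analyst would also test for heterogeneity before choosing a fixed-effect model.

```python
# Hypothetical fixed-effect (inverse-variance) pooling of effect estimates
# from several studies: more precise studies (smaller standard errors)
# receive larger weights. All figures are illustrative only.
import math

studies = [  # (effect estimate, standard error) -- hypothetical values
    (0.30, 0.10),
    (0.10, 0.08),
    (0.22, 0.15),
]

weights = [1 / se**2 for _, se in studies]                 # inverse-variance weights
pooled = sum(w * est for (est, _), w in zip(studies, weights)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))                    # SE of the pooled estimate

print(f"pooled estimate = {pooled:.3f} (SE = {pooled_se:.3f})")
```

The pooled estimate always lies within the range of the individual estimates, and its standard error is smaller than that of any single study, which is precisely why pooling can rescue informative but individually nonsignificant findings from the file drawer.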
CONCLUSION: Funders, policy makers, program developers, program evaluators, and publishers engaged in science communication each have unique opportunities to advance the quality, utility, and use of economic evidence.
RECOMMENDATION 7: Program developers, public and private funders, and policy makers should design, support, and incorporate comprehensive stakeholder partnerships (involving producers, consumers, and intermediaries) into action plans related to the use of economic evidence.
A strategy for implementing this recommendation might include the following:
- The formation of a strategic action plan by foundations and governments to develop, fund, and convene these partnerships with researchers and end-users of evidence.
RECOMMENDATION 8: Multi-stakeholder groups should seek to build infrastructure that (1) supports access to administrative data; (2) maintains a database of estimates of outcome values; (3) archives longitudinal data for multiple purposes, including improved tracking of children and families and the development of better estimates of long-term impacts and shadow prices; (4) educates future producers and consumers of economic evidence; and (5) develops tools for tracking nonbudgetary resource consumption.
RECOMMENDATION 9: To support sustainable action toward the production and use of high-quality economic evidence, public and private funders should invest in infrastructure that supports (1) the regular convening of producers, consumers, and intermediaries of economic evidence; (2) enhanced education and training in economic evaluation; (3) efforts to attend to progressive data requirements and data-sharing management needs; and (4) the integration of economic evaluations into budget processes.
RECOMMENDATION 10: Public and private funders, policy makers, program developers, program evaluators, and publishers engaged in science communication should strengthen the incentives they provide for the production and use of high-quality economic evidence likely to be of high utility to decision makers.
Strategies for implementing this recommendation might include the following:
- Funders in the public and private sectors (including the National Institutes of Health and Institute of Education Sciences):
- – For a reported percentage of all funded programs, set aside sufficient resources for economic evaluation, at a minimum for collection of the data needed for prospective cost analysis in support of cost-effectiveness or benefit-cost analysis.
- – Increase internal capacity to review economic evaluations by engaging experienced program evaluators.
- – Expand competitive grants and fellowship opportunities that support training for the next generation of evaluators.
- Policy makers (including the Congressional Budget Office, the U.S. Government Accountability Office, the Congressional Research Service, and other agencies reporting on performance or policy options):
- – In issued reports, provide documentation of the extent to which program outcomes are backed by economic evidence, including economic evaluation (this is not a requirement to produce the evidence, but to report on what has been produced or is being produced by the various programs).
- – For agencies reporting on performance or policy options, require economic evidence in general, with economic evaluation data when possible.
- Program developers:
- – Consider program costs within program development and early feasibility analyses.
- – Develop detailed logic models linking program activities to proximal and distal outcomes.
- – Use current economic estimates to optimize program effectiveness and economic impact.
- Program evaluators:
- – Conduct needs assessment of end-users to determine the most useful estimates and the most useful ways of communicating them.
- – Advocate for prospective planning of economic evaluation as a way of improving program evaluation.
- – Work with government agencies to access administrative data systems and develop those longitudinal data sets most likely to be useful for economically evaluating future, not just current, interventions, as in the areas of children’s education and health.
- – Modify reporting standards for economic evaluation to require reporting on program costs both measured and not measured in each study.
- – Engage reviewers with proven experience and content expertise in economic evaluation.
- – Foster greater demand for high-quality economic evidence.
Aos, S., Lieb, R., Mayfield, J., Miller, M., and Pennucci, A. (2004). Benefits and Costs of Prevention and Early Intervention Programs for Youth. Available: http://www.wsipp.wa.gov/pub.asp?docid=04-07-3901 [February 2016].
August, G.J., Bloomquist, M.L., Lee, S.S., Realmuto, G.M., and Hektner, J.M. (2006). Can evidence-based prevention programs be sustained in community practice settings? The early risers’ advanced-stage effectiveness trial. Prevention Science, 7(2), 151-165.
Boardman, A.E., Greenberg, D.H., Vining, A.R., and Weimer, D.L. (1997). “Plug-in” shadow price estimates for policy analysis. The Annals of Regional Science, 31(3), 299-324.
Brown, L.D., Feinberg, M.E., and Greenberg, M.T. (2010). Determinants of community coalition ability to support evidence-based programs. Prevention Science, 11(3), 287-297.
Cohen, M.A. (2005). The Costs of Crime and Justice. London, UK and New York: Routledge. Available: http://search.ebscohost.com/login.aspx?direct=true&scope=site&db=nlebk&db=nlabk&AN=115423 [February 2016].
Cohen, M.A., and Piquero, A.R. (2008). New evidence on the monetary value of saving a high risk youth. Journal of Quantitative Criminology, 25(1), 25-49.
Crowley, D.M. (2013). Building efficient crime prevention strategies considering the economics of investing in human development. Criminology & Public Policy, 12(2), 353.
Crowley, D.M., and Jones, D. (2015). Financing prevention: Opportunities for economic analysis across the translational research cycle. Translational Behavioral Medicine, 1-8.
Crowley, D.M., Jones, D.E., Greenberg, M.T., Feinberg, M.E., and Spoth, R. (2012). Resource consumption of a diffusion model for prevention programs: The PROSPER Delivery System. Journal of Adolescent Health, 50(3), 256-263.
Crowley, D.M., Hill, L.G., Kuklinski, M.R., and Jones, D. (2014). Research priorities for economic analyses of prevention: Current issues and future directions. Prevention Science, 15(6), 789-798.
Drake, B., and Jonson-Reid, M. (1999). Some thoughts on the increasing use of administrative data in child maltreatment research. Child Maltreatment, 4(4), 308-315.
Drummond, M.F. (2005). Methods for the Economic Evaluation of Health Care Programmes. New York: Oxford University Press.
Duckworth, A.L. (2011). The significance of self-control. Proceedings of the National Academy of Sciences, 108(7), 2639-2640.
Eddama, O., and Coast, J. (2008). A systematic review of the use of economic evaluation in local decision making. Health Policy, 86(2-3), 129-141.
English, D.J., Brandford, C.C., and Coghlan, L. (2000). Data-based organizational change: The use of administrative data to improve child welfare programs and policy. Child Welfare, 79(5), 499-515.
Finkelstein, E., and Corso, P. (2003). Cost-of-illness analyses for policy making: A cautionary tale of use and misuse. Expert Review of Pharmacoeconomics & Outcomes Research, 3(4), 367-369.
Foster, E.M., Dodge, K.A., and Jones, D. (2003). Issues in the economic evaluation of prevention programs. Applied Developmental Science, 7(2), 76-86.
Garnier, P.C., and Poertner, J. (2000). Using administrative data to assess child safety in out-of-home care. Child Welfare, 79(5), 597-613.
Gorsky, R.D., and Teutsch, S.M. (1995). Assessing the effectiveness of disease and injury prevention programs: Costs and consequences. Morbidity and Mortality Weekly Report, 44(RR-10), 1-10.
Gruen, R., Elliott, J., Nolan, M., Lawton, P., Parkhill, A., McLaren, C., and Lavis, J. (2008). Sustainability science: An integrated approach for health-programme planning. The Lancet, 372(9649), 1579-1589.
Haddix, A.C., Teutsch, S.M., and Corso, P.S. (Eds.). (2003). Prevention Effectiveness: A Guide to Decision Analysis and Economic Evaluation. New York: Oxford University Press.
Harrington, W., and Morgenstern, R.D. (2004). Evaluating Regulatory Impact Analyses. Washington, DC: Resources for the Future.
Hay, J. (2010). Economic Evaluation of Drugs, Devices and other Medical Interventions. Ithaca, NY: American Society of Health Economists. Available: http://ashecon.org/conference/2010/preconference/economic-evaluation-drugs-and-medical-technology.pdf [February 2016].
Hoffmann, C., Stoykova, B.A., Nixon, J., Glanville, J.M., Misso, K., and Drummond, M.F. (2002). Do health-care decision makers find economic evaluations useful? The findings of focus group research in UK health authorities. Value in Health, 5(2), 71-78.
Ioannidis, J.P. (2005). Why most published research findings are false. PLoS Med, 2(8), e124.
Israel, B., Schulz, A., Parker, E., and Becker, A. (1998). Review of community-based research: Assessing partnership approaches to improve public health. Annual Review of Public Health, 19, 173-202.
Jones, D.E., Greenberg, M., and Crowley, M. (2015). Early social-emotional functioning and public health: The relationship between kindergarten social competence and future wellness. American Journal of Public Health, 105(11), 2283-2290.
Karoly, L.A. (2008). Valuing Benefits in Benefit-Cost Studies of Social Programs. Santa Monica, CA: RAND. Available: http://www.rand.org/content/dam/rand/pubs/technical_reports/2008/RAND_TR643.pdf [February 2016].
Karoly, L.A. (2015). Valuing Outcomes of Social Programs: The RAND Database of Shadow Prices for Benefit-Cost Analysis of Social Programs. Presented at the Society for Prevention Research, May, Washington, DC. Available: https://spr.confex.com/spr/spr2015/webprogram/Paper22878.html [February 2016].
Kuklinski, M.R., and Crowley, M. (2014). Building the Economic Case for Prevention: Methods and Tools for Assessing the Resource Needs and Economic Costs for Preventive Intervention. Presented at the Society for Prevention Research, May, Washington, DC. Available: http://www.preventionresearch.org/2014-annual-meeting/pre-conferenceworkshops-and-international-networking-forum [February 2016].
Levin, H., and Belfield, C. (2013). Guiding the Development and Use of Cost-Effectiveness Analysis in Education. New York: Center for Benefit-Cost Studies of Education, Columbia University. Available: http://cbcse.org/wordpress/wp-content/uploads/2013/08/Guiding-the-Development-And-Use-of-Cost-effectiveness-Analysis-in-Education.pdf [February 2016].
Levin, H., Belfield, C., Muennig, P.A., and Rouse, C. (2006). The Costs and Benefits of an Excellent Education for All of America’s Children. New York: Columbia University. Available: http://www3.nd.edu/~jwarlick/documents/Levin_Belfield_Muennig_Rouse.pdf [February 2016].
McNeil, M. (2010). Duncan carving deep mark on policy. Education Week, 1-18.
Moffitt, T.E., Arseneault, L., Belsky, D., Dickson, N., Hancox, R.J., Harrington, H., Houts, R., Poulton, R., Roberts, B.W., Ross, S., Sears, M.R., Thomson, W.M., and Caspi, A. (2011). A gradient of childhood self-control predicts health, wealth, and public safety. Proceedings of the National Academy of Sciences of the United States of America, 108(7), 2693-2698.
National Institutes of Health. (2015a). Clarifying NIH Priorities for Health Economics Research. Available: https://grants.nih.gov/grants/guide/notice-files/NOT-OD-16-025.html [February 2016].
National Institutes of Health. (2015b). Economics of Prevention Workshop Summary. Available: https://commonfund.nih.gov/sites/default/files/Final_August_2015_Economics_of_Prevention_Workshop_Summary_REV_11-06-15_ld_508_0.pdf [February 2016].
National Research Council and Institute of Medicine. (2009). Strengthening Benefit-Cost Analysis for Early Childhood Interventions. A. Beatty (Rapporteur). Committee on Strengthening Benefit-Cost Methodology for the Evaluation of Early Childhood Interventions. Board on Children, Youth, and Families. Division of Behavioral and Social Sciences and Education. Washington, DC: The National Academies Press.
Ncayiyana, D.J. (2010). “Truth” in medical journal publishing. SAMJ: South African Medical Journal, 100(2), 71-72.
Pew-MacArthur Results First Initiative. (2012). Better Results, Lower Costs: Washington State’s Cutting-Edge Policy Analysis Model. Washington, DC. Available: http://www.pewtrusts.org/~/media/legacy/uploadedfiles/pcs_assets/2012/resultsfirstwashingtoncasestudypdf.pdf [February 2016].
Pew-MacArthur Results First Initiative. (2013). States’ Use of Cost-Benefit Analysis Improving Results for Taxpayers. Available: http://www.pewtrusts.org/~/media/legacy/uploadedfiles/pcs_assets/2013/pewresultsfirst50statereportpdf.pdf [February 2016].
The Pew Charitable Trusts. (2012). Results First. Available: http://www.pewstates.org/projects/results-first-328069 [February 2016].
Thompson, D.K., Clark, M.J., Howland, L.C., and Mueller, M.-R. (2011). The Patient Protection and Affordable Care Act of 2010 (PL 111-148): An Analysis of maternal-child health home visitation. Policy, Politics, & Nursing Practice, 12(3), 175-185.
Yates, B.T. (1996). Analyzing Costs, Procedures, Processes, and Outcomes in Human Services. Thousand Oaks, CA: Sage.