Strengthening Benefit-Cost Analysis for Early Childhood Interventions: Workshop Summary

7 Benefit-Cost Analysis in a Policy Context

These methodological and conceptual questions can have a profound influence on policies that affect children and families every day. Rigorous benefit-cost analysis is relatively new in the early childhood context, but available analyses generally point to benefits that significantly outweigh costs. Still, the message to policy makers is not crisp; differences among programs, settings, populations served, goals, available data, and measurement approaches all affect outcomes, costs, and overall conclusions about the value of early childhood programs. The field faces a double challenge: improving research methods while providing policy makers with accurate information to guide social policy and public investments for children and families. In the final session of the workshop, several views of the tension between research and policy were presented, followed by discussion about future goals and directions.

PERSPECTIVES

Rudy Penner, Jon Baron, and Steve Aos provided three perspectives on the relationship between research and decision making for policy makers.

Keep It Simple

Penner, who offered the perspective of a political veteran, began the discussion with a look at the challenge of using program evaluations as the basis for budget allocations. The fundamental problem, he suggested, is that “all that budgets do is measure the cost of inputs to various programs—they tell you very little about outputs.” He pointed out that this is an old problem, citing President Lyndon Johnson’s application of analysis that had been used in the Pentagon to evaluate social programs, President Richard Nixon’s management by objectives program, and President Jimmy Carter’s zero-based budgeting as examples of efforts to bridge the gap. He suggested that none of these efforts was long-lasting or successful because they became overly bureaucratized. It is difficult to make descriptions of complex social programs that deal with human problems fit into neat categories that work across many sectors.

Penner suggested that benefit-cost analysis is difficult even in the context of flood control projects or highway construction, because calculating discount rates for future benefits and costs is never simple, nor is valuing a human life. But evaluating interventions for children is still more complex, and Penner suggested an alternative approach. Instead of providing an empirically supported value for the output of this kind of social program, it might be more useful to simply identify the outcomes and give politicians the responsibility for calculating the program’s value. In his view, not only is it the case that many important outcomes cannot be quantified, but also that good and bad outcomes are often commingled. For example, he recalled a Canadian program designed to help unemployed mothers find jobs. Although it was very successful, a collateral result was an increase in problems with their adolescent children, who lost supervision while their mothers were working. For him, identifying the best response to that situation is a social question.
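Penner’s remark about discount rates can be made concrete with a small sketch. The program cost, benefit stream, horizon, and rates below are entirely hypothetical, not drawn from any study discussed at the workshop; the point is only that the same benefit stream can clear the cost threshold comfortably at one commonly used rate and barely at another.

```python
# Hypothetical illustration of why discount-rate choice matters in
# benefit-cost analysis. All figures are invented for this sketch.

def present_value(annual_benefit, years, rate):
    """Discount a constant annual benefit stream back to today."""
    return sum(annual_benefit / (1 + rate) ** t for t in range(1, years + 1))

program_cost = 10_000    # up-front cost per child (hypothetical)
annual_benefit = 1_000   # benefit accruing each year (hypothetical)
horizon = 40             # years over which benefits accrue

for rate in (0.03, 0.07):
    pv = present_value(annual_benefit, horizon, rate)
    print(f"rate={rate:.0%}: PV of benefits = {pv:,.0f}, "
          f"benefit-cost ratio = {pv / program_cost:.2f}")
```

With these invented numbers, moving from a 3 percent to a 7 percent rate cuts the benefit-cost ratio nearly in half, which is exactly the kind of sensitivity that makes a single “empirically supported value” hard to defend.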
He closed by noting that “if you want to influence policy, you really have to try and identify those things that are important to politicians and help them make the kind of value-based tradeoffs that they have to make.”

Focus on Finding Effects

Jon Baron focused on evidence of impact as well, but from a somewhat different perspective. A meaningful benefit-cost analysis, he noted, begins with valid evidence of program effects; the next question is whether the benefits of that effect exceed the costs. He suggested that in many fields—in medicine as well as social policy areas—valid evidence of effectiveness is not common. Many widely accepted conclusions about effective programs are based on observational evidence or small randomized trials with short-term follow-up. These programs often show weak effects or no effects when they are evaluated more rigorously. He described as an exception an example of a nurse home visitation program for poor, mostly single women in their first pregnancy, which
had been subjected to several high-quality evaluations (Olds et al., 1998, 2004, 2007; Luckey et al., 2008). The program provides regular visits during the pregnancy and for the first two years of the child’s life. It has been evaluated in three well-implemented randomized trials, which examined different populations and included long-term follow-up. The program demonstrated sizable effects, including—in the study with the longest follow-up—40 to 70 percent reductions in child abuse and neglect and criminal arrests of the children and their mothers by the time the children reached age 15. Based largely on these results, evidence-based home visitation programs are being scaled up; the U.S. Department of Health and Human Services will spend $13.5 million on such home visitation programs in 2009, and the president’s fiscal year 2010 budget proposes $8.6 billion over the next decade. This is the way it’s supposed to work, Baron suggested, but there are few such examples. He cited analysis conducted by the Coalition for Evidence-Based Policy suggesting that only 10 to 15 programs across all social policy areas show sizable, sustained effects in multiple high-quality evaluation studies (he emphasized that a great number of programs show evidence of effectiveness, but in very few does the evidence meet the highest criteria for rigor).
Looking at medical examples, he cited a number of seemingly well-supported interventions or findings that later were found to be ineffective or even harmful in well-conducted randomized controlled trials, including intensive efforts to lower blood sugar in diabetic patients (increases risk of death), hormone replacement therapy for postmenopausal women (increases risk of stroke and heart disease for many women), dietary fiber to prevent colon cancer (shown ineffective), use of stents to open clogged arteries (shown no better than drugs for most patients), having babies sleep on their stomachs (increases risk of sudden infant death syndrome), beta-carotene and vitamin E supplements (antioxidants) to prevent cancer (ineffective or harmful), oxygen-rich environment for premature infants (increases risk of blindness), recent promising AIDS vaccines (found to double risk of AIDS infection), and bone marrow transplants for women with advanced breast cancer (ineffective).
He presented a similar list for social policy: programs that were believed to be effective but were later found to have weak or no effects, or even adverse effects. These include education programs, such as Upward Bound, federal dropout prevention programs, and a widely used teacher induction program; programs for troubled youth, such as Scared Straight and DARE; and others. For Baron, the bottom line is that there is a pressing need for research that can accurately identify interventions that work—that have sizable, sustained effects and in which “your grandmother would notice the difference in outcomes between the treatment group and the control group.” Benefit-cost analyses are valuable for making the case to policy makers for scaling up those programs with sound evidence of effectiveness and for untangling questions about programs that are very costly. But these analyses are best saved for programs that have already been demonstrated to be effective through rigorous evaluations in typical community settings.

Make the Research Work for Policy Makers

Steve Aos illustrated how Washington State produces and uses evidence in policy decision making. The state legislature formed the nonpartisan Washington State Institute for Public Policy to provide analysis of policy options for lawmakers. The institute has become a valuable resource for state legislators, Aos observed, for several reasons. First, it is locally based and closely tied to the community and the lawmaking process. The staff has close working relationships with the lawmakers, and they know exactly which ones to approach on a given issue. Second, they work in many policy areas. In recent years the institute has examined crime; education, including early childhood education; child abuse and neglect; substance abuse; and mental health, for example.
Other states have separate commissions to address different issues, but, in Aos’s view, the advantage of the Washington State approach is that the institute has been able to build trust over many years. They draw on information from many sources, often conducting their own meta-analyses, and distill the answers to the precise questions that are current in the state. They present their information in consistent ways (they use a Consumer Reports–type format) so it is easy for busy legislators to find what they need and understand the basis for the conclusions. And the institute has remained scrupulously nonpartisan; Aos observed that benefit-cost analysis has consistently been the most useful tool for helping Democrats and Republicans to identify an approach they can agree on, regardless of the problem. Because the institute has been willing to recommend cutting programs when evidence emerges that they do not work, they have built trust in the evidence-based approach.
Participants followed up on this point, noting that at all levels of government, studies are often used as weapons in strategic conflicts, rather than as factual resources, so the institute approach is designed to move policy makers past that temptation. At the same time, program advocates may be apprehensive about evaluations if they perceive them as political tools or as potentially inaccurate threats to the program’s existence. If, instead, evaluation is viewed as a management tool that can identify the most effective aspects of a program, such as Head Start, that has wide political support because of its mission, it may be more politically useful. Yet the fact that policy makers may not always appreciate the subtlety of research findings is a perpetual problem. When results are not a clear-cut “yes, it works” or “no, it doesn’t,” there is ample room for misrepresentation of results and confusion about their policy implications.

The scale of costs may also affect the nature of the discussion. An expensive early childhood intervention might be unaffordable despite voluminous evidence of long-term benefits, while a low-cost jobs program might make sense even if its outcomes are fairly modest. Aos noted that one of Washington’s biggest successes in applying evidence to policy hinged on a question of cost. When the state began questioning its incarceration rate and the high cost of building more and more prisons, it examined the costs of alternative methods of fighting crime. By changing the mix of its crime-fighting resources, it could achieve the same results with less expensive alternatives to prison—while allowing policy makers to maintain their anticrime credentials. These different perspectives on the role that evidence can and does play in policy discussions led into a wide-ranging discussion of the major ideas that surfaced over the two days.
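The reallocation logic behind the Washington State prison example can be sketched in a few lines. The per-unit costs and targets below are invented for illustration only; the point is simply that holding the outcome fixed while shifting resources toward a cheaper-per-unit alternative lowers total cost.

```python
# Hypothetical sketch of the reallocation argument: same total crime
# reduction, lower total cost. All figures are invented.

cost_per_crime_averted = {"prison": 40_000, "community_program": 15_000}

target_crimes_averted = 1_000   # hold the outcome fixed

# An all-prison portfolio versus a mixed portfolio with the same outcome.
all_prison_cost = target_crimes_averted * cost_per_crime_averted["prison"]

shifted = 400                   # crimes averted via the community program
mixed_cost = (
    (target_crimes_averted - shifted) * cost_per_crime_averted["prison"]
    + shifted * cost_per_crime_averted["community_program"]
)

print(f"all prison:      ${all_prison_cost:,}")
print(f"mixed portfolio: ${mixed_cost:,} "
      f"(saves ${all_prison_cost - mixed_cost:,} for the same outcome)")
```

This assumes, as a simplification, that per-unit costs stay constant as resources shift; the real analysis behind such decisions is considerably more involved.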
LOOKING FORWARD

Robert Haveman kicked off the concluding discussion with an overview of key points. First, he noted that a wide range of technical questions is associated with each element in a benefit-cost analysis. Some of the main unsettled points include how best to define an intervention, how to stipulate the elements of benefits and costs, which potential methodological approaches can reliably account for all relevant effects, how to accurately measure all of the important benefits and costs, how to empirically link the intervention to specific impacts, and how to identify shadow values for nonmarket-based benefits. These technical challenges will need to be resolved in order to generalize from available studies to effective policy making. For Haveman, the big questions to confront in the early childhood intervention area are the following:
- With a large increase in spending on early childhood interventions on the way, can benefit-cost analysis be used to guide allocations to the most effective programs or types of intervention?
- Given the current methodological gaps, would it make sense to use expert panels to settle the sorts of technical questions the workshop has highlighted, including questions about measurement and valuation estimates?
- Is benefit-cost analysis strong enough to guide future policy choices, or should the research and policy community be asking a different question? For example, would it be wiser to focus on monitoring short-run performance to guide the next stage of policy?

Others posed different questions that highlighted the magnitude of the technical challenges. “Are we in a world where scientists can say the money should be spent in the following way to get the biggest bang for our buck, or are we in a world where we should be talking about planned variation and then program evaluation and monitoring?” one asked. For many, the response was clearly the latter. Investments in preschool programs have wide support, for example, but no single particular model has yet been shown to be most effective. As a result, the policy and research focus is turning toward how to structure planned variations that could reveal specific components that would be desirable in a generic model.

Despite the technical challenges, many in the group felt optimistic about the potential for benefit-cost analysis to provide meaningful guidance to future evaluation efforts. At this point, multiple analyses provide valuable information about outcomes as well as costs, even if methodologists still have issues to unravel. The discussion closed with a few thoughts about what would be most useful.
First, many thought that a move toward greater standardization in reporting, not only for benefit-cost analysis but also for evaluation in general, would be very useful. “We are not actually in a position to compare the benefits and costs of various programs at this point,” one participant suggested, because they have been measured in such different ways and important outcomes have not been captured. Standardization would make it much easier to capture shadow prices and solve other methodological problems. The Washington State model, in which the same shadow prices and comparable methods are used across analyses, demonstrates the value of this approach in the policy context. A core set of measures, with common measurement approaches, would improve comparability. For this sort of research to have real value to policy makers, as one person put it, “the witch doctors have to agree.” Policy makers do not care about regression discontinuity or other technical matters; they want accurate, comparable information. Researchers also need to recognize
that events can dictate a need for policy decisions independent of the pace of research. “There’s something a little unsatisfying about waiting 40 years and then looking back and saying, what a great program we had in 1963 in Ypsilanti, Michigan, and now here we are wondering what to do today.” Policy makers need the information to develop positions they feel comfortable defending. Participants indicated that opportunities exist now to enhance the methodology and to improve certainty about what works, without waiting for the results of expensive longitudinal studies. Improved access to and use of administrative data, including use of these data to project long-term outcomes, for example, was one opportunity cited by several participants as deserving further exploration. However, another participant noted, methodology “may not be the only place where we should be investing time and talent.” Although no one at the workshop proposed that a particular methodological approach solves all problems and should be viewed as the state of the art, some benefit-cost analyses do demonstrate benefits that far exceed the costs. “We should also be thinking about where we can’t get proof but we can put together good evidence that is not only persuasive to policy makers but will lead us to good policies and good allocation of resources.”

FINAL OBSERVATIONS

Federal and state policy makers are showing increased interest in expanding public investments in early childhood interventions. Multiple studies have provided evidence that many such interventions provide long-term benefits for children, their families, and society, but significant questions remain about the extent to which such benefits translate into savings that outweigh the costs of large-scale programs.
Improving the quality of evidence that can be used to identify relevant benefits and costs of early childhood interventions would be a valuable asset to policy discussions and would support effective policy decisions. The workshop participants identified multiple technical challenges that deserve attention. While these challenges are daunting, emerging approaches have the potential to significantly enhance the value of these types of analyses in the policy process. The persistent dilemma is how to make immediate decisions about public investments and program priorities with the information at hand while also striving to obtain knowledge through research and evaluation of different program models and policy strategies. Convincing analysis of benefits and costs would provide a guide to the best ways to spend scarce resources for early childhood programs. Methods for conducting the benefit-cost analysis that can provide this kind of evidence are complex in the context of early childhood. However, in a time of limited resources, new collaborative strategies are
emerging that allow researchers, program staff, and policy makers to standardize definitions and measures, to assign explicit values to outcomes and inputs, and to develop other productive approaches for improving benefit-cost methodologies of early childhood interventions.