tions as the basis for budget allocations. The fundamental problem, he suggested, is that “all that budgets do is measure the cost of inputs to various programs—they tell you very little about outputs.” He pointed out that this is an old problem, citing President Lyndon Johnson’s application of analysis that had been used in the Pentagon to evaluate social programs, President Richard Nixon’s management by objectives program, and President Jimmy Carter’s zero-based budgeting as examples of efforts to bridge the gap. He suggested that none of these efforts was long-lasting or successful because they became overly bureaucratized. It is difficult to make descriptions of complex social programs that deal with human problems fit into neat categories that work across many sectors.
Penner suggested that benefit-cost analysis is difficult even in the context of flood control projects or highway construction, because choosing discount rates for future benefits and costs is never simple, nor is valuing a human life. But evaluating interventions for children is still more complex, and Penner suggested an alternative approach. Instead of providing an empirically supported value for the output of this kind of social program, it might be more useful to simply identify the outcomes and give politicians the responsibility for judging the program’s value. In his view, not only can many important outcomes not be quantified, but good and bad outcomes are often commingled. For example, he recalled a Canadian program designed to help unemployed mothers find jobs. Although it was very successful, a collateral result was an increase in problems with their adolescent children, who lost supervision while their mothers were working. For him, identifying the best response to that situation is a social question. He closed by noting that “if you want to influence policy, you really have to try and identify those things that are important to politicians and help them make the kind of value-based tradeoffs that they have to make.”
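The discounting step Penner alludes to can be made concrete with a minimal sketch. All figures below (the cost, the benefit stream, and the 3 percent discount rate) are hypothetical illustrations, not values from any actual program; the point is only that the result turns on the chosen rate, which is itself contested.

```python
def present_value(cash_flows, rate):
    """Discount a stream of annual values (year 0 first) to present value."""
    return sum(v / (1 + rate) ** t for t, v in enumerate(cash_flows))

# Hypothetical program: a $100 cost up front, then $30 per year in
# benefits for five years, discounted at an assumed 3% annual rate.
costs = [100.0]
benefits = [0.0] + [30.0] * 5
rate = 0.03

net = present_value(benefits, rate) - present_value(costs, rate)
ratio = present_value(benefits, rate) / present_value(costs, rate)
print(f"net present value: {net:.2f}, benefit-cost ratio: {ratio:.2f}")
```

Raising the assumed discount rate shrinks the present value of the distant benefits while leaving the up-front cost untouched, which is one reason the choice of rate is "never simple": it can flip the sign of the net result without any change in the program itself.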
Jon Baron focused on evidence of impact as well, but from a somewhat different perspective. A meaningful benefit-cost analysis, he noted, begins with valid evidence of program effects; the next question is whether the benefits of that effect exceed the costs. He suggested that in many fields—in medicine as well as social policy areas—valid evidence of effectiveness is not common. Many widely accepted conclusions about effective programs are based on observational evidence or small randomized trials with short-term follow-up. These programs often show weak effects or no effects when they are evaluated more rigorously.
He described as an exception an example of a nurse home visitation program for poor, mostly single women in their first pregnancy, which