awards) may not accurately measure the totality of school outcomes, Duncombe and Yinger prefer an alternative "indirect" control for school district performance. They assume that a school district performs at the average level when voters who are likely to have children in the public schools choose average property tax rates in communities with average incomes. These (abstract) communities can then be used to observe how much per-pupil spending is necessary to achieve average educational outcomes, again while controlling for other cost or discretionary factors.
Using this preferred "indirect" measure, Duncombe and Yinger conclude that achieving average school performance in New York City costs 7 percent more than achieving it in the average district statewide. Yet when they calculate education costs using the "direct" performance measures (test scores, graduation rates, and Regents Diplomas), they find that New York City's costs are 262 percent more than average. Such widely divergent results discourage confidence in either measure. Statistical sophistication has, in this case, outpaced our ability to explain the relationship between spending and school performance.
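The arithmetic behind the comparison is worth making explicit: "X percent more than average" corresponds to a cost index of 1 + X/100 times the statewide average. A trivial sketch (the figures are the two estimates just cited; everything else is illustrative):

```python
def cost_multiple(percent_more_than_average):
    """Convert 'X percent more than the average' into a multiple of the average cost."""
    return 1 + percent_more_than_average / 100

# Duncombe and Yinger's two estimates for New York City:
indirect = cost_multiple(7)    # the "indirect" measure: 1.07 times the statewide average
direct = cost_multiple(262)    # the "direct" measure: 3.62 times the statewide average
print(indirect, direct)
```

Expressed as multiples of the average, the two methods differ by more than a factor of three for the same district.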
Reschovsky and Imazeki also use a statistical methodology to estimate the cost of adequacy in Wisconsin districts. They measure outcomes by 10th-grade test scores, controlling for the 8th-grade scores of the same students. By this means, they attempt to isolate the "value added" by school districts, reasoning that the 8th-grade score may reflect students' social capital and instruction in other locations, as well as the effectiveness of instruction in the present district. "Adequate" outcomes are defined as the average 10th-grade value added throughout the state; Reschovsky and Imazeki conclude that the cost of achieving this adequacy, before adjusting for student need and geographic differences, is $6,331 per pupil. They truncate their cost index, however, reducing the adjustment called for in the case of Milwaukee, because they believe that "no district could have costs that were more than twice the average." (Recall that the Duncombe-Yinger direct approach yielded a result for New York City that was 3.5 times the state average.) Reschovsky and Imazeki also find that education costs in the Milwaukee suburbs were 11 percent below the state average, while Chambers found that teacher costs in these suburbs were 17 percent above average. Such substantial differences between models of seemingly similar plausibility suggest that statistical methods cannot soon be expected to command authority as a practical basis for adjusting state education expenditures.2
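The two mechanical ideas in this approach, a value-added measure (10th-grade scores net of what 8th-grade scores would predict) and a cost index capped at twice the state average, can be sketched in a few lines. The district data below are invented for illustration only; the actual Reschovsky-Imazeki model includes student-need and geographic controls not shown here:

```python
# Illustrative sketch with invented data. Value added is the residual from
# regressing 10th-grade scores on the same students' 8th-grade scores; the
# cost index (1.0 = state average) is capped at twice the average.

# Hypothetical districts: (8th-grade score, 10th-grade score, raw cost index)
districts = {
    "A": (70.0, 74.0, 0.95),
    "B": (60.0, 66.0, 1.10),
    "C": (80.0, 81.0, 2.60),  # raw index above the 2x cap
}

# Ordinary least squares fit of 10th-grade scores on 8th-grade scores
xs = [d[0] for d in districts.values()]
ys = [d[1] for d in districts.values()]
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
intercept = my - slope * mx

def value_added(g8, g10):
    """Residual: actual 10th-grade score minus the score predicted from 8th grade."""
    return g10 - (intercept + slope * g8)

def truncated_cost_index(raw, cap=2.0):
    """Cap the cost index at twice the state average, as Reschovsky and Imazeki do."""
    return min(raw, cap)

for name, (g8, g10, raw) in districts.items():
    print(name, round(value_added(g8, g10), 2), truncated_cost_index(raw))
```

Note that the truncation is a judgment imposed on the model, not a result of it: district C's estimated cost index of 2.60 is simply overwritten with 2.0, which is exactly the kind of discretionary adjustment the text questions.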
Even if it were possible to quantify all outcomes, such models could at best tell us what resource levels have generally been associated with acceptable achievement (with inefficient practices removed, to the extent known), not what resource levels would be necessary, if used efficiently, to produce that achievement. To reach this level of analysis, the statistical controls would have to include alternative pedagogies and curricula, something beyond our current sophistication. If the policy goal is for a legislature to adopt (or a court to mandate) the minimum level of resources necessary to achieve acceptable outcomes, this distinction is crucial.