systems, and states. This suggests that much of the variation in these standardized test scores is caused by experiences outside schools, especially experiences that vary with social class and environmental opportunities. This in turn is not surprising since the tests are designed to be independent of particular curriculum experiences.

So, when Eric Hanushek reports there is no systematic evidence that resource differences among schools have a large effect on student achievement, we should not be shocked given his outcome measures. We should, however, carefully scrutinize the policy implications that might be drawn from these results.

We might also think about improving the current model by changing the dependent variable as well as the independent variables. It seems much more reasonable to have an assessment device that is designed to measure what is being taught. That is what happens in many countries. That is what happens in college classes. Examinations are designed to assess student learning in the course taught, not in some generic course.

Imagine, then, a new scenario for a K-12 education production function. We carry out our new study in a state that has challenging content and performance standards. For our dependent variable we use student performance on a state assessment that is aligned with challenging state standards. For our independent variables we use measures of the resources that our theory of schooling indicates are critical to providing students with the opportunity to learn to the challenging new standards. Our study could analyze, for example, the relationship between student achievement and variations in teachers' knowledge and the quality of their teaching of the substance and skills in the content standards. It could explore the impact not of having computers in a classroom but of using computers and software to support students in their efforts to achieve the state's standards. And it could examine the relationship between students' control over their learning and their actual achievement.
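The study design sketched above can be summarized loosely as an aligned production function. The following equation is an illustrative sketch, not a specification from the original; the variable names are assumptions introduced here for clarity:

```latex
A_{is} = \beta_0 + \beta_1 T_{is} + \beta_2 C_{is} + \beta_3 L_{is}
         + \gamma' X_{is} + \varepsilon_{is}
```

where \(A_{is}\) is student \(i\)'s score in school \(s\) on the standards-aligned state assessment, \(T_{is}\) measures the teacher's knowledge and quality of teaching of the substance and skills in the content standards, \(C_{is}\) measures standards-focused use of computers and software (not mere presence of computers), \(L_{is}\) measures students' control over their own learning, \(X_{is}\) is a vector of student background controls, and \(\varepsilon_{is}\) captures unmeasured influences. The point of the alignment is that the regressors now measure opportunity to learn the very content the dependent variable tests.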

Our objective in examining production functions would be subtly different from the old version. We would no longer be interested in the global question "do variations in school resources and practices influence student achievement?" Of course they do! We know that from many other studies. Few students learn French or calculus or plate tectonics unless they are taught these subjects in school. Few students learn much science in their elementary years if their teachers lack the expertise to teach it effectively. The new production function should clearly show such effects.

Our interest would be in understanding how, to what extent, and under what circumstances specific resources and practices in classrooms relate to student achievement. One of the great advantages of an aligned system should be the efficiency that follows from having all of one's ducks in a row. The incentives would finally be right: hard work by students and well-trained teachers would result in higher assessment scores. In addition to improving the quality of teaching and enhancing students' opportunities for learning, an

The National Academies | 500 Fifth St. N.W. | Washington, D.C. 20001
Copyright © National Academy of Sciences. All rights reserved.