Learning and Instruction: A SERP Research Agenda
and Sellers, 1997), and cognitively guided instruction in problem solving and conceptual understanding (Carpenter et al., 1996) both reported positive effects. With support from the National Science Foundation (NSF), several full-scale elementary mathematics curricula with embedded assessments have been developed, directed at supporting deeper conceptual understanding of mathematics and building on children’s informal mathematical knowledge to provide a more flexible foundation for problem solving. Three curricula developed separately take somewhat different approaches to achieving those goals: the Everyday Mathematics curriculum, the Investigations in Number, Data and Space curriculum, and the Math Trailblazers curriculum (Education Development Center, 2001).
All three curricula show positive gains in student achievement in implementation studies, in which the developers collect data on program effects. While such findings are encouraging, they must be viewed with a critical eye, both because those providing the assessment have a vested interest in the outcome and because the methodologies employed generally do not allow the results to be attributed directly to the program.1 Third-party evaluations using comparison groups have been done in some cases, but none has involved random assignment, the condition that maximizes confidence in attributing results to the intervention. Nor do these studies measure either the fidelity of implementation of the reform curriculum in the experimental group or the specific program features of the alternative used with the control group (see, for example, Fuson et al., 2000).
How students taught with these curricula compare with each other in mathematical proficiency and, perhaps more importantly, how they compare with students taught with curricula that devote more instructional time to strengthening formal procedural knowledge have not been carefully studied. From the perspective of practice, these are important omissions. To make informed curriculum decisions, teachers and school
1. Implementation studies generally do not involve controlled experimentation that allows the results of one intervention to be compared with those of another. It is also widely understood that the introduction of a new program can have positive effects not because of program content but simply because something new is being tried.