closely to the state policy and reported that it affected their teaching. But when he looked inside their classrooms, only 4 had fundamentally changed the kinds of tasks students were expected to perform and the nature of classroom discourse (the study examined mathematics teaching and learning). In 11 classrooms, there was no indication that the tasks and discourse had changed at all (Spillane, 1997).
In large part, Spillane found, the discrepancy reflected variation in teachers' understanding of the test's instructional goals. For example, teachers saw that the test put a premium on problem solving, but for some, that meant simply adding a word problem at the end of each lesson. This variation in understanding extended to principals and district office staff as well.
A separate study of 22 classrooms in 6 states found a similar pattern (David, 1997). In examining teachers' responses to new assessments, David distinguishes between “imitation” and “improvement.” Most teachers imitated the form of the new assessment, she found, often by adding open-ended questions to their classroom assessments or assigning more writing. But these responses produced limited results. By contrast, she noted, some teachers went beyond imitation and changed their practice fundamentally.
Districts' capacity to monitor the conditions of instruction in schools is limited, and few districts have been shown to be effective at analyzing such conditions and using the data to improve instruction. The research base on such efforts is thin, in large part because there are so few examples to study.
The examples do begin to suggest, however, that examining instructional practice alongside performance data, and using that information to develop a professional development strategy, can help teachers improve their instruction and, in turn, improve student performance.