than the knowledge and skills schools are focusing on, do not respond immediately to instructional changes, no matter how effective. So, to raise scores quickly, schools employ test-preparation strategies, and achievement does not increase. If, however, schools used instructionally sensitive tests, they could raise both scores and achievement by improving instruction.
Another factor that contributes to inappropriate test-preparation strategies is the use of a single test as the basis for rewards and sanctions. Although the Title I statute calls for the use of multiple measures of student achievement, states and districts at this point continue to rely on one test in designing accountability systems. Schools get the message that they must raise scores on that test to earn rewards or avoid sanctions. Using multiple measures could encourage schools to focus less on any single instrument and more on improving achievement generally.
In an effort to broaden the measure of achievement, some states include additional factors for accountability. Texas, for example, includes graduation rates and attendance rates, along with test scores, in determining ratings for schools. But few schools have earned low ratings because of these factors; as a result, schools continue to focus their attention on the tests (Gordon and Reese, 1997).
The way that states calculate performance also affects schools' responses to accountability. In some states, schools or districts must reach a threshold level of performance to earn rewards; that is, a certain percentage of students must attain a passing score or reach a particular level of proficiency. In these states, some schools reason that the most efficient way of meeting those targets is to focus on students who are just below the bar and provide them with intensive test preparation.
As Willms (1998) found, this strategy may be shortsighted. Examining data from British Columbia, he found that schools that improved performance overall did so by raising the performance of low-performing students. This occurred, he notes, because high-performing students tend to do well in any circumstances; raising the floor also raises overall performance.
Other states have tried to encourage schools to focus their efforts on low-performing students by emphasizing improvement in the distribution of performance: reducing the number of low performers as well as raising the average. In these states, test preparation for a few students will not work; only improving instruction across the board will earn schools rewards.
Test preparation alone, moreover, is effective only if the objective is to reach a fixed level of performance rather than to improve performance continually. States such as Kentucky, where schools must reach new performance goals every two years, have found that schools can raise performance in the early years by focusing on the test; sustaining gains requires instructional improvement, which in turn requires support for professional development.