with highly specified state curricula and an extensive menu of rewards and sanctions.47
Of course, this relationship does not imply that simply easing statewide test policies would improve achievement.
To give teachers, students, parents, and other caregivers sufficient time to prepare for high-stakes assessments, states typically administer them for several years before the consequences take effect. During these trial runs, the failure rates are sometimes alarmingly high. In Arizona, for example, only 1 in 10 sophomores passed the mathematics test first given in the spring of 1999. That same spring, only 7% of Virginia schools were able to achieve a 70% passing rate, which was to become the condition for accreditation in 2007. In response to these results, some states have begun to relax their expectations, reconsider the test, or withdraw it altogether. Wisconsin, for example, yielded to pressure from parents and withdrew its high school graduation test. Massachusetts and New York set lower passing scores for their exams.48
Most states report student results on their assessments by setting so-called cut scores to define categories with labels such as advanced, proficient, needs improvement, and failing,49 terms similar to those used in NAEP: advanced, proficient, and basic. When results on state assessments are compared with the same states' results on NAEP, the proportion of students reaching the proficient level is often higher on the state assessment.50 Some researchers, politicians, and policy makers have concluded from this discrepancy that most state tests do not reflect sufficiently high expectations.51 Others argue instead that minimum competence and high expectations are different goals that cannot be measured by the same assessment, and certainly not with the same cut scores. Thus, the results appear discrepant because the same category labels are used to describe performance on assessments with very different goals.
Many states and school districts use standardized tests52 (which may or may not coincide with the state assessments discussed above) to assess how their students are achieving. Commercially published standardized mathematics achievement tests vary considerably in the topics they cover and in the emphasis given to each topic at each grade level.53 The tests frequently are not aligned with the teaching materials used in a district or even with the goals of the district. This misalignment further dilutes teaching efforts: to cover the major topics emphasized on a particular standardized test, teachers must add still more items to an already long list of goals.
Standardized tests can have other negative consequences. The word standardized is likely to carry certain connotations: that such a test is more objective than other instruments, that it contains mostly grade-level items, that it