areas (Oakes et al., 1990; Shepard, 1991; Glaser and Silver, 1994). Publishers of norm-referenced tests study state curricular guidelines and existing textbooks, and establish test specifications based on the content they identify. In some instances, publishers customize tests according to the criteria of a particular state or district. Generally, such tests are not released to educators or the public; their confidential nature often makes it difficult to analyze what the tests actually measure.
Over half of the states and some districts use some form of “criterion-referenced” assessments (CCSSO, 1998). Such assessments attempt to establish whether a student has met a particular performance level by estimating the extent to which each student has learned certain content, regardless of how others might have performed (NRC, 1999d). A number of states and districts have attempted to use portfolios to document student learning over time, but have encountered substantial problems due to scoring difficulties and costs (Koretz, 1998; Stecher, 1998).
In addition to state tests, school districts may use a variety of other tests, which interact with decisions made about curriculum and instruction. Tests that measure what students know overall differ from those designed to measure what students have learned within a particular course or time interval, and the two place different demands on what teachers are expected to teach. The testing conditions and the nature of the content tested may also vary widely from test to test. For example, one test may allow the use of calculators while another may not; one may emphasize mastery of science terms while another emphasizes understanding of science concepts. Some assessment reports disaggregate the data, highlighting changes in performance for students of different ethnicities, socioeconomic backgrounds, or cultures, which can lead to greater focus on students within those groups.
Within two years after high school graduation, nearly 75 percent