1 Introduction
Pages 1-12

The Chapter Skim interface presents what we've algorithmically identified as the most significant chunk of text on each page of the chapter.


From page 1...
... The history of state assessments has been eventful. Education officials have developed their own assessments, have purchased ready-made assessments produced by private companies and nonprofit organizations, and have collaborated to share the task of test development.
From page 2...
... The goal for this workshop, the first of two, was to collect information and perspectives on assessment that could be of use to state officials and others as they review current assessment practices and consider improvements, as Diana Pullin indicated in her opening remarks.
From page 3...
... Chapter 2 explores possibilities for changing the status quo, by changing both standards and assessments with the goal of improving instruction and learning, as well as some of the technical challenges of implementing innovative approaches on a large scale. Chapter 3 examines practical and political lessons from past and current efforts to implement innovative assessment approaches, and Chapter 4 focuses on the political considerations that have affected innovative assessment programs.
From page 4...
... The new requirement compelled states to define coherent content standards by grade level, rather than by grade span, and to articulate more precisely what the performance standards should be for each grade. Testing at every grade has also opened up new possibilities for measuring student achievement over time, such as value-added modeling. The design of states' test forms has also been affected, most notably in the almost complete elimination of matrix sampling designs because they do not provide data on individual students.
From page 5...
... This approach makes it possible to better assess a broad academic domain because the inclusion of more complex item types is likely to yield more generalizable inferences about students' knowledge and skills. The types of test questions commonly used have also changed, Marion suggested, with developers relying far less on complex performance assessments and more on multiple-choice items.
From page 6...
... Interim assessments are often explicitly designed to mimic the format and coverage of state tests and may be used not only to guide instruction, but also to predict student performance on state assessments, to provide data on a program or approach, or to provide diagnostic information about a particular student. Researchers stress the distinction between interim assessments and formative assessments, however, because the latter are typically embedded in instructional activities and may not even be recognizable as assessments by students (Perie, Marion, and Gong, 2007).
From page 7...
... Marion noted that the limited research available provides little guidance for developing specifications for interim assessments or for support and training that would help teachers use them to improve student learning. There is tremendous variability in the assessments used in this way, and there is essentially no oversight of their quality, he noted.
From page 8...
... A number of other factors help explain recent changes in the system, Marion suggested. NCLB required rapid results, and the "adequate yearly progress" formula put a premium on a "head-counting" methodology (that is, measuring how many students meet a particular benchmark by a particular time, rather than considering broader questions about how well students are learning).
From page 9...
... Test designers have paid attention to the challenges of specifying constructs to be measured more explicitly and removing construct-irrelevant variance from test items (for example, by reducing the reading burden in tests of mathematics so that the measure of students' mathematics skills will not be distorted by reading disabilities).
From page 10...
... Improved Reporting

The combination of stricter reporting requirements under NCLB and improved technology has led states and districts to pay more attention to their reporting systems since 2002. Some have made marked improvements in presenting data in ways that are easy for users to understand and use to make effective decisions.

Weaknesses

Greater Reliance on Multiple-Choice Tests

In comparison with the assessments of the 1990s, today's state assessments are less likely to measure complex learning.
From page 11...
... Insufficient Rigor

Current assessments are regarded as insufficiently rigorous. Analyses of their cognitive demand suggest that they focus on the lower levels of cognitive demand defined in standards and that they are less difficult than, for example, NAEP (see, e.g., Ho, 2008; National Research Council, 2008; Cronin et al., 2009).
From page 12...
... Ideally, a user-friendly information management system will focus teachers' attention on the content of assessment results so they can easily make correct inferences (e.g., diagnose student errors) and connect the evidence to specific instructional approaches and strategies.

