
1

Introduction

Since 1969, the National Assessment of Educational Progress (NAEP) has been assessing students across the country (U.S. Department of Education, 1999). Since its inception, NAEP has summarized academic performance for the nation as a whole and, beginning in 1990, for the individual states. Reporting results below the state level was prohibited until 1994. The Improving America's Schools Act of 1994, which reauthorized NAEP in that year, removed the language prohibiting below-state reporting and set the stage for consideration of reporting district-level and school-level results.

NAEP's policy-making body believes “below state results could provide an important source of data for informing a variety of education reform efforts at the local level” (National Assessment Governing Board, 1995a). Some districts have expressed interest in district-level NAEP with an eye toward augmenting their current assessments, filling in gaps for content areas not currently tested, or even substituting NAEP instruments for measures that have been locally developed or purchased (National Research Council, 1999c). NAEP's sponsors have also suggested that district-level reports could increase districts' motivation to participate in the assessment by providing them with feedback on performance in return for their participation.

At the same time, NAEP's sponsors have taken a critical look at their reporting methods with the objective of improving the usefulness and interpretability of reports (National Assessment Governing Board, 1996; National Assessment Governing Board, 1999a). NAEP's sponsors have attempted over the years to produce reports of achievement results that are more usable by lay audiences and that contain more easily interpreted displays of information. NAEP has experimented with a variety of approaches including, for example, reports that use a newspaper format, brochures on specific topical areas, and reports with easier-to-read graphs and tables (U.S. Department of Education, 1999). NAEP's sponsors have also funded studies to examine the ways in which reports are used by policy makers, educators, the press, and others, and to identify misuses and misinterpretations of reported data (Hambleton & Slater, 1996; Jaeger, 1995; Hambleton & Meara, 2000). In addition, NAEP has attempted to design and introduce innovative research approaches to help with the interpretation of the data.

In this vein, advisers to the National Assessment Governing Board (NAGB) have proposed the use of “market-basket” reporting methods as another means to accomplish simpler reporting that may be more useful to NAEP's audiences (Forsyth, Hambleton, Linn, Mislevy, & Yen, 1996). Like the Consumer Price Index (CPI), which presents information on inflation by measuring price changes on a “market basket” of goods and services, a market-basket NAEP report would present information on student achievement based on a “market basket” of knowledge and skills in a content area. Under one scenario, for example, NAEP would report results as percentages of items correct on sets of representative items, an approach that could lead to easier-to-understand reports of student achievement.
As part of its evaluation of NAEP, the National Research Council's Committee on the Evaluation of National and State Assessments of Educational Progress stressed the need for clear and comprehensible reporting metrics that would simplify the interpretation of results, and it encouraged exploration of market-basket reporting for NAEP (National Research Council, 1999b). Market-basket reporting might be expected to provide an easier-to-understand picture of students' academic accomplishments.

In pursuit of the goals of improved reporting and use of test results, NAEP's sponsors were interested in exploring the feasibility and potential impact of both district-level and market-basket reporting practices, as well as the possible connections between them. Accordingly, at the request of the U.S. Department of Education, the National Research Council (NRC) established the Committee on NAEP Reporting Practices to study these reporting practices. Because the two topics are intertwined, the committee examined them in tandem.

The committee developed two sets of study questions to address issues associated with district-level and market-basket reporting. With regard to district-level reporting, the committee examined the following:

1. What are the proposed characteristics of a district-level NAEP?
2. If implemented, what information needs might it serve?
3. What is the degree of interest in participating in district-level NAEP, and what factors would influence that interest?
4. Would district-level NAEP pose any threats to the validity of inferences from national and state NAEP?
5. What are the implications of district-level reporting for other state and local assessment programs?

With respect to market-basket reporting, the committee investigated the following:

1. What is market-basket reporting?
2. How might reports of market-basket results be presented to NAEP's audiences? Are there prototypes?
3. What information needs might be served by market-basket reporting for NAEP?
4. Are market-basket results likely to be relevant and accurate enough to meet these needs?
5. Would market-basket reporting pose any threats to the validity of inferences from national and state NAEP? What types of inferences would be valid?
6. What are the implications of market-basket reporting for other national, state, and local assessment programs? What role might an NAEP short form play?

In addressing these issues, the committee considered the future context in which NAEP may be operating. For instance, the National Center for Education Statistics (NCES) set a priority to have all states sign up for NAEP and secured participation agreements with 48 states for the assessment in 2000. For numerous reasons, however, several states were unable to take part in the assessment. In two states, one large district refused to participate, making it impossible for those states to meet participation criteria.
Similarly, other states were unable to secure the participation of enough schools to meet the threshold criteria. In fact, even some states that enacted legislation mandating state NAEP participation were unable to garner the necessary interest to meet the inclusion criteria (National Center for Education Statistics, 2000a).

Tied to the increasing difficulty in securing participation for NAEP is the proliferation of assessment programs in general. Because of state education reforms and the requirements of federal education legislation (e.g., the Improving America's Schools Act (IASA), the Individuals with Disabilities Education Act (IDEA), and the Carl Perkins Act), state assessment programs have expanded greatly in both scope and complexity in the past decade (Council of Chief State School Officers, 2000). Similarly, many local school districts, particularly the large urban districts so important to state NAEP sampling strategies, have expanded the use of assessment instruments in their own testing programs (National Research Council, 1999c).

A further potential factor in the changing context of NAEP is the proposal to make NAEP a more “high-stakes” measure by connecting rewards and/or sanctions to states' performance. For example, in its fiscal 2001 budget, the Clinton administration proposed a “Recognition and Reward Program” that would provide “high performance bonuses to states that make exemplary progress in improving student performance and closing the achievement gap between high- and low-performing groups of students” (National Center for Education Statistics, 2000b:2). While it is impossible, at the time of this writing, to predict whether this proposal will be enacted, it remains a distinct possibility.
STUDY APPROACH

To gather information on the issues surrounding market-basket and district-level reporting, the committee reviewed the literature on these two topics, invited representatives from NAEP's sponsoring agencies (NAGB and NCES) to attend meetings and present information, attended NAGB board and subcommittee meetings, held a discussion during the Large-Scale Assessment Conference sponsored by the Council of Chief State School Officers (CCSSO), and conducted two multiday workshops specifically on these two topics. The workshops addressed key issues from a variety of perspectives. The purpose of the NRC's Workshop on District-Level Reporting for NAEP was to explore with various stakeholders their interest in and perceptions regarding the likely impacts of district-level reporting. Similarly, the purpose of the NRC's Workshop on Market-Basket Reporting was to explore with various stakeholders their interest in and perceptions regarding the desirability, feasibility, and potential impact of market-basket reporting for NAEP. Chapter 3 provides additional details about the workshop on district-level reporting; additional information about the workshop on market-basket reporting appears in Chapters 4 and 5.

WHAT IS DISTRICT-LEVEL REPORTING?

When NAEP was first implemented, results were reported only for the nation as a whole. Following congressional authorization in 1988, the Trial State Assessment was initiated, which allowed reporting of results for participating states, although below-state reporting was still prohibited. The 1994 reauthorization of NAEP removed this prohibition, but the law neither called for district- or school-level reporting nor outlined how such practices would operate. While NAGB and NCES have been exploring the issues associated with providing district-level results, neither the policies for district-level reporting nor the details needed to guide program implementation are yet in place.

WHAT IS MARKET-BASKET REPORTING?

Market-basket reporting was first discussed in connection with NAEP's redesign in 1996 (National Assessment Governing Board, 1996) and was again included in the most recent redesign effort, NAEP Design 2000-2010 (National Assessment Governing Board, 1999a). The market-basket concept is based on the idea that a limited set of items can represent some larger construct. The most common example of a market basket is the CPI, produced by the Bureau of Labor Statistics. The CPI tracks the price changes urban consumers pay in purchasing a locally representative set of consumer goods and services. Because the CPI measures monthly cost differentials for the products in its market basket, it is frequently used as an indicator of change in the U.S. economy.
The CPI market-basket concept resonates with the general public; it invokes the tangible image of a shopper at the market filling a basket with a set of goods regarded as broadly reflecting consumer spending patterns (Bureau of Labor Statistics, 1999). The general idea of a NAEP market basket draws on a similar image: a collection of test questions representative of some larger content domain, and an easily understood index to summarize performance on those items.
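The fixed-basket index idea behind the CPI can be sketched in a few lines. The goods, quantities, and prices below are invented purely for illustration; the actual CPI uses thousands of priced items and survey-based expenditure weights:

```python
# A toy fixed-basket price index in the spirit of the CPI: track the cost
# of the same basket of goods over time, relative to a base period.
# All goods, quantities, and prices here are hypothetical.
basket = {"bread": 2.0, "milk": 3.0, "fuel": 10.0}         # weekly quantities
prices_base = {"bread": 2.50, "milk": 1.20, "fuel": 1.50}  # base-period prices
prices_now = {"bread": 2.75, "milk": 1.30, "fuel": 1.80}   # current prices

def basket_cost(prices, quantities):
    """Total cost of buying the fixed basket at the given prices."""
    return sum(prices[good] * qty for good, qty in quantities.items())

# Index = 100 in the base period; values above 100 indicate price inflation
# for this particular basket.
index = 100 * basket_cost(prices_now, basket) / basket_cost(prices_base, basket)
print(f"index relative to base period: {index:.1f}")
```

The key property the NAEP analogy borrows is that the basket itself stays fixed, so changes in the index reflect changes in what is being measured (prices, or student performance) rather than changes in the measuring instrument.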

There are two components of the NAEP market basket: the collection of items and the summary index. The collection of items could be large (longer than a typical test form given to a student) or small (small enough to be considered an administrable test form). The summary index currently under consideration is the percent correct score (Mazzeo, 2000).

There are a number of possible configurations for a NAEP market basket; we discuss several in Chapter 4 of this report. To acquaint the reader with the basic ideas and issues associated with market-basket reporting, Figure 1-1 diagrams the components of the market basket and portrays two alternative configurations.

Under the first scenario, a large collection of items would be assembled and released publicly. To adequately cover the breadth of the content domain, the collection would be much larger than any one of the forms used in the test and probably too long to administer to a single student at one sitting. This presents some challenges for the calculation of percent correct scores: because no student would take all of the items, complex statistical procedures would be needed to estimate scores. This alternative appears in Figure 1-1 as “scenario one.”

A second scenario involves producing multiple “administrable” test forms (called “short forms”). Students would take an entire test form, and scores could be based on students' performance on the entire test in the manner usually employed by testing programs. Although this would simplify the calculation of percent correct scores, the collection of items would be much smaller and therefore less likely to adequately represent the content domain. This scenario also calls for assembling multiple test forms.
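The estimation challenge under scenario one can be made concrete with a deliberately simplified sketch. Every number here is hypothetical (the basket size, form length, item difficulties, and the naive item-by-item pooling estimator); operational NAEP relies on item response theory models and complex sampling weights rather than anything this simple:

```python
import random

random.seed(1)

# A hypothetical 120-item market basket; each item has its own difficulty,
# expressed as the probability that a typical student answers it correctly.
ITEMS = [random.uniform(0.2, 0.95) for _ in range(120)]
FORM_SIZE = 40      # roughly what one student can take in a single sitting
N_STUDENTS = 2000

# Scenario one: each student sees only a random 40-item subset of the
# basket.  No single student takes all 120 items, so the basket-level
# percent correct must be estimated by pooling responses item by item.
hits = [0] * len(ITEMS)
tries = [0] * len(ITEMS)
for _ in range(N_STUDENTS):
    form = random.sample(range(len(ITEMS)), FORM_SIZE)
    for i in form:
        tries[i] += 1
        if random.random() < ITEMS[i]:
            hits[i] += 1

# Average the observed per-item correct rates to estimate the percent
# correct a student group would achieve on the full basket.
per_item_rates = [h / t for h, t in zip(hits, tries) if t > 0]
estimated = 100 * sum(per_item_rates) / len(per_item_rates)
true_value = 100 * sum(ITEMS) / len(ITEMS)
print(f"true basket percent correct:   {true_value:.1f}")
print(f"estimate from matrix sampling: {estimated:.1f}")
```

Under scenario two, by contrast, each student's percent correct on a complete short form is directly observable, at the cost of covering far less of the content domain per form.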
Some forms would be released to the public, while others would remain secure, perhaps for use by state and local assessment programs, and possibly to be embedded in or administered in conjunction with existing tests. This alternative appears in Figure 1-1 as “scenario two.”

FIGURE 1-1 Components of the NAEP Market Basket

ORGANIZATION OF THIS REPORT

This report begins with an overview of NAEP in Chapter 2. Chapter 3 is devoted to district-level reporting, and market-basket reporting is covered in Chapter 4. Because of the analogies that have been drawn between market-basket reporting and the CPI, we discuss the similarities and differences in Chapter 4; full details about the construction and reporting of the CPI appear in Appendix A. The short form, which would be created under scenario two for the market basket, is addressed in Chapter 5. We believe that the creation and administration of a short-form NAEP would alter the fundamental purposes of NAEP, and we take up these issues of a “changed NAEP” in that chapter.

NAEP's sponsors do not yet have prototypical models of either market-basket reports or district-level reports. During the course of our study, we reviewed a preliminary example of a market-basket report and a report

provided to one district, but neither was presented to us as a prototypic market-basket or district-level report. To get a better sense of the design and contents of such reports, we reviewed other current NAEP reports. In Chapter 6, we discuss ways NAEP's sponsors might formulate reports to ensure their usefulness, ease of understanding, and portrayal of meaningful information. A detailed example of an application of these procedures appears in Appendix B.

Both market-basket and district-level reporting could affect the internal configuration of the NAEP program, because they pose challenges for sampling, scoring, and the number and types of reports to be prepared. For local school systems, reporting district-level results brings NAEP to a more intimate level of analysis. It is not difficult to imagine district-level results being included in accountability systems or put to other high-stakes uses, especially given the rewards that have been proposed (National Center for Education Statistics, 2000b). In Chapter 7, we present the likely implications of the proposed reporting practices for NAEP and for local educational systems.