2
Background and Problem Statement

Peggy Carr, associate commissioner for assessment at the National Center for Education Statistics, and Jim Carlson, assistant director for psychometrics at the National Assessment Governing Board (NAGB), made the opening presentations, providing historical context about the inclusion of students with special needs in NAEP and laying out what they hoped to learn from the day’s interactions. Carlson began by describing a series of resolutions through which NAGB established a plan for conducting research on the effects of including students with disabilities and English-language learners in the assessment. In these resolutions, the Board articulated the dual priorities of including students who can “meaningfully take part” in the assessment while also maintaining the integrity of the trend data that are considered a key component of NAEP. According to Carr, the resolution and research plan provided “a bridge to the future,” in which NAEP would be more inclusive, and “a bridge to the past,” in which NAEP would continue to provide meaningful trend information. A chief concern was ensuring that new policies and procedures would not interfere with the ability to report trends in the important subjects, both for the nation and for the states.

In her presentation, Carr described the research plan implemented with the 1996 mathematics assessment. This plan called for data to be collected for three samples, referred to as S1, S2, and S3. The S1 sample maintained the status quo, in which administration procedures were handled in the same way as in the early 1990s. At that time, a student with an






individual education plan (IEP) could be excluded from the assessment if he or she was mainstreamed less than 50 percent of the time in academic subjects or was judged to be incapable of participating meaningfully in the assessment (U.S. DoEd, 1994). Any student identified by school officials as “limited English proficient” could be excluded if he or she was “a native speaker of language other than English,” had been enrolled “in an English-speaking school for less than two years,” and was “judged to be incapable of taking part in the assessment” (U.S. DoEd, 1994, p. 126).

In the S2 sample, revisions were made to the criteria given to schools for determining whether to include students with special needs, but no accommodations or adaptations were offered. For S2, students with IEPs were to be included unless the school’s IEP team determined that the student could not participate; or the student’s cognitive functioning was so severely impaired that she or he could not participate; or the student’s IEP required that the student be tested with an accommodation or adaptation, and the student could not demonstrate his or her knowledge without that accommodation (Mazzeo, Carlson, Voelkl, and Lutkus, 2000, p. 10). Students designated as limited English proficient by school officials and receiving academic instruction in English for three years or more were to be included in the assessment. “[Those] receiving instruction in English for less than three years were to be included unless school staff judged them to be incapable of participating in the assessment in English” (Mazzeo, Carlson, Voelkl, and Lutkus, 2000, p. 10).

In S3, the revised inclusion criteria were used, and accommodations were made available for students with disabilities and English-language learners.
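The S2 inclusion criteria above amount to a decision rule: include each student unless one of the stated exclusion conditions holds. As an illustration only, that logic can be sketched in code. Every field name below is hypothetical, and in practice these judgments were made by school staff and IEP teams, not by an algorithm:

```python
# Illustrative sketch of the S2 inclusion rules described above.
# All field names are hypothetical; actual determinations were made
# by school IEP teams and staff, not by an algorithm.

from dataclasses import dataclass

@dataclass
class Student:
    has_iep: bool = False
    iep_team_excludes: bool = False           # IEP team judged student unable to participate
    severe_cognitive_impairment: bool = False
    needs_accommodation_to_show_knowledge: bool = False
    limited_english_proficient: bool = False
    years_instruction_in_english: float = 0.0
    judged_incapable_in_english: bool = False

def include_under_s2(s: Student) -> bool:
    """Apply the S2 criteria: include unless an exclusion condition holds.
    (Under S2, no accommodations were offered.)"""
    if s.has_iep and (s.iep_team_excludes
                      or s.severe_cognitive_impairment
                      or s.needs_accommodation_to_show_knowledge):
        return False
    if (s.limited_english_proficient
            and s.years_instruction_in_english < 3
            and s.judged_incapable_in_english):
        return False
    return True

# A student with an IEP whose team raises no objection is included:
print(include_under_s2(Student(has_iep=True)))  # True
```

Note that under this rule an LEP student with three or more years of academic instruction in English is included regardless of staff judgment, mirroring the criteria quoted above.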
These students were allowed to take the test with the accommodations that they routinely received in their state or district assessments, as long as the accommodations were approved for use on NAEP. NAEP-approved accommodations for the 1996 administrations included extended time; individual or small-group administration; a large-print version of the test; transcription, oral reading, or signing of directions; and use of bilingual dictionaries in mathematics. Final decisions about which accommodations to provide to students in S3 were made by school authorities. The criteria for the three samples are summarized in Box 2–1.

BOX 2–1 Inclusion and Accommodation Criteria Used in NAEP Research Samples

S1: Students with special needs who required accommodations were not included in the assessment.
S2: Students with special needs were included, but no accommodations were provided.
S3: Students with special needs were included, and accommodations were provided.

Analyses of the 1996 data revealed no differences in participation rates between the S1 and S2 samples. Thus, the S1 criteria were discontinued, and subsequent research was based on samples of schools that applied either the S2 or the S3 criteria. The research continued with the 1998 national and state NAEP reading assessments and the 2000 assessments (mathematics and science at the national level in grades four, eight, and twelve and at the state level in grades four and eight; reading at the national level in grade four). The accommodations permitted were similar to those allowed in 1996, and a bilingual booklet was offered in mathematics at grades four and eight. Reading aloud passages or questions on the reading assessment was explicitly prohibited. Alternative-language versions and bilingual glossaries were not permitted on the reading or science assessments. Findings from studies in 1996, 1998, and 2000 are described in detail in Chapter 6.

Based on the research findings and other considerations, NAGB passed the following resolution in 2001 (NAGB, 2001, p. 43):

For the 2002 NAEP, the entire NAEP sample, for both national and state-level assessments, will be selected and treated according to the procedures followed in the S3 samples of 1998 and 2000. All students identified by their school staff as students with disabilities (SD) or limited-English proficient (LEP) and needing accommodations will be permitted to use the accommodations they receive under their usual classroom testing procedures, except those accommodations deemed to alter the construct being tested. (The most prominent of these is reading the reading assessment items aloud, or offering linguistic adaptations of the reading items, such as translations.) No oversampling of SD or LEP students is planned. In reading, trends will compare data from 2002 to the S3 sample for 1998… The S2 sample, in which all students were tested under standard conditions only, will be discontinued.

Through this policy NAGB adopted the criteria applied in the S3 sample as the official procedures (i.e., permitted accommodations will be provided to students who need them).

There are a number of unanswered questions about the comparability of scores from standard and nonstandard (accommodated) administrations and about the effects of changes in inclusion policies on NAEP’s trend information. Although an accommodation is intended to correct for a disability, there is a risk that it over- or undercorrects in a way that further distorts a student’s performance and undermines validity. Thus, it cannot simply be assumed that scores from standard and nonstandard administrations are comparable. Adopting the procedures used for the S3 sample represents a significant change in NAEP’s inclusion policy, since students with special needs who required accommodations were not included in the pre-1996 assessments. The change in inclusion policy could mean that results from the pre-1996 assessments are not comparable to results based on the inclusion policy used for S3 (National Institute of Statistical Sciences, 2000). One of NAEP’s chief objectives is to provide information about trends in U.S. students’ educational achievement, but changes in policy regarding who participates in NAEP and how the test is administered can affect the comparability of trend data.

Carlson and Carr both emphasized that they hoped the day’s discussions would provide them with a better understanding of the effects of accommodations on test performance and assist them as they work with others to formulate and refine NAEP’s reporting policies.