ELP tests have long been used by the states to classify ELL students by language proficiency level for instructional program placement and decision-making purposes. Many were developed in response to the legislation and litigation of the 1970s (e.g., the Lau v. Nichols Supreme Court decision and the Equal Educational Opportunities Act of 1974), a time when very few instruments were available to assess ELP (Bauman et al., 2007). For the most part, these tests reflected the then-predominant structural linguistic approach to assessing ELP (Abedi, 2007; Francis and Rivera, 2007). They were designed to assist local educators with placement and exit decisions for English as a second language and bilingual education programs, and they typically focused on the oral domains (listening and speaking), measuring discrete phonological skills and basic interpersonal communication skills rather than academic language skills. As a result, students may have scored well on these tests without having mastered the English language skills needed for learning subject matter in an English-only classroom (Lara et al., 2007).

Before NCLB, there was no attempt to bring uniformity to ELP assessments with regard to what they measured, their technical measurement properties, or how they were used. Moreover, states typically allowed local school districts to choose among a variety of commercial ELP assessments that varied widely in their characteristics, emphases, and technical properties. Reviews of the pre-NCLB ELP tests have revealed that they differed from each other in their theoretical foundations, the type of language assessed, the types of skills assessed (i.e., receptive or expressive skills), the content assessed, the types of assessment tasks, structural characteristics (e.g., administration procedures, grade level ranges, assessment time required), and technical qualities (e.g., reliability and validity) (Del Vecchio and Guerrero, 1995; Zehler et al., 1994). Many of these tests were not based on an operationally defined concept of ELP, included few questions measuring academic language proficiency, were not based on explicitly articulated ELP content standards, and had psychometric flaws and other shortcomings (Abedi, 2007, 2008; Bauman et al., 2007; Del Vecchio and Guerrero, 1995; Lara et al., 2007; Zehler et al., 1994).

Under Titles I and III of NCLB, the U.S. Department of Education (DoEd) required states to make improvements to ELP assessments, specifically (adapted from Abedi, 2008, p. 5):

  1. Develop and implement ELP standards suitable for ELL students learning English as a second language.

  2. Implement a single reliable and valid ELP assessment that is aligned to ELP standards and that annually measures listening, speaking, reading, writing, and comprehension skills.

  3. Align the ELP test with the state’s challenging academic content and student academic achievement standards described in section 1111(b)(1) of NCLB (PL 107-110).

The National Academies | 500 Fifth St. N.W. | Washington, D.C. 20001
Copyright © National Academy of Sciences. All rights reserved.