1 Introduction

The No Child Left Behind Act (NCLB) of 2001, which amended Title III of the Elementary and Secondary Education Act (ESEA), fundamentally changed how the federal government directs federal funding to support programs for children of limited English proficiency (LEP), also known as English language learner (ELL) students.1 In the words of the U.S. Department of Education (DoEd) (2005b, p. 1), the NCLB "reflects a fundamental transformation in the relationship between the federal government and the states with regard to the education of LEP students."

THE POPULATION AND THE NEW LANDSCAPE

At its broadest level, an ELL student is one who has limited proficiency in the English language. Indeed, the ESEA provides a very specific definition of "limited English proficiency": see Box 1-1. The legal definition includes elements that are relatively objective and others that are relatively subjective. The objective criteria cover demographics, background, and ability to meet the state's proficient level of achievement on state assessments; the subjective criteria cover perceived difficulties in sufficient command of the English language to be successful in classrooms in which the instructional language is English and to participate fully in society. As discussed below, and in more depth in Chapter 2, this complex definition poses significant problems in measuring the population.

BOX 1-1
Limited English Proficiency (LEP) Student: Definition

An LEP student is classified as one:

(A) who is aged 3 through 21;
(B) who is enrolled or preparing to enroll in an elementary school or secondary school;
(C) (i) who was not born in the United States or whose native language is a language other than English, and who comes from an environment where a language other than English is dominant; OR
    (ii) (I) who is a Native American or Alaska Native, or a native resident of outlying areas; and (II) who comes from an environment where a language other than English has had a significant impact on the individual's level of English language proficiency; OR
    (iii) who is migratory, whose native language is a language other than English, and who comes from an environment where a language other than English is dominant; AND
(D) whose difficulties speaking, reading, writing, or understanding the English language may be sufficient to deny the individual—
    (i) the ability to meet the State's proficient level of achievement on State assessments described in section 1111(b)(3);
    (ii) the ability to achieve successfully in classrooms where the language of instruction is English; or
    (iii) the opportunity to participate fully in society.

SOURCE: P.L. 107-110, Title IX, Part A, Sec. 9101(25).

The goals set by the NCLB were designed to ensure that LEP students and immigrant children and youths attain English language proficiency (ELP) and, further, that they develop high levels of academic attainment in English and meet the same state academic content and student academic achievement standards as other children (Section 3102(1)).

1 In this report, the committee uses both ELL and LEP students to describe the population of interest. The committee favors the term ELL as more descriptive of the population and the challenges that the population faces, but it recognizes that LEP is defined and used in the ESEA legislation and for Title III reporting purposes. By official definition, LEP students are "ages 3-21, enrolled in elementary or secondary education, born outside of the United States or speaking a language other than English in their homes, and not having sufficient mastery of English to meet state standards and excel in an English-language classroom" (Title III of ESEA).
In requiring that all children, including English language learners, reach high standards by demonstrating proficiency in English language arts by 2014, the law challenged the states to develop an integrated system of ELP standards, assessments, and objectives that are linked to states' academic content and student achievement standards set in accordance with other parts of the ESEA.

The part of the legislation that has most changed the landscape is the language making it clear that states, districts, schools, and teachers must not only teach ELL students to speak, read, and write English, but must also hold them to the same high academic standards as all other students. The goal is for all ELL students to demonstrate proficiency in English language arts and mathematics by 2014. Under the ESEA, states now must annually assess ELL students' progress in becoming English language proficient, and they must include these students in annual assessments in all content areas. The states are being held accountable for demonstrating that ELL students are making progress in learning academic subjects. According to
the accountability provisions of NCLB, states must include the performance of ELL students in the determination of each school's adequate yearly progress reporting.

The explosive growth in the number of ELL students is another factor in the changed landscape. According to the DoEd, ELL students are the fastest growing educational subgroup in the nation. While the overall school population has grown by less than 3 percent in the last 10 years, the number of LEP students has increased by more than 60 percent in that time (U.S. Department of Education, 2008a, p. 8). The increased population of ELL students has had a profound influence on the expansion of ELL programs in some states and many localities, putting pressure on states to increase program resources. Between the 2002-2003 and 2007-2008 school years, the period in which data have been collected systematically on LEP students in grades K-12, the count of LEP students increased almost 25 percent, from 3,643,219 to 4,492,068. In some states, the growth has been profound: for instance, North Carolina and Nevada reported ELL population growth of 500 and 200 percent, respectively, over the past 10 years (Batalova et al., 2005, as cited in Short and Fitzsimmons, 2007). In California, in 2008, about one-fourth of all students and one-third of elementary school students were English language learners (EdSource, 2008, p. 1). This growth has led to a significant increase in programs to support ELL students.

The ELL population is quite heterogeneous, and this heterogeneity poses measurement challenges. For example, more than 400 different languages are reported to be spoken by these students, although nearly 80 percent of LEP students speak Spanish (U.S. Department of Education, 2008a, p. vii). Many students come from families that speak multiple primary languages.
This heterogeneity poses challenges to the local school systems, generating requirements for special curricula and other instructional resources as well as tailored monitoring, tracking, and assessment. Teaching this heterogeneous student population requires highly qualified teachers with specialized training for teaching such learners, and therefore requires teacher professional development for this task.

The sizable ELL population is a particular challenge because students are at varying levels of ELP and may not be sufficiently proficient in English to demonstrate proficiency in academic content areas. Because they have the task of learning English and academic content simultaneously, it is not surprising that, as a group, they do not meet the proficient level in academic subjects: the academic gap between this group and the non-ELL population is considerable. State data show that the percentage of LEP students who scored proficient on a state's language arts and mathematics tests was lower than the state's annual progress goals in nearly two-thirds of the 48 states for which the U.S. Government Accountability Office (2006a, p. 18) obtained data.

Although the NCLB legislation has made a significant contribution to raising awareness about the need to improve ELL students' learning and academic performance, "it has also generated challenges for states to establish a valid accountability system for ELL students" (Wolf et al., 2008, p. 2). NCLB has placed a greater
emphasis on addressing the education needs of LEP students than ever before, and Title III was designed specifically to address these needs (U.S. Department of Education, 2005b, p. 1).

Since the passage of the NCLB, additional emphasis has been placed by the DoEd on the development of common core standards. Under the auspices of the National Governors Association's Center for Best Practices (NGA Center) and the Council of Chief State School Officers, a set of state-led common core standards has been developed for English-language arts and mathematics for grades K-12. These standards were developed in collaboration with a variety of stakeholders, including content experts, states, teachers, school administrators, and parents.2 The adoption of such standards has been promoted by the DoEd's Race to the Top initiative, under which consortia of states have been funded to develop an assessment system aligned to the common core standards3 (U.S. Department of Education, 2010b). This initiative fosters common core standards and is likely to affect programs of education for ELL students, and it may affect state ELP standards and the tests and assessment tools associated with those standards.

ALLOCATING FUNDS FOR TITLE III PROGRAMS

NCLB fundamentally changed the way in which ELL programs are funded. Prior to the Title III amendment, the federal government provided funds for specific projects and services by local educational agencies; since the implementation of this legislation, the funds have been distributed to the state education agencies through a formula grant mechanism. In fiscal 2010, these funds amounted to $750 million, and they could grow to $800 million in fiscal 2011 (U.S. Department of Education, 2010b).
The DoEd allocates the Title III funds through the following formula, after funds are reserved for discretionary grant awards and other purposes as specified in the legislation: 80 percent of the remaining funds are allocated to states on the basis of each state's share of the national estimate of LEP students, and 20 percent are allocated on the basis of each state's share of the national estimate of immigrant children and youth (see Box 1-2). There is a minimum state allocation of $500,000, and states are required to use up to 15 percent of their allotments for school districts with significant increases in school enrollment of immigrant children and youth. See Box 1-3 for the legislative language. The states in turn allocate the federal funds to local education agencies (LEAs) on the basis of the number of LEP students served in those LEAs.

2 For information on these standards, see http://www.corestandards.org/about-the-standards [December 2010].
3 See the department's announcement of winners at http://www.ed.gov/news/press-releases/us-secretary-education-duncan-announces-winners-competition-improve-student-asse [December 2010].
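As a rough arithmetic sketch of the 80/20 split and the minimum-allotment rule described above (the state names, counts, and dollar amounts below are invented, and the sketch deliberately ignores how other states' shares would be prorated when the floor is applied):

```python
def title_iii_allotments(remaining_funds, lep_counts, imm_counts, minimum=500_000):
    """Sketch of the Section 3111(c)(3) split: 80 percent allocated by each
    state's share of LEP students, 20 percent by its share of immigrant
    children and youth, with a $500,000 floor per state.
    Simplification: applying the floor can push the total above
    remaining_funds; the statute's proration of other states is omitted."""
    total_lep = sum(lep_counts.values())
    total_imm = sum(imm_counts.values())
    allotments = {}
    for state in lep_counts:
        share = (0.80 * remaining_funds * lep_counts[state] / total_lep
                 + 0.20 * remaining_funds * imm_counts[state] / total_imm)
        allotments[state] = max(share, minimum)  # minimum allotment rule
    return allotments

# Invented counts for three hypothetical states
lep = {"A": 1_400_000, "B": 200_000, "C": 500}
imm = {"A": 90_000, "B": 8_000, "C": 100}
alloc = title_iii_allotments(700_000_000, lep, imm)
# State C's formula share falls below the floor, so it receives $500,000
```

The point of the sketch is that each state's allotment depends only on its shares of the two national totals, which is why comparability of the underlying counts across states matters so much for equity.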
BOX 1-2
State Allotments

Allotments for use in the department's formula allocations are specified in Section 3111(c)(3) of the Elementary and Secondary Education Act, Title III, Part A, as amended by the No Child Left Behind Act of 2001:

(3) STATE ALLOTMENTS—
IN GENERAL—Except as provided in subparagraph (B), from the amount appropriated under section 3001(a) for each fiscal year that remains after making the reservations under paragraph (1), the Secretary shall allot to each State educational agency having a plan approved under section 3113(c)—
(i) an amount that bears the same relationship to 80 percent of the remainder as the number of limited English proficient children in the State bears to the number of such children in all States; and
(ii) an amount that bears the same relationship to 20 percent of the remainder as the number of immigrant children and youth in the State bears to the number of such children and youth in all States.
MINIMUM ALLOTMENTS—No State educational agency shall receive an allotment under this paragraph that is less than $500,000.

Allowable Data Sources

The Title III legislative mandate left it to the DoEd to determine the source of information to be used to determine the number of LEP and immigrant students in the allocation formula, but it stipulated only two allowable data sources: (1) estimates of the population to be served from the U.S. Census Bureau's American Community Survey (ACS) or (2) counts of the number of students being assessed for ELP by the states.

ACS Estimates

Since 2005, the department has been using only the ACS for estimates of the two groups, the LEP and immigrant populations. ACS estimates are based on a nationwide household survey conducted by the U.S. Census Bureau (for a description and analysis, see National Research Council, 2007). The survey provides information on the U.S.
population at the national, state, county, city, and neighborhood levels and for specific demographic groups, including racial and ethnic groups and children. For the LEP population component of the formula, the estimates used in the allocation formula are based on the responses to questions about the English-speaking ability of school-aged household members as a proxy for the number of LEP children in the state.

BOX 1-3
Legislative Mandate for Estimating the Number of LEP Students

The mandate for estimating state numbers of LEP students for use in the department's formula allocations is found at Section 3111(c)(4) of the Elementary and Secondary Education Act, Title III, Part A, as amended by the No Child Left Behind Act of 2001:

(4) USE OF DATA FOR DETERMINATIONS—
IN GENERAL—In making State allotments under paragraph (3), for the purpose of determining the number of limited English proficient children in a State and in all States, and the number of immigrant children and youth in a State and in all States, for each fiscal year, the Secretary shall use data that will yield the most accurate, up-to-date numbers of such children and youth.
SPECIAL RULE—
FIRST 2 YEARS—In making determinations under subparagraph (A) for the 2 fiscal years following the date of enactment of the No Child Left Behind Act of 2001, the Secretary shall determine the number of limited English proficient children in a State and in all States, and the number of immigrant children and youth in a State and in all States, using data available from the Bureau of the Census or submitted by the States to the Secretary.
SUBSEQUENT YEARS—For subsequent fiscal years, the Secretary shall determine the number of limited English proficient children in a State and in all States, and the number of immigrant children and youth in a State and in all States, using the more accurate of—
the data available from the American Community Survey available from the Department of Commerce; or
the number of children being assessed for English proficiency in a State as required under section 1111(b)(7).

SOURCE: Section 3111(c)(4) of the Elementary and Secondary Education Act, Title III, Part A, as amended by the No Child Left Behind Act of 2001.
For the immigrant component of the formula, the estimates are based on responses to questions on place of birth and year of immigration.4 For the LEP component, two ACS questions are used: "Is a language other than English spoken in the home?" and, if the response is yes, "How well does household member X speak English?" Four choices are given for the second question: "Very well," "Well," "Not well," and "Not at all." The LEP estimates that are reported to the department from the ACS represent the total number of persons aged 5-21 for whom the answer is anything less than "Very well." Details of the overall count are available by cross-tabulating the data with responses to other questions. For example, it is possible to differentiate between students who attend public schools and those who attend private schools and to present the data by different age cohorts.

In this report, all of the data from the ACS that pertain to the ELL population start at age 5 because the language ability question is asked only about people aged 5 and older. Data are presented both for the total number of people aged 5-21 and for those aged 5-18 because the latter group better represents the elementary and secondary school age population (in Chapters 2 and 5). Similarly, data are presented for the total population and also only for those enrolled in public schools in order to facilitate comparisons with the state counts, which represent ELL students in public schools (in Chapters 2 and 5).5 A summary of these variables as used in the legislative mandate, the current ACS data for allocation purposes, the state counts, and the ACS data used in this report for comparisons with the state data is shown in Table 1-1.

For the immigrant component, two other ACS questions are used: "Where was household member X born?" and, if the response is "born abroad," "When did household member X come to the United States?" The recent immigrant estimates that are reported to the DoEd from the ACS represent the number of persons aged 3-21 who were born abroad and arrived in the United States no more than 3 years prior to the survey.

Several survey methodology factors affect the accuracy and precision of the ACS estimates, especially the sample design, mode of interviewing, and selection of the respondent.

Sample Design and Size

Each year, ACS questionnaires are sent to 3 million household addresses, and about 2 million responses are ultimately recorded in ACS data files.

4 Before 2005, the LEP estimates were based on similar questions in the 2000 census long-form sample, and the immigrant estimates were based on state counts of recent immigrant students enrolled in grades K-12 in public and private schools.
The responses are given unequal weighting due to subsampling of households that do not respond by mail or telephone, which increases the variability of the sample weights and therefore the imprecision of the estimates relative to an equal probability design. Because the ACS estimates reflect relatively small sample sizes for a single year, the estimated numbers of LEP and immigrant children have varied significantly from year to year. Consequently, the relative allocations of funding across the states have also varied significantly from year to year.

Mode of Interviewing

About 50 percent of ACS responses come from mailed-back questionnaires; another 8-9 percent come from computer-assisted telephone interviewing; and the final 40-42 percent come from computer-assisted personal interviewing of about one-third of the households that did not respond by mail or telephone. These different modes of response may affect the comparability of the responses, and because LEP and immigrant children are more likely to be in households that respond by mail than in households that respond through an interaction with an interviewer, the responses may be biased.

Choice of Respondent

One person in a household typically provides responses for all household members, and that person's judgment of a young family member's English speaking ability (as solicited in the wording of the question) may bias the reporting of the English speaking ability of children and youth or the reporting of year of immigration. Household responses may also differ from results that would be obtained in other ways—for example, from state tests or records—in ways that could bias the ACS estimates.

5 It is not possible for the DoEd to replicate the legal definition of the LEP population (aged 3-21) presented in Box 1-1 because ACS data are not available for those under 5 years of age.

TABLE 1-1 Summary Definitions of Selected Variables Used in This Report

Age
  Legislative mandate: 3-21 years
  ACS estimate for allocation purposes: 5-21 years
  State counts: 5-18 years
  ACS estimate for use in comparisons with state counts: 5-18 years

School enrollment status
  Legislative mandate: enrolled or preparing to enroll in an elementary or secondary school
  ACS estimate for allocation purposes: enrollment status not specified
  State counts: enrolled in public schools
  ACS estimate for use in comparisons with state counts: enrolled in public schools in the last 3 months

English speaking ability
  Legislative mandate: those "whose difficulties speaking, reading, writing, or understanding the English language may be sufficient to deny the individual—(i) the ability to meet the State's proficient level of achievement on State assessments; (ii) the ability to achieve successfully in classrooms where the language of instruction is English; or (iii) the opportunity to participate fully in society"
  ACS estimate for allocation purposes: speaks English "less than very well"
  State counts: an unduplicated count of all students in the state who meet the definition of LEP, which includes newly enrolled students whether or not they receive Title III services
  ACS estimate for use in comparisons with state counts: speaks English "less than very well"
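The ACS-based screening rule described above (a non-English language at home, English spoken less than "Very well," ages 5-21) can be sketched as a simple classification. The record layout and field names here are invented for illustration, not the actual ACS microdata format:

```python
def counts_as_lep(person):
    """Apply the ACS-based proxy: aged 5-21, a language other than English
    spoken at home, and reported English-speaking ability below "Very well"."""
    return (5 <= person["age"] <= 21
            and person["other_language_at_home"]
            and person["speaks_english"] in {"Well", "Not well", "Not at all"})

# Invented household records
people = [
    {"age": 9,  "other_language_at_home": True,  "speaks_english": "Well"},
    {"age": 15, "other_language_at_home": True,  "speaks_english": "Very well"},
    {"age": 4,  "other_language_at_home": True,  "speaks_english": "Not at all"},
    {"age": 17, "other_language_at_home": False, "speaks_english": "Very well"},
]
lep_estimate = sum(counts_as_lep(p) for p in people)  # only the first record counts
```

Note how the rule drops the 4-year-old (the ability question starts at age 5) and anyone who reports speaking English "Very well," which is precisely why the ACS estimate cannot replicate the statutory aged-3-21 definition.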
State Estimates

State-provided counts of LEP children basically reflect the number of students in public schools (including charter schools) who are assessed for proficiency in English in a given year. Although states use a variety of instruments and procedures to identify which students are tested, most start with a questionnaire to identify students who live in homes in which a language other than English is spoken. Students in these households are then assessed for English proficiency using state- or district-approved instruments and procedures. Students who fail to demonstrate proficiency on these assessments are designated as LEP, deemed eligible to receive services under ESEA Title III, and entered into the count that a state reports to the DoEd.

Given the U.S. educational system's tradition of state and local control of education, many factors may affect the counts of LEP students estimated by the states, including the authority for determining the state's definition of LEP, the criteria used to classify students as LEP or not LEP, the test or tests used to assess English language proficiency, and the criteria that the state uses to determine when LEP students are deemed ready to exit LEP status.

There are many sources of differences. Some states allow local school districts to determine the procedures, tests, and criteria for identifying and classifying students as LEP, which results in counts that are derived in different ways even within a state. States may change their procedures, tests, and criteria over time, which results in counts that are derived in different ways over time. And differences among states in their procedures, tests, and criteria result in counts that are derived in different ways from state to state. Yet a high degree of comparability within and among states is essential for equitable distribution of funds in a formula allocation that is based on shares of a fixed appropriated amount. And a high degree of comparability across time is essential to ensure that year-to-year changes in allocations reflect actual changes in LEP students and not changes in the procedures for testing and identifying students as LEP.

REVIEW BY THE U.S. GOVERNMENT ACCOUNTABILITY OFFICE

Given the differences in estimation practices summarized above, it is not surprising that the two allowable data sources used in the computation of the distribution of Title III funds have tended to yield marked differences in the amounts that would be allocated to some states. The use of the ACS estimates has also led to fluctuation between years in the funding amounts provided to states. As a result, at the request of Congress, the U.S. Government Accountability Office (GAO) carried out a comprehensive review of the data sources in 2005 (U.S. Government Accountability Office, 2006b, p. 2).

The GAO review compared the dollar amounts that would be allocated for LEP students by using state testing results and by using the ACS sample estimates based on responses to the subjective English ability questions. In a simulation with data on 12 states, the GAO confirmed that the differences in methodology yielded very different LEP estimates, which could result in very different fund allocations to the states. The review showed that ACS estimates were higher than counts based on state data in six cases and lower in six cases; the differences were sometimes quite striking. For example, based on data for the 2004-2005 school year, the ACS estimate of LEP students in California was almost 50 percent lower than the state's estimate, and the ACS estimate for New York was almost 40 percent higher than the state's estimate.
These widely different results, coupled with fundamental differences in how the numbers are derived, led the GAO to conclude that "ACS and State data each measure different populations in distinct ways and it is unclear how well either of the two data sources captures the population of children with limited English proficiency" (U.S. Government Accountability Office, 2006b, p. 3). GAO recommended that the DoEd provide clear instructions to states on how to provide the data specified in the ESEA on the number of LEP students. GAO also recommended that the DoEd develop and implement a methodology for determining which is the more accurate of the two sources of data and seek authority to use statistical methodologies to reduce the variability associated with ACS data.

In its evaluation of the state counts, the GAO study found other problems (U.S. Government Accountability Office, 2006b, p. 23):

    With regard to data states collect on the number of children and youth who are recent immigrants, state officials expressed a lack of confidence in these data. State officials in some of the 12 study states told us that these data were not very reliable because school and school district officials did not ask about immigration status directly. Some state and school district officials told us that in order to determine whether a student should be classified as a recent immigrant, they relied on information such as place of birth and the student's date of entry into the school system. Officials in one state told us that in the absence of prior school documentation, they made the assumption that if a student was born outside the U.S. and entered the state's school system within the last 3 years, then the student was a recent immigrant.
In a presentation to our panel, the authors of the GAO review reiterated the report's findings that state data were incomplete, inconsistent, and of poor quality in the early years of the program and that these deficiencies could affect the distribution of Title III funds. They also highlighted GAO findings that state-level estimates from the ACS showed substantial variation for many states in the early years of the ACS, when sample sizes were much smaller than they are now, and that the variability significantly affected the amounts allocated to the states.

In a response to the GAO report that was contained in the report, the DoEd agreed with the GAO findings but argued that ACS data were selected as the source for its allocations because of the problems with the state administrative data sources (U.S. Government Accountability Office, 2006b, p. 49). The GAO report did document some of these problems with the state data, which seemed to be related to federal requests for the number of LEP students assessed for English proficiency each year. One problem was that the instructions did not include clear definitions (U.S. Government Accountability Office, 2006b, p. 12): "It was unclear whether states should provide the number of students screened for English proficiency, the number of students who were already identified as [LEP] who were then assessed for their proficiency or a combination of the two numbers." Another inconsistency reflected a lack of clarity about whether states were to
provide an unduplicated count of students or not. For example, some states use more than one assessment to evaluate a student's English proficiency (such as separate assessments of skills in reading, writing, speaking, or listening); in these states, students could be reported more than once. According to the GAO (U.S. Government Accountability Office, 2006b, p. 12): "As a result, some states included duplicate counts of students, and in other states, these data included other student counts (based on screening of new students rather than assessments of already identified students)."

In its response to the GAO report, the DoEd did not consider developing a methodology to compare the relative accuracy of the two approved data sources because of the serious issues with the state-provided data, but it pledged to revisit the GAO recommendation in the future as the quality of state data improved (U.S. Government Accountability Office, 2006b, pp. 48-50). The department did develop a plan to improve the quality of the data collected from the states. This plan included revising the instructions for the Consolidated State Performance Reports, comparing recent data to data for prior years, and incorporating data edits and checks to guide state officials when they entered data electronically. Department officials expected that these changes would improve data quality, beginning with the 2005-2006 school year. The planned changes were made, and the state data appear to be more complete, with fewer year-to-year fluctuations (see details in Chapter 4). Thus, it is now appropriate to compare the relative accuracy of the data sources and to assess whether state data are appropriate for funding purposes.
THIS STUDY AND THIS REPORT

Against this backdrop, the DoEd asked the National Research Council's Committee on National Statistics and Board on Testing and Assessment to convene a group of experts to review and make recommendations regarding the two allowable data sources for future Title III formula allocations, for both LEP and recent immigrant students. The Panel to Review Alternative Data Sources for the Limited-English Proficiency Allocation Formula under Title III, Part A, Elementary and Secondary Education Act, was given the following charge:

The panel will review alternative data sources for use in formula grants to states to ensure that limited English proficiency (LEP) children and youth attain English language proficiency under Title III, Part A, of the Elementary and Secondary Education Act. The formula includes two components: LEP children and youth and recent immigrant children and youth. The panel will evaluate the two currently allowable sources of estimates of each component—those from the American Community Survey (ACS) conducted by the U.S. Census Bureau and those from the results of state tests on English proficiency or state records of immigrants. In evaluating the two sources for each component, the panel will consider the accuracy and precision of estimates derived from each; what methodological, demographic, and other factors influence the estimates from each source; and what statistical or data collection methods might
increase the accuracy and precision of the estimates used in Title III allocations. In evaluating the ACS data, the panel will review not only the questions that are used to indicate limited-English proficiency and immigrant status (in particular, their accuracy), but also the literature on the experience of other household surveys in the reporting of these items, which may suggest research and development to improve their reporting on the ACS. In evaluating state tests of English proficiency, the panel will make a comprehensive assessment of the quality and comparability of such tests among states. Finally, the panel will determine if there are other data sources or methods that might be preferable to the two permitted by statute. On the basis of its information-gathering activities, the panel will deliberate, make recommendations, and publish these recommendations along with supporting findings as an independent NRC report at the conclusion of its study.

In addition to addressing these questions, the panel probed deeply into the quality, comparability, and usefulness of state tests of ELP. The panel also reviewed the literature on the accuracy of households' reports of LEP and recent immigrant status in other surveys: that research may provide insights for ACS reporting and the kinds of methodological research and development that the Census Bureau could conduct to improve the ACS estimates.

Study Data and Information

The panel began its work by reviewing the work of the GAO in its 2006 report. In addition, for the ACS, the panel determined the availability of information about the accuracy and precision of responses to the questions on English speaking ability that are used to estimate LEP children aged 5-21 by state, as well as to the questions on place of birth and year of immigration that are used to estimate children aged 3-21 by state who moved to the United States within 3 years of the survey date.
With regard to precision, the panel analyzed the effects, for allocation purposes, of combining ACS estimates across more than 1 year in order to reduce sampling error. With regard to nonsampling error, the panel used the ACS public-use microdata sample files to evaluate patterns of nonresponse and imputation for the English speaking ability questions.

To facilitate its work with the ACS, the panel benefited from the willingness of the DoEd to share the tabulations that the Census Bureau had prepared for allocation purposes. The department also facilitated the preparation of special 3-year ACS tabulations so the panel could assess the appropriateness of these data for allocation purposes.

The panel extended its investigation into likely nonsampling errors in ACS estimates of LEP and recent immigrant students by conducting a broad review of the survey research literature on the validity of household reports of both ELP and immigrant status. The panel also reviewed prior studies of the questions in the long forms of the 1990 and 2000 censuses and in the precursor survey to the ACS in order to consider recommendations on the kinds of research the DoEd should request the Census Bureau carry out to evaluate the LEP and immigration questions more precisely and, ultimately, to improve them.

For states' estimates of their numbers of LEP students, the panel examined the published reports that document the procedures used by the states to identify and classify students as LEP and the assessments used to evaluate their ELP. The panel also heard presentations from a sample of Title III directors on the procedures, tests, and criteria they use, as well as from a sample of the organizations that develop ELP tests. Features of the state practices and tests examined by the panel included definitions of LEP; intake processes, including home language surveys, other assessments, and teachers' observations; cutoff scores used to determine English language proficiency on these tests; the types of ELP tests used and their mode of administration; and provisions for "exiting" students from LEP status.

Moreover, the panel's review of the ELP tests extended beyond the yes-no question of whether state assessments of LEP students have become sufficiently standardized to justify their consideration for Title III allocations. The panel gathered more in-depth information on the ELP assessments in several states in order to provide the basis for findings and recommendations about ways in which the tests could affect the relative counts of ELL students in the states. However, this in-depth review, which included polling the states to ensure that the data were up to date, did not include a comparative review of the actual content of the various proficiency assessments used by the states. Such a review would have been beyond the scope of the panel's charge and would have required resources beyond those available to the panel.

The panel also reviewed issues concerning counts of recent immigrant students. It looked at the procedures for those counts in order to understand the comparability and quality of state reports of recently immigrated children and youth.
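The sampling-error rationale for combining ACS estimates across years can be made concrete with a minimal sketch. The figures below are hypothetical, not actual ACS data, and the function is an illustration of the general statistics, not the Census Bureau's multiyear estimation procedure: averaging k independent single-year estimates of the same quantity reduces the standard error by roughly a factor of the square root of k.

```python
import math

def pooled_estimate(estimates, std_errors):
    """Average k independent single-year estimates and compute the
    standard error of the mean: SE = sqrt(sum(se_i^2)) / k."""
    k = len(estimates)
    mean = sum(estimates) / k
    se = math.sqrt(sum(s ** 2 for s in std_errors)) / k
    return mean, se

# Hypothetical single-year LEP counts for one state, each with a
# standard error of 3,000.
yearly = [52_000, 55_000, 53_500]
errors = [3_000, 3_000, 3_000]

est, se = pooled_estimate(yearly, errors)
print(f"3-year estimate: {est:,.0f} (SE {se:,.0f})")
# With equal SEs, the pooled SE is 3,000 / sqrt(3), about 1,732.
```

The sketch assumes the single-year estimates are independent; in practice, overlapping samples or correlated nonsampling errors would shrink the gain, which is part of what the panel's precision analysis had to weigh.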
Early in its deliberations, the panel decided to limit its focus to the two eligible sources of data for the Title III allocations. No other household-based survey has both a sufficient sample size and the type of questions that permit identification of an ELL population with acceptable precision at the state level, nor is there any administrative record source as germane to the allocations as the state counts of ELL students. Had the panel judged the two allowable data sources unsatisfactory for the purpose of allocating federal funds at the state level, it might have been advantageous to consider model-based estimates, along the lines of the Census Bureau's Small Area Income and Poverty Estimates (SAIPE) Program, which provides more current estimates of selected income and poverty statistics than those from the most recent decennial census for school districts, counties, and states.6 However, the two allowable data sources each meet the basic criteria to serve as a basis for allocations, so the creation of a model-based estimate is not required.

6 The SAIPE Program provides updated estimates of income and poverty statistics for the administration of federal programs and the allocation of federal funds to local jurisdictions under programs such as Title I of the ESEA. These estimates combine data from administrative records, intercensal population estimates, and the decennial census with direct estimates from the ACS to provide consistent and reliable 1-year estimates.
If the panel's recommendation that an eventual allocation formula be based on both the ACS and the state counts is deemed unsatisfactory, the DoEd might decide to devote resources to developing a model that combines the ACS estimates and the state counts. In the judgment of the panel, however, neither of the two allowable sources has flaws that require consideration of alternative data sources or methods at this time.

Definition of an LEP Student

In approaching its charge, the panel was mindful that, unlike many other pieces of legislation that prescribe a formula for allocating federal funds to other units of government, the law establishing the Title III allocations was relatively specific in defining the allowable data sources for formula elements. However, the law did not define the specific data elements that should be drawn from those sources for the computation of the allocation formula. The ACS definition of an LEP student can only be a proxy for the official LEP definition (see above). Though the ACS collects objective demographic and immigrant information and subjective information on English speaking ability, it collects no information on the ability of a student to meet the state's proficient level of achievement on state assessments, nor does it directly measure command of the English language for classroom success or full participation in society. A measure based on the ACS definition implicitly assumes that, at some level of reported English speaking ability, a student will encounter difficulty in meeting the state's proficient level of achievement on state assessments (including the English language arts assessment and assessments of reading, mathematics, and science), have difficulty learning content in English, or have difficulty participating fully in society.
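The proxy nature of the ACS-based definition can be illustrated as a simple predicate over survey responses. This is a hypothetical sketch: the field names are illustrative, not actual ACS or PUMS variable names, and the rule encodes only the reported-ability criterion discussed above.

```python
def is_lep_proxy(age, home_language, english_ability):
    """ACS-style proxy for LEP status: aged 5-21, speaks a language
    other than English at home, and reports speaking English less
    than 'very well'. (Illustrative field names, not PUMS variables.)"""
    return (
        5 <= age <= 21
        and home_language != "English"
        and english_ability in {"well", "not well", "not at all"}
    )

# The proxy flags respondents reporting less than "very well"...
print(is_lep_proxy(12, "Spanish", "well"))        # True
# ...but not fluent speakers or those outside the age range.
print(is_lep_proxy(12, "Spanish", "very well"))   # False
print(is_lep_proxy(30, "Korean", "not well"))     # False
```

Note what the predicate omits: nothing in it reflects performance on state assessments or classroom success, which is exactly the assumption the text describes, namely that a reported ability level below "very well" maps onto such difficulties.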
A definition based on state administrative records would not be similarly encumbered by the need to make assumptions about the relationship between reported English speaking ability and the ability to be proficient and successful in learning content in English in the classroom. For the most part, these criteria are measurable through tests and direct classroom observation and are readily summarized by such measures as ELP test scores and achievement data. However, the "ability to fully participate in society" is a subjective criterion.

The official definition of LEP student that is promulgated by the department for use in state reports is the legal definition (U.S. Department of Education, 2010a, sec. 4.3, p. 12). This definition is carried forward into the Consolidated State Performance Reporting (CSPR) system in which the data are reported (as CSPR item number 188.8.131.52), and the data are then entered into the DoEd's official EDFacts database (as data group 678 with file specification 141). This definitional trail has now been consistent for two reporting periods, 2007-2008 and 2008-2009. Thus, in terms of adherence to the letter of the law, state data are now in compliance. Unfortunately, definitional consistency throughout the reporting chain is not a sufficient basis on which to judge that a state administrative data system yields a
"more accurate" estimate of the number of LEP students, as is required by ESEA. To be more accurate, the statutory official definition would have to be applied on a consistent basis to ensure uniform measurement within and across states. A good and usable definition is not only consistent; it is also a transparent offshoot of the operation of the programs within the states and stable over time. The task of arriving at one consistent definition applicable across the states is particularly difficult because of the rich variety of programs and measures used by the states and localities to meet their obligations under the law (and its interpretations), as well as the many changes in state practices and reporting procedures over time, particularly those having to do with students' entry into and exit from the program. With regard to defining LEP, the department has concluded that there is "no one, common, approved method to operationalize the term, either for initial identification purposes or for ultimate exit from a Language Instruction Educational Program (LIEP) or the LEP category" (U.S. Department of Education, 2008a, p. 7). The 2006 GAO study documented no fewer than three operational definitions that could be employed to identify the LEP population: see Table 1-2. Since 2006, considerable progress has been made by the department and the state education agencies in refining the data collected on the LEP population in the CSPR system. New policies about how data are to be collected and aggregated have also emerged from this effort.
Thus, for purposes of this report, the definition selected for the analysis is the one used in collecting operational data from state agencies, as reported in the Consolidated State Performance Reports as Code 184.108.40.206: the unduplicated number of limited English proficient students enrolled in an elementary or secondary school at any time during the school year. The panel selected this measure primarily because it is an inclusive number, pertaining to the total LEP population, not just those who have been assessed under provisions of the NCLB. Moreover, this definition is expedient: in the department's reporting scheme, it is one of the few measures for which there is a comparable historical time series. The measure also relates directly to subordinate measures that have been consistently collected over time, such as grade level, home language, and language proficiency level.

TABLE 1-2 Operational Definitions of the ELL Population

Definition: The number of students with limited English proficiency in grades K-12 who are assessed for English proficiency
Purpose: ESEA allowable data for Title III allocation
How Measured: States develop assessment instruments and practices, with data collected by state education agencies.

Definition: The number of students identified as limited English proficient in grades K-12
Purpose: State standards for identification of the population needing services
How Measured: States use various methods of identifying the population, including home language surveys and teacher observation reports, administered by local education agencies with data collected by state agencies.

Definition: The number of students enrolled in state and local Title III programs
Purpose: Administrative counts of program participants
How Measured: State education agencies collect these data from local education agencies as an administrative requirement.

SOURCE: U.S. Government Accountability Office (2006b, p. 14).

The definition of the data items from the ACS is the same as that used in the GAO report. It derives from responses to the ACS questions on the number of persons aged 5 to 21 who speak a language other than English at home and report speaking English less than "very well."

Overview of the Report

In the ensuing chapters, the panel first discusses the desired characteristics of allocation formulas and then assesses the two allowable sources in terms of their relative ability to fulfill those characteristics. Chapter 2 assesses the ACS. It provides a summary of the survey and of how the ACS estimates are presently used to make Title III allocations to states, and it evaluates the quality of those estimates in terms of sampling properties, precision, sensitivity, coverage, and consistency. Chapter 3 discusses the ELP assessments used by the states, describes their features, and examines the ways in which they differ. It considers the technical quality of these tests and focuses on the extent to which they are likely to yield valid and comparable decisions across the states. Chapter 4 focuses on state policies and procedures for initially identifying ELL students, measuring their progress in becoming English proficient, and determining when they are ready to be reclassified as former ELL students (and exited from programs for English as a second language).
Chapter 5 discusses the comparability of the estimates of the ELL population derived from the ACS and the state administrative record counts. Chapter 6 discusses the comparability of the estimates of the immigrant student population from the ACS and those reported to the states by local education agencies. The concluding chapter considers possible decision criteria for making the choice between the two allowable data sources, rates the sources by these criteria, and presents the panel’s recommendations.