
Monitoring Educational Equity (2019)

Chapter: Appendix A: Review of Existing Data Systems

Suggested Citation:"Appendix A: Review of Existing Data Systems." National Academies of Sciences, Engineering, and Medicine. 2019. Monitoring Educational Equity. Washington, DC: The National Academies Press. doi: 10.17226/25389.


Prepublication copy (uncorrected proofs)

Appendix A
Review of Existing Data Systems

As part of its information gathering, the committee investigated the potential usefulness of existing data systems for monitoring progress (or lack thereof) toward equity among groups of children enrolled in K–12 education. The first part of this appendix provides a brief historical overview of interest in education indicator systems in the United States. The second part describes and assesses the relevance for education equity of the major existing data systems that regularly monitor the state of education. Box A-1 lists the criteria that informed the committee's assessment. Other information from the committee's work is in the next two appendices: Appendix B reviews existing publications of indicators that are potentially relevant for monitoring K–12 education equity, and Appendix C summarizes the data and methodological challenges in implementing the committee's recommended indicators.

A BRIEF HISTORY OF EDUCATION INDICATORS[1]

1840–1960

Interest in developing a system of education indicators in the United States began in the mid-19th century. On the part of the federal government, the constitutionally mandated decennial census asked about education and learning as early as 1840: the 1840–1930 censuses asked about literacy (for people over age 20); the 1850–2000 censuses asked about school attendance; and the 1940–2000 censuses asked about educational attainment (Citro, 2012). The school attendance and educational attainment questions are now part of the monthly American Community Survey (ACS), which began in 2005 and collects information on a broad range of topics (see below).
In 1867, Congress, recognizing the need for and interest in greater detail about public education, such as school finances, teachers, and graduates, chartered a national Department of Education to "[collect] such statistics and facts as shall show the condition and progress of education in the several States" (P.L. 39-73, 14 Stat. 434). Congress abolished the new department in 1869 but not the statistics function, which it vested in a Bureau of Education in the Department of the Interior. In the Second Morrill Act of 1890, Congress required the collection of similar statistics for private K–12 education. The bureau (renamed the Office of Education) was transferred to the Federal Security Agency in 1939 and to the new U.S. Department of Health, Education, and Welfare in 1953, becoming the National Center for Education Statistics (NCES) in 1962. It was made part of the new Department of Education in 1980 and is now located in the department's Institute of Education Sciences.

Nongovernmental organizations also became involved early in reporting on the state of education. In 1912, the Russell Sage Foundation published a report that ranked states according to such indicators as school attendance and school expenditures (Russell Sage Foundation, 1912).

Federal education statistics originally focused on characteristics of school districts, such as counts of students and teachers and revenue and expenditure information. The Annual Report of the Commissioner of Education provided comprehensive data on city school systems from 1871 to 1918; the Biennial Survey of Education in the United States became the vehicle for reporting such information from 1917 to 1955. The last biennial report expanded coverage to include suburban and rural school systems. In addition, the Office of Education published a series of annual studies on current expenditures per pupil in city school systems from 1918 through 1960.

1960–Present

In the 1960s, the Office of Education suspended the collection of general local school system data for several years to meet the data collection needs set forth in recently enacted legislation, especially the 1958 National Defense Education Act and the 1965 Elementary and Secondary Education Act. The need to examine local school systems in their entirety rather than solely in terms of program segments, as well as demands by the educational community for basic data, led to the resumption of school district and school-based data collection in 1967. That work was carried out by NCES, initially on a sample basis through the Elementary & Secondary Education General Information Survey (ELSEGIS) and now as a census through the Common Core of Data (see below).

[1] This section draws heavily on materials on the National Center for Education Statistics website, including Federal Education Data Collection—Celebrating 150 Years. Available: https://nces.ed.gov/surveys/annualreports/pdf/Fed_Ed_Data_Collection_Celebrating_150_Years.pdf.
In terms of measuring student outcomes, including achievement levels at different grades, satisfactory completion of high school, and preparation for adult success, the federal government took some steps in the late 1960s and early 1970s. These included the initiation of the National Assessment of Educational Progress (NAEP) in 1969 by NCES, under the guidance of an advisory group (now the statutorily authorized National Assessment Governing Board, NAGB), which measures student achievement at several grades (see below). They also included the first of NCES's longitudinal studies, the National Longitudinal Study of the High School Class of 1972, which followed students and their achievements over time (see below). These initiatives, however, did not represent a sustained effort to monitor student outcomes, let alone education equity, by relating outcomes to education resources for states, school districts, and schools.

Publication of A Nation at Risk (National Commission on Excellence in Education, 1983) represented a milestone in national attention to education and is widely credited with stimulating an earnest and sustained push for close monitoring of student achievement and implementation of education reforms to raise achievement levels (Bryk and Hermanson, 1993; Ginsburg, Noell, and Plisko, 1988). It was followed in 1984 by the Department of Education's "Wall Chart," which, while informative, brought attention to the limitations of the available data and spurred interest in developing ways to enable valid state-by-state comparisons of student achievement (Ginsburg, Noell, and Plisko, 1988). Soon afterwards, there was a push for increased sampling for NAEP that would enable reporting of state-level achievement data, and in 2001 Congress required that all states participate in NAEP's reading and math assessments for grades 4 and 8 every 2 years as a condition of receiving Title I funds under the Elementary and Secondary Education Act. NAEP also added the so-called Trial Urban District Assessment, which allows large cities to monitor trends and compare their students' achievements with those of other cities.

In 2002, Title II of the Educational Technical Assistance Act provided for grants to states to develop and expand the Statewide Longitudinal Data Systems. These systems are intended to pull together administrative data on students and follow their progress through at least K–12 (see below). Some systems follow students from preschool through college and entry into the workforce, and some include links to students' teachers. The grants, which are managed by NCES, began in 2005 and have been made to all but three states.

Parallel with data collection efforts under NCES, other offices in the U.S. Department of Education have mandated data collection for administration of federal education funding programs to states and school districts and for enforcement of civil rights law. The Office for Civil Rights in the department, for example, collects useful data (see below). Individual school districts, particularly in large cities, have developed their own sets of indicators with which to monitor student progress and achievement, not only in terms of test scores, but also on other dimensions, such as absences. Many indicators are produced according to agreed-on definitions and standards developed by the Department of Education working with state education agencies. Some key indicators, however, such as grade-specific achievement tests, vary across states, and other indicators are tailored to the needs of the particular district.
Researchers working with individual school districts have conducted surveys, obtained administrative records, and constructed measures that have potential use for a system of education equity indicators. The discussion of the committee’s proposed indicators in Chapters 4 and 5 references studies that suggest the value and feasibility of collecting relevant information, even though these studies have not themselves generated data with the breadth of geographic and demographic detail needed for a comprehensive indicator system. The Stanford Education Data Archive (SEDA) is an exception, in part (see below). It assembles achievement test scores for race and ethnicity groups and grades from all school districts in the nation and links test scores to characteristics of the school districts, such as urban/suburban/rural, obtained from publicly available sources. RELEVANT DATA SYSTEMS This section describes and assesses data systems that have at least some of the information that would be required for an accurate, informative report on education equity in U.S. K–12 education. Some systems are based on surveys, while others are based on administrative records or both kinds of data. The assessment covers the following data systems, using the criteria listed in Box A-1 (above):  Data systems that provide geographic and demographic detail: o American Community Survey (ACS) o Civil Rights Data Collection (CRDC) o Common Core of Data (CCD) o National Assessment of Educational Progress (NAEP) o Small-Area Income and Poverty Estimates (SAIPE) Program App A‐3 

Prepublication copy- uncorrected proofs o Stanford Education Data Archive (SEDA) o Statewide Longitudinal Data Systems (SLDS)  Data systems that provide national information and demographic detail: o Annual Social and Economic Supplement and School Enrollment Supplement of the Current Population Survey (CPS) (CPS ASEC and CPS SES, respectively) o NCES Longitudinal Surveys o NCES Household Education Survey (NHES) o NCES National Teacher and Principal Survey (NTPS) American Community Survey2 The ACS is a large survey of the U.S. population, covering about 300,000 households per month and about 3.6 million households per year. Conducted by the U.S. Census Bureau, it became operational in 2005 and includes content that was previously on the “long-form” questionnaire that was part of the decennial census.3 Assessed against the criteria for measuring education equity, the ACS performs as follows: Frequency and geographic detail—publishes data annually, including 1-year aggregations for the preceding calendar year for areas of at least 65,000 population and 5-year aggregations for school districts, census tracts, and block groups (5-year aggregates are necessary to provide reliable estimates for small areas). Data quality—good quality in terms of low unit and item nonresponse rates and comparisons with other surveys; sampling error is low for larger geographic areas, but becomes large for small geographic areas. Student groups of interest—collects data on age, gender, race and ethnicity, income, selected disabilities, immigrant status and year of immigration, and language spoken at home of students enrolled in public and private schools. Contextual factors—collects data on a wide variety of characteristics that can be tabulated for geographic areas, giving the population composition by race and ethnicity, income and poverty status, family type, and other attributes. 
Educational outcomes—has information on college enrollment and/or employment of college-age young people, which is relevant to Domain C (educational attainment), but post-high school status cannot be tied to the responsible school district or school. Educational opportunities—has information on the composition of the student body (in terms of income, race and ethnicity, and other characteristics) in a school district for public and private schools, which is relevant to Domain D (extent of segregation), but measures of racial and socioeconomic segregation are not available for individual schools within a district.                                                              2 For information on the ACS, see https://www.census.gov/programs-surveys/acs/; see also relevant articles in Anderson, Citro, and Salvo (2012). 3 As of 2010, the census includes a limited set of questions to meet its constitutional mandate to provide information for congressional reapportionment and legislative redistricting. App A‐4   

For its Education Demographic and Geographic Estimates (EDGE) program (see below), NCES regularly commissions the Census Bureau to prepare detailed school district tabulations, using 5-year ACS data, to describe the distribution of school-age children and their parents by such characteristics as family income and race and ethnicity.[4] The Stanford Education Data Archive includes EDGE tabulations in its program (see below). It would also be possible to generate specialized tables for school districts, school attendance areas, and households of students attending a given school, given appropriate address information.[5] Sample size, however, limits the amount of geographic or substantive detail that the ACS can provide with sufficient reliability, although modeling can help (see the SAIPE Program, below).

Civil Rights Data Collection[6]

The federal government began collecting information in 1968 from school districts and schools with which to monitor and enforce laws prohibiting discrimination on the basis of race, gender, national origin, and disability in K–12 education (see Chapter 2). The CRDC program, known originally as the Elementary and Secondary School Survey, collects information biennially from all public school districts and schools, including juvenile justice facilities, charter schools, alternative schools, and schools serving only students with disabilities. The CRDC originally collected data from large samples of districts and schools; beginning with the 2011-2012 school year, it is a census of all districts and schools. The CRDC is a mandatory data collection, authorized under the statutes and regulations implementing Title VI of the Civil Rights Act of 1964, Title IX of the Education Amendments of 1972, and Section 504 of the Rehabilitation Act of 1973, and under the Department of Education Organization Act (20 U.S.C. § 3413).[7]
The program collects an extensive array of information, including student characteristics relevant to discrimination law, school and teacher characteristics, and school financial information. The CRDC database, with hundreds of data elements, is fully accessible to the public. School districts self-report and certify all their data. Assessed against our criteria for measuring education equity, the CRDC performs as follows:

Frequency and geographic detail—collects data biennially and makes them available about 2 years after collection; for the past four cycles it has covered all school districts and schools (previously, it collected information from a large sample); items on enrollment (in Part 1) are reported as of October 1 of the school year; items on students participating in Advanced Placement (AP) exams (in Part 2) are reported at the end of the school year.

Data quality—high, given that the Office for Civil Rights (OCR) in the Department of Education uses the data for enforcement and actively reviews the data and follows up on discrepancies.

Student groups of interest—collects data on numbers of students by race, gender, disability status (14 categories identified in the Individuals with Disabilities Education Act [IDEA] and students eligible only under Section 504 of the 1973 Rehabilitation Act), and limited English proficiency status; many variables are reported separately by these characteristics; has no information on student family income.

Contextual factors—has no information.

Educational outcomes—for student groups of interest, collects:
  o information relevant to Domain B (levels of learning and engagement), including Indicators 3 (engagement in schooling) and 4 (performance in coursework), but not Indicator 5 (performance on tests).
  o no information relevant to Domains A or C.

Educational opportunities—for student groups of interest, collects:
  o some information relevant to Domain D (extent of school segregation), such as the percentage of students in various race and ethnicity groups, though not by income (also collects financial information that could be used to construct indicators of resources relative to numbers of disadvantaged children, but not including children in low-income families).
  o basic information relevant to Domain E (high-quality early childhood education).
  o extensive information relevant to Domain F (high-quality instruction and curricula), including Indicators 10 (effective teaching), 11 (access to and enrollment in rigorous coursework), and 12 (curricular breadth); also ascertains the number of school counselors, which is relevant to Indicator 13 (access to high-quality academic interventions and supports).
  o information that could potentially be used for Domain G (supportive school and classroom environments), such as inferences about discipline practices from information on disciplinary actions.

[4] See https://nces.ed.gov/programs/edge/Demographic/ACS; the latest available EDGE tables are for 2012-2016.
[5] Such work would require access to ACS microdata in a secure Federal Statistical Research Data Center; see https://www.census.gov/fsrdc.
[6] For information on the CRDC, see https://ocrdata.ed.gov/.
[7] For details, see 34 CFR 100.6(b); 34 CFR 106.71; and 34 CFR 104.61.
Common Core of Data[8]

The CCD is a long-standing program of NCES to provide basic statistics about K–12 education; its immediate predecessor was ELSEGIS, which began in 1967 on a sample survey basis. The CCD covers all public elementary and secondary schools and districts in the nation. Districts report school and district nonfinancial data to state educational agencies, which in turn submit the data through the U.S. Department of Education's EDFacts system,[9] according to definitions and reporting standards developed by NCES in cooperation with the Council of Chief State School Officers (CCSSO). School districts and state educational agencies submit financial data to the Census Bureau, on forms and with definitions developed by the bureau. Both EDFacts and Census Bureau data are regularly reviewed for accuracy and corrected as needed; data files become available with a 1- to 2-year lag.

[8] For information on the CCD, see https://nces.ed.gov/ccd/. The Common Core of Data should not be confused with the Common Core State Standards (see http://www.corestandards.org/about-the-standards/).
[9] For information about EDFacts, see https://www2.ed.gov/about/inits/ed/edfacts/index.html, which contains a link to all of the EDFacts file specifications. In addition to the CCD, states satisfy the reporting requirements of the 2015 Every Student Succeeds Act through EDFacts.

Assessed against the criteria for measuring education equity, the CCD performs as follows:

Frequency and geographic detail—collects data annually from a census of school districts and schools and makes them available with about a 1-year lag; provides data for all levels of geography.

Data quality—high; NCES and the CCSSO have worked to ensure common definitions of enrollment, 4-year high school graduation rates (which take account of students who start high school in one school district and transfer to another), and other variables that could be subject to different interpretations; similarly, the Census Bureau has worked to establish common definitions for financial reporting.

Student groups of interest—obtains enrollment by grade by race and ethnicity, gender, and eligibility for free and reduced-price school lunches as a proxy for family income; obtains district-level counts of students with disabilities (total, not by type of disability) and English-language learners.

Contextual factors—has no information.

Educational outcomes—obtains limited information relevant to Domain C, Indicator 6 (on-time high school graduation), including district-level counts of high school diploma recipients and other high school completers and district-level counts of high school dropouts (access to data on the grade, race and ethnicity, and gender composition of high school dropouts is restricted).
Educational opportunities—obtains limited information relevant to Domain F (high quality instruction and curricula), such as training and length of service of teachers (relevant to Indicator 10) and whether high school AP courses are offered (relevant to Indicator 11); financial information submitted to the Census Bureau by districts and states breaks out sources of revenue, types of expenditures, and state and federal government support to districts by program (e.g., bilingual education) and could be used to construct indicators of resources relative to numbers of disadvantaged children at the district and state levels, which is relevant to Domain D, Indicator 8.

Some points to note with regard to the nonfinancial information in the CCD:

• The data do not identify immigrant children or U.S.-born children living with immigrant parents.
• The variable used for socioeconomic status—namely, children eligible for free and reduced-price school lunches—is increasingly less useful for this purpose (see the discussion of Indicator 8 in Appendix B). The NCES commissioner has an initiative to develop a better measure of family income for children attending a school to address this problem (see the discussion of the Small-Area Income and Poverty Estimates Program, below).
• Data for some groups, including children with disabilities and English-language learners, are not available for individual schools; this is also the case for counts of dropouts and high school completers.

National Assessment of Educational Progress[10]

NAEP is a long-standing and highly respected source of comparable data across the nation on student achievement at several grade levels in reading, math, and other subjects. Planning for NAEP began in 1964, and the first assessments were conducted in 1969 on a trial basis. The assessments, which were administered in public and private schools, covered the citizenship, science, and writing performance of 17-year-old students in spring 1969 and of 9- and 13-year-old students and out-of-school 17-year-olds in fall 1969. Beginning in 1971 for reading and 1973 for math, NAEP has assessed 9-, 13-, and 17-year-olds every 4 years in what is termed "long-term trend NAEP," in which content has been kept as comparable as possible over time. Beginning in 1990 for math and 1992 for reading, "main NAEP," in which content is modified about every 10 years to reflect changes in school curricula, has assessed 4th, 8th, and 12th graders. The main NAEP sample size was increased beginning in 1990 to support reliable results for states. Participation by students has always been voluntary, but in 2001 Congress required states to participate in the main NAEP 4th- and 8th-grade reading and math assessments as a condition of receiving Title I funding under the Elementary and Secondary Education Act. Beginning in 2002, Congress provided funding for selected urban school districts to participate in main NAEP as part of the Trial Urban District Assessment; eligible school districts must exceed specified thresholds for number of enrolled students, percentage of black or Hispanic students, and percentage of students eligible for free and reduced-price school lunches.
Assessed against the criteria for measuring education equity, main NAEP in its current form performs as follows:

Frequency and geographic detail—frequency varies by subject area and geographic level: every 2 years for math and reading for grades 4 and 8 for the nation, states, and selected urban districts (27 as of 2017); every 4 years for math and reading for grade 12 for the nation; periodically for science and writing for grades 4 and 8 for the nation, states, and selected urban districts; and periodically for other subjects—technology and engineering literacy, arts, civics, geography, economics, and U.S. history—for the nation. The national assessments include public and private schools; the additional samples for states and selected urban districts include only public schools.

Data quality—high, given the extensive methodological research that goes into constructing the content of each NAEP assessment and the sample design; response rates for schools in the period 2003-2015 were 95 percent or higher for grades 4 and 8 and 90-95 percent for grade 12, while student participation rates in the same period were several points lower than the school rates for grades 4 and 8 and considerably lower than the school rates for grade 12.[11]

Student groups of interest—collects information on participating students' gender and race and ethnicity; also collects data on participation in the free and reduced-price school lunch program and disability and ELL status (NAEP accommodates the latter two groups of students to encourage their participation).

Contextual factors—has no information.

Educational outcomes—estimates, for each subject area, the percentage of students achieving at specific proficiency levels, which is relevant to Domain B, Indicator 5 (performance on achievement tests).

Educational opportunities—has no information.

Although main NAEP cannot be used to provide indicators for most school districts or for any schools, it is nonetheless important for an education equity indicator system. Specifically, it can be used to calibrate state achievement test results for schools and school districts and thereby achieve greater comparability across states (see the discussion of the Stanford Education Data Archive, below).

Small-Area Income and Poverty Estimates Program[12]

The U.S. Census Bureau runs the SAIPE Program, which began producing estimates in 1993 of the total number of school-age children and the number of school-age children in poverty by state, county, and school district for the allocation of Title I funds under the Elementary and Secondary Education Act. The state and county estimates use a combination of 1-year ACS estimates, information from individual tax returns from the Internal Revenue Service (IRS), and records from the Supplemental Nutrition Assistance Program (SNAP, formerly food stamps), together with hierarchical Bayes modeling techniques to enhance the reliability of the estimates. The estimates for school districts use IRS data to estimate school-age poverty, adjusting those estimates to agree with the county estimates. This methodology currently produces just a single indicator of relevance for education equity—namely, the concentration of poor school-age children by school district, which is relevant to Domain D, Indicator 8.

[10] For information about NAEP, see https://nces.ed.gov/nationsreportcard/—particularly "History and Innovation" under "About" on the main page; see also Box 6-1 in Chapter 6 for a brief history of NAEP.
[11] See Focus on NAEP, Figures 1 and 2; available: https://www.nationsreportcard.gov/focus_on_naep/files/g12_companion.pdf.
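The small-area modeling idea behind programs like SAIPE can be illustrated with a simplified area-level shrinkage step (this is a hypothetical sketch, not SAIPE's production methodology: the function and all numbers are invented for illustration). Each area's direct survey estimate is combined with a regression prediction from administrative data, with weights determined by the relative precision of the two sources:

```python
# Simplified area-level shrinkage in the spirit of small-area estimation:
# combine each area's direct survey estimate y with a model prediction x
# built from administrative covariates. sigma2 is the between-area model
# variance and v is the area's sampling variance; areas with noisy direct
# estimates (large v) are pulled more strongly toward the model prediction.
def shrink(direct, sampling_var, predicted, sigma2):
    estimates = []
    for y, v, x in zip(direct, sampling_var, predicted):
        w = sigma2 / (sigma2 + v)  # weight placed on the direct estimate
        estimates.append(w * y + (1 - w) * x)
    return estimates

# Two hypothetical districts: the first has a very noisy direct
# poverty-rate estimate, the second a precise one.
est = shrink([0.30, 0.10], [0.04, 0.0004], [0.20, 0.20], sigma2=0.0016)
```

The noisy district's estimate moves almost all the way to the model prediction of 0.20, while the precisely measured district's estimate stays close to its direct value of 0.10, which is the behavior that makes such models useful for small school districts.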
The SAIPE estimates are released annually, about a year after completion of data collection. The NCES commissioner has made it a priority to work with the Census Bureau to develop models using the ACS and administrative records to estimate poverty for students attending particular schools.13 If successful, such estimates would be an improvement over the current reliance on the number or percentage of children participating in the free and reduced-price school lunch program as an indicator of low-income status. Such models could also be extended to produce estimates for other groups of students of interest.

Stanford Education Data Archive14

SEDA, which is supported by the Institute of Education Sciences at the U.S. Department of Education and a number of foundations, has the following mission: [to harness] data to help scholars, policymakers, educators, and parents learn how to improve educational opportunity for all children. SEDA includes a range of detailed data on educational conditions, contexts, and outcomes in school districts and counties across the United States. It includes measures of academic achievement and achievement gaps for school districts and counties, as well as district-level measures of racial and socioeconomic composition, racial and socioeconomic segregation patterns, and other features of the schooling system.

12 For information about SAIPE, see https://www.census.gov/programs-surveys/saipe.html.
13 See http://magazine.amstat.org/blog/2018/10/01/meet-james-woodworth-nces-commissioner/.
14 For information about SEDA, see https://cepa.stanford.edu/seda/overview.

SEDA performs on our criteria for useful data systems for monitoring education equity as follows:

Frequency and geographic detail—currently (version 2.1) contains achievement data for the 2008-2009 through 2014-2015 school years for grades 3-8, obtained from state submissions for schools to EDFacts; school district student body composition from NCES's EDGE program of estimates derived from the 2006-2010 ACS; and financial and nonfinancial information on schools and districts from the CCD. It was scheduled to be updated in March 2019 to include data through the 2015-2016 school year and data broken down by economic status. It produces aggregate estimates for the nation, states, counties, and school districts. It also aggregates data for schools but does not release them as such.

Data quality—reflects the quality of the original data sources; staff put substantial effort into standardizing data where possible—for example, by calibrating state achievement test scores to NAEP scores and generating constructed estimates that are as comparable as possible across states.

Student groups of interest—has information on gender, race and ethnicity, and disadvantaged status of students in each school grade (with disadvantaged status defined by the school district and generally based on participation in the free and reduced-price school lunch program), but no information on disability, English-language learning, or immigrant status.

Contextual factors—has the socioeconomic and demographic composition of school-age children in the relevant grades who attend public schools in a district, from the ACS-based EDGE.
Educational outcomes—has student test scores for students in grades 3-8 in reading and math and a number of constructed variables, such as calibrations with NAEP and estimates of student progress (e.g., the scores of students in grade x in year t compared with the scores of students in grade x+1 in year t+1); these variables are relevant for Domain B, Indicator 5 (performance on achievement tests).

Educational opportunities—has data from the CCD, which are relevant to some indicators in Domains F and G.

SEDA is a valuable resource for research on educational equity and has generated attention-getting research on inequitable educational outcomes and opportunities. All data aggregates are publicly available and appropriately protected for confidentiality (e.g., by adding small amounts of statistical noise). At present, however, SEDA could not support a full-fledged education equity indicator system, such as we recommend, for several reasons. It is not up to date and does not as yet have a regular update schedule; it does not cover high school achievement or high school student and school characteristics, principally because there is not sufficient commonality among states as to when they test high school students; and it does not include important student characteristics attached to test scores—specifically, a good measure of family income or socioeconomic status or any measure of disability, immigrant, or English-language learning status.

Statewide Longitudinal Data Systems15

Interest in SLDS began in the 1990s, when individual states began to recognize the value of linking administrative data on students, teachers, and schools in order to track student progress, identify correlates and perhaps causes of progress (or decline), and, more generally, obtain a clearer and fuller picture of their K–12 educational systems. Some states sought to expand their databases to include postsecondary outcomes, such as employment and college enrollment, or to link their databases with other kinds of state records, such as public assistance records. Often, states worked in cooperation with university education research centers.16

The federal government gave an important boost to these efforts when Congress provided funding for grants to states, administered by NCES, for the development and enhancement of statewide systems that link student and other information over time using unique student identifiers. Funding was provided through Title II of the 2002 Educational Technical Assistance Act, with the first grants to states made in 2005. As of 2018, 47 states, American Samoa, the District of Columbia, Puerto Rico, and the Virgin Islands have received grants. Grants in 2005 and 2007 covered construction of data for K–12; grants in subsequent years have also supported linkages with pre-K, postsecondary, workforce, and teacher-student data. Because the awards are grants and not contracts, states have considerable latitude in the content and construction of an SLDS, although they must include variables that are required to be reported to the U.S. Department of Education, such as those in the CRDC.
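The essential operation in an SLDS, linking yearly administrative records on a unique student identifier so that individual students can be followed over time, can be sketched as follows. The record layout and field names are hypothetical, not any state's actual schema:

```python
# Hypothetical yearly extracts keyed by a unique student identifier.
def link_years(year_t, year_t1):
    """Join two years of student records on the student identifier and
    compute year-over-year score growth for students present in both."""
    linked = {}
    for sid, rec in year_t.items():
        nxt = year_t1.get(sid)
        if nxt is None:
            continue  # student moved away, dropped out, or was not tested
        linked[sid] = {
            "grade_t": rec["grade"],
            "grade_t1": nxt["grade"],
            "growth": nxt["score"] - rec["score"],
        }
    return linked

# Illustrative use: two small yearly extracts.
y2016 = {"S001": {"grade": 4, "score": 210}, "S002": {"grade": 4, "score": 195}}
y2017 = {"S001": {"grade": 5, "score": 224}, "S003": {"grade": 5, "score": 200}}
growth = link_years(y2016, y2017)  # only S001 appears in both years
```

Students who appear in only one year drop out of the growth calculation, which is one reason mobility and attrition matter for SLDS-based indicators.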
The committee did not attempt to evaluate each state's SLDS for use in an education equity indicator system, especially because access to the data is under the control of each state, according to its own privacy policies and its interpretation of the Family Educational Rights and Privacy Act of 1974. To the extent that the organization(s) established to develop and operate a nationwide educational equity indicator system (see Chapter 6) can gain ready access to each state's SLDS, it is likely that the data would be highly useful for the purpose. Even without nationwide access, researchers' use of particular states' SLDS could generate ideas for new and refined indicators that could be produced from other, more readily available sources.

Other Data Collection Programs with Nationwide Detail

We briefly mention five other data collection programs that can serve such functions as providing input for national headline indicators (e.g., high school graduation rates for different student groups of interest) or supporting research that could lead to new or more refined indicators: the Annual Social and Economic Supplement (ASEC) and the School Enrollment Supplement (SES) of the CPS, NCES's longitudinal surveys, NCES's National Household Education Survey (NHES), and NCES's National Teacher and Principal Survey (NTPS).

The CPS ASEC and CPS SES are household surveys conducted annually (the CPS ASEC in February-April and the CPS SES in October) as supplements to the monthly CPS.17 The CPS ASEC samples 100,000 households and obtains detailed information on employment and income over the preceding calendar year, disability and health status, language spoken at home, citizenship, year of arrival in the United States, educational enrollment and attainment, veteran status, marital status, and family composition. State estimates are possible by averaging over 3 years. The CPS SES, which is supported by NCES, routinely gathers data on school enrollment and educational attainment in elementary, secondary, and postsecondary education for members of about 60,000 households. Related data are also collected about pre-schooling and the general adult population. In addition, NCES funds additional items on education-related topics, such as language proficiency, disabilities, computer use and access, student mobility, and private school tuition.

Since 1972, NCES has conducted longitudinal surveys of students in specified grades who are followed over time.18 The first such survey, NLS-72, was of students who were high school seniors in the spring of 1972; they were reinterviewed four times over a 14-year period. The first longitudinal studies of younger children began with a sample of kindergartners in 1998, who were followed through 2007 (ECLS-K), and a sample of newborns in 2001, who were also followed through 2007 (ECLS-B). The most recent such surveys include ELS:2002, a sample of high school sophomores in 2002 and seniors in 2004, who were followed through 2012; HSLS:09, a sample of students enrolled in 9th grade in 2009, who were followed through 2016; ECLS-K:2011, a sample of kindergartners in 2011, who were followed through 2016; and MGLS:2017, a sample of students enrolled in 6th grade in 2017, who will be followed through 2020. NCES also conducts longitudinal surveys of postsecondary students.

These surveys contain rich content, covering not only students' academic progress and achievement, but also their social, emotional, and physical development. They typically also include extensive information on the children's homes, classrooms, and school environments.

The NHES is designed to address a wide range of education-related topics.19 Administrations of the survey were conducted by telephone approximately every 2 years from 1991 through 2007. Because of falling response rates, it was redesigned as a mail survey and administered in 2012 and 2016. Topics covered that are relevant to education equity indicators for K–12 have included early childhood program participation (1991, 1995, 2001, 2005, 2012, 2016); school readiness (1993, 1999, 2007); parent and family involvement in education (1996, 1999, 2003, 2007, 2012, 2016); community service and civic involvement of students in grades 6-12 (1996, 1999); plans for post-high school education (1999); nonparental care and before- and after-school educational activities (1999, 2001, 2005 [after-school activities only]); and school safety and discipline (1993). Sample sizes have ranged from 7,000 to 21,000 students or parents, depending on the topic.

The NTPS is a biennial sample survey of public K–12 schools, including public charter schools, designed to produce national estimates of teacher, principal, and school characteristics (sample size of about 8,300 schools).20 Each school in the sample provides a teacher listing form and fills out a school questionnaire, its principal fills out his or her own questionnaire, and a sample of its teachers fills out a teacher questionnaire. The NTPS is a redesign of the Schools and Staffing Survey, which NCES conducted from 1987 to 2011. The NTPS collects data on core topics, including teacher and principal preparation, classes taught, school characteristics, and the demographics of the teacher and principal labor force. In addition, each administration of the NTPS contains rotating modules on important education topics, such as professional development, working conditions, and evaluation.

17 See https://www2.census.gov/programs-surveys/cps/techdocs/cpsmar18.pdf; https://www2.census.gov/programs-surveys/cps/techdocs/cpsoct17.pdf; and https://nces.ed.gov/surveys/cps/.
18 See https://nces.ed.gov/surveys/hsls09/; https://nces.ed.gov/ecls/; https://nces.ed.gov/training/datauser/COMO_07/assets/COMO_07_Slides.pdf.
19 See https://nces.ed.gov/nhes/.
20 See http://nces.ed.gov/surveys/ntps/.

BOX A-1
Criteria for Assessing Data Systems for Education Equity Indicators

1. Published on a regular, frequent basis—at least annually.

2. Available for subnational geographic areas, including states, school districts, and, ideally, schools or school attendance areas, as appropriate.

3. High quality when assessed on measures of nonsampling error (e.g., accurate reporting of student enrollment) and on measures of sampling error (for survey-based data).

4. Available for groups of children of interest for education equity (see Chapter 2), as defined by race and ethnicity, gender, family income (or an equivalent measure of socioeconomic resources), disability status, immigrant status, and English-language capability.
   a. For immigrant children, indicative of time of entry into the United States, to appropriately include or exclude them in equity indicators (e.g., exclude from a high school graduation measure if they arrived only a year before graduation).
   b. For English-language learners, when possible, indicative of the number of years spent in an English-learner program, whether a student waived out of English-learner instruction, and the time and type of reclassification to English-proficient status.

5. Measures contextual factors, such as neighborhood income and family type composition, for student groups of interest (see Chapter 3).*

6. Measures students' educational outcomes for student groups of interest in three domains comprising seven indicators, each with one or more constructs to be measured (see Chapter 4):

   Domain A: Kindergarten readiness
      Indicator 1: Disparities in academic readiness (reading/literacy and numeracy/math skills)
      Indicator 2: Disparities in self-regulation and attention skills

   Domain B: K–12 learning and engagement (measured at multiple levels/grades)
      Indicator 3: Disparities in engagement in schooling (attendance/absenteeism, academic engagement)
      Indicator 4: Disparities in performance in coursework (success in classes, accumulating credits to be on track to graduate, grades/GPA)
      Indicator 5: Disparities in performance on tests (reading/math/science achievement, learning growth in reading/math/science achievement)

   Domain C: Educational attainment
      Indicator 6: Disparities in on-time high school graduation
      Indicator 7: Disparities in postsecondary readiness (enrollment in college, entry into the workforce, enlistment in the military)

7. Measures school-provided opportunities to learn for student groups of interest in four domains comprising nine indicators, each with one or more constructs to be measured (see Chapter 5):

   Domain D: Extent of racial, ethnic, and economic segregation
      Indicator 8: Disparities in students' exposure to racial, ethnic, and economic segregation (concentrated poverty in schools, racial segregation within and across schools)

   Domain E: Equitable access to high-quality early learning programs
      Indicator 9: Disparities in access to and participation in quality pre-K programs (availability of and participation in licensed pre-K programs)

   Domain F: Equitable access to high-quality instruction and curricula
      Indicator 10: Disparities in access to effective teaching (teachers' years of experience, teachers' credentials/certification, racial/ethnic diversity of the teaching force)
      Indicator 11: Disparities in access to and enrollment in rigorous coursework (availability of/enrollment in advanced, rigorous coursework; availability of/enrollment in AP, IB, and dual-enrollment programs; availability of/enrollment in gifted and talented programs)
      Indicator 12: Disparities in curricular breadth (availability of/enrollment in coursework in the arts, social sciences, sciences, and technology)
      Indicator 13: Disparities in access to high-quality academic supports (access to and participation in formalized systems of tutoring or other types of academic supports; access to and participation in appropriate academic content for English-language learners and special education children)

   Domain G: Equitable access to supportive school and classroom environments
      Indicator 14: Disparities in school climates (perceptions of safety, academic support, academically focused culture, teacher-student trust)
      Indicator 15: Disparities in nonexclusionary discipline practices (out-of-school suspensions/expulsions)
      Indicator 16: Disparities in nonacademic supports for student success (supports for emotional, behavioral, mental, and physical health)

* Although we do not propose indicators of context, they would be critical to inform efforts of school systems to work with other sectors to combat root causes of poverty and other factors that adversely affect students' educational attainment.
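Most indicators in Box A-1 are framed as disparities: differences in a rate or score between each student group and a reference point. As a minimal illustration, a percentage-point gap in on-time graduation (Indicator 6) might be computed from unit records as below; the record layout and field names are hypothetical:

```python
def graduation_gaps(records, group_field="group"):
    """Overall on-time graduation rate and each group's percentage-point
    gap relative to it. `records` is a list of dicts with a boolean
    "graduated_on_time" flag and a group label."""
    def rate(rows):
        return 100.0 * sum(r["graduated_on_time"] for r in rows) / len(rows)
    overall = rate(records)
    gaps = {}
    for group in {r[group_field] for r in records}:
        members = [r for r in records if r[group_field] == group]
        gaps[group] = rate(members) - overall  # positive = above overall rate
    return overall, gaps
```

The same pattern (a group rate compared against a reference rate) applies to attendance, suspension, and course-access indicators; only the underlying flag changes.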

