Summary of Recommendations
This report addresses three key aspects of dropout and completion rates: (1) the ways they are calculated, (2) the data that are used to calculate them, and (3) the uses that are made of them. In this final chapter, we revisit and summarize our recommendations in these three broad areas.
METHODS FOR CALCULATING AND REPORTING THE RATES
Dropout and completion rates are among the most basic indicators of the effectiveness of the school systems in this country. It is therefore essential that the rates be accurate and that the strengths and limitations of the rates be understood by those making use of them. In Chapter 3, we laid out five criteria that methods for computing dropout and completion rates should satisfy. These methods should (1) provide the most accurate assessment possible of how many students actually complete or drop out of school, (2) not be biased in favor of certain types of schools, (3) be inclusive of all students but not double-count them, (4) be stable enough to validly track trends over time, and (5) be sensitive to real changes in student outcomes. It is important to note, however, that no indicator is perfect, and trade-offs among these criteria are almost always required. To help users draw sound conclusions about any reported rate, the strengths and weaknesses of the rate and the decisions that went into the calculations should be documented. On this issue, we make two recommendations:
RECOMMENDATION 3-1: The strengths and weaknesses of dropout and completion rates should be made explicit when the rates are reported.
RECOMMENDATION 3-2: Rates should be accompanied by documentation about the underlying decisions that were made regarding students who transfer from one school to another, are retained in grade, receive a GED or an alternative diploma, and take longer than four years to graduate.
The federal government requires states and districts to produce four-year graduation rates that include diploma recipients only. There are compelling reasons for using this statistic as the primary indicator of high school completion. Without a common definition such as this, graduation rates will not be comparable across districts, states, or time. However, there are also legitimate reasons for producing more inclusive completion indicators that allow students more time to complete high school and that include other forms of completion, such as GEDs and alternative diplomas. On this issue, we endorse the inclusion of dropout and completion indicators in accountability policy but recommend that a variety of statistics be reported. Specifically, we make two recommendations:
RECOMMENDATION 2-1: Federal and state accountability policy should require schools and districts to report a number of types of dropout, graduation, and completion rates: for all students and for students grouped by race/ethnicity, gender, socioeconomic status, English language learner status, and disability status. Furthermore, accountability policy should require schools and districts to set and meet meaningful progress goals for improving their graduation and dropout rates. Rates that are used for accountability should be carefully structured and reported in ways that minimize bias resulting from student mobility and subgroup definitions.
RECOMMENDATION 3-4: In addition to the standard graduation rate that is limited to four-year recipients of regular diplomas, states and districts should produce a comprehensive completion rate that includes all forms of completion and allows students up to six years for completion. This rate should be used as a supplemental indicator to the four-year graduation rate, which should continue to be used as the primary indicator for gauging school, district, and state performance.
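The distinction between the two indicators in Recommendation 3-4 can be sketched in code. The sketch below is illustrative only: the field names, the handling of transfers, and the outcome categories are our assumptions, not the federal regulatory definition, which contains additional adjustments.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Student:
    outcome: str                        # "diploma", "ged", "alt_diploma", "dropout", "enrolled"
    years_to_finish: Optional[int]      # years from first grade-9 entry to completion, or None
    transferred_out: bool               # verified transfer out of the system

def four_year_graduation_rate(cohort):
    # Primary indicator: regular diplomas earned within four years,
    # over the cohort adjusted for verified transfers out.
    adjusted = [s for s in cohort if not s.transferred_out]
    grads = [s for s in adjusted
             if s.outcome == "diploma" and s.years_to_finish <= 4]
    return len(grads) / len(adjusted)

def six_year_completion_rate(cohort):
    # Supplemental indicator: all forms of completion, up to six years.
    adjusted = [s for s in cohort if not s.transferred_out]
    completers = [s for s in adjusted
                  if s.outcome in ("diploma", "ged", "alt_diploma")
                  and s.years_to_finish <= 6]
    return len(completers) / len(adjusted)
```

For the same cohort, the comprehensive rate is always at least as large as the four-year rate, which is why reporting both conveys how much completion occurs late or through alternative credentials.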
Because decisions about how to handle various groups of students can affect the rates, we think it is important to supplement dropout and completion indicators with information to help users accurately interpret them. For instance, schools and states have different policies for handling transfer students. Some states require transfers to be officially verified before the student can be removed from a school’s roster; others have more lenient verification policies. Some schools have policies that implicitly or explicitly encourage low-performing students to transfer elsewhere. These low-performing students may be more likely to transfer to schools that have little control over their enrollments, rather than to schools that have control over which students enroll (such as charter schools). Documentation of how transfers are handled is critical for interpreting school-level rates. Also useful is an estimate of the transfer and/or leave rate and supplementary graduation and dropout rates that do not remove transfers or incorporate new students. This additional information would allow examination of the ways in which schools’ policies for handling transfer students affect the reported rates.
Policies for grade retention also vary across schools, districts, states, and time. These policies, and particularly year-to-year changes in these policies, can cause trends in the rates to fluctuate over time. Age-based cohort rates can provide information to help users understand and evaluate trends in grade-based rates. Age-based rates have the advantage that they are unaffected by patterns in grade retention that may have affected one cohort differentially from another. They are also more inclusive, in that they can include students who never make it to high school and include special education students with their peers.
If the limitations associated with a reported rate are made explicit, supplemental rates can be calculated to verify any conclusions that are based on the statistics. This would require data to be available to calculate the supplemental statistics. We therefore recommend:
RECOMMENDATION 3-3: To the extent possible, data should be made available to allow supplementary rates to be calculated that compensate for the limitations in reported rates and help users to further understand the rates. Types of supplementary information include transfer rates, rates that do not remove transfer students or incorporate new students, age-based rates, and the percentage of students with unknown graduation status.
Throughout this report, we have discussed the variety of kinds of rates (e.g., status rates, event rates, cohort rates based on individual data, and cohort rates based on aggregated data) and the advantages and disadvantages of each. We have emphasized that decisions about which rate to report should be based on the intended uses. Some rates are more appropriate for providing information about the human capital of the country’s population, some are more appropriate for characterizing the holding power of schools, and some are more appropriate for characterizing students’ success at navigating through high school. When selecting from among the various kinds of rates, users should keep the underlying purpose in mind. We therefore recommend:
RECOMMENDATION 4-1: The choice of a dropout or completion indicator should be based on the purpose and uses of the indicator.
Our review also suggests that cohort rates based on aggregate data are not sufficiently accurate for research, policy, or accountability decisions. When these rates are used to make fine distinctions, such as to make comparisons across states, districts, or schools or across time, they may lead to erroneous conclusions. Three methods for calculating aggregate cohort rates—the Promoting Power Index (PPI), the Averaged Freshman Graduation Rate (AFGR), and the Cumulative Proportion Index (CPI)—are commonly used and receive wide attention. The PPI is used by the Alliance for Excellent Education and others. The AFGR is used by the National Center for Education Statistics to report district- and state-level graduation rates and, by virtue of being produced by the federal government, has an implicit stamp of legitimacy that is not justified. The CPI is used in Diplomas Count, the annual publication by Editorial Projects in Education that summarizes states’ and districts’ progress in graduating their students. Use of these rates should be phased out in favor of true cohort rates, which are most accurate when based on individual longitudinal data. We thus make two recommendations:
RECOMMENDATION 4-2: Whenever possible, dropout and completion rates should be based on individual student-level data. This allows for the greatest flexibility and transparency with respect to how data analysts handle important methodological issues that arise in defining the numerator and the denominator of these rates.
RECOMMENDATION 4-3: The Averaged Freshman Graduation Rate, the Cumulative Proportion Index, the Promoting Power Index, and similar measures based on aggregate-level data produce demonstrably biased estimates. These indicators should be phased out in favor of true longitudinal rates, particularly to report district-level rates or to make comparisons across states, districts, and schools or over time.
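For reference, the AFGR discussed above is computed from aggregate counts roughly as follows. This is a sketch of the general form of the calculation; the full NCES specification includes additional details, and the parameter names are ours.

```python
def averaged_freshman_graduation_rate(e8_prior_fall, e9_fall,
                                      e10_next_fall, diplomas_4yrs_later):
    """AFGR sketch: diplomas awarded four years after the grade-9 fall,
    divided by an estimate of the incoming freshman class, taken as the
    average of grade-8, grade-9, and grade-10 enrollment counts from
    three successive falls."""
    est_freshman_class = (e8_prior_fall + e9_fall + e10_next_fall) / 3
    return diplomas_4yrs_later / est_freshman_class
```

The sketch makes the source of bias visible: because grade-9 retention inflates the grade-9 count, and because enrollment counts cannot distinguish transfers from dropouts, the aggregate denominator drifts away from the true cohort in ways a longitudinal student-level count does not.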
In the past few years, dropout and graduation rates have received much attention, in part because of discrepancies in the reported rates. These discrepancies have arisen as a result of different ways of calculating the rates, different purposes for the rates, and different ways of defining terms and populations of interest. The federal government can do much to help ameliorate the confusion about the rates. For instance, in 2008, it provided regulatory guidance about how the rates were to be calculated and reported to meet the requirements of the No Child Left Behind (NCLB) Act. We recognize that education falls within the purview of state governments; however, we think the federal government should continue to play a role in bringing comparability to the ways that the rates are calculated and in the development of improved indicators. On this issue we recommend:
RECOMMENDATION 4-5: The federal government should continue to promote common definitions of a broad array of dropout, graduation, and completion indicators and also to describe the appropriate uses and limitations of each statistic.
As part of the NCLB regulations, states and districts are expected to report disaggregated graduation rates, such as for students grouped by low-income eligibility, disability status, and English language learner status, and to track their progress over time. These subgroup statistics are often not comparable across schools, districts, or states due to differing methods and rates of identification and reclassification into and out of the subgroup. Furthermore, the methods by which students are placed into subgroups can lead to inaccurate judgments about educational efficacy in a school system for members of the subgroup. For English language learners (ELLs), inaccuracies are introduced because classification into the subgroup changes over time and the rate of reclassification is correlated with dropping out. For students with disabilities, underidentification of disabilities and differing methods of classifying disabilities result in a lack of comparability. Furthermore, because some students with disabilities are expected to remain in high school for more than four years, the subgroup statistics for students with disabilities will be disproportionately affected by decisions about the number of years allowed for graduation in the indicators (e.g., four-year versus five-year rates).
The main purpose of subgroup statistics is to gauge the degree to which schools, districts, and states are serving particular groups of students. To make these judgments fairly, alternative statistics could provide supplemental information for subgroups. With regard to graduation rates for these subgroups, we recommend:
RECOMMENDATION 3-5: To improve knowledge about graduation rates among subgroups, alternative statistics should complement conventional indicators. Alternative graduation rates for English language learners should include former ELL students as well as students currently classified in this category. Thus, records on ELL status should accompany students as they progress through grades, change ELL status, and transfer across districts. Alternative graduation rates for special education students and English language learners should allow additional years toward graduation.
DATA AND DATA SYSTEMS
Dropout and completion rates cannot be calculated without data. As we have described throughout this report, the accuracy of the rates depends on the accuracy and the completeness of the data used for their calculation. Decisions about the kinds of data to collect and how they are handled can substantially affect the rates. In the various chapters of this report, we have made recommendations about actions that should be taken at different administrative levels to ensure that the data are of the highest quality. Below we group these recommendations into those intended for states and local school districts and those intended for the federal government.
State and Local Education Agencies
States play the leading role in collecting the data that are used to produce cohort rates, the rates that are ultimately used for accountability purposes. In Chapter 6, we discussed the essential elements of a longitudinal data system identified by the Data Quality Campaign (see Box 6-1). We think these components are critical for ensuring that data systems are able to track students accurately, calculate dropout and completion rates, monitor students’ progress, identify students at risk of dropping out, and conduct research to evaluate the effectiveness of their programs. We encourage all states to incorporate these components into their systems and therefore recommend:
RECOMMENDATION 6-1: All states should develop data systems that include the 10 essential elements identified by the Data Quality Campaign as critical for calculating the National Governors Association graduation rate. These elements include a unique student identifier, student-level information (data on their enrollments, demographics, program participation, test scores, courses taken, grades, and college readiness test scores), the ability to match students to their teachers and to the postsecondary system, the ability to calculate graduation and dropout rates, and a method for auditing the system.
State and local education agencies can take a number of steps to ensure the quality of their data systems and the data that are incorporated into them. Specifically, data systems should be developed so that the information contained in them is understandable, reliable, relevant for the intended purpose, available in a timely manner, and handled in a consistent and comparable way over time. Annual written documentation of processes, procedures, and results will help maintain consistency and quality over time. It is also critical to institute a process for adding elements or making changes to the data system. Likewise, mechanisms for data retrieval should be incorporated into system designs so that usable data sets can be easily produced. New data elements should be clearly defined, the coding should be documented, and the new elements should adhere to established protocol for the system. If the goal is to make comparisons across years, it is important that the data and algorithms remain consistent. One small change in method may result in inaccurate and inappropriate comparisons. We thus recommend:
RECOMMENDATION 6-2: All states and local education agencies should maintain written documentation of their processes, procedures, and results. The documentation should be updated annually and should include a process for adding elements or making changes to the system. When data systems or recording procedures or codes are revised, old and new systems should be used in parallel for a period of time to determine consistency.
The quality of the data begins at the point when data are collected and entered into the system. It is therefore important that training be provided for those who carry out these tasks. Extensive and ongoing staff training should cover the collection, storage, analysis, and use of the data at the state, district, and school levels. To this end, system developers should develop clearly defined, carefully articulated coding systems that all contributors to and users of the system can understand. As they do this, system developers should think about ways that those entering the data might interpret the rules in ways other than what was intended and try to prevent these misinterpretations. On this point, we recommend:
RECOMMENDATION 6-3: All states and local education agencies should implement a system of extensive and ongoing training for staff that addresses appropriate procedures for collection, storage, analysis, and use of the data at the state, district, and school levels.
An important mechanism for verifying the accuracy of data that are incorporated into the system is to conduct regular audits of the school systems. Audits can help to ensure that local education agencies are following the intended procedures, that reporting of student enrollment status is accurate, and that adequate documentation is obtained to verify the status of transfer students and students coded as dropouts. Audits can help to identify procedures or processes that are posing problems and can be used to improve instructions provided to school systems. We therefore recommend:
RECOMMENDATION 6-4: All states and local education agencies should conduct regular audits of data systems to ensure that reporting of student enrollment status is accurate and that adequate documentation is obtained to verify the status of transfer students and students who drop out.
The federal government can also do much to support the development of quality data systems. Currently, support is provided through the Statewide Longitudinal Data System Grant Program established by Title II of the Educational Technical Assistance Act of 2002 and through the 2009 American Recovery and Reinvestment Act. We applaud the federal government’s efforts along these lines and therefore recommend:
RECOMMENDATION 7-1: The federal government should continue to support the development of comprehensive state education data systems that are comparable, are interoperable, and facilitate exchange of information across state boundaries to more accurately track enrollment and completion status.
The federal government can also play a role in collecting data that can be used to validate state estimates of graduation rates. If additional information were collected through the American Community Survey (ACS), it would be possible to calculate robust individual cohort rates nationally and for individual states. The ACS already ascertains whether people complete high school via a GED or diploma, but questions could be added to determine the state and year in which people first entered grade 9 and the state and year in which they completed high school. Using this information, one could reliably estimate the percentage of first-time ninth graders who obtained high school diplomas and/or GED credentials (on time or otherwise) for multiple cohorts of students. These rates could be calculated nationally and for states, although sample size restrictions in the ACS would prevent drawing conclusions at the district level. We therefore recommend:
RECOMMENDATION 4-4: The U.S. Department of Education should explore the feasibility of adding several questions to the American Community Survey so the survey data can be used to estimate state graduation rates. This can be accomplished by ascertaining the year and state in which individuals first started high school, the year and state in which they exited high school, and the method of exiting high school (i.e., diploma, GED, dropping out). These additional questions could be asked about all individuals over age 16, but, in order to minimize problems associated with recall errors and selective mortality, we suggest that these items be asked only of individuals between the ages of 16 and 45.
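If the questions proposed in Recommendation 4-4 were added, a state cohort rate could be estimated from weighted survey responses along the following lines. This is a hypothetical sketch: the record fields mirror the proposed questions but do not correspond to any existing ACS variables.

```python
def state_cohort_rate(records, state, entry_year):
    """Estimate the share of a state's first-time ninth graders who earned
    a diploma, from survey-style person records. Each record is a dict:
    weight        -- survey person weight
    entry_state   -- state where the respondent first entered grade 9
    entry_year    -- year of first grade-9 entry
    completion    -- "diploma", "ged", or None (did not complete)
    All field names are hypothetical ACS additions."""
    cohort = [r for r in records
              if r["entry_state"] == state and r["entry_year"] == entry_year]
    total = sum(r["weight"] for r in cohort)
    done = sum(r["weight"] for r in cohort if r["completion"] == "diploma")
    return done / total if total else float("nan")
```

Defining the cohort by state and year of first grade-9 entry, rather than by current residence, is what would let the survey recover migrating students that school-roster rates lose.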
USING COMPREHENSIVE DATA SYSTEMS TO IMPROVE POLICY AND PRACTICE
Improving the graduation rates in this country requires much more than simply reporting accurate and valid rates. It requires taking actions that will improve outcomes for this nation’s youth. A number of steps can be taken to improve policy and practice in this area. We first endorse states’ efforts to develop comprehensive longitudinal data systems. These data systems should incorporate the information needed both to calculate rates and to improve policy and practice, such as by identifying the factors associated with dropping out, using these factors to identify at-risk students, and undertaking and evaluating interventions intended to improve outcomes for these students. The approach taken by the California Dropout Research Project provides an example of the ways that states can make use of national data sets to conduct their own research, identify precursors to dropping out, and evaluate the effectiveness of interventions. In addition to identifying individual factors associated with dropping out, this endeavor identified school characteristics associated with lower dropout rates, such as college preparatory programs and vocational education programs. We make two recommendations with regard to the kinds of actions that states should take to improve policy and practice:
RECOMMENDATION 7-2: State governments should develop more robust education data systems that can better measure student progress and institutional improvement efforts.
RECOMMENDATION 7-3: State governments should support reform efforts to demonstrate how districts can develop and effectively use more comprehensive education data systems to improve dropout and graduation rates along with improved student achievement.
To truly help improve outcomes for students, data systems need to incorporate the information needed to enable early identification of at-risk students. The research discussed in this report suggests that indicators such as the following are associated with dropping out: frequent absences, failing grades in reading or mathematics, poor behavior, being over age for grade, having a low grade-point average (GPA) in grade 9, failing grade 9, or having a record of frequent transfers. Moreover, the research shows that some of these factors may become evident as early as grade 6. Although this research provides an important foundation for states, districts, and schools to build on, the findings also suggest that the predictive value of these factors varies across school systems. Thus, we think it is important for states and districts to conduct their own studies to determine the factors associated with dropping out from their school systems. Once they are determined, measures of these factors should be incorporated into the data system so that at-risk students can be identified in time to intervene. We make the following recommendation for specific steps that states and districts should take.
RECOMMENDATION 5-1: States and districts should build data systems that incorporate variables that are documented early indicators of students at risk for dropping out, such as days absent, semester and course grades, credit hours accrued, and indicators of behavior problems. They should use these variables to develop user-friendly systems for monitoring students’ risk of dropping out and for supporting them based on their level of risk.
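A minimal sketch of the kind of monitoring logic Recommendation 5-1 envisions is shown below. The thresholds are illustrative placeholders only; as noted above, their predictive value varies across school systems, so districts should calibrate them against their own longitudinal data.

```python
def risk_level(days_absent, total_days, failed_core_courses, behavior_incidents):
    """Assign a dropout-risk tier from early-warning indicators.
    Thresholds here are hypothetical, not research-validated values."""
    flags = 0
    if total_days and days_absent / total_days >= 0.10:  # chronic absence
        flags += 1
    if failed_core_courses >= 1:                          # core course failure
        flags += 1
    if behavior_incidents >= 2:                           # behavior problems
        flags += 1
    if flags >= 2:
        return "high"
    if flags == 1:
        return "moderate"
    return "low"
```

A user-friendly interface would surface these tiers to teachers and counselors during the school year, so that interventions can be matched to a student’s level of risk rather than applied after the fact.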
An important implication of this recommendation is that the interface for the data systems should be exceptionally user-friendly, enabling teachers and administrators to access information that will be useful to them in the course of usual educational practice.
Finally, we think the federal government should play an active role in this area by collecting data on the precursors of dropping out. This would allow for indicators of progress toward graduation at the national level and enable comparative studies on early indicators of dropout across states and localities. We therefore recommend:
RECOMMENDATION 7-4: The federal government should collect aggregate-level indicators of student progress toward high school graduation at the federal, state, and local levels. Such aggregate-level indicators should be collected by grade level in the middle grades (6 through 8) and by year during high school (first year, second year, etc.). These indicators should include variables such as the number of students missing 10 percent or more of school days, average number of days absent, average number of course failures, number of students failing one course or more, mean GPA, and indicators of behavior problems.