Using Comprehensive Data Systems to Improve Public Policy and Practice
Determining policies and practices likely to improve graduation rates and lower dropout rates is no easy task. One challenge is that the problem is complex and closely related to other aspects of students’ performance in school and their lives outside it. Thus, understanding and addressing the problem requires a broad, comprehensive perspective that focuses on the years both before and during high school. Furthermore, as described throughout this report, available data have sometimes portrayed contradictory pictures of the extent of the problem in this country. Enacting change and identifying possible solutions is difficult when the extent of the problem is under debate.
Another challenge is that authority for the governance of education and funding of programs is shared by the federal government, state governments, and the more than 15,000 local education agencies (LEAs) in the country. This division of responsibilities makes it difficult to effect widespread and meaningful changes in policy and practice, since each level of government is responsible for only certain aspects of education. However, each level of government has an important role to play, and having access to appropriate and accurate data is fundamental to performing these roles.
In this chapter we consider the types of information that can be gathered from comprehensive data systems and the ways these data can be used to improve policy and practice. Throughout the chapter, we provide examples of the ways in which such data have been used to promote effective practices. The chapter draws primarily from two workshop presentations: one by Robert Balfanz, of Johns Hopkins University, on how indicators of student performance in middle school can be used to build an early warning system to guide student- and school-level interventions; the other by Russell Rumberger, of the University of California at Santa Barbara, on how state data systems can be used to monitor students and institutions and to modify the accountability system to help focus attention on the problem. The chapter begins with a review of the major issues that comprehensive data systems can help to inform. We then explore how the data can be used at the various government levels to improve policy and practice.
USING DATA TO ENHANCE UNDERSTANDING OF THE PROBLEM
There are four fundamental issues that more comprehensive education data systems can address. First, it is important to have useful and accurate data to compute dropout, graduation, and completion statistics, so the nature of the problem can be clearly documented. Developing comprehensive data systems that meet high-quality standards is a critical step toward improving the accuracy of data used to estimate the rates. It is also important to adopt consistent conventions in calculating the rates, so that the reported rates are meaningful, well understood, and comparable across jurisdictions and over time, and, when differences are evident, that the sources of these differences are explained. These issues have been addressed throughout this report.
Second, it is important to understand the factors that cause students to drop out. This includes individual-level factors associated with students themselves, such as their attitudes, behaviors, health, school performance, and prior experiences as well as contextual factors found in students’ families, schools, and communities. Comprehensive data systems that incorporate these factors can be useful both for conducting local research to further explore the relationships between these variables and dropping out and for making use of research findings to identify at-risk students. These issues were covered in Chapter 5. The remaining two issues are discussed below.
Documenting the Outcomes Associated with Dropping Out
A third issue that comprehensive data systems can help address is what happens to students after they drop out and the problems they face. As noted in Chapter 2, it is well documented that students who drop out have lower earnings, higher rates of unemployment, crime, and incarceration, greater reliance on public assistance, and poorer health than high school graduates, but this information is known primarily through national survey data collected by federal agencies, such as the Census Bureau and the Bureau of Labor Statistics. Comprehensive data systems can help track dropouts after they leave high school and transition into further education and training, the labor market, and adult life and provide this information for students in a given state or school district. Although following dropouts after
they leave high school can pose significant challenges, doing so can provide invaluable information. Improving understanding of the outcomes for dropouts can help to better target state and local policy.
For instance, Florida’s data system can link education, employment, public assistance, and corrections data and can track graduates and dropouts after they leave the public school system. For the 2006-07 school year, the state was able to track 87 percent of the 114,172 graduates and 44 percent of the 37,820 dropouts. According to the state’s annual report, 55 percent of the graduates were employed, 67 percent were continuing their education in Florida, 3 percent were receiving food stamps, and virtually none were incarcerated or under community supervision (see http://www.fldoe.org/fetpip/pubs.asp, p. 1). Although tracking dropouts is more difficult than tracking graduates, the state was able to determine that 25 percent of the dropouts were employed, 4 percent were enrolled in the education system, 19 percent were receiving food stamps, and 3 percent were incarcerated or under community supervision (p. 13). Florida’s system can also track students over longer periods of time. One report documented that Florida high school graduates from the class of 1996 earned $28,252 in 2005, compared with $20,136 for dropouts from that class (Sellers, 2007, p. 26). Documenting what happens to dropouts after they leave school can help state and local policy makers understand the importance of enacting effective dropout prevention programs, particularly when the data reflect outcomes for local students.
Research has also shown that dropping out is often a temporary status. That is, some dropouts return to school, either to earn a regular high school diploma or an alternative credential. One national study followed a group of students from the end of grade 8 in 1988 to 2000, eight years after their expected graduation in 1992 (Hurst, Kelly, and Princiotta, 2004). The authors reported that 20 percent of the students had dropped out of high school at least once. Among these students, 43 percent had completed high school by spring 1994, two years after their expected graduation (14 percent with a regular diploma and 29 percent with a GED or alternative certificate). An additional 20 percent completed high school between spring 1994 and spring 2000 (5 percent with a regular diploma and 15 percent with a GED or alternative certificate). Altogether, 63 percent of the dropouts in the study went on to earn some form of high school credential. A more recent national study reported similar results. This study tracked sophomores from 2002 through 2006, two years after their expected graduation (Rumberger and Rotermund, 2008) and found that 69
percent of the students who had dropped out had either completed high school (18 percent), earned a GED (31 percent), or were pursuing either a diploma or a GED (20 percent).
These two studies suggest that all is not lost when a student drops out. State and local longitudinal data systems can help to identify what happens to students who leave school and can study the factors associated with their decisions to return to school or to pursue a GED. This research can provide important information to guide decision making about state and local policy interventions and support, even after students leave school.
Evaluating the Impact of Policies and Programs to Reduce Dropout Rates
The final issue that comprehensive data can help address is the impact of policies and programs designed to reduce dropout rates and improve graduation rates. Research in this area is urgently needed, particularly experimental research that uses a design that allows one to make causal inferences about the effectiveness of a particular policy or program. Despite the long-standing interest in the problem of school dropouts, there is relatively little rigorous evidence on the effectiveness of intervention programs.
For instance, in 2002, the U.S. Department of Education established the What Works Clearinghouse (WWC) to review scientific evidence on the effectiveness of a variety of educational interventions, including dropout prevention programs (see http://ies.ed.gov/ncee/wwc/). As of September 2008, the WWC had reviewed 84 studies of 22 dropout interventions and found only 23 studies of 16 interventions that were scientifically rigorous enough to determine whether the intervention was effective. Of those 16 interventions, 7 were effective in reducing dropout rates and 6 were effective in improving students’ progress toward graduation (such as earning credits toward graduation), but only 4 were effective in improving high school completion rates. Moreover, none of these 4 programs were effective in helping students earn a regular high school diploma; instead, they helped students pass the test required for a GED certificate.
Education data systems can provide valuable information with which to evaluate the effectiveness of locally implemented dropout prevention programs, especially when they contain longitudinal data that measure changes in student outcomes over time. Such data systems can support rigorous evaluations of program impacts (Schneider et al., 2007) that can be focused at the local level. Even when a program has proven to be effective elsewhere, it is important to
evaluate its impact when implemented in a particular school or district to be sure the program is effective in each setting.
ROLES AND DATA NEEDS OF DIFFERENT LEVELS OF GOVERNMENT
Each level of government has a role to play in addressing these four issues. Below we examine these roles and discuss how comprehensive data systems can be used by each level of government.
The Federal Government
The federal government plays three chief roles that help to further understanding of the dropout problem in this country: (1) it leads efforts to collect, maintain, and analyze data; (2) it issues regulatory guidance on calculating the rates; and (3) it provides support and funding to state and local governments in developing comprehensive data systems. We discuss each of these roles below.
One long-standing federal role has been to collect data to support education research, and several federal agencies collect and maintain data that can be used to study dropout, graduation, and completion rates. As described in Chapter 4, these include the U.S. Census Bureau (which collects data through the decennial census, the monthly Current Population Survey, and the annual American Community Survey), the Bureau of Labor Statistics (which collects data through the National Longitudinal Surveys), and the U.S. Department of Education (which collects data through its own surveys and from state education agencies). Data collected by these agencies have been the primary source for documenting the extent of the dropout problem in this country and studying the causes and impacts of dropping out. Throughout this report, we have cited statistics based on these data collection efforts.
Data collected at the federal level can also be used to support state-level research. Longitudinal data are particularly useful for identifying predictors of dropping out because they are generally more comprehensive than data collected by general population surveys, such as those conducted by the Census Bureau. One example is the Education Longitudinal Study (ELS) of 2002, a longitudinal study of a sample of sophomores that began in spring 2002, with additional data collections in 2004 and 2006 and another scheduled for 2010 or 2012.
This effort includes survey, test, and transcript data collected on students, with additional survey data collected from their parents, teachers, librarians, and schools. This study collected information from parents on parenting practices that may be associated with dropout behavior and from students on their attitudes toward school (e.g., aspirations) and their behaviors (absenteeism, skipping school) that may be used to predict which students are more likely to graduate. Although this study was not designed to provide representative population estimates for all states, data from states with large populations are sufficient to be used for research.
Using a subsample of these data, the California Dropout Research Project was able to study students in California to identify student and school factors that predicted high school graduation (Rumberger and Arellano, 2007). The study identified three student factors similar to the at-risk factors discussed in Chapter 5: being over age for grade, having a low grade-point average (GPA) in grade 9, and failing grade 9. Their analyses revealed that 44 percent of the California subsample were at risk, and only 61 percent of these students graduated from high school. Using this data set, the researchers were also able to identify alterable school and student factors predictive of graduation.
Specifically, students in a college preparatory or vocational program were twice as likely to graduate as students in general programs, although schools with an overemphasis on vocational programs tended to have lower graduation rates. Two structural features of schools were associated with lower odds of graduating. For students who attended year-round schools, the odds of graduating were half of those for students who attended schools on a regular academic calendar. For students who attended alternative schools, the odds of graduating were only one-fifth of those for students attending regular high schools. Thus, this research has identified specific factors associated with students’ chances of graduating, although it is important to note that this was a descriptive study, not one that would permit the conclusion that these features of schools caused graduation rates to change. However, these are factors that could potentially be altered if it was determined that doing so would improve students’ chances of graduating. Future research might be designed to study the extent to which making these changes results in improved graduation rates.
NCES has just initiated a new longitudinal study of high school students, the High School Longitudinal Study of 2009, that will track a cohort of grade 9 students through high school (see http://nces.ed.gov/surveys/hsls09/). One unique feature of this study is that it will produce not only a nationally representative sample of grade 9 students but also representative samples for each of 10 states. Thus, it will be valuable for producing comparable state estimates of dropout and graduation rates for this cohort of grade 9 students.
See Overview: Purpose, retrieved January 22, 2009, from http://nces.ed.gov/surveys/els2002/.
The National Educational Longitudinal Study of 1988, an earlier longitudinal study conducted by the U.S. Department of Education, has been used in more than 38 studies of dropouts (Rumberger and Lim, 2008).
Issuing Regulatory Guidance
The federal government also issues regulatory guidance that helps to create consistency in calculating the rates. In its role of collecting data from state education agencies, the U.S. Department of Education dictates how states compute and report education data and statistics. For instance, it requires that all states use a common definition of a dropout in the data they provide for the Common Core of Data (CCD). Specifically, a dropout is someone who
was enrolled in school at some time during the previous school year;
was not enrolled at the beginning of the current school year (on October 1);
has not graduated from high school or completed a state- or district-approved education program (including special education and GED preparation); and
does not meet any of the following exclusionary conditions:
transferred to another public school district, private school, or state- or district-approved education program;
was temporarily absent due to suspension or school-excused illness; or
died.
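Operationally, this definition amounts to a short sequence of checks on each student's exit record. The following is a minimal sketch, assuming a simplified record layout; the field names are illustrative, not actual CCD exit codes:

```python
from dataclasses import dataclass

@dataclass
class StudentRecord:
    # Hypothetical fields standing in for CCD-style exit information.
    enrolled_prev_year: bool   # enrolled at some time during the previous school year
    enrolled_oct_1: bool       # enrolled at the start of the current school year (October 1)
    completed_program: bool    # graduated or completed an approved education program
    transferred: bool          # transferred to another district, private school, or approved program
    temporarily_absent: bool   # temporarily absent for an excused reason (e.g., suspension)

def is_dropout(s: StudentRecord) -> bool:
    """Apply the CCD-style event dropout definition described above."""
    return (s.enrolled_prev_year
            and not s.enrolled_oct_1
            and not s.completed_program
            and not (s.transferred or s.temporarily_absent))
```

A state or district system would apply such a rule to every student enrolled the previous year; the share of those students classified as dropouts is the event dropout rate.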
Similarly, the Department of Education has issued regulations for reporting graduation rates in order to comply with the No Child Left Behind (NCLB) Act. Beginning with the 2011-12 school year, states will be required to report high school graduation rates that, for the most part, conform with the procedures advocated by the National Governors Association (NGA) Compact (see Chapter 2). This kind of regulatory guidance helps to create comparability in the rates states report and the data that the federal government collects from the states.
See Defining and Calculating Event Dropout Rates Using the CCD at http://nces.ed.gov/pubs2007/dropout05/DefiningAndCalculating.asp.
As we described in Chapter 2, the NGA rate allows states to determine what constitutes a “regular” diploma. NCLB provides a definition of a regular diploma that overrides the state definition.
See A Uniform, Comparable Graduation Rate at http://www.ed.gov/policy/elsec/reg/proposal/uniform-grad-rate.html [accessed January 2009].
Funding State Activities
Finally, the federal government provides funding to states to support certain programs and activities. As noted in Chapter 6, Title II of the Educational Technical Assistance Act of 2002 established the Statewide Longitudinal Data System (SLDS) Grant Program to help states develop their longitudinal data systems. Funding to support data system development was offered through a competitive grant program. In addition, the 2009 American Recovery and Reinvestment Act is providing funds to establish data systems at the state and local levels for improving student and teacher performance (U.S. Department of Education, 2009, p. 4).
The federal government, particularly the Department of Education, has also supported efforts to improve the quality of data collected by the states and to bring consistency to the way dropout and graduation rates are computed. As described in Chapter 6, the Department of Education created the National Forum on Education Statistics, which has organized task forces to address such issues as setting quality standards for data systems (National Forum on Education Statistics, 2005), standardizing exit codes used to indicate students’ status when they leave school (National Forum on Education Statistics, 2006), and developing coding systems for attendance data that are common across states (National Forum on Education Statistics, 2009).
This kind of federal support helps states to develop comprehensive data systems, ensure that the data incorporated into the systems are of high quality, and ensure that the resulting rates will be comparable.
State Governments
States perform some of the same roles as the federal government when it comes to addressing issues related to dropout and completion statistics. That is, states collect the data from local education agencies that are used to calculate dropout and completion rates, they issue guidance on how the data are coded and reported, and they provide support for dropout prevention programs. However, unlike the federal government, states have governance authority over public education and can directly implement policies at the school and district levels. When it becomes evident that certain schools or districts are not performing well, states have the authority to take action. For instance, a state can target specific schools for remedial programs when it becomes apparent that graduation rates are too low or dropout rates too high. Comprehensive data systems can provide an empirical basis to support this kind of decision making. There is a growing recognition among educators, policy makers, and researchers that longitudinal data systems can lead to increased accuracy in the reporting of dropout and completion rates (see National Governors Association Task Force on State High School Graduation Data, 2005; Miao and Haney, 2004; Warren, 2005; and http://dataqualitycampaign.org/survey/actions). With more accurate rates, as well as rates that are calculated to be comparable across school systems, state policy makers can make better decisions about which schools and districts are in need of targeted interventions.
Some of the work conducted as part of the California Dropout Research Project provides an example of the ways in which states can make use of comprehensive data systems. In his presentation, Russell Rumberger discussed a recent project report, Solving California’s Dropout Crisis (2008), which recommended that the state develop and report a set of indicators, including progress indicators of how well students in California are moving toward fulfilling the requirements of a high school diploma. One indicator might be a grade 9 to 10 promotion rate, measuring how many students earned enough credits in grade 9 to be promoted to grade 10, since research has shown that failing grade 9 courses is predictive of dropping out (see Chapter 5). Other indicators could identify at-risk students in middle school, such as those failing grade 6 English or mathematics courses or those with high rates of absenteeism. Such early indicators could be used by schools to identify students in need of support services to help keep them on track toward high school graduation. Balfanz also suggested that such data could be used to direct additional resources to schools with high percentages of students at risk of dropping out.
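As a sketch of how such a promotion-rate indicator might be computed from student-level records (the credit threshold and record layout here are hypothetical, not California's actual rules):

```python
def grade9_to_10_promotion_rate(credits_earned, credits_required=55):
    """Share of grade 9 students who earned enough credits to be promoted
    to grade 10.

    `credits_earned` is a list of credit totals, one per grade 9 student;
    the default threshold of 55 credits is illustrative only.
    """
    if not credits_earned:
        return 0.0
    promoted = sum(1 for credits in credits_earned
                   if credits >= credits_required)
    return promoted / len(credits_earned)
```

A state could report such a rate by school and district each year, flagging schools where it falls well below the statewide average.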
Rumberger also described how a more robust state education data system could be used to initiate and evaluate intervention efforts aimed at improving dropout and graduation rates. Such systems could be used to measure the existing readiness, will, and capacity of schools and districts to better target improvement strategies, determine the amount and kinds of support needed to implement the strategies, and build capacity to sustain them. The systems could also monitor the implementation of the improvement strategies so that they can be compared with benchmarks and timelines. This might include progress indicators to provide information on the initial impact of the improvement efforts in order to identify and facilitate midcourse corrections in programs (see Rumberger, 2009). Robust data systems can also help to evaluate the effectiveness and the cost-effectiveness of improvement strategies so that successful interventions can be replicated in other schools and districts (Levin, 1988; Levin et al., 2007).
Local Schools and Districts
Local schools and school districts have direct responsibility for collecting and reporting the data that states require and for implementing policies and programs that states mandate. With the help of comprehensive data systems, local school districts gain the ability to conduct their own research on dropout and completion rates, factors associated with dropping out or graduating, and the effectiveness of interventions.
The local school system is where intervention efforts happen, and education data systems can provide valuable information to improve programs and practices. In a study of district-wide school improvement in Duval County, Florida, Supovitz developed a model for a district-wide data system that serves three functions (Supovitz, 2006, Chapter 5):
Collecting and disseminating data on individual student performance so that teachers can make informed decisions about the most appropriate content and instructional strategies for each student;
Collecting data on programs and policies that can be used by district administrators to determine the types of support teachers and school leaders need; and
Collecting and reporting data to the public and to state and federal officials for accountability purposes.
Supovitz maintains that developing and using data provides two important benefits. It creates a sense of collective responsibility in the schools and the district, and it helps create the conditions for organizational learning, which is critical for sustaining educational improvement (p. 150).
Another model for using data to improve practice at the district level is provided by the Consortium for Chicago School Research (Roderick, Easton, and Sebring, 2009). Instead of providing simplistic answers to complex problems—the way traditional models of research attempt to influence policy and practice—this model seeks to build capacity in the district by
Developing key indicators for school improvement;
Providing support in identifying strategies for improvement; and
Identifying the theory of action behind new district-wide policies and examining how these policies fare in practice (pp. 23-29).
An illustration of this approach was the development of grade 9 on-track indicators that are now used to judge the performance of Chicago high schools (see Chapter 5).
A district-wide data system could help teachers and school counselors identify students at risk for dropping out and some of their particular needs, so students are provided with the appropriate support and services. The system could
also help district and school leaders monitor the progress of such students and the effectiveness of the programs and services being provided, allowing them to make adjustments if students fail to improve. Finally, the system could also be used to report to the public as well as to state and federal officials the progress of such students and the effectiveness of efforts to improve their performance.
The emerging efforts to use early warning systems to support more effective interventions have several implications for policy and practice. First, the strategy of using early indicators to prevent students from reaching off-track status suggests a possible middle ground between waiting for fully operational statewide early warning systems that enable classroom-level analysis and having each school try to design its own system. States or districts could focus on establishing valid and reliable predictors for different school populations and then provide updates to schools on a regular basis (i.e., every marking period). Schools could then use simple early warning indicators (e.g., missing more than two days in a month, getting two or more office referrals, failing two quizzes) to identify students in need of additional support. The ultimate effectiveness of the approaches used by the school could then be established by tracking the extent to which the number of students reaching off-track status declines across marking periods.
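The simple early warning indicators mentioned above can be sketched as a small flagging rule. The thresholds below mirror the examples in the text and are purely illustrative; a real system would validate its cutoffs against local data before use:

```python
def early_warning_flags(days_absent_this_month, office_referrals, quizzes_failed):
    """Return the areas in which a student has tripped a simple early
    warning indicator. Thresholds are the illustrative examples from the
    text, not validated cutoffs.
    """
    flags = []
    if days_absent_this_month > 2:    # missing more than two days in a month
        flags.append("attendance")
    if office_referrals >= 2:         # two or more office referrals
        flags.append("behavior")
    if quizzes_failed >= 2:           # failing two quizzes
        flags.append("course performance")
    return flags
```

Run each marking period, such a rule gives schools a short list of students to follow up with, and the count of flagged students can itself be tracked over time to judge whether interventions are working.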
A second implication is that considerable effort will need to be invested in training and supporting school personnel to use off-track indicators to develop better interventions and target them more effectively. This will include both mission building and cultural change. School personnel will need to be reoriented from thinking of their primary task as ensuring that their students meet grade-level objectives to seeing their responsibility as keeping students on track to high school graduation. In practical terms, the challenge is to create the conditions, supports, and rationales that promote rapid intervention at the first signs that students are in danger of falling off-track, rather than assuming or hoping that what might be seen as relatively small struggles will self-correct. Support will also have to be provided to enable teachers and administrators to take a more proactive role in evaluating the effectiveness of interventions.
Finally, states and local school districts should take the lead in integrating information from agencies that work with youth outside the school system, in particular, agencies that oversee juvenile justice, foster care, and child protection (abuse and neglect). It will be useful for both the school system and the agencies to know the extent to which students involved with the agency exhibit off-track behaviors and the extent to which they demonstrate these behaviors prior to involvement with the agency. This could lead to more integrated and effective supports for the students in need of them. It could also potentially provide the basis for the agencies to help fund and hold accountable school-based prevention efforts designed to keep students from exhibiting off-track behaviors. Philadelphia has developed such a system, known as the Kids Integrated Data System (KIDS, see http://www.gse.upenn.edu/child/projects/kids).
Balfanz noted that his work in Philadelphia with an integrated data set revealed three important findings. First, students involved with the juvenile justice, foster care, and child protection agencies, as well as young women who gave birth during high school, had very high dropout rates, ranging from 70 to 90 percent. Second, although these dropout rates were very high, agency-involved youth accounted for a relatively small portion of all dropouts. Third, two-thirds of the young men who would be incarcerated in high school and two-thirds of the young women who would give birth in high school had an off-track indicator in grade 6 (Neild and Balfanz, 2006b).
CONCLUSIONS AND RECOMMENDATIONS
All levels of government have a vested interest in improving education data systems to help solve the problem of school dropouts, and each level can play a part. The committee makes a number of recommendations on what each level of government can do.
The federal government can do much to support the development of quality data systems. Currently, support is provided through the Statewide Longitudinal Data System Grant Program established by Title II of the Educational Technical Assistance Act of 2002 and through the 2009 American Recovery and Reinvestment Act. We applaud the federal government’s efforts along these lines and therefore recommend:
RECOMMENDATION 7-1: The federal government should continue to support the development of comprehensive state education data systems that are comparable, are interoperable, and facilitate exchange of information across state boundaries to more accurately track enrollment and completion status.
Improving the graduation rates in this country requires much more than simply reporting accurate and valid rates. It requires taking actions that will improve outcomes for the nation’s youth. A number of steps can be taken to improve policy and practice in this area. We first endorse states’ efforts to develop comprehensive longitudinal data systems. These data systems should incorporate the information needed both to calculate rates and to improve policy and practice, such as by identifying the factors associated with dropping out, using these factors to identify at-risk students, and undertaking and evaluating interventions intended to improve outcomes for these students. The approach taken by the California Dropout Research Project provides an example of the ways that states can make use of national data sets to conduct their own research, identify precursors to dropping out, and evaluate the effectiveness of interventions. In addition to identifying individual factors associated with dropping out, this endeavor was able to identify school characteristics
associated with lower dropout rates, which may be pursued in greater depth in future research. We make two recommendations with regard to the kinds of actions that states should take to improve policy and practice:
RECOMMENDATION 7-2: State governments should develop more robust education data systems that can better measure student progress and institutional improvement efforts.
RECOMMENDATION 7-3: State governments should support reform efforts to demonstrate how districts can develop and effectively use more comprehensive education data systems to improve dropout and graduation rates, along with improved student achievement.
Finally, we think the federal government should play an active role in this area by collecting data on the precursors of dropping out. This would allow for indicators of progress toward graduation at the national level and enable comparative studies on early indicators of dropout across states and localities. We therefore recommend:
RECOMMENDATION 7-4: The federal government should collect aggregate-level indicators of student progress toward high school graduation at the federal, state, and local levels. Such aggregate-level indicators should be collected by grade level in the middle grades (6 through 8) and by year during high school (first year, second year, etc.). These indicators should include variables such as the number of students missing 10 percent or more of school days, average number of days absent, average number of course failures, number of students failing one course or more, mean GPA, and indicators of behavior problems.
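To illustrate, the aggregate indicators named in Recommendation 7-4 could be computed from student-level records roughly as follows. The record keys and the 180-day school year are assumptions for the sketch, not part of the recommendation:

```python
def aggregate_progress_indicators(records, school_days=180):
    """Compute cohort-level indicators of progress toward graduation.

    `records` is a list of dicts with hypothetical keys 'days_absent',
    'course_failures', and 'gpa'; a real system would draw these from
    a longitudinal student data system.
    """
    n = len(records)
    if n == 0:
        return {}
    return {
        # number of students missing 10 percent or more of school days
        "n_missing_10pct": sum(1 for r in records
                               if r["days_absent"] >= 0.10 * school_days),
        "avg_days_absent": sum(r["days_absent"] for r in records) / n,
        "avg_course_failures": sum(r["course_failures"] for r in records) / n,
        "n_failing_one_or_more": sum(1 for r in records
                                     if r["course_failures"] >= 1),
        "mean_gpa": sum(r["gpa"] for r in records) / n,
    }
```

Computed separately for each grade level and high school year, these aggregates would support the kind of cross-state and cross-district comparisons the recommendation envisions without requiring student-level data to leave the state.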