Chapter 6
Adequate Yearly Progress

In addition to requiring states to set standards for student performance, the 1994 Title I statute also calls on states to determine whether schools are making “adequate yearly progress” in bringing students up to the standards they have set. Specifically, the law states that adequate yearly progress must be defined “in a manner that (1) results in continuous and substantial yearly improvement of each school and local education agency sufficient to achieve the goal of all children…meeting the state's proficient and advanced levels of achievement; [and] (2) is sufficiently rigorous to achieve that goal within an appropriate timeframe.”

In this aspect, as in many others, the law represents a substantial departure from past practice. To be sure, Title I has long required some demonstration of improvement in performance. The Hawkins-Stafford Amendments of 1988, for example, required school districts to identify schools that failed to demonstrate progress and to develop improvement plans for such schools (Natriello and McDill, 1999). However, those provisions required schools only to show an upward trend, not to set a goal of enabling all students to reach challenging standards. And in many cases the requirements for improvement were modest; in some districts, any improvement at all was considered adequate. The new law, by contrast, requires states to set a clear goal for all students and requires evidence of progress toward that goal. Moreover, the requirement for an “appropriate timeframe” suggests that small steps toward the goal may not be enough; steady, substantial improvement toward the standards is necessary.

Defining and measuring adequate yearly progress poses enormous challenges. Because the concept is central to accountability—schools that fail to demonstrate adequate yearly progress will be subject to intervention or other remedies—determining when progress is adequate and measuring it accurately and fairly become critical. Improper designations or inaccurate measures could mean that schools that are making progress receive intervention inappropriately, or that students in schools that need help may not get the assistance they require.

Findings

The most common method states and districts have used to determine adequate yearly progress is to set a goal for school performance, determine how long it will take to meet the goal, define progress toward the goal, and determine how school results will be structured so that the state or district can evaluate a school's rate of progress (Carlson, 1996). One of the best-known examples of this approach is the method used in Kentucky, which has been applied in some form in a number of other states and districts. Under Kentucky's system, the state set the overall target for all schools at the level at which all students perform at the proficient level and called this level 100. The state then determined each school's baseline performance from the results of the initial administration of the state test—giving greater weight to students at the proficient and distinguished (advanced) levels than to those at lower levels of performance—and subtracted that score from 100. It then set each school's two-year improvement target at 10 percent of the difference between the initial score and 100. At that rate, state officials reasoned, every school would reach the target within 20 years.

This approach depends heavily on the quality of the measures of school performance. As noted in Chapter 4, using average scores to determine school performance can lead to misleading inferences. (Although Kentucky uses a weighted average, assigning different values to students at different points on the distribution, it neither disaggregates the results nor otherwise accounts for the student population in each school.) The risk of misleading inferences is significant in measures of growth.
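The gap-closing arithmetic described above can be sketched in a few lines. The 10 percent step and the 20-year (10-biennium) horizon come from the text; the performance-level point weights are illustrative assumptions, since the chapter does not give Kentucky's actual values.

```python
# Sketch of a Kentucky-style "gap-closing" model. The point weights below
# are illustrative assumptions, not the state's actual values; the chapter
# says only that proficient and distinguished students are weighted more
# heavily than students at lower levels.
LEVEL_WEIGHTS = {"novice": 0, "apprentice": 40, "proficient": 100, "distinguished": 140}

def baseline_score(level_fractions):
    """Weighted baseline: the fraction of students at each performance
    level times that level's point value."""
    return sum(LEVEL_WEIGHTS[level] * frac for level, frac in level_fractions.items())

def biennial_targets(baseline, goal=100.0, periods=10):
    """Each two-year target adds 10% of the *initial* gap (a linear path),
    so every school reaches the goal of 100 in 10 biennia, i.e. 20 years."""
    step = 0.10 * (goal - baseline)
    return [baseline + step * i for i in range(1, periods + 1)]

# Example: a school whose students are mostly below proficient.
base = baseline_score(
    {"novice": 0.3, "apprentice": 0.4, "proficient": 0.25, "distinguished": 0.05}
)
targets = biennial_targets(base)  # first target is base + 10% of (100 - base)
```

Note that because the step is fixed at 10 percent of the initial gap, the implied trajectory is linear, which is exactly the assumption the chapter questions below.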
As Willms (1998) points out, schools with high initial test scores tend to grow at a faster rate than those with lower initial scores. In part this phenomenon reflects the fact that high performance tends to be associated with high levels of parental support, fewer disciplinary problems, and high teacher quality—all of which can contribute to continued improvement. At a minimum, this finding suggests that comparisons of growth rates that do not take into account the composition of a school's student body may be misleading.

A second factor in the “gap-closing” model, as the Kentucky approach is sometimes called, is a theory about the expected rate of growth. The Kentucky method appears to assume a linear rate: each school will grow at a 10 percent rate every two years. There is little evidence to suggest that this assumption is valid, or indeed what rate might be expected. Kentucky's own experience shows that, after initial gains, improvement appears to have reached something of a plateau. Without evidence about the rate of progress that schools are capable of demonstrating, particularly schools with high proportions of low-income students, a gap-closing model might set up unrealistic expectations and could provoke a backlash among schools that fail to meet them.

Another design issue in the development of measures of progress is the frequency of assessment. Kentucky elected not to test students in every grade level and instead relies on cross-sectional measures; that is, in determining progress, the state compares this year's 4th graders with last year's. This may be misleading, particularly in small schools, since the population of students in a school may differ significantly from one year to the next. Kentucky attempted to address this problem by gauging schools over a two-year period, so that year-to-year fluctuations in student populations could be ironed out.

An alternative is to use longitudinal measures, which track the performance of one group of students over time. This approach is expensive, since it requires annual testing of each student and tracking of students who move from school to school (Carlson, 1996). And it tends to rely on traditional forms of testing, because of cost and the scaling of results: performance measures tend to be more expensive than traditional multiple-choice tests, and annual testing of each student with performance measures would add up.
In addition, performance measures often rate student performance according to qualitative characteristics, which are difficult to place on a linear scale—yet a linear scale might be needed to show growth from year to year (Baker and Linn, 1997).

A final design issue relates to the use of multiple measures. The Kentucky model uses an index that combines scores from all subject-area assessments, plus other data (such as dropout rates and attendance rates), into a single number. This method has the advantage of incorporating information from a range of indicators, so that judgments about progress do not rest on a single test; schools can compensate for weak performance in one area by showing strong progress in another. Yet the system is highly complex, and few people understand how the index is compiled (Elmore et al., 1996). The index also omits the more detailed information about the data that constitute it—information that could give educators clues about what to do to improve the next time. Moreover, the index approach may exclude other data that could be useful in determining school progress toward standards. As noted in Chapter 5, data about classroom practices and the conditions of instruction are critical pieces of information in an educational improvement system. For one thing, they provide a context for the performance data, by showing whether performance gains are accompanied by improvements in practice and support for instruction. In addition, information about the conditions of instruction can serve as “leading indicators” that provide evidence of progress in advance of progress on tests and other performance measures, in the same way that data on factory orders signal growth in the economy in advance of increases in employment.

Recommendations

Measures of adequate yearly progress should include a range of indicators, including indicators of instructional quality as well as student outcomes.

Measures of adequate yearly progress should include results disaggregated by race, gender, economic status, and other characteristics of the student population.

The criterion for adequate yearly progress should be based on evidence from the highest-performing schools with significant proportions of disadvantaged students.

Questions to Ask

Are data on the conditions of instruction, as well as student outcomes, collected and reported in the measures of school progress?

Are these data disaggregated by race, gender, economic status, and other factors?

Are data on school performance collected over time from high-performing schools with significant proportions of disadvantaged students, to determine expectations for adequate progress for all schools?

Criteria

Moving the Distribution. The goal should be to enable all students to reach the desired level; therefore, any definition of progress should include success in reducing the number of students at the lower levels of achievement as well as in increasing the number attaining the standards.

Continuous Progress. Progress measures should encourage all schools to improve continuously; at the same time, states should acknowledge schools that reach high levels of achievement.

Reduction of Error. If states use cross-sectional measures of achievement in their adequate progress measures—comparing this year's 4th graders to last year's—they should measure progress over at least a two-year period, in order to reduce the sampling error that can occur because of shifts in schools' student populations.
If states assess each student each year and measure progress annually, they should measure the performance of all students, not just those who happen to remain in a school from year to year.

Use of Multiple Measures. Because of the limitations of test scores, measures of progress should not rely on single tests alone but should combine information from a range of sources. This information, however, should be combined in ways that are transparent and understandable to schools and the public.

Regular Review. To ensure that the criteria for determining progress remain valid and that the method for determining school progress remains sound, states and districts should regularly review the reliability, validity, and utility of the overall system and revise the technical specifications and performance expectations when appropriate.

Examples

The following examples are from two states that meet some, but not all, of the committee's criteria for adequate yearly progress. North Carolina's system uses evidence from past performance in determining whether schools are eligible for recognition or for assistance. However, the state's criteria rely solely on test performance, rather than on multiple measures, and the state judges schools on average performance rather than on the performance of subgroups within schools. Missouri's system for determining adequate progress, meanwhile, explicitly encourages schools to narrow the achievement gap between high-performing and low-performing students, not just raise the overall average. But the state's system relies only on test performance and does not base its targets on evidence from successful schools.
North Carolina judges the progress of schools by examining scores on the state's End of Course tests and compiling a “growth composite” based on three factors: statewide average growth, the previous performance of students in the school, and a statistical adjustment that is needed whenever students' test scores are compared from one year to the next. The state provides cash awards to schools that show substantial gains in performance. Schools gaining at the “expected” rate, based on the state formula, receive awards of up to $750 per certified staff member and $375 per teaching assistant. Schools that register “exemplary” gains—10 percent above the statewide average—can receive up to $1,500 per certified staff member and $500 per teaching assistant. Schools can use the money for teacher bonuses or for school programs. To be eligible for recognition, schools must test at least 95 percent of their students (98 percent in grades K-8).
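The award tiers above imply a simple upper bound on a school's award pool. The per-person caps come from the text; the function, its names, and the idea of summing a per-school maximum are illustrative assumptions, not the state's actual formula.

```python
# Upper-bound sketch of North Carolina's award amounts as described above.
# The per-person dollar caps are from the text; everything else here
# (names, the flat per-school total) is an illustrative assumption.
AWARDS = {
    "expected":  {"certified": 750,  "assistant": 375},   # met the expected growth rate
    "exemplary": {"certified": 1500, "assistant": 500},   # gains 10% above statewide average
}

def max_award_pool(status, n_certified, n_assistants):
    """Upper bound on a school's award money under the stated per-person caps."""
    if status not in AWARDS:
        return 0  # no recognition, no award
    rates = AWARDS[status]
    return rates["certified"] * n_certified + rates["assistant"] * n_assistants

# A school with 40 certified staff and 10 assistants making exemplary gains
# could receive at most 40 * 1500 + 10 * 500 dollars.
pool = max_award_pool("exemplary", 40, 10)
```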
In 1998, Missouri began to implement a new assessment system, the Missouri Assessment Program (MAP), designed to measure progress toward the state standards. The program consists of assessments in mathematics, communication arts, and science; social studies, health and physical education, and fine arts are expected to be added in the coming years. The state board of education has designated five levels of performance on the assessment: “advanced,” “proficient,” “nearing proficient,” “progressing,” and “step 1” (the lowest). To meet the criterion for adequate yearly progress under Title I, schools must reduce the number of low-performing students. Specifically, a school must achieve one of the following:

At least a 5 percent increase in the composite percentage of students in the upper three performance levels, together with at least a 5 percent decrease in the percentage of students in the bottom performance level;

A 20 percent decrease in the percentage of students in the bottom performance level, in schools in which at least 40 percent of a class group is in the bottom level; or

A percentage of students in the bottom performance level of 5 percent or less.
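The three criteria can be expressed as a short check. The text does not say whether the 5 percent figures are percentage points or relative changes, so the reading below (points for the first criterion, a relative decrease for the second) is an assumption, as are the function and parameter names.

```python
# Sketch of the three Missouri AYP criteria listed above. Inputs are the
# percentage of students in the top three performance levels and in the
# bottom level, for the prior year ("prev") and the current year ("curr").
# Assumptions: criterion 1 uses percentage-point changes; criterion 2 uses
# a relative (20%) decrease. The chapter does not specify either reading.
def meets_missouri_ayp(prev_top3, curr_top3, prev_bottom, curr_bottom):
    """A school meets AYP if it satisfies any one of the three criteria."""
    # 1. Top-three share up >= 5 points AND bottom share down >= 5 points.
    c1 = (curr_top3 - prev_top3 >= 5) and (prev_bottom - curr_bottom >= 5)
    # 2. In schools with >= 40% of a class group in the bottom level,
    #    a 20% relative decrease in the bottom-level percentage.
    c2 = prev_bottom >= 40 and curr_bottom <= 0.8 * prev_bottom
    # 3. The bottom-level percentage is 5% or less.
    c3 = curr_bottom <= 5
    return c1 or c2 or c3
```

Note that the criteria are alternatives: a school with very few students in the bottom level (criterion 3) meets AYP even with no year-to-year gain.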