5 Considerations for Policy Makers
Pages 55-68

Each excerpt below is the single passage algorithmically identified as the most significant chunk of text on that page of the chapter.


From page 55...
... Many participants emphasized that value-added models have the potential to provide useful information for educational decision making, beyond that provided by the test-based indicators that are widely used today. These models are unique in that they are intended to provide credible measures of the contributions of specific teachers, schools, or programs to student test performance.
From page 56...
... Regardless of which evaluation method is chosen, risk is unavoidable. That is, in the context of school accountability, whether decision makers choose to stay with what they do now or to do something different, they are going to incur risks of two kinds: (1)
From page 57...
... As explained in Chapter 4, most workshop participants thought that fixed-effects models generally worked well to minimize the bias that results from selection on fixed (time-invariant) student characteristics, whereas models employing student characteristics and teacher random effects worked well to minimize variance.
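The bias-variance trade-off behind that comparison can be illustrated with a small, hypothetical sketch on simulated data; the column names, the statsmodels specifications, and the simple dummy-variable versus random-effects contrast below are illustrative assumptions, not the particular models compared in Chapter 4. Dummy-variable estimates of teacher effects are unshrunken and therefore noisy, while random-effects estimates are shrunk toward the overall mean, reducing variance at the risk of some bias.

```python
# Illustrative only: simulated data and simplified specifications, not the
# workshop's models. Requires numpy, pandas, and statsmodels.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_teachers, n_per = 40, 25
teacher_id = np.repeat(np.arange(n_teachers), n_per)
true_effect = rng.normal(0, 3, n_teachers)          # true teacher contributions
prior = rng.normal(500, 50, n_teachers * n_per)     # prior-year test score
frl = rng.integers(0, 2, n_teachers * n_per)        # a student characteristic
score = (50 + 0.9 * prior - 5 * frl + true_effect[teacher_id]
         + rng.normal(0, 10, n_teachers * n_per))
df = pd.DataFrame({"score": score, "prior": prior, "frl": frl,
                   "teacher_id": teacher_id})

# Fixed-effects style: one dummy per teacher; the estimates are unshrunken,
# so they carry more sampling variance.
fe = smf.ols("score ~ prior + frl + C(teacher_id)", data=df).fit()

# Random-effects style: teacher effects are treated as draws from a common
# distribution and shrunk toward the mean, which lowers variance.
re = smf.mixedlm("score ~ prior + frl", data=df, groups="teacher_id").fit()
print(re.random_effects[0])   # shrunken estimate for teacher 0
```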
From page 58...
... What Is Needed to Implement a Value-Added Model? Workshop participants talked about the different capacities that a statewide or district system would need to have in order to properly implement, and to derive meaningful benefits from, a value-added analysis, for example:
• a longitudinal database that tracks individual students over time and accurately links them to their teachers, or at least to schools (if the system will be used only for school and not for teacher accountability)
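As a concrete, purely hypothetical illustration of the linkage that capacity implies, the sketch below sets up a minimal longitudinal schema in SQLite from Python; the table and column names are invented. The key structure is the point: one score row per student, year, and subject, carrying teacher and school identifiers, so that current and prior scores can be joined for a value-added or growth analysis.

```python
# Hypothetical schema, illustration only: table and column names are assumptions.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE students (student_id INTEGER PRIMARY KEY);
CREATE TABLE teachers (teacher_id INTEGER PRIMARY KEY, school_id INTEGER);
CREATE TABLE test_scores (
    student_id  INTEGER REFERENCES students(student_id),
    teacher_id  INTEGER REFERENCES teachers(teacher_id), -- drop if links stop at the school level
    school_id   INTEGER,
    grade       INTEGER,
    year        INTEGER,
    subject     TEXT,
    scale_score REAL,
    PRIMARY KEY (student_id, year, subject)
);
""")

# Linking each student's current score to the prior year's score is then a
# self-join on student, subject, and adjacent years.
rows = con.execute("""
    SELECT cur.student_id, cur.teacher_id, cur.scale_score, prior.scale_score
    FROM test_scores AS cur
    JOIN test_scores AS prior
      ON prior.student_id = cur.student_id
     AND prior.subject    = cur.subject
     AND prior.year       = cur.year - 1
""").fetchall()
```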
From page 59...
... For example, a state official might consider simply reporting school test results to the media, without any sanctions attached to the results, to be low stakes; but a teacher or a principal may feel that such public reporting amounts to high stakes, because it affects her professional reputation and negative results can cause her embarrassment. When there is uncertainty about how different stakeholders will perceive the stakes associated with the results of a value-added system, decision makers should err on the side of assuming that the stakes are high and take the necessary precautions.
From page 60...
... That's why we apply the "beyond a reasonable doubt" standard in criminal cases. It's why criminal courts typically refuse to admit polygraph tests.
From page 61...
... high-performing or low-performing teachers to inform teacher improvement strategies. Brian Stecher suggested that a value-added analysis could provide a preliminary, quantitative indicator to identify certain teachers who might employ pedagogical strategies or exhibit certain behaviors to be emulated, as well as teachers who might need to change their strategies or behaviors.
From page 62...
... Scott Marion and Lorrie Shepard described Damian Betebenner's work on the reporting system for Colorado as a good illustration of how status and value-added models might be combined, although this system includes a growth model, not a value-added one. The Colorado Growth Model offers a way for educators to understand how much growth a student made from one year to the next in comparison to his or her academic peers.
From page 63...
... Using this model, it is possible to determine, in terms of growth percentiles, how much progress a student needs to make to reach proficiency within one, two, or three years. In addition to calculating and reporting growth results for each student, school, and district, the Colorado Department of Education produces school and district reports depicting both growth and status (percentage proficient and above) results in what has been termed a "four quadrant" report.
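The sketch below is a deliberately simplified, hypothetical rendering of those two ideas. The actual Colorado Growth Model derives student growth percentiles from quantile regression conditioned on prior scores; this toy version simply ranks each student's current score among peers in the same prior-score decile and then places schools in the four growth-by-status quadrants using invented cut points.

```python
# Toy version of growth percentiles and the "four quadrant" report.
# All data, cut scores, and thresholds here are invented for illustration.
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
df = pd.DataFrame({
    "school_id": rng.integers(0, 20, 5000),
    "prior_score": rng.normal(500, 50, 5000),
})
df["score"] = 0.8 * df["prior_score"] + rng.normal(100, 30, 5000)

# Growth percentile: percentile rank of the current score within a peer group,
# defined crudely here as a decile bin of the prior score.
df["peer_bin"] = pd.qcut(df["prior_score"], 10, labels=False)
df["growth_pct"] = df.groupby("peer_bin")["score"].rank(pct=True).mul(100).round()

PROFICIENT = 520                     # hypothetical proficiency cut score
school = df.groupby("school_id").agg(
    median_growth=("growth_pct", "median"),
    pct_proficient=("score", lambda s: (s >= PROFICIENT).mean() * 100),
)

# Four-quadrant report: above/below typical growth crossed with above/below
# the status target, mirroring the growth-and-status display described above.
school["quadrant"] = np.where(
    school["median_growth"] >= 50,
    np.where(school["pct_proficient"] >= 50,
             "high growth / high status", "high growth / low status"),
    np.where(school["pct_proficient"] >= 50,
             "low growth / high status", "low growth / low status"),
)
print(school.head())
```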
From page 64...
... Judith Singer observed that the term transparency was used during the workshop to refer to several different but related ideas. One meaning relates to fairness -- people desire a system that is equitable, cannot be "gamed," and rewards the teachers and schools that truly deserve it. The second relates to methodology -- that is, methodologists could "inspect" the model and the estimation machinery in order to evaluate them against professional standards.
From page 65...
... When a school with many disadvantaged students performs at the state average with respect to a status model, most observers would be inclined to judge the school's performance as laudable, although they probably would not do so if the student population were drawn from a very advantaged community. And if the above-average performance were the outcome of a value-added analysis, one would be unsure whether
From page 66...
... However, since randomized experiments are rare in education -- and those that are conducted take place in special circumstances so that generalizability can be a problem -- it will be hard to ever be fully confident that the application of a particular statistical model in a specific setting produces essentially unbiased value-added estimates. Moreover, precision is an ongoing problem and, as Linn pointed out, there is a great deal of measurement error in the test results fed into these models, which in turn induces substantial uncertainty in the resulting estimates.
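Linn's point about measurement error propagating into the estimates can be seen in a toy simulation; the numbers are entirely hypothetical and the deliberately crude mean-gain estimator is not any model discussed at the workshop. As the error added to each observed score grows, the estimated teacher effects scatter further from the true ones.

```python
# Toy simulation, illustration only: a crude mean-gain "value-added" estimate
# with varying amounts of measurement error added to the observed scores.
import numpy as np

rng = np.random.default_rng(2)
n_teachers, n_per = 50, 30
true_effect = rng.normal(0, 3, n_teachers)   # true teacher contributions
BASE_GAIN = 10                               # average gain common to everyone

def estimated_effects(noise_sd):
    est = np.empty(n_teachers)
    for t in range(n_teachers):
        true_prior = rng.normal(500, 50, n_per)
        true_post = true_prior + BASE_GAIN + true_effect[t]
        obs_prior = true_prior + rng.normal(0, noise_sd, n_per)  # measurement error
        obs_post = true_post + rng.normal(0, noise_sd, n_per)
        est[t] = np.mean(obs_post - obs_prior) - BASE_GAIN
    return est

for sd in (0, 15, 30):   # hypothetical measurement-error standard deviations
    rmse = np.sqrt(np.mean((estimated_effects(sd) - true_effect) ** 2))
    print(f"score error sd = {sd:2d}: RMSE of teacher-effect estimates = {rmse:.2f}")
```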
From page 67...
... that value-added indicators might be tried out in high-stakes contexts, as long as the value-added information is one of multiple indicators used for decision making and the program is pilot-tested first, is implemented with sufficient communication and training, includes well-developed evaluation plans, and provides an option to discontinue the program if it appears to be doing a disservice to educators or students.

