Getting Value Out of Value-Added: Report of a Workshop
difficulties are worked out before making use of such methods. Decisions about schools and teachers are being made, and, as Jane Hannaway noted, there is enormous demand from the policy side, to which the testing and research communities need to respond as quickly as possible. Some of the technical problems may never be resolved, as is the case with current status models, but many participants asserted that value-added methods can still be used, albeit with caution.
At the same time, throughout the workshop, participants raised a number of questions they considered important for policy makers to ask when weighing the use of value-added indicators for evaluation and other purposes.
A RANGE OF VIEWS
Compared to What?
Kevin Lang suggested that, when deciding whether to use value-added methods, one question for decision makers to ask is “Compared to what?” If these models are intended to replace other indicators, will they provide information that is more useful, accurate, or fair than what is currently available? If they are being considered as an additional indicator (in conjunction with others), will the incremental gain in information be substantively meaningful?
Dale Ballou reminded the group that every method for evaluating effectiveness with respect to student achievement (e.g., status, growth, value-added) has risks and rewards. So the question “Compared to what?” is also important to ask about the risk-reward trade-off associated with different test-based evaluation strategies. Many of the concerns about value-added models—including concerns about the models themselves (e.g., transparency and robustness to violations of assumptions), concerns about the test data that feed into the models (e.g., reliability, validity, scaling), and concerns about statistical characteristics of the results (e.g., precision, bias)—also apply to some extent to the assessment models that are currently used by the states. Value-added models do raise some unique issues, which were addressed at the workshop.
Regardless of which evaluation method is chosen, risk is unavoidable. That is, in the context of school accountability, whether decision makers stay with what they do now or adopt something different, they incur risks of two kinds: (1) identifying as failing (i.e., truly ineffective) some schools that are not, and (2) failing to identify some schools that truly are ineffective. One question is whether value-added models, used in place of or in addition to other methods, would help reduce those risks.
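The trade-off between these two error types can be made concrete with a small simulation. The sketch below is purely illustrative and not drawn from the report: it assumes hypothetical values for the share of truly ineffective schools, the size of their effect, the measurement noise in the indicator, and the classification cutoff, then counts how often effective schools are wrongly flagged and ineffective ones are missed.

```python
import random

random.seed(0)

# Hypothetical parameters (assumptions for illustration, not from the report)
N_SCHOOLS = 10_000
TRUE_FAIL_RATE = 0.10   # assumed share of truly ineffective schools
NOISE_SD = 0.5          # assumed noise in the effectiveness indicator
CUTOFF = -1.0           # schools scoring below this are labeled "failing"

false_positives = 0     # effective schools wrongly labeled failing
false_negatives = 0     # ineffective schools not labeled failing
for _ in range(N_SCHOOLS):
    truly_failing = random.random() < TRUE_FAIL_RATE
    # Assumed true effects: ineffective schools at -1.5, others at 0.0
    true_effect = -1.5 if truly_failing else 0.0
    observed = true_effect + random.gauss(0, NOISE_SD)
    labeled_failing = observed < CUTOFF
    if labeled_failing and not truly_failing:
        false_positives += 1
    elif not labeled_failing and truly_failing:
        false_negatives += 1

print(f"false positives: {false_positives} of {N_SCHOOLS}")
print(f"false negatives: {false_negatives} of {N_SCHOOLS}")
```

Moving the cutoff trades one error for the other: a stricter cutoff flags fewer effective schools but misses more ineffective ones, which is why the choice among status, growth, and value-added indicators is ultimately a choice about which risks to bear.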