Suggested Citation:"References." National Research Council. 2010. Getting Value Out of Value-Added: Report of a Workshop. Washington, DC: The National Academies Press. doi: 10.17226/12820.

References

Aaronson, D., Barrow, L., and Sanders, W. (2007). Teachers and student achievement in the Chicago public schools. Journal of Labor Economics, 25, 95-135.

American Educational Research Association, American Psychological Association, and National Council on Measurement in Education. (1999). Standards for educational and psychological testing. Washington, DC: Authors.

Ballou, D. (2005). Value-added assessment: Lessons from Tennessee. In R.W. Lissitz (Ed.), Value-added models in education: Theory and applications (pp. 272-297). Maple Grove, MN: JAM Press.

Ballou, D. (2008). Value-added analysis: Issues in the economics literature. Paper prepared for the workshop of the Committee on Value-Added Methodology for Instructional Improvement, Program Evaluation, and Educational Accountability, National Research Council, Washington, DC, November 13-14. Available: http://www7.nationalacademies.org/bota/VAM_Workshop_Agenda.html.

Ballou, D., Sanders, W., and Wright, P. (2004). Controlling for student background in value-added assessment of teachers. Journal of Educational and Behavioral Statistics, 29(1), 37-65.

Braun, H. (2005). Using student progress to evaluate teachers: A primer to value-added models. Princeton, NJ: Educational Testing Service.

Brennan, R.L., Yin, P., and Kane, M.T. (2003). Methodology for examining the reliability of group difference scores. Journal of Educational Measurement, 40, 207-230.

Briggs, D. (2008). The goals and uses of value-added models. Paper prepared for the workshop of the Committee on Value-Added Methodology for Instructional Improvement, Program Evaluation, and Educational Accountability, National Research Council, Washington, DC, November 13-14. Available: http://www7.nationalacademies.org/bota/VAM_Workshop_Agenda.html.

Briggs, D.C., Weeks, J.P., and Wiley, E. (2008). The sensitivity of value-added modeling to the creation of a vertical score scale. Paper presented at the National Conference on Value-Added Modeling, University of Wisconsin-Madison, April 22-24.


California State Board of Education. (1997). Mathematics content standards for California public schools, kindergarten through grade twelve. Sacramento: California Department of Education. Available: http://www.cde.ca.gov/be/st/ss/documents/mathstandard.pdf [accessed January 2009].

Center for Educator Compensation Reform. (no date). Teacher incentive grantee profiles. Available: http://cecr.ed.gov/initiatives/grantees/profiles.cfm [accessed January 2009].

Clotfelter, C.T., Ladd, H.F., and Vigdor, J.L. (2007). Teacher credentials and student achievement in high school: A cross-subject analysis with student fixed effects. Working paper 13617. Cambridge, MA: National Bureau of Economic Research.

Easton, J. (2008). Goals and aims of value-added modeling: A Chicago perspective. Paper prepared for the workshop of the Committee on Value-Added Methodology for Instructional Improvement, Program Evaluation, and Educational Accountability, National Research Council, Washington, DC, November 13-14. Available: http://www7.nationalacademies.org/bota/VAM_Workshop_Agenda.html.

Graham, S.E., Singer, J.D., and Willett, J.B. (in press). Longitudinal data analysis. In A. Maydeu-Olivares and R. Millsap (Eds.), Handbook of quantitative methods in psychology. Thousand Oaks, CA: Sage.

Hanushek, E. (1972). Education and race. Lexington, MA: D.C. Heath and Company.

Harris, D.N., and Sass, T. (2005). Value-added models and the measurement of teacher quality. Paper presented at the annual conference of the American Education Finance Association, Louisville, KY, March 17-19.

Holland, P. (2002). Two measures of change in gaps between the CDFs of test score distributions. Journal of Educational and Behavioral Statistics, 27(1), 3-17.

Isenberg, E. (2008). Measuring teacher effectiveness in Memphis. Washington, DC: Mathematica Policy Research.

Jha, A.K. (2008). The impact of public reporting of quality of care: Two decades of U.S. experience. Paper prepared for the workshop of the Committee on Value-Added Methodology for Instructional Improvement, Program Evaluation, and Educational Accountability, National Research Council, Washington, DC, November 13-14. Available: http://www7.nationalacademies.org/bota/VAM_Workshop_Agenda.html.

Koedel, C., and Betts, J. (2009). Value-added to what? How a ceiling in the testing instrument influences value-added estimation. Available: http://economics.missouri.edu/working-papers/koedelWP.shtml [accessed September 2009].

Ladd, H.F. (2008). Discussion of papers by Dale Ballou and by Daniel McCaffrey and J.R. Lockwood. Paper prepared for the workshop of the Committee on Value-Added Methodology for Instructional Improvement, Program Evaluation, and Educational Accountability, National Research Council, Washington, DC, November 13-14. Available: http://www7.nationalacademies.org/bota/VAM_Workshop_Agenda.html.

Linn, R.L. (1993). Linking results of district assessments. Applied Measurement in Education, 6, 83-102.

Linn, R.L. (2008). Measurement issues associated with value-added models. Paper prepared for the workshop of the Committee on Value-Added Methodology for Instructional Improvement, Program Evaluation, and Educational Accountability, National Research Council, Washington, DC, November 13-14. Available: http://www7.nationalacademies.org/bota/VAM_Workshop_Agenda.html.

Lockwood, J.R., and McCaffrey, D.F. (2007). Controlling for individual heterogeneity in longitudinal models, with applications to student achievement. Electronic Journal of Statistics, 1, 223-252 (electronic). DOI: 10.1214/07-EJS057.

Lockwood, J.R., McCaffrey, D.F., Hamilton, L.S., Stecher, B.M., Le, V., and Martinez, F. (2007). The sensitivity of value-added teacher effect estimates to different mathematics achievement measures. Journal of Educational Measurement, 44(1), 47-67.


Martineau, J.A. (2006). Distorting value-added: The use of longitudinal, vertically scaled student achievement data for growth-based, value-added accountability. Journal of Educational and Behavioral Statistics, 31(1), 35-62.

McCaffrey, D., and Lockwood, J.R. (2008). Value-added models: Analytic issues. Paper prepared for the workshop of the Committee on Value-Added Methodology for Instructional Improvement, Program Evaluation, and Educational Accountability, National Research Council, Washington, DC, November 13-14. Available: http://www7.nationalacademies.org/bota/VAM_Workshop_Agenda.html.

McCaffrey, D., Lockwood, J.R., Koretz, D.M., and Hamilton, L.S. (2003). Evaluating value-added models for teacher accountability. Santa Monica, CA: RAND Corporation.

McCaffrey, D., Sass, T.R., and Lockwood, J.R. (2008). The intertemporal effect estimates. Paper presented at the National Conference on Value-Added Modeling, University of Wisconsin-Madison, April 22-24.

Mislevy, R.J. (1992). Linking educational assessments: Concepts, issues and prospects. Princeton, NJ: Educational Testing Service.

Murnane, R.J. (1975). The impact of school resources on the learning of inner city children. Cambridge, MA: Ballinger.

National Research Council. (1999). High stakes: Testing for tracking, promotion, and graduation. Committee on Appropriate Test Use. J.H. Heubert and R.M. Hauser (Eds.). Board on Testing and Assessment, Commission on Behavioral and Social Sciences and Education. Washington, DC: National Academy Press.

National Research Council. (2001). Knowing what students know: The science and design of educational assessment. Committee on the Foundations of Assessment, J. Pellegrino, N. Chudowsky, and R. Glaser (Eds.). Board on Testing and Assessment, Division of Behavioral and Social Sciences and Education. Washington, DC: National Academy Press.

National Research Council. (2005). Systems for state science assessment. Committee on Test Design for K-12 Science Achievement. M.R. Wilson and M.W. Bertenthal (Eds.). Center for Education, Division of Behavioral and Social Sciences and Education. Washington, DC: The National Academies Press.

National Research Council. (2007a). Ready, set, SCIENCE!: Putting research to work in K-8 science classrooms. S. Michaels, A.W. Shouse, and H.A. Schweingruber. Center for Education, Division of Behavioral and Social Sciences and Education. Washington, DC: The National Academies Press.

National Research Council. (2007b). Taking science to school: Learning and teaching science in grades K-8. Committee on Science Learning, Kindergarten through Eighth Grade. R.A. Duschl, H.A. Schweingruber, and A.W. Shouse (Eds.). Center for Education, Division of Behavioral and Social Sciences and Education. Washington, DC: The National Academies Press.

National Research Council. (in press). Incentives and test-based accountability in public education. Committee on Incentives and Test-Based Accountability in Public Education. M. Hout, N. Chudowsky, and S.W. Elliott (Eds.). Board on Testing and Assessment, Center for Education, Division of Behavioral and Social Sciences and Education. Washington, DC: The National Academies Press.

Organisation for Economic Co-operation and Development. (2008). Measuring improvements in learning outcomes: Best practices to assess the value-added of schools. Paris: Author.

Popham, W.J. (2007). Instructional insensitivity of tests: Accountability’s dire drawback. Phi Delta Kappan, 89(2), 146-150.

Public Impact/Thomas B. Fordham Institute. (2008). Ohio value-added primer: A user’s guide. Washington, DC: Thomas B. Fordham Institute. Available: http://www.edexcellence.net/doc/Ohio_Value_Added_Primer_FINAL_small.pdf [accessed January 2009].


Reardon, S., and Raudenbush, S.W. (2008). Assumptions of value-added models for measuring school effects. Paper presented at the National Conference on Value-Added Modeling, University of Wisconsin-Madison, April 22-24.

Reckase, M.D. (2008). Measurement issues associated with value-added models. Paper prepared for the workshop of the Committee on Value-Added Methodology for Instructional Improvement, Program Evaluation, and Educational Accountability, National Research Council, Washington, DC, November 13-14. Available: http://www7.nationalacademies.org/bota/VAM_Workshop_Agenda.html.

Rogosa, D.R., and Willett, J.B. (1983). Demonstrating the reliability of the difference score in the measurement of change. Journal of Educational Measurement, 20, 335-343.

Rothstein, J. (2009). Student sorting and bias in value-added estimation: Selection on observables and unobservables. Working Paper No. w14666. Cambridge, MA: National Bureau of Economic Research.

Sanders, W., and Horn, S. (1998). Research findings from the Tennessee value-added assessment system (TVAAS) database: Implications for educational evaluation and research. Journal of Personnel Evaluation in Education, 12(3), 247-256.

Sanders, W., and Rivers, J. (1996). Cumulative and residual effects of teachers on future student academic achievement. Knoxville: University of Tennessee Value-Added Assessment Center. Available: http://www.cgp.upenn.edu/pdf/Sanders_Rivers-TVASS_teacher%20effects.pdf [accessed June 2009].

Schmidt, W.H., Houang, R.T., and McKnight, C.C. (2005). Value-added research: Right idea but wrong solution? In R. Lissitz (Ed.), Value-added models in education: Theory and applications (Chapter 6). Maple Grove, MN: JAM Press.

Tong, Y., and Kolen, M.J. (2007). Comparisons of methodologies and results in vertical scaling for educational achievement tests. Applied Measurement in Education, 20(2), 227-253.

U.S. Department of Education. (2009, January 12). Growth models: Non-regulatory guidance. Washington, DC: Author. Available: http://www.ed.gov/admins/lead/account/growthmodel/index.html [accessed June 2009].

Willett, J.B., and Singer, J.D. (in preparation). Applied multilevel data analysis. Harvard University, Cambridge, MA.

Willms, J.D. (2008). Seven key issues for assessing “value-added” in education. Paper prepared for the workshop of the Committee on Value-Added Methodology for Instructional Improvement, Program Evaluation, and Educational Accountability, National Research Council, Washington, DC, November 13-14. Available: http://www7.nationalacademies.org/bota/VAM_Workshop_Agenda.html.

Wright, S.P., Horn, S.P., and Sanders, W.L. (1997). Teacher and classroom context effects on student achievement: Implications for teacher evaluation. Journal of Personnel Evaluation in Education, 11(1), 57-67.

Xu, Z., Hannaway, J., and Taylor, C. (2007). Making a difference? The effects of Teach For America in high school. Washington, DC: Urban Institute. Available: http://www.urban.org/UploadedPDF/411642_Teach_America.pdf [accessed January 2009].

Young, M.J. (2006). Vertical scales. In S.M. Downing and T.M. Haladyna (Eds.), Handbook of test development (pp. 469-485). Mahwah, NJ: Lawrence Erlbaum Associates.

Zumbo, B.D., and Forer, B. (2008). Testing and measurement from a multilevel view: Psychometrics and validation. In J. Bovaird, K. Geisinger, and C. Buckendahl (Eds.), High stakes testing in education: Science and practice in K-12 settings. Washington, DC: American Psychological Association Press.

Zurawsky, C. (2004). Teachers matter: Evidence from value-added assessment. Research Points: Essential Information for Educational Policy, 2(2).
