References

Blanc, S., Christman, J.B., Hugh, R., Mitchell, C., and Travers, E. (2010). Learning to learn from data: Benchmarks and instructional communities. Peabody Journal of Education, 85(2), 205-225.

Bulkley, K., Christman, J., Goertz, M., and Lawrence, N. (2010). Building with benchmarks: The role of the district in Philadelphia’s benchmark assessment system. Peabody Journal of Education, 85(2), 186-204.

Christman, J., Neild, R., Bulkley, K., Blanc, S., Liu, R., Mitchell, C., and Travers, E. (2009). Making the Most of Interim Assessment Data: Lessons from Philadelphia. Philadelphia, PA: Research for Action.

Chudowsky, N., and Chudowsky, V. (2007). No Child Left Behind at Five: A Review of Changes to State Accountability Plans. Washington, DC: Center on Education Policy.

Clune, W.H., and White, P.A. (2008). Policy Effectiveness of Interim Assessments in Providence Public Schools. WCER Working Paper No. 2008-10, Wisconsin Center for Education Research. Madison: University of Wisconsin.

Cronin, J., Dahlin, M., Xiang, Y., and McCahon, D. (2009). The Accountability Illusion. Washington, DC: Thomas B. Fordham Institute.

Elmore, R.F. (2003). Accountability and capacity. In M. Carnoy, R.F. Elmore, and L.S. Siskin (Eds.), High Schools and the New Accountability (pp. 188-209). New York: Routledge/Falmer.

Ferrara, S. (2009). The Maryland School Performance Assessment Program (MSPAP), 1991-2002: Political Considerations. Paper prepared for the Workshop of the Committee on Best Practices in State Assessment Systems: Improving Assessment While Revisiting Standards, December 10-11, National Research Council, Washington, DC. Available: http://www7.nationalacademies.org/BOTA/Steve_Ferrara_Paper.pdf [accessed May 2010].

Fuller, B., Gesicki, K., Kang, E., and Wright, J. (2006). Is the No Child Left Behind Act Working?: The Reliability of How States Track Achievement. Working Paper No. 06-1. Berkeley: University of California and Stanford University, Policy Analysis for California Education.



The National Academies | 500 Fifth St. N.W. | Washington, D.C. 20001
Copyright © National Academy of Sciences. All rights reserved.
Goertz, M.E. (2009). Overview of Current Assessment Practices. Paper prepared for the Workshop of the Committee on Best Practices in State Assessment Systems: Improving Assessment While Revisiting Standards, National Research Council, December 10-11, Washington, DC. Available: http://www7.nationalacademies.org/BOTA/Peg_Goertz_Paper.pdf [accessed May 2010].

Goertz, M.E., Olah, L.N., and Riggan, M. (2009). Can Interim Assessments Be Used for Instructional Change? CPRE Policy Briefs: Reporting on Issues and Research in Education Policy and Finance. Available: http://www.cpre.org/images/stories/cpre_pdfs/rb_51_role%20policy%20brief_final%20web.pdf [accessed May 2010].

Gong, B. (2010). Innovative Assessment in Kentucky's KIRIS System: Political Considerations. Presentation to the Workshop of the Committee on Best Practices in State Assessment Systems: Improving Assessment While Revisiting Standards, National Research Council, December 10-11, Washington, DC. Available: http://www7.nationalacademies.org/BOTA/Brian%20Gong.pdf [accessed May 2010].

Hambleton, R.K. (2009). Using Common Standards to Enable Cross-National Comparisons. Presentation to the Workshop of the Committee on Best Practices for State Assessment Systems: Improving Assessment While Revisiting Standards, December 10-11, National Research Council, Washington, DC. Available: http://www7.nationalacademies.org/bota/Ron_Hambleton.pdf [accessed May 2010].

Ho, A.D. (2008). The problem with "proficiency": Limitations of statistics and policy under No Child Left Behind. Educational Researcher, 37(6), 351-360.

Jennings, J., and Rentner, D.S. (2006). Ten big effects of No Child Left Behind on public schools. Phi Delta Kappan, 88(2), 110-113.

Kirst, M., and Mazzeo, J. (1996). The rise, fall, and rise of state assessment in California: 1993-1996. Phi Delta Kappan, 78(4), 319-323.

Koretz, D., and Barron, S. (1998). The Validity of Gains on the Kentucky Instructional Results Information System. Santa Monica, CA: RAND.

Koretz, D., Mitchell, K., Barron, S., and Keith, S. (1996). Perceived Effects of the Maryland School Performance Assessment Program. Final Report, Project 3.2 State Accountability Models in Action. Washington, DC: U.S. Department of Education, National Center for Research on Evaluation.

Krajcik, J., McNeill, K.L., and Reiser, B. (2008). Learning-goals-driven design model: Developing curriculum materials that align with national standards and incorporate project-based pedagogy. Science Education, 92(1), 1-32.

Krajcik, J., Stevens, S., and Shin, N. (2009). Developing Standards That Lead to Better Instruction and Learning. Presentation to the Workshop of the Committee on Best Practices for State Assessment Systems: Improving Assessment While Revisiting Standards, December 10-11, National Research Council, Washington, DC. Available: http://www7.nationalacademies.org/BOTA/Joe%20Krajcik%20and%20Shawn%20Stevens.pdf [accessed May 2010].

Lai, E.R., and Waltman, K. (2008). The Impact of NCLB on Instruction: A Comparison of Results for 200-0 to 200-0. IARP Report #7. Iowa City: Center for Evaluation and Assessment, University of Iowa.

Lane, S. (1999). Impact of the Maryland School Performance Assessment Program (MSPAP): Evidence from the Principal, Teacher, and Student Questionnaires (Reading, Writing, and Science). Paper presented at the annual meeting of the National Council on Measurement in Education, April 19-23, Montreal, Quebec, Canada.

Lazer, S. (2010). Technical Challenges with Innovative Item Types. Presentation to the Workshop of the Committee on Best Practices for State Assessment Systems: Improving Assessment While Revisiting Standards, December 10-11, National Research Council, Washington, DC. Available: http://www7.nationalacademies.org/BOTA/Steve%20Lazer.pdf [accessed May 2010].

Marion, S. (2010). Changes in Assessments and Assessment Systems Since 2002. Presentation to the Workshop of the Committee on Best Practices for State Assessment Systems: Improving Assessment While Revisiting Standards, December 10-11, National Research Council, Washington, DC. Available: http://www7.nationalacademies.org/BOTA/Scott%20Marion.pdf [accessed May 2010].

Mattson, D. (2010). Science Assessment in Minnesota. Presentation to the Workshop of the Committee on Best Practices for State Assessment Systems: Improving Assessment While Revisiting Standards, December 10-11, National Research Council, Washington, DC. Available: http://www7.nationalacademies.org/BOTA/Dirk_Mattson.pdf [accessed May 2010].

McMurrer, J. (2007). Choices, Changes, and Challenges: Curriculum and Instruction in the NCLB Era. Washington, DC: Center on Education Policy.

Mislevy, R. (1998). Foundations of a new test theory. In N. Frederiksen, R.J. Mislevy, and I.I. Bejar (Eds.), Test Theory for a New Generation of Tests (pp. 19-38). Hillsdale, NJ: Lawrence Erlbaum Associates.

Mislevy, R.J., and Riconscente, M. (2005). Evidence-Centered Assessment Design: Layers, Structures, and Terminology. Menlo Park, CA: SRI International.

National Research Council. (1995). Anticipating Goals 2000: Standards, Assessment, and Public Policy: Summary of a Workshop. Board on Testing and Assessment, Center for Education, Commission on Behavioral and Social Sciences and Education. Washington, DC: National Academy Press.

National Research Council. (1996). National Science Education Standards. National Committee on Science Education Standards and Assessment. Washington, DC: National Academy Press.

National Research Council. (1999a). Embedding Questions: The Pursuit of a Common Measure in Uncommon Tests. Committee on Embedding Common Test Items in State and District Assessments. D.M. Koretz, M.W. Berthenthal, and B.F. Green (Eds.). Commission on Behavioral and Social Sciences and Education. Washington, DC: National Academy Press.

National Research Council. (1999b). Uncommon Measures: Equivalence and Linkage Among Educational Tests. Committee on Equivalency and Linkage of Educational Tests. M.J. Feuer, P.W. Holland, B.F. Green, M.W. Berthenthal, and F.C. Hemphill (Eds.). Commission on Behavioral and Social Sciences and Education. Washington, DC: National Academy Press.

National Research Council. (2005). Systems for State Science Assessment. Committee on Test Design for K-12 Science Achievement. M.R. Wilson and M.W. Berthenthal (Eds.). Center for Education, Division of Behavioral and Social Sciences and Education. Washington, DC: The National Academies Press.

National Research Council. (2008). Common Standards for K-12 Education? Considering the Evidence: Summary of a Workshop Series. A. Beatty, Rapporteur. Committee on State Standards in Education: A Workshop Series. Center for Education, Division of Behavioral and Social Sciences and Education. Washington, DC: The National Academies Press.

Olah, L., Lawrence, N., and Riggan, M. (2010). Learning to learn from benchmark assessment data: How teachers analyze results. Peabody Journal of Education, 85(2), 226-245.

Perie, M., Marion, S., and Gong, B. (2007). The Role of Interim Assessments in a Comprehensive Assessment System: A Policy Brief. Aspen, CO: Center for Assessment, The Aspen Institute, and Achieve, Inc. Available: http://www.achieve.org/files/TheRoleofInterimAssessments.pdf [accessed March 2010].

Porter, A.C., Polikoff, M.S., and Smithson, J. (2009). Is there a de facto national intended curriculum? Evidence from state content standards. Educational Evaluation and Policy Analysis, 31(3), 238-268.

Schmidt, W.H., Wang, H.C., and McKnight, C. (2005). Curriculum coherence: An examination of U.S. mathematics and science content standards from an international perspective. Journal of Curriculum Studies, 37(5), 525-559.

Shepard, L. (1993). Evaluating test validity. Review of Research in Education, 19, 405-450.

Shepard, L. (2010). Research Priorities for Next-Generation Assessment Systems. Presentation to the Workshop of the Committee on Best Practices for State Assessment Systems: Improving Assessment While Revisiting Standards, December 10-11, National Research Council, Washington, DC. Available: http://www7.nationalacademies.org/BOTA/Lorrie_Shepard.pdf [accessed May 2010].

Shin, N., Stevens, S., and Krajcik, J. (in press). Using Construct-Centered Design as a Systematic Approach for Tracking Student Learning Over Time. London, England: Routledge, Taylor & Francis Group.

Smith, C.L., Wiser, M., Anderson, C.W., and Krajcik, J. (2006). Implications of research on children's learning for standards and assessment: A proposed learning progression for matter and the atomic molecular theory. Measurement: Interdisciplinary Research and Perspectives, 4(1-2), 1-98.

Smith, M., and O'Day, J. (1991). Systemic school reform. In S. Fuhrman and B. Malen (Eds.), The Politics of Curriculum and Testing (pp. 233-267). Philadelphia: Falmer Press.

Stecher, B., and Hamilton, L. (2009). What Have We Learned from Pioneers in Innovative Assessment? Paper prepared for the Workshop of the Committee on Best Practices for State Assessment Systems: Improving Assessment While Revisiting Standards, National Research Council, December 10-11, Washington, DC. Available: http://www7.nationalacademies.org/bota/Brian_Stecher_and_Laura_Hamilton.pdf [accessed May 2010].

Stecher, B.M., Epstein, S., Hamilton, L.S., Marsh, J.A., Robyn, A., McCombs, J.S., Russell, J.L., and Naftel, S. (2008). Pain and Gain: Implementing No Child Left Behind in California, Georgia, and Pennsylvania, 2004 to 2006. Santa Monica, CA: RAND.

Stevens, S., Sutherland, L., and Krajcik, J.S. (2009). The Big Ideas of Nanoscale Science and Engineering: A Guidebook for Secondary Teachers. Arlington, VA: National Science Teachers Association.

Sunderman, G. (Ed.). (2008). Holding NCLB Accountable: Achieving Accountability, Equity, and School Reform. Thousand Oaks, CA: Corwin Press.

Toch, T. (2006). Margins of Error: The Testing Industry in the No Child Left Behind Era. Washington, DC: Education Sector.

U.S. Department of Education. (2009). Race to the Top Program Executive Summary. Available: http://www.ed.gov/programs/racetothetop/resources.html [accessed January 2010].

U.S. Government Accountability Office. (2009). No Child Left Behind Act: Enhancements in the Department of Education's Review Process Could Improve State Academic Assessments. GAO Report-09-911. Available: http://www.gao.gov/cgi-bin/getrpt?GAO-09-911 [accessed November 2009].

U.S. Government Accounting Office. (2003). Characteristics of Tests Will Influence Expenses: Information Sharing May Help States Realize Efficiencies. GAO Report-03-389. Available: http://www.gao.gov/cgi-bin/getrpt?GAO-03-389 [accessed November 2009].

Wiggins, G., and McTighe, J. (1998). Understanding by Design. Alexandria, VA: Association for Supervision and Curriculum Development.

Wilson, M. (Ed.). (2004). Towards Coherence Between Classroom Assessment and Accountability. 103rd Yearbook of the National Society for the Study of Education, Part II. Chicago, IL: The University of Chicago Press.

Wilson, M. (2005). Constructing Measures: An Item Response Modeling Approach. Mahwah, NJ: Lawrence Erlbaum Associates.

Wilson, M. (2009). Developing Assessment Tasks That Lead to Better Instruction and Learning. Presentation to the Workshop of the Committee on Best Practices in State Assessment Systems: Improving Assessment While Revisiting Standards, December 10-11, National Research Council, Washington, DC. Available: http://www7.nationalacademies.org/bota/Mark_Wilson.pdf [accessed May 2010].

Wise, L. (2009). How Common Standards Might Support Improved State Assessments. Paper prepared for the Workshop of the Committee on Best Practices for State Assessment Systems: Improving Assessment While Revisiting Standards, December 10-11, National Research Council, Washington, DC. Available: http://www7.nationalacademies.org/BOTA/Laurie_Wise_Paper.pdf [accessed May 2010].

Zwick, R. (2009). State Achievement Comparisons: Is the Time Right? Paper prepared for the Workshop of the Committee on Best Practices for State Assessment Systems: Improving Assessment While Revisiting Standards, December 10-11, National Research Council, Washington, DC. Available: http://www7.nationalacademies.org/bota/Rebecca_Zwick_Paper.pdf [accessed May 2010].