tion approach has been used by College Results Online,19 a Web site that allows users to view graduation rates for peer institutions with similar characteristics and student profiles. The second method is exemplified by Oklahoma’s “Brain Gain”20 performance funding approach that rewards institutions for exceeding expected graduation rates. These existing measures and programs with good track records could serve as models or pilots for other institutions, systems, or states.

While the panel is in no way attempting to design an accountability system, it is still important to think about the incentives that measures create. Because institutional behavior is dynamic and responds directly to the incentives embedded in a measurement system, it is important to (1) ensure that the incentives in the measurement system genuinely support the behaviors that society wants from higher education institutions, and (2) maximize the likelihood that measured performance reflects authentic success rather than manipulative behavior.

The evidence on the distortionary and productive roles of school accountability is fairly extensive in K-12 education research, and there may be parallel lessons for higher education.21 Numerous studies have found that the incentives introduced by the No Child Left Behind Act of 2001 (P.L. 107-110) led to substantial gains in at least some subjects (Ballou and Springer, 2008; Ladd and Lauen, 2010; Reback, Rockoff, and Schwartz, 2011; Wong, Cook, and Steiner, 2010), and others have found that accountability systems implemented by states and localities also improve average student test performance (Chakrabarti, 2007; Chiang, 2009; Figlio and Rouse, 2006; Hanushek and Raymond, 2004; Neal and Schanzenbach, 2010; Rockoff and Turner, 2010; Rouse et al., 2007). However, these findings have been treated with some skepticism because, while Rouse and colleagues (2007) show that schools respond to accountability pressures in productive ways, there is also evidence that schools respond in ways that do not lead to generalized improvements. For example, many quantitative and qualitative studies indicate that schools respond to accountability systems by differentially allocating resources to the subjects and students most central to their accountability ratings. These studies (e.g., Booher-Jennings, 2005; Hamilton et al., 2007; Haney, 2000; Krieg, 2008; Neal and Schanzenbach, 2010; Ozek, 2010; Reback, Rockoff, and Schwartz, 2011; White and Rosenbaum, 2008) indicate that schools under accountability pressure focus their attention more on high-stakes subjects, teach skills that are valuable for the high-stakes test but less so for other assessments, and concentrate their attention on students most likely to help them satisfy the accountability requirements.

Schools may attempt to artificially boost standardized test scores (Figlio and Winicki, 2005) or even manipulate test scores through outright cheating (Jacob


19See http://www.collegeresults.org/ [June 2012].

20See http://www.okhighered.org/studies-reports/brain-gain/ [June 2012].

21The panel thanks an anonymous reviewer for the following discussion of incentive effects associated with accountability initiatives in the K-12 context.

Copyright © National Academy of Sciences. All rights reserved.