place and of its post-college value (represented by income, occupational status, or other measure) beyond that attributable to the certificate or degree itself.

Ignoring measures of learning outcomes or student engagement (while, perhaps, emphasizing graduation rates) may result in misleading conclusions about institutional performance and ill-informed policy prescriptions. Is it acceptable for a school to have a high graduation rate but low engagement and outcomes scores? Or are individual and public interests both better served by institutions where students are academically challenged and demonstrate skills and competencies at a high level, even if fewer graduate? Strong performance in the areas of engagement, achievement, and graduation is certainly not precluded, but each measure says something different about institutional performance and student development. One conclusion from Pascarella and Terenzini’s (1991, 2005) syntheses is that the impact of college is largely determined by individual student effort and involvement in the academic, interpersonal, and extracurricular offerings on a campus. That is, students bear a major responsibility for any gains derived from their postsecondary experience. Motivation is also a nontrivial factor in accounting for post-college differences in income once institutional variables such as selectivity are controlled (Pascarella and Terenzini, 2005).

A number of value-added tests have been developed over the years: the Measure of Academic Proficiency and Progress (MAPP) produced by the Educational Testing Service, the Collegiate Assessment of Academic Proficiency (CAAP) produced by the ACT Corporation, and the Collegiate Learning Assessment (CLA) designed by RAND and the Council for Aid to Education. The CLA is most specifically designed to measure value added at the institutional level between the freshman and senior years.29 This kind of quality adjustment is desirable at the level of the institution or campus for purposes of course and program improvement, but is unlikely to be practical anytime soon for the national measurement of productivity in higher education. It is beyond the scope of this panel’s charge to resolve various longstanding controversies, such as using degrees and grades as proxies for student learning versus direct measures of learning as represented by MAPP, CAAP, and CLA. Nonetheless, it is important to work through the logic of which kinds of measures are relevant to which kinds of questions.30

The above kinds of assessments show that even identical degrees may represent different quantities of education produced if, for example, one engineering graduate entered having already completed Advanced Placement calculus and physics while another entered with a remedial math placement. Modeling approaches have been developed to estimate time to degree and other potentially


29The Voluntary System of Accountability (VSA), which has been put forth by a group of public universities, is a complementary program aimed at supplying a range of comparable information about university performance, but it is less explicitly linked to a notion of value added by the institution. Useful discussions of the merits of assessment tests are provided in Carpenter and Bach (2010) and Ewell (2009a).

30See Feller (2009) and Gates et al. (2002).

Copyright © National Academy of Sciences. All rights reserved.