linked to research, are also important; even the consumption component of college, including student enjoyment of the experience, is quite clearly significant.
The measurement complications identified above can be mitigated by recognizing the diversity of missions across the range of colleges and universities and then segmenting institutions into more homogeneous categories along the lines of the Carnegie Classification system, or perhaps using even finer detail. For many purposes, it is unwise to compare performance measures across institutions that have different missions. The first implication of this principle is that productivity measures must be designed to register outcomes that can be credited as a degree or a fraction of a degree-equivalent. This may be especially important for community colleges, where outcomes include successful transfer to four-year institutions, completion of certificates, or attainment of specific skills by students who have no intention of pursuing a degree.
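The idea of crediting fractional degree-equivalents can be sketched in a few lines. The weights below are purely illustrative assumptions for a hypothetical community college, not values proposed by this report:

```python
# Hypothetical weights: the fraction of a completed degree that each
# outcome is assumed to represent. These are illustrative only.
DEGREE_EQUIVALENT_WEIGHTS = {
    "associate_degree": 1.0,
    "transfer_to_four_year": 0.75,  # assumed credit toward an eventual bachelor's
    "certificate": 0.5,
    "skills_attainment": 0.25,      # targeted coursework, no credential sought
}

def degree_equivalents(outcome_counts):
    """Aggregate counts of heterogeneous outcomes into one degree-equivalent total."""
    return sum(DEGREE_EQUIVALENT_WEIGHTS[k] * n for k, n in outcome_counts.items())

# Example: one year of outcomes at a hypothetical community college.
outcomes = {
    "associate_degree": 400,
    "transfer_to_four_year": 200,
    "certificate": 150,
    "skills_attainment": 100,
}
print(degree_equivalents(outcomes))  # 400 + 150 + 75 + 25 = 650.0
```

Under this scheme, an institution whose students mostly transfer or earn certificates still registers output, rather than appearing unproductive because it awards few degrees.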
Additionally, for purposes of making comparisons across institutions, states, or nations, it is essential to take into account incoming student ability and preparation. Highly selective institutions typically have higher completion rates than open-access institutions, but this may reflect more on the prior learning, preparation, and motivation of the entrants than on the productivity of the institution they enter. Therefore, in the context of resource allocation or other high-stakes decisions, the portion of completion success attributable to input quality should ideally be factored into performance assessments.
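One common way to separate input quality from institutional performance is a value-added comparison: predict each institution's completion rate from the characteristics of its entering students, then examine the residual. The linear model and coefficients below are illustrative assumptions, not estimates from this report:

```python
# A minimal value-added sketch. The predictor variables and coefficients are
# hypothetical; a real model would be estimated from institution-level data.

def expected_completion(avg_test_percentile, pct_full_time):
    """Hypothetical linear prediction of completion rate from input measures."""
    return 0.20 + 0.004 * avg_test_percentile + 0.002 * pct_full_time

def value_added(actual_completion, avg_test_percentile, pct_full_time):
    """Actual minus predicted: completion success not explained by inputs."""
    return actual_completion - expected_completion(avg_test_percentile, pct_full_time)

# A selective institution may post a high raw rate but modest value added...
selective = value_added(0.85, avg_test_percentile=90, pct_full_time=95)
# ...while an open-access institution can outperform its predicted rate.
open_access = value_added(0.55, avg_test_percentile=40, pct_full_time=60)
print(round(selective, 3), round(open_access, 3))
```

The point of the sketch is the comparison, not the numbers: the open-access institution's lower raw completion rate can coexist with a value-added figure close to, or above, that of the selective institution.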
Because heterogeneity leads to measurement complications even within institutional categories, it is also important to account for differences in factors such as the mix of degrees and majors. Institution-level cost data indicate that the resources required to produce an undergraduate degree vary, sometimes significantly, by major. Variation in degree cost is linked to, among other things, systematic differences in the amount of time needed to complete a degree. Uninformed comparisons will make some institutions appear less efficient in terms of degree production (e.g., exhibiting longer times to degree) even though they may be functioning reasonably well, given their missions and student characteristics. Therefore, productivity models should include an adjustment for field of study that reflects, among other things, different course requirements, pass rates, and labor input costs.
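A field-of-study adjustment can be illustrated with cost weights on raw degree counts. The weights below are hypothetical stand-ins for the systematic differences in course requirements, pass rates, and labor input costs mentioned above:

```python
# Hypothetical resource weights per degree, relative to a baseline major.
# Illustrative values only; real weights would come from cost studies.
FIELD_COST_WEIGHTS = {
    "business": 1.0,
    "humanities": 0.9,
    "nursing": 1.3,
    "engineering": 1.4,
}

def adjusted_degrees(degrees_by_field):
    """Weight raw degree counts so resource-intensive fields count for more."""
    return sum(FIELD_COST_WEIGHTS[f] * n for f, n in degrees_by_field.items())

def cost_per_adjusted_degree(total_cost, degrees_by_field):
    """Unit cost after adjusting for degree mix; comparable across institutions
    only to the extent the weights capture genuine cost differences."""
    return total_cost / adjusted_degrees(degrees_by_field)

a = {"business": 300, "humanities": 200}   # lower-cost degree mix
b = {"engineering": 300, "nursing": 200}   # higher-cost degree mix
print(cost_per_adjusted_degree(10_000_000, a))  # ~20,833 per adjusted degree
print(cost_per_adjusted_degree(10_000_000, b))  # ~14,706 per adjusted degree
```

With identical budgets and raw degree counts, institution b looks more expensive per raw degree; after the mix adjustment it looks less expensive per adjusted degree, because its output is concentrated in costlier fields.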
It is possible, and perhaps even likely, that critics of this report will reject the idea of measuring instructional productivity because of the complications noted above and throughout this report. Our view is that this would be a mis-