
Appendix A

Commonly Used Performance Metrics for Higher Education

This appendix elaborates on some of the proxy measures for productivity and efficiency that were described briefly in Chapter 2. As discussed, the proxies vary in their efficacy and, therefore, in their usefulness for accountability. They relate to the concept of productivity as discussed in this report, but they should not be confused with it.

GRADUATION RATES

Since being fixed in law by the Student Right-to-Know and Campus Security Act and established as a statistical reporting requirement in the Graduation Rate Survey (GRS) of the National Center for Education Statistics, graduation rates have become a staple of accountability reporting in higher education. As defined by the GRS, the standard graduation rate for four-year institutions is computed as the percentage of a starting fall-term cohort of first-time-in-college, full-time students who have completed a bachelor’s degree within six years (150 percent of normal time) of college entry. The parallel rate for two-year institutions allows a three-year window for completion of an associate’s degree. All other graduation rate statistics are modeled on the GRS but allow varying time frames, different degree designations, and different inclusion standards in the denominators that describe the cohort. Principal variations allow adjustments for part-time starters and incoming transfer students in the cohort. Dropout rates are more colloquial and are not defined as consistently as graduation rates. Typically, they are calculated on the same tracking cohort and are defined as the percentage of the cohort that is no longer enrolled for at least one credit one year later (i.e., the following fall term) or one term later (i.e., the following spring term).
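
Because the GRS definition is purely mechanical, it can be expressed directly in code. The sketch below computes the 150 percent rate for a four-year institution's fall cohort; the record layout and field names are illustrative assumptions, not the actual IPEDS schema.

```python
from datetime import date

# Toy cohort records; field names are illustrative, not the actual IPEDS schema.
cohort = [
    {"first_time": True,  "full_time": True,  "completed": date(2007, 5, 15)},
    {"first_time": True,  "full_time": True,  "completed": date(2010, 5, 15)},  # too late
    {"first_time": True,  "full_time": True,  "completed": None},               # no degree
    {"first_time": True,  "full_time": False, "completed": date(2008, 5, 15)},  # excluded: part-time
]

entry = date(2003, 9, 1)                      # start of the fall 2003 term
cutoff = entry.replace(year=entry.year + 6)   # 150 percent of the normal four years

# Denominator: first-time, full-time starters only (the GRS restriction).
grs_cohort = [s for s in cohort if s["first_time"] and s["full_time"]]

# Numerator: members of that cohort who completed a bachelor's degree by the cutoff.
graduated = [s for s in grs_cohort if s["completed"] and s["completed"] <= cutoff]

rate = 100 * len(graduated) / len(grs_cohort)
print(f"GRS graduation rate: {rate:.1f}%")    # 33.3% for this toy cohort
```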

Although widely used, cohort-based graduation and dropout rates are subject to many limitations. For example, the GRS restricts the denominator to first-time, full-time students, who may represent only a small fraction of beginning students at institutions that enroll large numbers of part-time students and beginning transfers. Including these students in the cohort makes the measure more complete but raises further problems, because part-time students carry differing credit loads and transfer students bring in widely varying numbers of previously earned credits. This renders fair comparisons difficult because, unlike the first-time, full-time population, not all subpopulations start from the same baseline.

Graduation data, such as those produced by IPEDS, thus penalize certain types of institutions, since they account neither for differences in entering students’ characteristics nor for the resources available to the college. Graduation rates also reflect admission standards, the academic strength of the enrolled students, and the resources institutions devote to instruction, remediation, and retention. Because of this heterogeneity in student types and institutional missions, any increase in the production of graduates (whether through higher graduation rates or expanded enrollment) is likely to occur at lower-ranked institutions; highly selective schools are unlikely to expand and are already operating at capacity.

These are legitimate issues, but if they were the only ones, a case could still be made for the public policy value of graduation rates, with appropriate caveats attached. The primary reason to de-emphasize IPEDS graduation rates in public policy is that, when used in the aggregate for a whole state or a group of institutions, the information that many believe is being conveyed simply is not. To illustrate, Table A.1 contrasts the average IPEDS graduation rate for community colleges nationally with the more comprehensive picture of student persistence and attainment from the Beginning Postsecondary Students (BPS) survey. The data are for the same cohort of students: IPEDS includes all students who entered community colleges as first-time, full-time students in fall 2003, and the BPS results come from a comprehensive survey of a sample of the same students. As expected, the same-institution graduation rate is within the survey’s margin of error, at a little over 20 percent, but the survey also provides much more information about what happened to the other 80 percent.

A number of institutions have taken steps to produce additional statistics that give greater context to graduation rate information. Minnesota’s state college system maintains an “accountability dashboard”1 for each of its campuses. Beyond the number of students completing degrees, its variables indicate, for example, the condition of facilities and pass rates of graduates taking professional licensing exams. The California State University system attempts to approximate the value of degrees, posting online the median starting and mid-career salaries for graduates of each campus, as well as their average student loan debt.2

________________

1See http://www.mnscu.edu/board/accountability/index.html [July 2012].

2See http://www.calstate.edu/value/systemwide/ [July 2012].

TABLE A.1 Comparison of IPEDS Graduation Rate and BPS Persistence and Attainment for Fall 2003 First-Time, Full-Time, Degree-Seeking Students

IPEDS GRS 2003-2006 (%)
  Graduated within 150% of normal time: 21.5
  Unknown outcomes: 78.5

BPS 2003-2004/2009 (%)
  Graduated in less than 3 years (by spring 2006) at the same institution: 22.5
  Graduated in less than 3 years, but at another institution: 1.9
  Graduated in 3-6 years, at a different institution: 14.0
  Graduated in 3-6 years, at the same institution: 6.0
  No degree after 6 years, but still enrolled at a four-year institution: 7.4
  No degree after 6 years, but still enrolled at a less-than-four-year institution: 10.5
  No degree, never returned: 37.3

The IPEDS graduation rate correctly shows the proportion of full-time, first-time, degree-seeking students who started in a two-year public college and completed a certificate or degree within 150 percent of normal time at the same institution.3 Members of the higher education community generally recognize these subtleties but, in practice, the figures above are often condensed in public statements to “21.5 percent of community college students graduate,” which many leaders have come to believe is the entire story.

There is much more to it, however, as the results of the BPS survey show. Based on the BPS sample, about 44 percent of the same cohort of students had graduated by 2009. Many had transferred to four-year institutions and finished their bachelor’s degrees, or were still working on a bachelor’s degree in 2009, skipping the associate credential entirely. Others took longer than three years to earn an associate degree or certificate. If half of those still enrolled at two- or four-year institutions eventually complete a credential, the true graduation rate for the population probably approaches 50 percent. This is not a number to brag about, and the extended time many students require to complete is a significant policy issue in itself, but it has little to do with the IPEDS rate. The same would be true at most four-year colleges, although the gap between the IPEDS rate and the actual graduation rate would be smaller.
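
A quick tally of the BPS column in Table A.1 shows where these figures come from; the half-completion assumption in the last line is the hypothetical posed above, not an estimate from the survey:

```latex
\begin{align*}
\text{graduated by 2009} &= 22.5 + 1.9 + 14.0 + 6.0 = 44.4\% \\
\text{still enrolled in 2009} &= 7.4 + 10.5 = 17.9\% \\
\text{eventual completion rate} &\approx 44.4 + \tfrac{1}{2}(17.9) \approx 53\%
\end{align*}
```

The eventual rate thus lands in the neighborhood of 50 percent, a very different story from the 21.5 percent conveyed by the IPEDS figure alone.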

________________

3IPEDS/Digest of Education Statistics: see http://nces.ed.gov/programs/digest/d09/tables/dt09_331.asp [July 2012].

COMPLETION AND ENROLLMENT RATIOS

An alternative to cohort-based graduation rates is a ratio measure that divides credentials awarded by the total student population to create a rough measure of enrollment efficiency. This approach has the virtue of including all enrollments, in contrast to cohort-based measures that address a subset of students who begin their enrollment at the same time. There are no standard definitions of this measure, but a typical calculation counts undergraduate degrees (associate’s and bachelor’s) for a given academic year and divides by an unduplicated undergraduate headcount for the same period (Klor de Alva, Schneider, and Klagge, 2010). A common variation is to use undergraduate full-time equivalent (FTE) enrollment as the denominator, which shifts the perspective of the measure toward output per unit of instructional activity.
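
A minimal sketch of the two variants, with purely illustrative counts (recall that the definitions are not standardized):

```python
# Degrees awarded in one academic year, divided by two candidate denominators.
# All numbers are invented for illustration, not drawn from any real institution.
degrees_awarded = 5_000      # associate's + bachelor's degrees, one academic year
headcount = 40_000           # unduplicated undergraduate headcount, same year
fte = 28_000                 # full-time-equivalent enrollment (credit hours / full load)

per_100_headcount = 100 * degrees_awarded / headcount
per_100_fte = 100 * degrees_awarded / fte

print(f"{per_100_headcount:.1f} degrees per 100 students enrolled")  # 12.5
print(f"{per_100_fte:.1f} degrees per 100 FTE")                      # 17.9
```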

Although more inclusive than cohort-based statistics, ratio measures such as these are subject to serious limitations of their own. First, they are not truly valid, because the degrees counted in the numerator are not necessarily awarded to the students who constitute the denominator. If the students in the numerator and the denominator have differing characteristics (e.g., levels of academic ability or demographic profiles) that affect their chances of graduating, the statistic will be misleading. More important, the measure is sensitive to changing population size. If enrollment is growing rapidly, for example, the ratio will understate student success, because the degrees conferred in a given period are awarded to students from an earlier period characterized by smaller entering classes. Some approaches try to correct this defect by counting enrollments from four to six years earlier, but the lag is arbitrary, so the correction is never entirely satisfactory.
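
The growth distortion and the lagged correction are easy to see numerically. In this sketch the enrollment figures are invented, and the four-year lag is, as just noted, an arbitrary choice:

```python
# Unduplicated headcount by year at a hypothetical, rapidly growing institution.
headcount_by_year = {2006: 20_000, 2007: 24_000, 2008: 29_000, 2009: 35_000, 2010: 40_000}
degrees_2010 = 5_000

naive = 100 * degrees_2010 / headcount_by_year[2010]   # current-year denominator
lagged = 100 * degrees_2010 / headcount_by_year[2006]  # four-year lag (arbitrary)

print(f"naive ratio:  {naive:.1f} per 100")    # 12.5 -- understates success
print(f"lagged ratio: {lagged:.1f} per 100")   # 25.0 -- matches the smaller entering classes
```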

TIME TO DEGREE

Another commonly used performance measure is the average length of time required to earn a degree. This can be calculated for specific degrees at specific institutions or at more aggregated levels. The statistic can be forward-looking, applied to an entering cohort (like the GRS) by averaging the elapsed time from the beginning of enrollment to the award of the degree. More commonly, it is backward-looking: all students awarded a particular degree in a given term are selected, and each student’s first term of enrollment is identified. A significant decision must be made between counting pure elapsed time (for example, the number of elapsed terms between entry and degree award) and counting only the terms in which the student was actively enrolled. The first approach corresponds more closely to most people’s understanding of the underlying concept, but it implicitly holds institutions responsible for outcomes over which they have no control, such as a student’s decision to take a year off, or other exogenous factors that slow otherwise normal progress toward degree completion.
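
A sketch of the backward-looking calculation, showing how the elapsed-time and enrolled-terms definitions diverge for a student who stops out; the term records are invented:

```python
# Terms are numbered sequentially (fall 2003 = 0, spring 2004 = 1, ...).
# Each graduate's record lists the terms in which he or she was actually enrolled.
graduates = [
    {"enrolled_terms": [0, 1, 2, 3, 4, 5, 6, 7]},   # continuous enrollment, four years
    {"enrolled_terms": [0, 1, 4, 5, 6, 7, 8, 9]},   # one year off after the second term
]

for g in graduates:
    terms = g["enrolled_terms"]
    elapsed = terms[-1] - terms[0] + 1   # pure elapsed time, gaps included
    enrolled = len(terms)                # terms of active enrollment only
    print(f"elapsed: {elapsed} terms, enrolled: {enrolled} terms")
# Student 1: elapsed 8, enrolled 8.  Student 2: elapsed 10, enrolled 8 --
# the stop-out penalizes the institution under the elapsed-time definition only.
```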

The major drawback of time-to-degree measures is that they are difficult to interpret. Extended time to degree may reflect shortcomings of the institution, such as unusually complex curricula or insufficient course offerings and scheduling complications that leave needed courses unavailable, with students ultimately taking far more credits than the minimum required for graduation.4 Whether students can get into the prerequisite classes they need is a very important determinant of time to degree.

Many other factors that affect time to completion are unrelated to an institution’s administrative and operational effectiveness. Among the most prominent are students electing to take smaller course loads, interrupt schooling, or pursue specialized accreditation requirements (for example, in engineering) that add to time to degree. Many institutions also serve predominantly part-time students and working adults, or underprepared student populations. Uninformed comparisons will make these institutions appear less efficient in terms of degree production (i.e., exhibiting longer time values), yet they may be functioning reasonably well given their missions and student characteristics. The ways students finance their educations, and particularly whether they are employed while enrolled, also affect time to graduation.

Time to degree also raises a number of policy issues and may provide insight into the broader value of college to individuals beyond the degree itself. For example, only a very small percentage of students attempt to earn a B.S. or B.A. degree in three years, while many continue to complete coursework beyond the conventional four-year period. In valuing the college experience, it may be worth considering the benefits of social interactions and engagement to students, including their contribution to the learning process. And, for many, college undeniably includes a consumption component that has value (part of the multi-product essence of the sector): it can be an enjoyable experience. On the other hand, when students are pushed onto a five-year plan, or when choice of major or other options is constrained by insufficient course offerings, the delay is closer to a productivity problem.

Time-use studies indicate that students engage in homework and other activities at different levels of intensity depending, for example, on their majors or enrollment status. The American Time Use Survey provides some information on student input hours (Babcock and Marks, 2011). Enrollment status is important in other ways as well: commuting students and residential students may have very different experiences, and this may correlate with probability of success.

________________

4Bowen et al. (2009) rightly argue that time to degree is a serious policy concern, and demonstrate that students who take additional time to graduate often accumulate remarkable numbers of credit hours. This may be because they change majors or experience “start-and-stop” problems, but it may also be compounded by the institution’s schedule of course offerings.

In summary, time to degree is not a measure that can be used to rank or compare all institutions along a meaningful quality or cost continuum. Adjustment factors reflecting the mix of student populations must be added to time-to-degree statistics before they can support cross-institution or cross-system (and, in some instances, departmental) comparisons. Failure to take enrollment status into account leads to problems whenever time-to-degree statistics are used as a performance metric.

COSTS PER CREDIT OR DEGREE

Some measures attempt to capture the cost of producing an academic credit or degree by documenting all the inputs involved in generating the credit and calculating their costs. Cost per credit or degree is generally produced for a particular setting, such as an academic program or department. All the credits generated by a program within a particular time period (for example, an academic year or term) are added to create the denominator. The numerator is produced by calculating the total cost of offering the program for the same time period. The two figures are then cast as a ratio. For example, among potential performance metrics being considered by the National Governors Association and some state higher education systems is “credentials and degrees awarded per $100,000 state, local, tuition, and fee revenues—weighted by STEM and health (for example, public research universities in Virginia produce 1.98 degrees per $100,000 in state, local, tuition, and fee revenues).”5
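
The NGA-style ratio reduces to a weighted division. In the sketch below every number, including the field weights, is illustrative; the actual weighting of STEM and health degrees is a policy choice not specified in the proposal quoted above:

```python
# Degrees per $100,000 of state, local, tuition, and fee revenue.
# The field weights are purely illustrative; the NGA proposal weights STEM and
# health degrees more heavily, but the specific weights are a policy choice.
weights = {"stem": 1.5, "health": 1.5, "other": 1.0}
degrees = {"stem": 1_200, "health": 800, "other": 6_000}
revenue = 450_000_000   # state + local appropriations + tuition and fees, one year

weighted_degrees = sum(weights[f] * n for f, n in degrees.items())
per_100k = weighted_degrees / (revenue / 100_000)
print(f"{per_100k:.2f} weighted degrees per $100,000")   # 2.00 for these numbers
```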

Despite their obvious practical value, degree or credit cost measures present several problems. First, aggregating total costs and credits, rather than summing from the lowest unit of activity (for example, individual classes), may obscure important differences across micro-environments and may lead to false conclusions because of the disproportionate impact of a few outliers.6 Second, costs do not necessarily reflect the underlying relationship between inputs and outputs, because similar inputs may be priced differently. For example, if one department or institution is staffed largely by tenured faculty with relatively high salaries and another largely by low-cost adjunct faculty, differences in cost per credit between them may be considerable even though the same number of teaching hours is involved. Similar differences encumber cost comparisons across disciplines because of typically high salaries in some (e.g., business) and low salaries in others (e.g., English). If these factors are ignored, the policy implication will always be to substitute low-cost inputs for higher-cost ones. Finally, the cost calculation itself is subject to the joint-use problem, because the same faculty member may be doing more than one thing with his or her time (see Section 3.1).

________________

5National Center for Higher Education Management Systems presentation to the Virginia Higher Education Advisory Committee, July 21, 2011.

6There are also smaller technical issues. For example, in the NGA measure, what is the proper time frame over which to assign the $100,000 of expenditure? It is reasonable to assert that “this year’s degrees” should be attributed in some fashion to expenditures weighted across periods t, t − 1, …, going back at least to t − 4.

Cost-per-degree and per-credit statistics are less controversial and perhaps more appropriate for tracking trends at the national level. At this level, the effects of student and institutional heterogeneity are diluted and the problem of transfers is eliminated. Still, sampling issues remain.

STUDENT-FACULTY RATIOS

Student-faculty ratios are similar to cost per credit in that they relate presumed teaching output (students taught) to presumed teaching input (faculty assigned). Like cost per credit, moreover, these ratios are constructed on an aggregate basis rather than being built up from individual classes. Because the measure is constructed from physical entities, it is not subject to many of the distortions associated with cost per credit, but it still suffers from the aggregation and joint-use issues noted above. Perhaps more important, student-faculty ratios can lead to serious misunderstandings about quality: a high value is usually interpreted as a signal of efficient instruction, while a low value is usually interpreted as a mark of quality. These contradictory interpretations are possible because the student-faculty ratio does not capture the actual output of the relationship, student learning. Yet both common sense and empirical evidence suggest that high student-faculty ratios are not invariably linked to poor academic outcomes. Larger classes can be made qualitatively different from smaller ones through the use of technology and altered pedagogy in which students learn from one another. Meanwhile, there is growing empirical evidence from the large course redesign projects undertaken by the National Center for Academic Transformation that it is possible to increase enrollment in a course while simultaneously improving learning outcomes.
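
The aggregation problem is easy to demonstrate with invented class sizes: an institution-level ratio can look modest even when most students sit in a very large class, because large classes dominate the enrollment count.

```python
# Five instructors, one class each; a single large lecture dominates enrollment.
# All class sizes are invented for illustration.
class_sizes = [300, 25, 20, 18, 12]

students = sum(class_sizes)                 # 375
faculty = len(class_sizes)                  # 5
institutional_ratio = students / faculty    # the aggregate statistic

# Class size as the average *student* experiences it (enrollment-weighted):
student_view = sum(n * n for n in class_sizes) / students

print(f"institutional student-faculty ratio: {institutional_ratio:.0f}:1")    # 75:1
print(f"class size seen by the average student: ~{student_view:.0f}")         # ~244
```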
