  • If outputs have declined while resources have increased or remained stable, has quality changed correspondingly?
  • How do productivity trends in comparable states, institutions, or departments compare?
  • Have changes in education delivery mode, student characteristics, or important contextual variables (economic, demographic, political, institutional) had a measurable bearing on the trends?
  • Are there clear indicators (spikes, dives, or other anomalies) that suggest data problems to be cleaned (as opposed to sudden changes in performance)?
  • What evidence or further research could be brought to bear to answer these questions in the next step of the conversation?

The more accustomed administrators, researchers, and policy makers become to conversations that incorporate these kinds of questions, the better the selection and understanding of metrics are likely to become, and the more evident the need for high-quality data will be.

A general strategy of implementing improved metrics for monitoring higher education begins with the following assertions:

  • Productivity should be a central concern of higher education.
  • Policy discussions of higher education performance will lack coherence in the absence of a well-vetted and agreed-upon set of metrics.
  • Quality of inputs and outputs—and particularly changes in them—should always be a core part of productivity measurement discussions, even when it is not fully or explicitly captured by the metrics. Put another way, productivity measurement is not meaningful without at least a parallel assessment of quality trends and, at best, a quality dimension built directly into the metric.
  • Some elements relevant to measuring productivity are difficult to quantify. This should not be used as an excuse to ignore those elements or to avoid discussions of productivity. In other words, current methodological defects and data shortcomings should not stunt the discussion.

Additionally, in devising measures to evaluate performance and guide resource allocation decisions, it is important to anticipate and limit opportunities to manipulate systems. The potential for gaming is one reason many common performance metrics should not be relied on, at least not exclusively, in funding and other decision-making processes. Simply rewarding throughput can create distortions and counterproductive strategies in admissions policies, grading policies, or standards of rigor. Clearly, a single high-stakes measure is a flawed approach; a range of measures will almost always be preferable for weighing overall performance. We note, however, that monitoring productivity trends would not be

The National Academies | 500 Fifth St. N.W. | Washington, D.C. 20001
Copyright © National Academy of Sciences. All rights reserved.