reference value). Care must be taken, however, to aggregate only measures that are well correlated.11

Finally, it is important to remember that any number describing a research activity or application depicts it imperfectly. Thus, agencies should not rely exclusively on the score of a particular metric or suite of metrics. The context, definition of scores, and commentary are at least as important as the specific answer or score and should be included in the formal evaluation.

Cost of Evaluating Metrics

The cost of developing and evaluating metrics must be balanced against the needs and resources of the program. Developing an effective combination of quantitative and qualitative measures, and adjusting them as experience reveals which are most useful, can take considerable time.12 Professional training may even be required to develop qualitative measures that are valid and reliable. Collecting information to evaluate the metrics and normalizing and interpreting the data also take significant time, although these costs decline with subsequent evaluations. Peer review evaluations carry the highest time costs.13 Rather than subjecting every component of the CCSP to peer review, such investments should be targeted where they will most improve the management and performance of key program elements. All of these costs must be factored into decisions about how often the program should be evaluated to capture its impact over time.

The committee believes that a system of metrics, developed through an iterative process and evaluated in consultation with stakeholders, could be a valuable tool for managing the CCSP and further increasing its usefulness to society. For these metrics to be of real value, they must be implemented in a constructive fashion, following the guiding principles outlined in this report. That will require a great deal of thought by individual CCSP agencies as well as by the CCSP as a whole. Then, it will take time to determine whether these metrics help create a stronger and more successful CCSP. Thus, this report should be viewed as the first step and not as an end.


11. Geisler, E., 2000, The Metrics of Science and Technology, Quorum Books, Westport, Conn., 380 pp.


12. Werner, B.M., and W.E. Souder, 1997, Measuring R&D performance—State of the art, Research Technology Management, March-April, 34–42.


13. Cozzens, S.E., 1997, The knowledge pool: Measurement challenges in evaluating fundamental research programs, Evaluation and Program Planning, 20, 77–89; Kostoff, R.N., 1998, Metrics for planning and evaluating science and technology, R&D Enterprise—Asia Pacific, 1, 30–33.

The National Academies | 500 Fifth St. N.W. | Washington, D.C. 20001
Copyright © National Academy of Sciences. All rights reserved.