Jack Halpern, University of Chicago: I wanted to return to the 5 percent reduction question. I think this is the crux of the matter, at least as far as the level of investment in science is concerned. This issue, how much to invest in science, is what the government struggles with each year in deciding whether to increase or decrease the budget by a few percent.

Having been through this exercise in the Academy for the last few years, of just trying to become more efficient, I can tell you there are limits to what you can do. You soon reach a point where you really can't cut administrative costs much more. If you look instead at the science support budget, however (what I have to say applies particularly to NSF, which supports extramural and largely basic research), there are no strategic programs that you can measure against. Let's say NSF funds 20 percent of the requests it receives. The real question is, what would we be supporting at 21 or at 19 percent? The managers at NSF are making arbitrary cutoffs, and it is not obvious what the consequences of a 1 percent shift, from 19 to 20 or from 20 to 21 percent, would be: how detrimental the decrease, or how beneficial the increase, is going to be.

It seems to me that the GPRA activities we have been hearing about provide an opportunity to calibrate how sensitive the consequences are to that cutoff. Granted, it is very difficult to identify quantitative measures of performance, but NSF has identified a particular set of criteria that it proposes to use in assessing the effectiveness of its programs. I agree that you can't do this project by project, but if you can do it across programs, I think it would be useful to take NSF's ranking of its projects and apply these criteria to the top 20 percent, the middle cut, and the bottom 10 percent. This would allow you to do a couple of things. One is to assess the effectiveness of the criteria themselves: if they don't distinguish between what you think are your highest-ranked programs and your lowest-ranked ones, that suggests your criteria are not very useful.

To turn that around, if you have faith in your criteria, or evidence that they allow you to assess the effectiveness of your peer-review ranking of programs, such a study would also let you see whether there are significant differences between your top-ranked programs and your lower-ranked ones. At the very least, it seems to me that you ought to be applying these criteria by strata, even if not to individual projects. It would also give you some sense of what will happen if you have to cut the lowest 5 percent: how much less effective, by your criteria, are they than the others?

Judith S. Sunley: That is a very interesting idea. I will try to get someone to explore it further.

Richard K. Koehn, University of Utah: I would like to expand on the point that Professor Halpern raised. Why wouldn't you include in your assessment those projects that were not funded? By selecting projects to fund and, implicitly, others not to fund, you have made choices about where to invest and where not to invest. It would be worthwhile to verify that the projects you didn't invest in have not, in fact, produced positive outcomes by your criteria.

Judith S. Sunley: I don't think we have attempted to do that in the past. One of the things we are very concerned about is the overall burden on both the scientific community and NSF staff, as well as the expense, of exploring these options. So we would have to estimate what value we could get out of going into that much more detail in the assessments.


