asks us to address two questions. The first is: Should the federal government be engaged in the various missions or businesses in which it has been? If the answer to that is yes, the second is: Should our agency be engaged in those kinds of programs, or could they be handled more effectively in some other area of the federal government? When I first read this plan in the newspaper I thought, well, the answers to those questions are easy; they are "yes" and "yes." But as we looked more carefully at the questions, we found them to be very interesting. We really put everything on the table and examined everything we do to answer both those questions. We did come up essentially with "yes" and "yes" after going through all of the work, but we emerged with a much richer understanding of what NSF's role is. We do believe that there is a federal role in funding basic research, and that is probably one of the least controversial things we do. But there is some disagreement even about that role. There are some who feel that industry should support even basic research.
On the education side, there are probably more who wonder why NSF is involved in education, especially at the K-12 level. We are involved only in science and math education, but we are involved in it at all levels. We focused more on this issue when it became clear that the production of future scientists and engineers was at stake. There is a critical need for this country to have a scientifically literate public and to have future workers who command specific technologies and a specific knowledge base. These are very important things in which NSF should play a role.
In that effort, as in many of our other efforts, we really see ourselves as making high-risk, high-gain investments. We do not want to do things that are low gain. We also feel that it is important for us to be playing a catalytic or stimulating role. We like to be working at the frontiers. We think that historically and into the future, that is where NSF operates best, and hope that we will continue to be able to do it.
We think of what we do as investments, and we make our research investments based on evaluating proposals up-front. It is much more difficult to track specific outcomes from everything we fund. Probably the closest to a real look at the outcomes of research is a number of studies that have examined the economic benefits of research funding. We recently benefited from a review that Laura Tyson, the President's economic adviser, did of all these studies, many based on industry research. She talked about a 30 percent specific (or private) economic gain. If you look at the broader (or social) economic gain, however, it was closer to 50 percent. That is a very good return on any investment. We think it helps document why investment in research should continue.
Industry is unlikely to spend much more than it already spends in funding basic research because it is very difficult to appropriate the results of those investments—because basic research usually has broad impact in many different fields. We therefore believe it is the particular role of the federal government to make those investments.
How we evaluate the outcomes or impacts of those investments in other ways is largely the focus of this workshop. We know we need to provide more evidence of these impacts. I believe we can do it. We just have to find the best ways.
There are ways of evaluating educational programs. For example, one outcome that many would be interested in is whether we have increased the achievement of school children in K-12 and whether we have better degree recipients coming out of our educational institutions at other levels. There surely are many others.
Another thing for us to look at is the effectiveness of different ways of awarding money to researchers. We talk about this in terms of modes of research support. We give grants to individual investigators. We make grants to aggregates such as centers, or sometimes to institutions for particular kinds of programs in an important area. We should be able to look at those outcomes as well, and perhaps even compare the relative effectiveness of different kinds of approaches. Right now, we have a mixed portfolio. My guess is we will want to continue to have a mixed portfolio, but it really would be helpful to know if there are clear differences in effectiveness among the different kinds of approaches.
We have some evaluations going on right now of some of our centers. The centers have a much broader mission than any particular grant does, so we cannot compare them directly, but it will be interesting to see if funding to centers really meets the goals we have for them. To put it simply, we want to continue to stimulate outstanding work and to support that work at the frontiers of science and engineering. We want to do a similar thing in education. And we want to assess the outcomes without stifling the very thing we are trying to stimulate. I think that is part of the difficulty here.
We could think of many methods of measurement, but we want to be sure that whatever we do does not have some negative effect. An example that always comes to mind is the use of publication counts as an index of individual faculty productivity. That has led, at least in some institutions, to ridiculous results, in terms of whether it was really measuring something important or whether people simply started writing lots of articles about things that