9

Synthesis Viewpoint—What Did We Learn Today?

John H. Moore

Six main categories of points can be summarized from this workshop discussion. The first is that industrial experience is relevant to the problems that NSF is facing. Much of the discussion has supported that view, although it is important not to push the analogy too far, given the fundamentally different objectives and structures of industry and NSF. Has industry solved the problem of adequately measuring its research activities and performance? The answer is no. Progress has been made, however. From what we have heard, industry does not have a uniformly well-defined set of metrics. The Industrial Research Institute (IRI) is developing a set of metrics, but it is not clear that these are being used inside companies for their decision making. The conclusion is that developing useful metrics and using them appropriately is a difficult problem, but it is not impossible.

The second general point is that the issues are deeper than simply ensuring compliance with the Government Performance and Results Act (GPRA). This is an important point that came up mostly in the morning discussion. As a political matter, GPRA reflects a concern about the accountability of government in general that works its way through all elements, including science. Science cannot, indeed should not, escape that. Furthermore, science has its own problems that require thorough assessment. Numerous comments were made during the workshop about the image of science. Generally, science has a good image, although many people also see significant problems with it. For example, important questions have been raised about accountability in science, concerning the credibility of science, its contributions to society, and, by extension, NSF's contributions to society.

Much of the workshop discussion focused on the fact that the end of the Cold War and the coming reductions in federal budgets will require a cultural change in the research community. John Seely Brown's point about the historical evolution of research at Xerox from grounded research to open-loop research was very interesting and is indicative of the need for some kind of change in culture, both within academia and within NSF. NSF still very much mirrors the academic culture. Although NSF cannot impose a culture on academics, it can lead the way, and over time it may be able to effect a change. One of the main points in this context is that measurement alone will not be sufficient. Measurement without broader change, and without a commitment to change, is not going to work.

The third point is that NSF should try to see its role from what has been referred to time and again as “the customer's” point of view. The industry people participating in the workshop talked repeatedly about how they were going about ensuring that their research activities are responsive to the needs of the numerous stakeholders or constituencies affected. NSF, too, has numerous stakeholders, including (1) the scientific education and research community, (2) the Office of Management and Budget (OMB), reflecting the administration, (3) Congress, (4) industry, and (5) the public or the taxpayers, as has been emphasized several times. The interests of those constituencies will determine what parameters are used to evaluate performance. Different kinds of metrics may be required for each of them. For example, IRI has found that different types of measures are more relevant to certain categories of stakeholders than to others.

In the past, NSF has developed ways of discerning the wants of at least one group of customers, the scientific community, and perhaps of industry and other constituents. But do these methods still work? Do they work well enough, and are they weighted correctly in NSF's considerations? Such questions again raise the issue of cultural change. How is the advice that comes in through NSF's academic advisory committees weighted compared with the advice that may come in from industry, from Congress, or from other groups? It may be that, as part of acknowledging a need for cultural change, the weighting of those sources of advice needs to be reexamined.

The fourth set of issues has to do with some points about measurement methodology. First, it was mentioned several times during the workshop that qualitative as well as quantitative measures should be considered, and that there should be a mix of the two types. Indeed, the general view was that those aspects of research performance that can be measured quantitatively are likely to be trivial.

There was some discussion as well about the behavioral impacts of measurements. If you tell people that they are going to be evaluated on the basis of some kind of measure, they will work to that measure. There is no doubt about it. Perhaps that is good, but maybe not. The point is that it is essential to understand the behavioral consequences of the measurements that are being used.

As was mentioned several times in the workshop, it would be desirable to use benchmarking. This was a major recommendation of a recent study by the National Academy of Sciences' Committee on Science, Engineering, and Public Policy (COSEPUP; Science, Technology, and the Federal Government: National Goals for a New Era, National Academy Press, Washington, D.C., 1994), which stated that the United States should be among the world's leaders in all areas of science and clearly leading in certain fields. Also, more attention should be paid to results, rather than to proposals. NSF should look at trends, as well as at absolute levels in the measurements. Finally, the measurements should relate to agency decision making, so that the metrics can be used as a means of explaining or making transparent the decisions in the allocation of resources.

The fifth issue concerns what can be measured and what it makes sense to measure. The workshop participants did not get very far with that issue, but a number of suggestions should be considered. The first set has to do with what the workshop participants agreed is most important, the education of scientists and engineers. Many aspects of human capital formation can be measured, and NSF has done so for several decades. Its reports are full of statistics on numbers of people in different categories, but some aspects are not that clear in the data that NSF produces. One of these gray areas concerns where people trained as scientists and engineers go, what disciplines they become involved in, and what types of work they do. The purpose of such tracking is to ascertain the contribution that the training of graduates of NSF-funded programs makes to society in general, not just to the performance of academic research. Another point for consideration is the flexibility or adaptability of these people. If you cannot retrain an electrical engineer to be a radio frequency engineer in six months, then something probably is wrong with the educational system. This deficiency may be difficult to measure, but it is something that should be considered.

Research, in some ways, is more difficult to assess. Many measures have been used—publications, citation counts, patents, and a number of other metrics suggested in this report. These may or may not be relevant to performance, in terms of NSF goals. One of the key tasks is to sort them out and identify the measures that are relevant and the ones that are not. There also should not be too many such measures. One participant suggested that the total perhaps should not exceed six, although it is likely that NSF will end up using a lot more than six.

The NSF should place greater emphasis on the contribution of programs to what the “customers” want, on outcomes rather than mere outputs. It is possible to evaluate the impact of research in broad fields, for example, condensed-matter physics. If one examines research in condensed-matter physics and considers the industrial applications that have relied on it, the overall magnitude of the impact will become apparent quickly. One can argue, however, that scientific research is a necessary condition for success in many fields, but not a sufficient condition. In the course of events occurring between a scientific discovery and its actual application and practice, an enormous number of other factors enter into the process—design, marketing, manufacturing, distribution, and so forth. That is one of the reasons economists have such trouble trying to measure the impact, in economic terms, of basic research.

It also is useful to describe the breadth of applicability of each field. This approach was described in the workshop when it was pointed out that Xerox has tried to look at all of the different business divisions in which research results are used. This is something else that could be done by NSF and others to better justify support for research.

There is also the problem of the time frame for measuring the outcomes of basic research. There is generally a long gestation period for such research. That means that measurement must be a continuous process. It should not be done once; it should be done on a regular, long-term basis.

The final issue area is that this assessment process should look to the future as well as to the past. Some means should be developed for determining whether the right basis for budgeting and program planning is being established. NSF needs to think in terms of the requirements for solving problems that are identified by, or important to, its constituent groups. This may lead to the identification of certain fields that are important; if so, it may be possible to develop some way of determining whether the nation's educational and research allocations are appropriate for building the basis needed to resolve those problems.
