Summary

The first workshop of the recently established Chemical Sciences Roundtable (CSR), "Assessing the Value of Research in the Chemical Sciences," was held in Washington, D.C., in September 1997. The topic of discussion was an issue of long-standing importance that has taken on even greater significance since the Government Performance and Results Act (GPRA) was enacted in 1993. This volume presents the results of that workshop.

As expected, the speakers at the workshop did not present a single set of assessment techniques that would apply across the governmental, academic, and industrial sectors. Instead, they shared their individual approaches and ideas in the hope that other participants in the workshop might identify useful concepts for assessing the value and future impact of the research activities in their own sectors.

Historical Overview

In the first session of the workshop, David A. Hounshell (Carnegie Mellon University) and Don E. Kash (George Mason University) established the context of the workshop by providing an overview of the problem of predetermining the value of research. Hounshell's opening presentation, "Measuring the Return on Investment in R&D: Voices from the Past, Visions of the Future," provided a historical account of the attempts within E.I. du Pont de Nemours & Company to establish a set of guidelines for assessing the value of its research investments. He noted that the problem of evaluating research has been with research managers and corporate executives as long as research has been recognized as a separate activity (even extending to medieval times, as noted subsequently by Trueman Parish). In the early decades of this century, DuPont managers intensely debated this issue. As hardheaded businessmen and the inventors of such financial tools as ROI (return on investment), they eventually concluded that it was simply not possible to develop an approach that was suitable for all types of research. Long-term research to understand in detail the mechanism of a particular chemical process required a different approach than did short-term research focused on important yet incremental improvements of a specific product. For the first type of research, fundamental research, they concluded that the most important metrics were the following:



- Does the project have high scientific merit?
- Does the principal investigator have a record of accomplishments?
- Is the proposed work in a scientific area relevant to DuPont?

Hounshell found that the technical competence and business perspective of the DuPont manager making the funding decision were also crucial. This individual bore tremendous responsibility, for it was his or her job to decide which research projects opened up new scientific and technological opportunities on which the future of the company would depend. If he or she did not understand the essence of the scientific or technical issues being addressed in the proposed projects, poor scientific and technical investments would be made. On the other hand, if he or she did not understand where the business was going in the next several years, poor business investments would be made. The same argument applies in other sectors: in government agencies, for example, the technical competence and mission vision of the program managers are critical to the success of the agencies' research programs.

Kash's presentation, "The Sources of Commercial Technological Innovation," emphasized that it is important to approach the establishment of performance measures and metrics realistically. As Kash noted, the connection between research (especially basic research) and the public good is crucial but general. For any given commercial product, research may not be the most important link in the chain that leads from discovery to product, especially for complex products resulting from the integration of sophisticated technologies. Nonetheless, Kash noted that in industry research is widely recognized as providing the future of the business, for without new discoveries, fundamentally new products and processes are simply not possible. Thus, in setting performance measures and metrics for research, the role of research must be kept in perspective, and no more benefits claimed for it than can be delivered.
Assessing the Value of Research in the Chemical Industry

Joseph M. Jasinski (IBM Research) presented the "Accomplishments" approach pioneered at IBM for assessing the value of exploratory research. A noteworthy aspect of this approach is the long-term perspective it provides. The process does not focus on just the previous year's accomplishments, but rather reevaluates the impact of discoveries made in earlier years. From the point of view of research, this long-term view is critical, because past discoveries are the basis for today's products. The institutionalization of the Accomplishments process served IBM Research well as the company was dramatically downsized in the early 1990s, for it had at hand the documentation needed to validate the importance of research both for IBM currently and in its future. This fact is now acknowledged in the definition of the role of IBM Research in the corporation: "Vital to IBM's Future Success." Through the Accomplishments process, IBM management and the research scientists that they employ have also gained a better understanding of the impact of IBM Research on IBM corporate needs as well as on the scientific and technical community.

James W. Mitchell (Lucent Technologies) described the evaluation of research in the context of improving the effectiveness of research and (in the case of industry) enhancing value for the corporation. He noted that "valued research will have been assessed by some type of method to measure its effectiveness and productivity," a truism that is implicitly, although not always explicitly, recognized. At Lucent the most frequently applied method for measuring the effectiveness and productivity of research is to compile a matrix of outputs (patents, inventions, intellectual property, etc.) on which a value can be placed. However, this approach has several limitations when applied to "breakthrough" or longer-range research.
To address this issue one must assess the effectiveness of the organization in managing its research portfolio, because the portfolio will exhibit, by necessity, a balance between breakthrough research and shorter-range research. Finally, Mitchell noted that scientists at Lucent have found research self-appraisals to be useful, providing the research scientists with a better understanding of the "path to value."

Trueman D. Parish (Eastman Chemical Company) discussed the approach to evaluating research that was developed by the Industrial Research Institute, a multi-industry organization that has invested considerable effort in defining an appropriate set of performance measures and metrics for research. This approach has been labeled the "Technology Value Pyramid" (TVP). The TVP provides a valuable model for developing a set of performance measures and metrics for research efforts that have a clear set of deliverables using well-defined technical approaches (as are found in industry and many government agencies). Although the presentations by Jasinski (IBM) and Mitchell (Lucent) did not specifically refer to the TVP, many of the concepts that they discussed could be tied to ideas presented by Parish. Parish stressed that performance metrics must be "credible," "relevant," and "reasonably simple" if they are to be of use. It is also critical that they capture the essence of the enterprise, for history has shown that the mere existence of performance measures will alter the activities being undertaken. If the measures do not truly represent the values of the organization, the measurement process can undermine rather than strengthen the organization.

The Linkage of Public Research and Patent Applications

Francis Narin (CHI Research Inc.) discussed the linkage between research and specific public benefits as revealed through the scientific underpinnings of patents. He presented data illustrating the linkages between patents and publicly funded research and concluded that a large fraction of the scientific papers cited on the first page of a U.S. industrial patent originated with publicly funded science.
This linkage is more important in some industries than in others. For example, biotechnology patents are more science driven than are automobile manufacturing patents, and the linkage has a strong nationalistic component, as illustrated by the heavy dependence of German-invented patents on German research, and of Japanese-invented patents on Japanese research.

Assessing the Value of Research in the Academic Sector

K. Barbara Schowen (University of Kansas) discussed the importance of research in undergraduate education, particularly in the context of the National Science Foundation's (NSF's) Research Experiences for Undergraduates (REU) program. On the basis of her experience during the 10 years of the REU program's existence, she argued that research internships are as important a part of the undergraduate chemistry curriculum as are lecture and laboratory courses. As noted in a report by a group of NSF-REU chemistry site directors at a workshop held in Washington, D.C., in 1990: "Chemistry is a dynamic experimental science for which research is an inherent component. Such a discipline requires 'learning by doing,' an inquiry approach, and an apprenticeship experience. A student's education in chemistry is incomplete without research experience."

Jules B. LaPidus (Council of Graduate Schools) argued that scholarship is critical to the educational function of the university. The importance of research at the graduate level is usually taken as a given, but research is done in many places and clearly is not the defining characteristic of doctoral education. LaPidus argued that research is an integral part of graduate education because of the habits of mind (in other words, the process of scholarship) acquired by graduate students as they seek answers to questions that do not yet have answers. These are not the tidy questions posed in textbooks, for which the answers are already known, but the messy questions that the new Ph.D. recipient will encounter in the real world.

It is the knowledge of "how to read and listen critically, define and analyze problems, determine what the important questions are, decide what research needs to be done and how to do it, understand what the results mean, and learn from the entire experience" that forms the "irreducible core of graduate education."

Richard K. Koehn (University of Utah) discussed the interaction between universities and industry as well as the ambiguities that arise therefrom. He noted that for research universities, the process required to develop an appropriate set of performance measures and metrics may help resolve the conflicting set of measures and metrics being used (often implicitly) today. However, there is no universal metric that can be applied to the many missions of the research universities, which are expected to train students, educate students, bring in research grants, and create jobs. He noted that there is confusion over the first two goals (which are not the same) and pointed out that conflicts are inherent in this set of expectations. If the faculty do more research, they have less time to educate students. If they focus on educating students, they will not be able to help create jobs, because jobs are a spin-off of research. If one of the goals of the university is economic development, how can its success or failure be measured? These questions were posed by Koehn; much more thought will be required to resolve them. In the end, Koehn argued that it is best to keep the intent of performance measures and metrics in mind. They are for the use of research managers, not for corporate executives and government officials to decide who the winners and losers are. Their purpose is to help research managers understand the impact and relevance of the research portfolios for which they are responsible: to provide information on where they are succeeding and where they are failing, in order to celebrate the former and correct the latter.
Assessing the Value of Research in the Government Sector

Patricia M. Dehmer (U.S. Department of Energy) described efforts at the Department of Energy's Office of Basic Energy Sciences (BES) to assess the value of its research portfolio, to determine the tools and metrics by which that value can be quantified, and to assess the results of scientific research by using these tools and metrics. She noted that performance measurement and assessment have always existed in BES, but that GPRA, as well as other laws and executive orders, has given new impetus to these efforts. BES will evaluate performance in four areas: scientific excellence; relevance to the nation's energy future; stewardship, both of scientific user facilities and of scientists, disciplines, and institutions; and program management. Evaluations in these four areas ultimately determine the BES research portfolio, guide its funding choices, and provide a measure of the socioeconomic value of the program. BES measures performance in several ways: peer review; indicators or metrics (that is, things that can be counted); customer evaluation and stakeholder input; and other assessments (which might include cost-benefit studies, case studies, historical retrospectives, and annual program highlights). However, it is recognized that the relevance of each of these measures varies from area to area.

Judith S. Sunley (National Science Foundation) discussed NSF's approach to responding to GPRA requirements. NSF has established its goals by determining what types of outcomes from its programs advance the progress of science and engineering. For research, the most relevant are discoveries at and across the frontier of science and engineering, connections between discoveries and their use in service to society, and a diverse, globally oriented work force of scientists and engineers.
Because the timing of outcomes from NSF's activities is unpredictable and the annual change in the research outputs is not an accurate indicator of progress toward outcome goals, NSF has developed performance goals against which progress can be assessed on a continuing basis. A variety of mechanisms will be used to assess NSF's performance, but the process will rely heavily on the use of expert external panels.

Finally, Mary Groesch (National Institutes of Health) described the approach that NIH is developing to respond to GPRA requirements. NIH is considering two broad program outcomes for its research programs: to increase understanding of normal and abnormal biological functions and behavior, and to improve the prevention, diagnosis, and treatment of diseases and disabilities. A combination of qualitative and quantitative goals and indicators will be the most meaningful for gauging performance. For example, narrative descriptions of research accomplishments will be used to place specific incremental advances into a larger context. They will describe what was previously known and unknown, the nature of the accomplishment, its contribution to understanding and improving human health, its significance for advancing science, next steps, and, when possible, the economic impact of the advance. Quantitative goals and indicators will be employed wherever feasible and appropriate, for example, in assessing progress in sequencing the human genome.

Program management is also an important component of NIH's research programs. Activities that could be assessed include grants administration and peer review, communication of results, technology transfer, and management and administration.