
Assessing the Value of Research in the Chemical Sciences (1998)


Panel Discussion: Introductory Session

Andrew Kaldor, Exxon Research and Development Corp.: Actually, I have a question and a comment. I really like your definition of simple technology because it's workable. One of the problems in the industrial sector is that management systems do not reward participants in complex innovation systems to the same degree as they reward the single expert. That is a cultural problem that U.S. industry in particular struggles with.

I wanted to tell you about a study that we have done—but unfortunately it hasn't yet been released, so I can only outline it for you—of a dynamic model for major innovations that we did collaboratively with nine other companies. It's a dynamic business model that was constructed to measure the performance of each company and integrate the results into a single model. Out of this study, six drivers emerged, and their impact on the success rate of major innovations is phenomenal. Some of them are the following:

  •  Interspersing business and technology—This includes the marketing aspect. You say that marketing doesn't show up in the R&D budget; I assure you that any economic assessment of a potential technology is in the R&D budget.
  •  Multiple approaches—This captures the notion that, if you're trying to develop something significantly new, making an early selection of the approach to use is absurd. You really have to explore parallel paths for a while.
  •  Constancy of purpose—Once you make a commitment, you must stay with it long enough to give it a real chance to succeed. The science base has got to be very strong, and it has to continue to grow.
  •  Extremely aggressive goals—Goals that you don't know how to achieve. This draws on concepts like integrated multidisciplinary teams, skunk works, and so on.

What is fascinating is the comparison of the performance of the dynamic model with the traditional approach. The dynamic model yields a 25-fold increase in major innovations. Furthermore, in the first 5 years there is often little indication that it is working; there is essentially no output during this period, although I am sure that this number will change and improve as we gain more experience with the model.


Another interesting feature of the dynamic model is that your ability to make incremental improvements is dramatically enhanced (as much as 100-fold) compared with investing in short-term research.

My question is: Is anybody else working on dynamic models like this? Do you think it's useful?

Don E. Kash: Yes, there are people working on such models. There is an exponentially increasing community of people who are studying complex systems, and the model you describe is roughly consistent with the themes that are being investigated. I have a 60-page bibliography, if you are interested.

David A. Hounshell: I have been studying the Rand Corporation in Santa Monica as one of the first instances of a think tank, a model for many other nonprofit research organizations that have proliferated since 1945. In 1954, Rand undertook a formal research project on the economics of R&D. By 1956 or 1957, the group, which was led by economist Burton Klein and included other well-known economists such as Richard Nelson and Kenneth Arrow, had reached essentially five of your six points, drawing the same conclusions. Since they were focused heavily on military R&D, the only one of your conclusions they did not reach was the importance of the interaction between business and technology. The five other factors were there in their conclusions in 1956-1957.

Eric C. Beckman, University of Pittsburgh: In Professor Hounshell's presentation he noted that DuPont in the 1930s and 1940s had an instinct that research was good. They couldn't put a number on it, but their instinct was that it was important to the company. Somewhere along the line their instincts changed, as represented by the recent spate of downsizing in large chemical R&D organizations, including DuPont. Can you maybe describe how this happened, and will it reverse itself? If we agree that research is good, then how do we reconcile this statement with the current loss of research jobs? Is it just communication?

David A. Hounshell: In the post-World War II period, DuPont had too many resources, or at least it allocated its resources in the wrong way vis-à-vis research. Taking the path they did, they substantially weakened the connection between research and the business units. By making such a heavy commitment to basic research, the central research unit lost focus and had no support in the business units for any new developments that it might make. The scientists in central research generated many good ideas and did excellent academic-style research, but there was no mechanism for nurturing their discoveries into commercial products. Because of their experiences in World War II, fueled by the Cold War and the massive government infusion of research funds, they simply overinvested in research.

In 1959 Richard Nelson published what is now the fundamental paper in the economics of innovation, "The Simple Economics of Basic Scientific Research." This paper was partially a response to the crisis that had developed with the launching of Sputnik in 1957. Nelson made a statement in that paper that has become an economic truism—among economists, anyway—that firms, because they cannot capture the full benefits of their investment in basic research, will underinvest in research. So the nation overall will systematically underinvest in research.

What he was arguing was very consistent with what the scientific community was saying in response to Sputnik in 1957: namely, that the reason for the missile gap was that the nation had not invested enough in basic research. We had moved away from basic research. Nelson explained in simple economic terms why the nation had systematically underinvested in basic research. The conclusion is obvious: the public sector needs to make up this difference, because there's a different incentive—it's the public good, social welfare.
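
As a purely illustrative aside, and with hypothetical numbers that are not taken from Nelson's paper, the appropriability argument just described can be reduced to a simple comparison: a project whose total social return exceeds its cost can still be unattractive to an individual firm if the firm captures only a fraction of that return. A minimal sketch in Python:

    # Toy illustration of the appropriability argument described above.
    # All numbers are hypothetical; none come from Nelson's 1959 paper.

    def society_benefits(cost: float, social_return: float) -> bool:
        """Is the project worthwhile for society as a whole?"""
        return social_return > cost

    def firm_invests(cost: float, social_return: float, captured_share: float) -> bool:
        """Does the firm's privately captured share of the return justify the cost?"""
        return social_return * captured_share > cost

    # Hypothetical basic-research project: it costs 10 and produces 30 in total
    # benefits, but spillovers mean the firm appropriates only 25 percent of them.
    cost, social_return, captured_share = 10.0, 30.0, 0.25

    print(society_benefits(cost, social_return))              # True:  30 > 10
    print(firm_invests(cost, social_return, captured_share))  # False: 7.5 < 10

The gap between those two answers is the systematic underinvestment Nelson describes, and it is the opening for the public-sector role noted above.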


If you read one of the footnotes in his paper, though, he says that economic theory suggests that there should be overinvestment in basic research, but he notes that the evidence of Sputnik suggests the contrary. In fact, if you look at economic theory, you can see there's a complexity there—there may be too much investment in basic research and not enough in downstream research.

In short, I am saying there are alternative readings of your observation that we once invested heavily in research and, because of changing circumstances, are not investing enough now and need to return to our earlier practices. The jury really is still out on that. There is the possibility that the United States actually invested too much in research. I'm not advocating that position; I'm just noting that it's theoretically possible.

Don E. Kash: There are a couple of points I would like to make. First, if you look at research expenditures by industry, the numbers are increasing rapidly. There may be a downsizing, but that is an organizational construct. It is at least reasonable to speculate that part of the downsizing is associated with moving research into fabrication facilities, production facilities, and so on. In no small part, that is tied up with the fact that, if you make very large investments in long-range research in these complex areas, you repeatedly find instances such as the Japanese commitment to analog high definition TV (HDTV). The whole enterprise can succumb to an invisible enemy, in this case digital electronics. In many of these complex industries, there are powerful reasons for not trying to go too far down the road. That's one of the manifestations of complexity.

Andrew J. Lovinger, National Science Foundation: I'm trying to draw common threads between the talks of our two speakers. One thing that stands out is that industry did not question the value of research when things were going well. For DuPont, for example, this was the 1930s, 1940s, 1950s, when it was developing nylon, Teflon, polyesters, and so on. The golden age of Bell Laboratories was when AT&T was a monopoly in telecommunications.

Now, in Japan, we see that investment in research and development is skyrocketing precisely because Japan has a very favorable balance of trade in complex/complex technologies. Are there examples and precedents, either when times were bad or when companies were in economic sectors where the outlook was not so rosy, in which they did not question the value of R&D? Where they did not try to establish metrics and strictly evaluate the risk they took? Where they took that risk and prevailed, and were able to demonstrate to people at the time that it was worth taking, even in adverse circumstances? That would be useful to all of us as we plan in the present environment.

Don E. Kash: I've not made any systematic look at that in the United States. What I can tell you is that in Japan research expenditures until very recently were not affected by how well the companies were doing. I remember one time interviewing at Hitachi when their profits—and I never know what "profits" means in Japan—had been cut in half in one year. I asked the president of Hitachi if they were going to cut R&D and he said, oh no, that's critical to my profits going up in the future.

There is another very consistent pattern. If you want high-level R&D performance that disregards what's going on with the economy, you want to invest your money in companies that are 10 percent owned by a single family, and where that family is involved in the management. These are cases where you get high performance with regard to R&D. We pay little attention to this, particularly when the companies have people like Pierre du Pont.

David A. Hounshell: At the coffee break, one of the things I talked about that I had not mentioned this morning is a very important trend that is in part driving a shorter-term outlook in industry vis-à-vis R&D, although again not consistently. This is the drive for better performance on the part of institutional investors, such as my own TIAA-CREF retirement fund as well as many other large pension funds. These funds have contributed to much more emphasis on quarterly earnings and performance reports. To some extent, this has undermined longer-term commitments. How you rectify the situation, I don't know.

Robert L. Lichter, Camille and Henry Dreyfus Foundation: I come at this whole question from maybe a somewhat different perspective because, as we all do, we approach these questions from our own experience and backgrounds. For me, the question that was brought up earlier and alluded to as being strictly semantic—what we mean by research—is really at the heart of the issue. It's more than just semantic. We're talking about a type of behavior. We've used the expression "research," and we've used the expression "R&D" as if they're synonymous. As we know, they're not. When we talk about 68 percent of the expenditures for R&D coming from industry, most of that is in the "D," not so much in the "R." But we still haven't discussed what "R" is.

Dr. Kash said that research was whatever the people who were doing it told him it was. I'm reminded of my mother, who had two sons—one was an engineer, one was a scientist (presumably me). When she was asked what they did, she said they did research, because they were scientists and engineers. So the definitions can become quite vague, yet the distinctions can be very real. I work with colleges and universities, and, as we all know, students (both graduate and undergraduate) are involved in research, "doing" research. But when you get at what the undergraduates are doing—well, they may be having a research "experience," but is that really research? I think that we really need to talk about what we mean when we refer to research.

I'm going to be provocative and give my definition of research. Research is any activity that produces new knowledge that is subject to critical review by experts and peers in the disciplines. Both components are important. To me that's very simple and straightforward, and that's separate from what the drivers are for research (i.e., why one does the research). Professor Kash commented that one of the main outcomes for universities is the production of people who are well trained and educated. I share that view. We're going to hear about research in academic institutions and the assessment of and the metrics for it in the session tomorrow.

We talk about basic and applied research. Some of you may know or have known Donald Stokes at Princeton University, whose book, called Pasteur's Quadrant (which has yet to appear), challenges the linear model of research-development-commercial product and arrives at a much more complex two-dimensional model. The question of the drivers for research is something that also needs to be talked about but kept separate from the concept of research and how one assesses its value in the context of this workshop.

That brings me to my last comment, which is that both talks have suggested that there's no point in pursuing the exercise, at least in my mind, of developing metrics for the assessment of research. I have trouble accepting that. I'm willing to be convinced that my interpretation of the talks is totally off base. But nonetheless, I felt it might be useful to put these out on the floor and see where the discussion goes.

David A. Hounshell: Your definition of research is one that I fully accept and use quite regularly, that is, the production of new knowledge. You further add the idea of peer review. One could have market tests rather than a peer-review process as part of your definition.

Clearly, I think Professor Kash is right. His experience is that, in many respects, evaluating research is very much like evaluating beauty—it's really in the eyes of the beholder. You could adopt a rigid definition of research (namely, the production of new knowledge). But how much further will that get you in terms of implementation, in terms of evaluating investment in research within a firm or across an industry or within a nation? I'm not at all convinced that it would help.


Also, I think there has always been a great deal of tension, at least in policy-making circles, which is borne out in part by the 1960s HINDSIGHT and TRACES studies on fundamental (basic) versus applied research. Professor Stokes has mounted the richest challenge to the linear model to date, and I think his new model has moved away from a lot of the ideological positions that people have staked out. Professor Kash is absolutely right: these relationships have always had strong ideological components. I don't see a way around that either.

The final point is that I was not saying that the exercise is not worthwhile. I'm saying that the danger with GPRA in particular is that when you implement monitoring systems, either ex ante or ex post, you have to recognize that you have instituted a very dangerous activity. You have to recognize human behavior. You are going to get people "cooking" the numbers to present them in the most favorable light.

Robert L. Lichter: With respect to the Government Performance and Results Act (GPRA), I fully agree. I'm trying to look at this from a broader perspective. GPRA is just one particular constraint.

David A. Hounshell: My point is: It's symptomatic, even within the private sector.

Don E. Kash: I would agree with that wholeheartedly, and I would go a step further and say that the enterprise you're involved in is not only useful, it's absolutely necessary; but it is not necessary because it's going to demonstrate any good way to quantitatively measure this relationship. It is essential because the political system requires us to go through this on about the same cycle as DuPont's 14- to 16-year cycle. That is real. The key point is that in going through what is a politically, socially, and perhaps financially mandated requirement, don't buy into something where the numbers get "cooked" and then come back and kill you.

Andrew Kaldor: Professor Kash, you lumped the world of R&D into your four categories. I guess my take-away message is that complex/complex is growing, and the Japanese are doing it better than we are. But in terms of our mission today, I'm not sure how we can handle this in terms of getting a measure for the effectiveness of research. It seems like a more productive approach would be to work backwards from a well-known or valuable product or development of some kind and trace the innovative process through the development stages back to the research phases and then back to "eureka." Gathering a database like this might help us better understand this process and come up with an effective measure.

Don E. Kash: We've done that with seven cases. One of them is a chemical case where the innovation clearly came out of the laboratory, the central laboratory. Now, we also have done it for a blade on a high pressure turbine on a jet engine. Here the ball game gets very mixed. It's a complex world out there—it goes back to the water wheel—and at least some of the engineers you talk to refer to the "black art" in casting.

One of the things that's been terribly important is that much of the "black art" has been converted from tacit knowledge into explicit knowledge because of a whole new technology: computer-aided design. It is absolutely fascinating when you take these old engineers who know how to do things without understanding why, and put them at the front end of the process, inputting their knowledge into the computer. My point in this connection is that a lot of technology seems to take place without any understanding. It is surely not overwhelmingly based on explicit scientific research. In fact, an awful lot of scientific research is explaining what technology has already done, and so it has been for at least 400 years.
