5 The Technology Value Pyramid

Trueman D. Parish
Eastman Chemical Company

Introduction

The Industrial Research Institute (IRI) is not made up entirely of $30 billion companies; it even has some little $5 billion companies like ours. So what I am going to present has a broad range of applicability. Why do we measure research? In a nutshell, because inquiring CEOs want to know: Are we worth our salt? The bottom line in a profit-making organization (and perhaps this extends into the public sector as well) is: Are we delivering value to our shareholders, our sponsors, or whoever is paying for us? So I do not think this is a question only a CEO asks; I'm confident it's what Congress is asking, and it is a legitimate question.
So the first question we must ask is: Are we really adding value in excess of what we cost? But before we proceed very far down that path, we have to ask ourselves whether we are aligned with our company and business objectives. In a broader context, are we lined up with the direction our political leaders expect the country to go, and where the public wants it to go? That, of course, requires leadership, which means that our leaders have some idea about where we want to go. I can say that, at the corporate and business level, this is a mixed bag; surprisingly, some companies do not really know what their strategy is. But in any event, if our technology projects are going to be effective, they had better line up with the corporate direction, or we could end up thinking we are delivering value while the corporation is unable to exploit our work.
Then there is the question of what our technology is worth. There is a fairly easy way to think about this. Suppose your company or your organization were to be acquired. In the corporate world, what the acquirer would get would be the physical assets, the list of customers, some marketing rights and brands, and so forth, but you would hope that an acquirer would pay more than the sum of these. What the acquirer would also be paying for would be your patents, your other intellectual property, and so forth. But probably most important of all would be the expertise of the people in the company. So one of the questions we must ask ourselves is: What is our technology worth—what is the value of our technology assets?
Finally, those of us who are engineers in particular worry about efficiency and effectiveness. You
can have great technology assets, you can be lined up with your corporation's strategy, and you can even be delivering value, but are you delivering value in the most effective way possible? What does this mean? It means: Do I have processes in place that lead me through projects quickly and that allow me to use my intellectual property efficiently and effectively? Or do I have barriers between my organizations so that effectiveness is impaired? Is my staff well motivated? All of those are important metric issues. We would really like to measure all of them.

Metrics may not date all the way back to the Stone Age, but when we embarked on this project, we did a bibliographic search—as any good scientist would—and discovered references to metrics that go back at least 400 years. There were R&D metrics 400 years ago, when the princes of Europe were sponsoring R&D. They were asking the question: Are we getting our money's worth? So any of us sitting in this meeting today who are feeling picked on can take some comfort in the fact that we have been picked on for a long, long time. But what has happened to metrics since those days? I would suggest that most of them have ended up in the landfill of history.

Why is that? There are three crucial tests that metrics must pass if they are to be supported by our wider customer base. By customer base, I mean the people who pay the bills—customers and corporate sponsors in industry, and citizens and Congress in public institutions.

The first test is relevance. Is the metric I am trying to use relevant to my organizational mission, objectives, strategies, and so on? This means that the metrics used vary depending on the type of organization you belong to. Though I realize that many leading companies do indeed think of publication as a critical metric, it is one that has not found much favor in the IRI; surveys of IRI membership find this metric quite low on their list.
The general opinion is that, if it is publishable, it is probably not giving much proprietary advantage. On the other hand, in an academic institution this could be a highly relevant metric—there it is important to make any knowledge gained known. In short, it is critical to align the metrics of value creation with the objectives of the organization.

The second test is credibility. My favorite examples are the wonderful metrics (and I believe some of us in this room may have used them in the past) where we said, "Let's do a self-assessment." So we got together with our chief scientists and rated each other on how we were performing on a scale from zero to 10, zero being "dumb as a stick" and 10 being "should have won the Nobel Prize, but the committee was unaware of my work." All the scientists rate each other 9.5 and then present this self-evaluation to the business unit, which says, "Yeah, right." This process lacks credibility. There are a number of other metrics with great potential for gaming, but I think you understand the issue. Credibility is a big issue, especially if you are trying to develop metrics that are meaningful to your customers.

The last test is one that particularly appeals to me, and that is complexity: it is important for metrics to be reasonably simple and easy to calculate. If they are not, we could end up having our whole research laboratory working on metrics rather than on science, and our preference is to have people working on science. I will add, however, that engineers tend to like complexity. We like to have a table of 60 numbers or more, to multiply this matrix, invert that matrix, and the like, but unfortunately this activity can be fairly destructive. Even if a metric is theoretically sound, it should be tolerably easy to calculate and, more important, intuitively easy for our customers to understand.

In sorting through this maze, there are a number of bright lights to guide us, and this light comes from a number of sources.
Some very good academic research has been conducted in the last 10 or 20 years that has begun to uncover the factors that lead R&D projects to commercial success. Which new products have been successful, which have failed, and what's the reason for each? What are the practices in R&D (particularly in business R&D) that lead to success versus failure? The increased focus that has recently been put on this area is beginning to lead to some metrics that have value.
Developing the Technology Value Pyramid

With this background, I can begin discussing the work at the IRI and the road to the Technology Value Pyramid (TVP). The work started in 1992. Dr. Jasinski has already noted that this was a grim time: it wasn't just at IBM that the earth shook! The start of this project in 1992 was no accident. The Research on Research Committee of the IRI conducts a survey each year to identify the most important issue facing R&D chief executives, and metrics came out number one in 1992. They needed some kind of metrics to guide their decision making.

Why was 1992 such a traumatic year? It was a time when "right-sizing" (one of the terms used), downsizing, restructuring, and reengineering (to note other terms) turned into a feeding frenzy. CEOs were calling in their heads of manufacturing and asking them what they could do to cut costs. "Well," replied each manufacturing head, "you've got to have me; I make the products, so don't try to get rid of me." The marketing people said, "You surely need us, because we sell the products." The R&D staff said, "Well, somewhere out there in the future, I think we do some good, and in another 10 years what we do now will be important." That argument didn't work. So there was a desperate need to develop some metrics for the CEO and for the board of directors that could establish the value of research, and IRI began looking at the development of metrics for R&D.

As we worked on the metrics effort, a lot of other stakeholders indicated an interest as well. In addition to the CEO, we found that chief financial officers and boards of directors were also interested in our activities. Some of the business unit managers even said that our work might be relevant for them too, as did individual laboratory managers. There was interest even among individual scientists.

As noted above, we began our work with a literature search.
We found that the creative abilities of researchers were remarkable: they had developed well over 1,000 metrics for the value of R&D. After applying the tests of relevance, credibility, ease of use, and so on, we whittled this list down to about 50 metrics. This was not easy, because people protested vehemently when we threatened to throw out their favorite metric. Eventually, everybody agreed on the number 50. But what company would want to be saddled with tracking and evaluating 50 metrics? Fortunately, we finally found a way to organize these metrics that made sense (see Figure 5.1). We called this organizational method the Technology Value Pyramid, and we made it available as a computer program from IRI.

At the top of the pyramid is value creation. But in order to create value, we need a portfolio of projects, so right below value creation we have portfolio creation. Portfolio creation is exactly what we do with our investment portfolios. Most of us do not invest our entire life savings in the single stock of a speculative company. Some of us might, and may even get very rich, but most people will distribute their investments over a wider range, with a prayer that some of them will make money. Some of these investments will be long term for our long-term needs, and some of them will be short term; some will be high risk, and some low risk. So there are a number of parameters to be considered in portfolio creation. The same concept applies to the research portfolios of a corporation, and I would suggest it is true for government agencies and academic institutions as well.

Immediately below portfolio creation, we put integration with the business, because the portfolio is not going to make much sense unless it is integrated with the company's business needs. Unless you are oriented in the way the company needs to go, you cannot really build the needed research portfolio.
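Portfolio creation, as described above, amounts to spreading research projects across time horizons and risk levels, much as an investor diversifies. A minimal sketch of tallying that mix (the project names, labels, and buckets are hypothetical illustrations, not an IRI-defined scheme):

```python
from collections import Counter

# Hypothetical project list; "horizon" and "risk" labels are illustrative.
projects = [
    {"name": "polymer A", "horizon": "short", "risk": "low"},
    {"name": "coating B", "horizon": "short", "risk": "high"},
    {"name": "catalyst C", "horizon": "long", "risk": "high"},
    {"name": "process D", "horizon": "long", "risk": "low"},
    {"name": "fiber E", "horizon": "short", "risk": "low"},
]

# Tally the portfolio by (horizon, risk) bucket to see whether the mix
# is spread across time horizons and risk levels rather than piled
# into a single bet.
mix = Counter((p["horizon"], p["risk"]) for p in projects)
for bucket, count in sorted(mix.items()):
    print(bucket, count)
```

A lopsided tally (say, everything short-term and low-risk) is the portfolio-level warning sign the speaker is describing.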
So now I have a bunch of promising projects that make a good portfolio and are integrated with the business. The next set of questions is: Do I have the right technology assets? Do I have the right R&D
equipment? The right R&D expertise? Does this line up with my patent positions? Do I need to acquire some technology assets? So the next layer is the value of the technology assets. And finally, underlying it all are the R&D processes. They really are the foundation on which all of the above rests. Unless you have the right processes in place, unless you have ways to use your assets effectively, you are still not going to be able to create value effectively.

FIGURE 5.1 Technology value pyramid.

I will not go through all of the 50 metrics. I will talk about only three of the roughly eight in the value creation layer: the new-sales ratio, cost savings, and the present value of the pipeline. The new-sales ratio is generally defined as the sales of products introduced into the market during the last "X" years, divided by the total sales of the corporation; IRI typically uses a period of 5 years. I suppose if you were in the computer business, 5 weeks might be more appropriate, but for a lot of the IRI businesses, 5 years is fairly reasonable. Typical values of the new-sales ratio are in the low double digits, so somewhere around 10 percent of sales typically come from new products introduced to the market in the last 5 years.

The question then is: Is this metric credible? Is it relevant? I would argue that it is relevant because corporations hope that their sales are profitable, and in general they won't be in the business if they are not. It is relevant because, obviously, corporations like to grow. They like to see their earnings grow in particular; that helps the stock price grow, as well as the value of the CEO's options (and that makes the CEO happy). Is it credible? It turns out that it is. The reason is that the accounting systems of most modern corporations capture the model numbers or serial numbers of their products, and they generally assign a new number as a new product is introduced.
So if your accounting system works well, you can get this information almost automatically. But what is a "new product"? There tends to be little argument now, but in the past some people would say, "That product isn't really new," or "it is only a little bit new." You can get into semantic arguments, but generally within a company you can resolve the issues and get a consistent definition. As a rule, if the product brings a new benefit that the customer perceives, it is new. There are benchmarks
available for this metric from IRI, which gathers data from its members and reports the average performance of these companies broken down by Standard Industrial Classification code.

I am not going to go into as much detail about the cost savings ratios. There are a number of ways they are defined; generally speaking, this metric measures how many dollars in savings were generated, normalized to manufacturing costs. The metric is very similar to the new-sales ratio: it is relevant, credible, can usually be generated fairly easily from accounting data, and is benchmarked by IRI.

So these metrics look terrific, but what's wrong with them? Typically, new products do not achieve substantial sales until they have been on the market for about 5 years. So the first problem is that products introduced 5 years ago are only now starting to affect this metric. If we now factor in the observation that the R&D to develop a product took 3 or 4 (or more) years, then I have developed a terrific measure of how my R&D process was working about 7 or 8 years (or more) ago. I don't know how you drive in Washington traffic. Sometimes I like to drive just looking in the rearview mirror, because what I see in front of me is pretty terrifying. But that is unwise, and unfortunately that is what we are doing in this case: driving R&D by looking in the rearview mirror. I would not completely dismiss this approach, because I think there is some value in maintaining the record. Keeping that picture in front of you is important, particularly in dealing with business units, but it clearly does not give much guidance as to how to do things today if you want to accomplish something different tomorrow.

So that leads me to the final metric, the present value of the pipeline. Without going into too much detail or using too many technical terms, in general what you do is examine the projects that are in the pipeline.
In doing this, you should ignore basic research, since its results are so far in the future that it is very difficult to put any numbers on them; besides, in industry, basic research is usually a small budget item. But for anything that has reached the point where it is costing you lots of money, you will generally have some idea of the size of the market, the probability of success, the kind of earnings that can be expected, and when you expect the product to come to fruition. If you can do that, you can make a present-value calculation. Notice the variables: all we need is a good estimate of future earnings, when they are going to happen, and what the probability of success is. In other words, we need predictions. Here a quote by Niels Bohr is appropriate: "Predictions are very difficult, especially about the future." (Although the quote is usually attributed to Niels Bohr, I always thought it was from Yogi Berra. It sounded more like him.)

This leads us to the metrics dilemma: "What's easy usually isn't important. What's important usually isn't easy." I don't know a way around this. I still think metrics are very important. I am not ready to give up hope, but it is certainly true that the important metrics are difficult to determine. Nevertheless, I would rather have an answer that is off by 50 percent, or even 100 percent, than no answer at all. I would rather have an answer that is off by a factor of two but tells me something to do than one that is very precise and just tells me what I have done.

Concluding Remarks

Let me conclude. First, I believe you can use the metrics developed by IRI to judge the value of scientific research, at least for research directly connected to business needs. You have to think about R&D as part of the innovation process—a link in the innovation chain. Research cannot take credit for the whole chain.
It can't really take credit for the social value of everything good that has happened in the country in the last 100 years either; but without R&D it would not have happened. On the other hand, without an entrepreneurial spirit it would not have happened, and without manufacturing it would not
have happened. So there are a lot of links in the chain, and the R&D link has to be strong enough to hold up its piece of the total value chain.

Second, there are commercial products available that will lead you through the process needed to obtain a reasonable estimate of how likely a project is to succeed, both commercially and technically. If you have such a tool, it will certainly help in project selection. A project should either have a big bang or be really cheap. That then leads you back into portfolio management and all of its various dimensions. If you can do all of this, you can start using the metrics that help improve and estimate R&D productivity. Productivity, by most people's definition, is what you get out for what you put in. It is pretty easy to tell what you put in (just ask your accountant), and some of these metrics help you know what you are getting out. The metrics of effectiveness deal with issues such as stage-gate usage and so forth.

One of our earliest speakers described this whole effort as potentially dangerous, and I share that view. Indeed, quality management, if it tells us nothing else, tells us that you tend to get what you measure. I would argue, then, that you had better be careful in selecting the metrics you use. In fact, that was one of the reasons that, when we built this program, we could not eliminate too many metrics: we wanted corporations to be able to pick metrics that met their strategic needs. So pick your metrics with great care, because if you use them, you will drive behaviors that get you what you are measuring. Nevertheless, I think that on balance, metrics are a benefit in helping us all do our jobs more effectively, and the reward is worth the risk.