INTRODUCTION

BRONWYN HALL: This workshop, being conducted by the Board on Science, Technology, and Economic Policy, is part of a Committee on National Statistics study of the national portfolio of research and development statistics – a study commissioned by the National Science Foundation. We use the term R&D, but we actually mean research, development, and innovation broadly defined. The study's mandate is to examine the uses and types of data that are currently being collected and to determine how they might be improved to reflect what is generally considered a rapidly changing R&D enterprise. More specifically, we are exploring how to improve measures of the composition, structure, performers, and geographic location of R&D sponsored or paid for by industry and the federal government, which account for the majority of R&D activity.

The first panel will talk about who uses the data and what uses they make of them. Then we will hear from the National Science Foundation office responsible for the R&D surveys, the Office of Science Resources Statistics, about what changes have been made in them recently. Then we will turn to some of the larger problems of classification: first, the division of R&D into basic research, applied research, and development, and the subdivision of basic and applied research into research fields; and second, the composition of industrial research and development. This is a particularly important issue in the service sector, where R&D is growing most rapidly.

In the afternoon we will consider how to capture the increase in research and development collaborations in our statistics. Finally, the geographic location of R&D has macro and micro aspects. First, R&D is done in other countries by U.S.-headquartered firms, and R&D is performed in the United States by firms from other countries; how well are these patterns captured? Second, many, including members of Congress, would like a more detailed breakdown of R&D carried out in individual states and economic regions.

USERS AND USES OF R&D DATA

BARBARA FRAUMENI: Today I am going to discuss, from the perspective of the Bureau of Economic Analysis, why we want R&D data and what kind of data we would like to have.

My coauthor Sumiye Okubo and I recently wrote a paper about the contribution of R&D to economic growth. We integrated R&D into the System of National Accounts by putting it on the product side and the income side, in good national income accounting practice. The major thing we did was treat R&D as investment. We concluded that over the 40-year period from 1961 to 2000, R&D accounted for approximately 10 percent of growth in GDP. In other words, if GDP grew at 3 percent a year on average during that time period, R&D accounted for 0.3 percentage point of that growth rate. This roughly corresponds to the number that others have obtained for the contribution of computer hardware to growth in GDP during the wonder years of the second half of the 1990s, so it is an impressive number.

Second, treating R&D as investment raises the savings rate quite appreciably, from 19 to 21 percent. Capitalizing R&D does very little to change the rate of growth of GDP. So our conclusion is that we are not really misstating economic growth, but by incorporating R&D in the accounts we do have a better sense of what is bringing about growth. In short, we want R&D data because it is a very important source of economic growth and we need to understand the process better.
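The arithmetic behind these figures can be sketched as follows; the inputs are the rounded numbers quoted above, not actual BEA estimates.

    # Rough arithmetic behind the growth-accounting figures quoted above.
    # Inputs are the rounded numbers from the discussion, not BEA estimates.
    gdp_growth_rate = 0.03        # average annual GDP growth (illustrative)
    rd_share_of_growth = 0.10     # R&D's estimated share of that growth

    # R&D's contribution in percentage points of annual GDP growth
    rd_contribution_pp = 100 * gdp_growth_rate * rd_share_of_growth
    print(f"R&D contribution: {rd_contribution_pp:.1f} percentage points per year")  # ~0.3

    # Capitalizing R&D reclassifies it from current expense to investment,
    # which raises measured saving; the quoted effect is roughly 19 to 21 percent.
    saving_rate_expensed, saving_rate_capitalized = 0.19, 0.21
    print(f"Saving rate rises by {100 * (saving_rate_capitalized - saving_rate_expensed):.0f} percentage points")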




When the System of National Accounts (SNA), which is used by almost every country in the world, was last revised we decided not to capitalize R&D. However, there is an international meeting coming up shortly in the Netherlands, and it may decide that R&D should be capitalized. Once that happens there will be a flurry of activity on R&D. As far as I know the United States has the only complete set of satellite accounts incorporating R&D. Israel has a partial set of accounts, and France has a set of accounts that encompasses information technology but doesn't capture other parts of R&D. If the decision is made to capitalize R&D, others will seek our advice about how to do it. At the meeting in the Netherlands participants will consider how R&D might be capitalized in the System of National Accounts and the relationship between the SNA and the Frascati Manual, the OECD's guide to the collection of R&D data.

Let me turn to our R&D data wish list. My coauthor and I spent very little time looking at the quality of the data; we took them as given. But as we proceeded we did observe some things, and we did reach some judgments about what it would be desirable to have. First, most people really want to know what kind of R&D, performed in which industries, contributed most to the increase in economic growth, but we could not begin to answer that. Missing from our satellite account is information on industries and the intermediate as well as capital and labor inputs to the production process, including all the inputs that enter into an R&D activity. We were working on a performance basis, but we knew nothing about the industry composition because we cannot get enough data from NSF to perform this sort of analysis. So our foremost need is for industry detail, particularly information on services.

We also were concerned about things like outsourcing. Who does R&D for IBM? Is it IBM in its manufacturing area? Is it another company outside manufacturing? Is it in the services? IBM was doing R&D somewhere, but where was it? So we do need information on services, not just manufacturing. If you look at the R&D data for manufacturing, the industries that you want to look at most carefully have missing entries because of reporting problems when there are only a few companies in those cells.

Secondly, inputs and outputs in the national accounts work on an establishment basis. A company like IBM has a wide variety of establishments, with a large percentage of the activity in services rather than computer hardware. So if you really want to know who is doing what and get a true picture of R&D, you want an establishment basis so you can clearly identify companies that are doing a significant amount of R&D but have a large proportion of other activities.

We want even more information about industrial activity. There is a big difference between basic, applied, and development R&D -- a difference in the time before it reaches the economy. In recent years there has been a shift toward development, in large part because there has been a marked increase in business performance of R&D relative to the government. As that shift proceeds, there is also a shift away from basic towards applied research.

Later today some of my colleagues are going to come here to speak about the international aspect of R&D. We only have a few years of data and only for certain types of companies. We would like more of this because a lot of the R&D could be performed abroad. Also we would like to know what sort of R&D is imported and what the spillovers of R&D are. International boundaries do not mean a great deal in this context.

A continuous time series is important. We know that there has been at least one series break as a result of a substantial change in the survey. If I recall correctly it had to do with the number of companies that were surveyed, and it occurred sometime in the latter part of the 1980s or the early part of the 1990s, periods that people are very interested in.

We were told by NSF that their attempts to bridge this problem were unsuccessful. I would like to see that bridge completed because we really do want to have a continuous time series to enable fair comparisons over time.

Next is the issue of the industry classification system, NAICS versus SIC. We are in the throes of converting to a new industry classification system, and as we do this, in many cases we are dropping the historical time series because we are finding it difficult to link industry definitions and trace them back through time. It is important that we be able to do this.

The spillover question. A lot of R&D tends to be concentrated in certain areas. I heard a paper by a Ph.D. candidate the other day who was trying to determine to what extent physical distance mattered; he was looking at R&D by state. Well, we have little information on the geography of R&D. What is the relationship between R&D performers? For that matter, what is the relationship between funders and performers? We go to great effort in our satellite account to get everything on a performance basis. Nevertheless, it can matter who the funder is, and there may be associations between performers.

Micro data. In my dreams I want to be able to go to NSF and use data that cannot be put in a publication because of confidentiality concerns, much in the same way that you can go to Census Research Centers and access the micro data under strictly controlled conditions. This would be a way of getting around the problem of missing data because of industry sensitivity. It will take time to bring something like this to pass.

Finally, bring on another Mansfield! Ed Mansfield did several studies a number of years ago to ascertain the benefits of R&D. This is important in our paper in the context of who is receiving the spillover benefits of R&D. I would love to have another Mansfield emerge to give us more information on the nature and types of benefits arising from R&D.

The next speaker is Andrew Wyckoff from the Organization for Economic Co-operation and Development.

ANDREW WYCKOFF: Our presentation is in two parts, the first part by me and a second by my colleague in the Directorate for Science, Technology, and Industry, Dominique Guellec, who heads the unit responsible for the Frascati Manual and some of the methodologies that underpin what we are discussing today. A word about the OECD and what we do. We are a 30-member organization of industrialized countries headquartered in Paris. We may be best known for trying to produce reasonably harmonized data in various areas, including R&D. We deal with the details of methodology and try to arrive at some agreement about how to proceed. The heavy lifting is done by the member country delegates to the Committee of National Experts on Science and Technology Indicators (NESTI).

From where we sit we see a number of different types of users and uses of our data, but by far the most popular is simply basic comparisons of R&D levels or R&D intensities, normalized by the size of the population or the size of the economy, the GDP. That is by far the most common use of the data, but we also see comparisons of industrial structure, particularly targeting what are thought of as strategic technologies. In addition, there is a lot of interest in how the 30 countries, which are roughly similar in stage of economic development, differ in both the funding and the performance of R&D. Equally important are the human resources associated with R&D, but I will not talk as much about that as about R&D expenditures.
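These intensity comparisons reduce to simple ratios; a minimal sketch, with invented country figures rather than OECD data:

    # R&D intensity = gross expenditure on R&D (GERD) divided by GDP; levels can
    # also be normalized per person. All figures below are invented placeholders.
    countries = {
        # name: (GERD in billions, GDP in billions, population)
        "Country A": (280.0, 10_000.0, 290_000_000),
        "Country B": (190.0, 9_500.0, 380_000_000),
    }

    for name, (gerd, gdp, population) in countries.items():
        intensity = gerd / gdp                   # share of GDP devoted to R&D
        per_capita = gerd * 1e9 / population     # spending per person
        print(f"{name}: R&D intensity {intensity:.1%}, R&D per capita {per_capita:,.0f}")
    # A target such as the 3 percent goal discussed below amounts to comparing
    # the intensity ratio with a benchmark value of 0.03.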

The topic du jour of the OECD on R&D has got to be the targeting of R&D intensities by some of our member countries; most notably, the European Union has set a target of 3 percent R&D intensity that it hopes to approach by 2010. But you can see a lot of other countries are doing this type of targeting activity, some of them making explicit references to our data, which makes us a bit nervous because there is imprecision in these data that we will talk about today. Especially as the EU becomes more integrated in its policies, the most relevant comparison for the EU economy is to that of the United States. Not only is the United States the benchmark in terms of R&D intensity, but also in terms of the performance of R&D. There is an effort in Europe to make their funding structure look similar to that of the U.S., particularly by emphasizing the business funding of R&D.

Let me shift now from the policy arena to some of the data needs that we see. I suspect we are going to hear these repeatedly today. One is the problem of common definitions of R&D, particularly with respect to borderline activities, such as software development, where some of the activity is R&D and some of it is not. And military equipment prototypes are a big item in the United States and a handful of other countries that invest heavily in defense R&D. There are other trouble spots. Industrial coverage may differ across member countries, particularly with respect to how thoroughly they survey the service sector. We do not know as much as we would like about countries' treatment of multinationals. We are looking carefully at that issue. There are also important institutional differences. An illustration is the very large and important health sector. If you use government budget appropriations and you break them down by objective, the U.S. appears to be an outlier with a very large bias towards health. But in fact part of this represents just cultural and institutional differences across the OECD. When you aggregate some of the categories and bring them together so they are more comparable, the U.S. is actually less an outlier than it originally appeared. This is the type of work that we have to do at the OECD.

DOMINIQUE GUELLEC: What is the OECD doing to satisfy some of the needs that are on this workshop's agenda? As we have said, we in the OECD are in the ambiguous position of being on the demand side as a user, but we are also part of the supply chain in helping countries to set statistical and data collection standards. So let us first look at the field of science classifications, which is a disaggregation of R&D expenditure by major fields such as physics, biotechnology, and so on. There is a clear need for such data at the country level, since governments want to check the consistency of their science policies with the needs articulated by their industries. If a country has a very strong chemical industry while R&D funding is going primarily to physics, you may have an inconsistency that the government may be willing to correct. Also at the international level we may want to compare countries' respective contributions to particular fields such as biotechnology and others. Over the past several years NESTI has arrived at an agreed-upon definition of what biotechnology is, or really two definitions. On this basis international data will become available in the next year or two. It is already available from certain countries.

More broadly, we have begun to develop a new classification of scientific fields. The one we have in the Frascati Manual now dates back to the 1960s, and it came from UNESCO. It was good for its time, but science is changing. We have sent a questionnaire to countries asking about their thinking on the proper classification of science, and we have begun to receive responses.

Second, the breakdown of R&D by type. Basic research / applied research / development is the principal breakdown, and it is useful.

We have the notion that different types of R&D will affect economic growth over different time scales; this is important to know for monitoring policy. A well-known difficulty, however, is that the borders of these categories may be different across countries and across industries. I believe that a more careful reading of the definitions in the Frascati Manual, both by respondents to surveys and by the statisticians who carry them out, would help clarify these concepts. For example, there is a common notion that basic research has no application. That is not what the Frascati Manual actually says. It considers basic research without application as a particular type of basic research that we call non-oriented basic research. So it is not always our methodologies that are lagging but sometimes our understanding of these methodologies. At the moment approximately one-half of the OECD member countries collect basic research data.

Next, R&D by industry is needed for understanding the dynamics of productivity. It is rather difficult to collect good data in this respect. There are multi-industry groups and specialized R&D companies, and both phenomena are increasing to the point that they really represent their own industry. And this R&D industry is, of course, serving other industries, and its output is affecting productivity in these other industries, making it difficult to know what the input/output relationships among them are. In response, we at the OECD are emphasizing the industry of use. Many software firms, for example, are developing products for the automobile industry. Their work, according to Frascati, should be classified as serving the automobile industry, not the software industry. That makes sense, of course, if you are interested in productivity effects. Actually, we have been collecting and harmonizing industry data for some time, and now we have a database of R&D expenditures for 19 OECD member countries dating back to 1973. It is not completely harmonized, but one can, for example, see that the contribution of different industries to the growth of R&D in countries is extremely diverse; and it is far better than using individual countries' data.

Internationalization of R&D is certainly an important economic trend and one we would like to capture, but it is not one that lends itself to collecting good data. The OECD is drafting a Globalization Manual that deals with broader issues than R&D, but it has a chapter on science and technology. This manual is nearing completion. We are also collecting data on the Technology Balance of Payments (TBP), which is one component of these R&D imports and exports. To be candid, we are not happy at all with the TBP data because the sources are very different across countries, the classifications are different, and the quality is highly variable. This is something that we might start working on in the coming years if our NESTI delegates consider it important. It is certainly a big problem for our data at the present time.

I want to say a word about the relationship of R&D and national accounts. In my view there is a need for learning on both sides. Not only should those responsible for the national accounts learn from the R&D statisticians, but the R&D statisticians also need to increase their exposure to the national accounts officials. The treatment of the data is not always the same. From the point of view of people having to decide a government's budget for R&D, the relevant question is how much money was spent last year. For economists interested in the contribution of R&D to economic growth this is not always the best way to treat the data, and some notions such as cost are not necessarily relevant.

Finally, even for R&D policy, it is important to go beyond the R&D data themselves. For example, when you are reporting to your boss in the ministry of research and trying to convince him or her of the best uses of the money allocated, you need some kind of output indicators.
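The industry-of-use rule described above amounts to reallocating R&D from the industry of the performer to the industry the work serves; a minimal sketch with invented records (the project list and amounts are assumptions for illustration):

    # Reallocate R&D spending from the performing industry to the industry of use,
    # in the spirit of the Frascati guidance described above. Records are invented.
    from collections import defaultdict

    rd_projects = [
        # (performing_industry, industry_of_use, spending)
        ("software", "motor vehicles", 40.0),        # software developed for automakers
        ("software", "software", 60.0),
        ("R&D services", "pharmaceuticals", 25.0),   # specialized R&D firm serving pharma
    ]

    by_performer, by_use = defaultdict(float), defaultdict(float)
    for performer, use, amount in rd_projects:
        by_performer[performer] += amount
        by_use[use] += amount

    print("By performing industry:", dict(by_performer))
    print("By industry of use:    ", dict(by_use))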

That effort is drawing a lot of attention, focused currently on two major sources -- innovation surveys and patents. Further work in this area may affect our R&D statistics.

The next speaker is David Trinkle from the Office of Management and Budget.

DAVID TRINKLE: OMB is the budget office of the White House, charged with preparing the executive branch budget, overseeing day-to-day management of agencies, and addressing policy issues that arise. I provide OMB oversight of the National Science Foundation and am responsible for R&D issues across agencies. OMB is both a data provider and a data consumer. OMB updates the definitions of R&D across agencies; we typically deal in what agencies are authorized to spend, but we also report what they obligate each year. We report R&D numbers in a chapter of the budget devoted to R&D across the government. It includes historical tables going back 40 years or so. My side of OMB does not examine the economics of R&D, the performance side. There is an economic shop within OMB and also a couple of White House offices that address that.

First, as data provider, OMB is responsible for Circular A-11, which sets out all the definitions, for example, of applied research and development, R&D equipment, and R&D facilities. I will also discuss the federal science and technology budget, a compilation that differs somewhat from research and development spending. We also produce cross-cutting analyses of R&D in broad fields across agencies -- information technology R&D, nanotechnology R&D, and climate change-related R&D. In the budget we show the cost of any R&D tax credit proposals. The last two budgets have addressed the revenue cost of permanently extending the research and experimentation tax credit. And in recent years we have also taken a cut at how various agencies allocate their funds -- whether they are subject to congressional direction, for example, or make awards by competitive peer review.

We also are asked for certain other types of information, typically by Congress but also by other sources. This poses a challenge unless we can readily obtain it from other sources. For example, the field of science distribution of spending. We are very familiar with the National Science Foundation Federal Funds Survey data and refer to it quite often, but we are not in a position to track that ourselves in real time. From time to time we are called upon to provide specific targeted analyses for which we must rely on the agencies. The analysis of homeland security R&D actually started out as a special call to agencies outside of the budget cycle; now we are collecting that and presenting it as part of the budget. There are other sources for some of these data. The RAND Corporation's RADIUS database is one example.

As a user of data, OMB has a number of different needs and ways of encouraging agencies to use the data. In considering broad agency budget proposals or initiatives across agencies, we encourage proponents to support their requests with data. NSF, for example, identified the need for larger grant sizes to support their graduate scientists and engineers, and they conducted a study that was persuasive. Over the past few years, as a result, we have been pushing in that direction. We have also encouraged agencies to articulate and measure the benefits of their programs and to provide data on performance and impact. In fact, as part of the President's Management Agenda, we have asked R&D agencies to justify the relevance, quality, and performance of their programs in making new proposals or continuing existing programs.

Although I am talking about R&D data, I should also refer to the federal science and technology budget, which came about in part as a result of a National Research Council recommendation that we needed a better estimate of the investment in new knowledge creation as distinct from the development of new end products or prototypes, for example, military equipment.

Federal S&T is closer to what we are talking about when we speak of research investments. We can track this compilation during the appropriations process; we are not able to do that with R&D. Finally, in preparing the budget's R&D chapter this year we included a number of charts derived from other data sources. We used NSF data to look at some of the international comparisons. We used data from the Chronicle of Higher Education to track the phenomenon of congressional research earmarks. We use these data to help describe the context for the current budget requests, articulate some of the concerns that motivate the requests, and anticipate some of the results we hope to achieve with our budget proposals.

Now, as a user of R&D data, we clearly hope to make better informed budget and policy decisions. In this role we also see several of the limitations of the data. One problem is alignment to the structure or level at which decisions are made. Again, for the federal R&D budget, the data are collected at the very last second, and nobody makes policy decisions with respect to R&D across the government. The so-called federal R&D budget is really just a compilation of individual program levels, so it is something of an illusion to talk about R&D government-wide. There is also a problem with fields of science. Some agencies have large shares of particular fields; for example, the Department of Energy supports much of the physics research in the government. DOD and NASA support much of the engineering R&D in the government; but support of other fields is much less concentrated. So it is hard for us to approach issues in fields by looking across the agencies. Sectors involve a similar issue -- universities versus industry.

There is a problem of time lags. We love the NSF data, but because they are collected by survey, they are already a couple of years out of date by the time we see them. By the time that we are trying to affect policy through our funding decisions, we have to assume what happened over the past year or two, plus a year or two before the budget kicks in. So it is hard to know if the concerns we were trying to address are still valid and how we will affect them by the decisions we make this year. There are obviously problems of consistency across agencies even though the definitions are supposed to apply to the entire government. In each agency those definitions may fit well in some cases and not so well in others. Some agencies find it very easy to draw a distinction between basic and applied research; others seem to have a more distinct line between applied research and development; but that is not uniform.

Furthermore, I have concerns about data quality, not so much with respect to NSF data but with regard to data the agencies generate for their own budget justifications. We see arguments that we accept and are trying to address and others about which we are skeptical. The physical sciences have received a declining share of total R&D, a very common complaint these days, while the life sciences' share has been increasing. We are addressing the physical sciences in the current budget. Yet there are other arguments that we believe are debatable and that we have not accepted as a premise of budget policy. It may be that the decline in support of a particular field is relative to a peak that occurred for a reason, and the decline is not a major problem. Nor does the fact that there has been a shift in support mean that it must be shifted back the way it was 5 or 10 years ago or to some baseline the agency argues is appropriate. It is very hard to say at what point everything was just right and we should be moving back toward that point. Different interest groups will all point to different times to which they would like to return, but we cannot move in different directions at once.

To summarize, although it is important to have R&D data, they will not always be available, and often they will not be available in time to inform a decision.

Yet we want to make the best decisions possible, so the more data we have and the timelier they are, the better from our perspective.

The last speaker in the first session is Gregory Tassey of the Planning Office of the National Institute of Standards and Technology.

GREGORY TASSEY: I approach this subject from the point of view of an R&D policy analyst. Although they are not followed systematically anywhere, there are four steps entailed in doing R&D policy analysis. The first step is to document and explain the role of technology in the economy. You might think that this is unnecessary, but policy makers are not entirely in agreement about the importance or the specific roles of technology, especially in times of slow economic growth when companies are using R&D and the resulting technology to cut costs and to avoid hiring new workers. Even in the first part of the 1990s and into the mid-1990s, members of Congress were actually questioning whether R&D spending should be increased or in some cases maybe cut. So one of the first uses of R&D data is relating it to the impacts of technology on the economy. Second, a policy analyst looks for indicators of under-investment. If you do find indications of systematic under-investment, then you go to step three: you have to do some kind of cause and effect analysis to relate impacts to something that appears to be wrong with the patterns of R&D investment. And by patterns I mean the adequacy of the amount of R&D and also the composition. If you do your homework here you can then match causes with appropriate policy responses. Getting to step four requires a lot of analysis. It is multidisciplinary, and certainly the amount and quality of R&D data are critical. I am going to highlight briefly some of these points, with emphasis on steps two and three.

Any R&D agency, especially one that has industry as its primary client, has to go through this sort of circular flow of economic analysis. One has to find evidence of and demonstrate that there is a pattern of systematic under-investment relative to some optimum, which of course is difficult to define. And then and only then can you provide the economic rationale for the existence of a federal R&D program. Intellectually this is the most challenging use of economics in R&D policy, and although economists have made some progress in characterizing and measuring under-investment, we certainly have a long way to go. If you do make that rationale, you then have an opportunity to implement it. More and more, OMB and the Congress have told R&D agencies that they have to be much more systematic in the way in which they propose and develop program initiatives. So R&D data have become more important in the strategic planning phase. This is basically sector- or technology-level analysis. If you do your homework then you get a budget. That used to be the end of it, but now more and more you have to come back after the fact and look again at the relationships between R&D investments, including in this case the federal component, and show the kinds and magnitudes of economic impacts that have been realized. And then, to close the loop, that information feeds back into a restatement and refinement of the original economic rationale for the program's existence.

One of the main hurdles for R&D agencies in accomplishing this is the lack of any consensus model of innovation. Because GDP is the key policy variable in the minds of members of Congress, the R&D policy analyst has to start there and work backward. The problem with this model is that when you work back to the technology investment component, and technology is characterized as a homogeneous entity, it is very difficult to make a clear case for many of the federal roles in supporting technology. Technology is looked upon as a proprietary good in contrast to the science base, which is recognized as a public good.

Many believe that if the latter is provided, the private sector will provide the technology. So this causes policy analysts a lot of grief. It is still a model widely adhered to. For our purposes in R&D agencies, we need to disaggregate that technology box into a set of elements that are not arbitrary but represent distinctly different investment centers, because unless you do it that way you will not get to the right policy prescription. The elements we at NIST have come to use are the following three. First is the science base, which is not particularly controversial from a policy point of view because it is considered a public good and therefore government funds virtually all of it. The only issues are how much and how the funds are distributed across fields of science and to whom. The next element is the technology platform or generic or fundamental technology; it goes by different names. It constitutes the platform from which the many market applications represented by the proprietary technology box evolve. Next is something I call infratechnology, which is a collection of infrastructural technologies present in any high technology industry that are essential to the actual conduct of R&D and eventually to the control of production. For an agency like NIST, we are obliged to collect our own data to develop and rationalize programs that affect either the generic technology box or the infratechnology box.

As an example of how one would use R&D data at a highly disaggregated level, which is necessary to apply this model, consider biotechnology. First is the science base, which has many elements. The infratechnologies, too, are numerous. Generic technologies fall into two categories, product and process. Finally, there are proprietary technologies. The analytical challenge for policy is what types of data you need to support measures that ensure that the industry grows and prospers. The mechanisms that work well in one area of the technology-based economic process may not work particularly well in others.

At the national or macroeconomic level, NSF provides a lot of very useful R&D data, but that is only the point of departure for managing the research portfolio and programs of an R&D agency. At one level the R&D intensity of the U.S. economy has not really changed significantly over a 40-year period, during which many countries around the world have significantly increased their R&D capacity. And as we heard, other countries, including the European Union, have as their policy goal to greatly increase their R&D intensities. So at a very aggregate level you might say that this indicates there should be some concern about the level of R&D investment in the U.S. economy. In terms of composition, the federal share of national R&D has declined while the industrial share has increased significantly. This is a trend that is going on in all OECD nations. However, unless you believe that industrial R&D is a perfect substitute for federal R&D, you have to be concerned about this gap as it continues to widen. But at this level, there is not much more you can say.

So we look for indicators at lower levels of aggregation. And here we break the economy into two basic parts, manufacturing and nonmanufacturing, the latter being primarily services. And you can develop indicators that show that manufacturing, which has been largely ignored by policy makers and continues to decline in size, nevertheless accounts for a large share of R&D. This still is not sufficiently disaggregated to allow the type of R&D policy analysis that I have described. So we go down one level further. Now we are approaching the level at which R&D agencies need to analyze trends and begin to plan their programs. Within manufacturing there are so-called high technology industries with much higher R&D intensities than the manufacturing average of about 3 percent. At this level it appears that the U.S. economy has a very skewed pattern of R&D conduct across industries. It turns out R&D spending is also highly skewed geographically.
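One way to see the skewness described here is to compare each industry's R&D intensity with the manufacturing average of about 3 percent; the industry figures below are invented placeholders, not NSF data.

    # Flag industries whose R&D intensity (R&D relative to sales) is well above
    # the manufacturing average of about 3 percent noted above. Figures are invented.
    MANUFACTURING_AVERAGE = 0.03

    industries = {
        # name: (R&D spending, sales) in the same units
        "Industry A": (12.0, 100.0),
        "Industry B": (1.5, 120.0),
        "Industry C": (6.0, 80.0),
    }

    for name, (rd, sales) in industries.items():
        intensity = rd / sales
        label = "high technology (above average)" if intensity > MANUFACTURING_AVERAGE else "below average"
        print(f"{name}: R&D intensity {intensity:.1%} -> {label}")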

Down to this point all of the data are available through NSF. Beyond it, the question becomes how disaggregated the R&D data need to be to effectively plan and manage an R&D program, and how they can be obtained at reasonable cost. All of my comments so far have had to do with the issue of data adequacy. However, the composition issues are at least as important, and here we have even less data. A recent study that shows the importance of looking at compositional changes in R&D over time involved a survey of high technology firms. It confirmed that most product launches are derived from the current generic technology. In this survey such technology accounted for 86 percent of launches, which in turn drove 62 percent of revenue and 39 percent of profits. The relatively small investment in next generation technologies accounted for only 14 percent of launches but 38 percent of revenue and 61 percent of profits. So the point you make to policymakers is that attention needs to be given to the patterns of investment in next generation technology.

What kind of data do we have? Again going back to the national level, the NSF data show that over the past 10 years development, which is where most industrial funding goes, increased over 70 percent. Basic research increased almost as much, but that is misleading because most of the increase is the result of growth in federal funding for health science. As you probably know from the debate currently going on about doubling the NSF budget, it had not grown much at all in this time period. In between is the transition phase between basic science and applied research and development. It has grown at a much slower rate. So here again is a rough indication of some national under-investment, but it is necessary to disaggregate not only by technology or industry but also within applied research. The scope of what is defined as applied research is too broad to really match up with the investment trends and the organizational structures of R&D firms.

One of the few examples of an indicator to help answer this question about composition is provided by the Industrial Research Institute (IRI). In their annual survey they ask member firms, accounting for about 40 percent of U.S. industry's R&D, about their planned investment in directed basic research, which is the first phase of technology research, the attempt to apply science. A few years ago the IRI instituted what they call a sea change index for all of their indicators, including this one. I went back 11 years and computed it just to have a longer time series. It is derived basically by subtracting the percent of respondents planning a decrease in directed basic research from those planning an increase greater than 5 percent. The zero to 5 percent respondents are omitted because that amounts to standing still when you adjust for inflation. Now what you can note here is that every single year this index is negative. This is not an ideal indicator. None of the indicators that I have discussed is ideal, which is why you need a lot of different indicators. You also need to maximize the quality of the data, as well as have the right conceptual framework to define the indicators.

In summary, these are the types of impact measures that an R&D agency like NIST tracks. The broad categories are obviously familiar, but in doing either a strategic planning study or a retrospective economic study, we have to target the selection of metrics to the nature of the program that is being analyzed. At the microeconomic level no one can expect NSF to collect these data because the task is very costly. However, benchmarks of some type would certainly be useful. But we are left with trying to do this pretty much on our own, and that is where I am concluding these remarks. If we are going to do good R&D policy analysis, then we have to be very conscious of the amount and the quality of R&D data at these different levels of aggregation.
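The sea change index has a simple construction, as described above: the percent of respondents planning an increase of more than 5 percent minus the percent planning a decrease, with the 0-5 percent group omitted. A minimal sketch with made-up survey responses:

    # Compute a "sea change" index along the lines described above. The survey
    # responses are invented; a negative value indicates a net planned decline.
    def sea_change_index(planned_changes):
        """planned_changes: planned percent changes in directed basic research."""
        n = len(planned_changes)
        pct_increasing = 100 * sum(1 for c in planned_changes if c > 5) / n
        pct_decreasing = 100 * sum(1 for c in planned_changes if c < 0) / n
        # Respondents planning 0-5 percent growth are counted in neither term,
        # since that roughly amounts to standing still after inflation.
        return pct_increasing - pct_decreasing

    responses = [-10, -3, 0, 2, 4, 6, -8, 1, 3, -1]   # hypothetical planned changes (%)
    print(f"Sea change index: {sea_change_index(responses):+.0f}")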

RECENT DEVELOPMENTS IN NSF'S R&D DATA PORTFOLIO

JOHN JANKOWSKI: My task is to describe everything we have been doing to improve our R&D statistics. I can only highlight some of the steps we have taken. In terms of the industrial R&D survey, which is so often the focus of these discussions, we have made some changes to the survey form itself over the last decade. We dropped a few items. For example, we used to collect information on applied research and development by product field and on basic research support by science and engineering field. We no longer collect these statistics from the industrial performers for the very simple reason that the item response rate was so low on those survey questions. However, we have expanded the questionnaire to address policy issues. For example, we have moved to collect more information on the foreign R&D activities of industrial performers and on the amount of R&D that they contract out. The contracting out question serves two purposes. It gives us a better handle on some of the outsourcing activities. It also allows us to look at the difference between what the federal agencies report in terms of their R&D support to industrial performers and what industry reports as R&D support obtained from the federal government. The Department of Defense reports a lot more R&D going to industry performers than industry reports receiving from DOD. Maybe this is a case of contracting by one industrial performer to another, and then you lose track of those dollars if the secondary recipient of the funds views them as coming from another industrial firm rather than from DOD.

Also in the industrial survey we have expanded the number of questions for which reporting is mandatory. This is a unique survey in the sense that most of the questions on the survey are voluntary for respondents. Historically there have been four mandatory questions -- sales, employment, total R&D, and federal R&D. This year we have made the state distribution of the R&D activity a mandatory reporting item. This helps us answer one of the most basic questions about R&D activity, which is where it is done. This year we have received approval to mandate the entire survey form, so for the first time in 50-odd years all of the questions must be answered by industrial firms. We have added some questions about collaborative alliances to the survey form and questions about activities involving new technologies -- biotechnology, software development, nanotechnology, and new materials. We also have been conducting in-house analysis and working with our colleagues at the Census Bureau to achieve better estimation, for example, of the state-level distribution and composition of the R&D, whether it is basic, applied, or development. These are some of the questions we are most frequently asked about industrial R&D activities.

I want to turn now to university R&D. We serve several masters in collecting and reporting these data. In the university and college survey we have added questions about pass-through funding -- when one university receives monies and passes them through to another institution. We did that to address the difference between what federal agencies report giving to performers and what performers report to us having received. It has the added value of giving us a better sense of what kinds of collaboration are occurring in university research. We also have made changes in the academic survey in terms of non-science and engineering R&D activities. We did that not only because our university respondents were interested in such things but also because other countries include not only the science and engineering R&D but the non-science and engineering R&D in their totals. This will improve

Japanese pharmaceutical companies are relatively small because Japan at that point in time was somewhat weak in pharmaceuticals. Whenever a firm declined to cooperate, I just went to the next largest company. According to my calculations, the sample actually captures a significant share of R&D worldwide in these two industries. There is an article in Research Policy (1999) that summarizes some of these findings in more detail and another one in the Journal of International Business Studies that looks at this taxonomy. The firms in the sample established a total of 155 R&D sites abroad, an average of five per firm. This is a substantial managerial challenge for a firm. How to ensure that the relationship between the laboratory in Grenoble and Xerox's home base works well is a big challenge.

From 1975 let's go forward to 1985 and then 1995. The number of foreign locations has increased. Second, the relative share of home base augmenting facilities is increasing. This is a game largely being played amongst industrialized countries, with the exception of China, where you see some home base exploiting facilities, and India and Southeast Asia, where you also see some home base exploiting activities. Developing countries, at least for these two industries, are not on the map. If you included motor vehicles you would find some home base exploiting activities in South America -- exploiting, not augmenting -- because innovation in motor vehicles is primarily done at the supplier level, and suppliers are located in industrialized countries. This study was quite resource intensive and probably could not be replicated for a much larger sample, because it really required me to visit these firms and talk to at least three executives there, sometimes more than once, to clarify issues and get additional data. But it is interesting that such a study simply didn't exist at that point.

Available data that I examined and sometimes used for other papers existed at the country level and some at the state or county level, and I am not at all surprised that it took an enormous effort to get this for San Diego County. There was of course data at the industry level and data at the firm level, but there was very little data at the establishment level, which is needed to understand spillovers and design policies for these spillovers. Rob Atkinson's Naval Research Center is a great example. If you don't know what an establishment really does then you cannot design effective policies for it. The private data collected by consulting organizations, as Josh Lerner indicated, is often quite good, but it is generally very narrowly focused. A less good example is the Directory of American Research and Technology, which is published every year primarily for vendors who sell to R&D labs. The data on R&D lab size are simply inaccurate. They list the number of employees; it is not very reliable but it gives you some indication.

It is definitely desirable to have more detailed data on R&D -- location, activity level, and focus are very important. But doing this at an internationally comparable level will be an enormous challenge because other countries, including quite a few industrialized countries, have a political problem providing the resources necessary to collect data at the level of detail that we need. I agree that it is difficult to classify R&D data at the sub-company level simply because companies are very differently structured. A group that could provide useful advice on this issue is management scholars who study industrial organization. They might suggest ways of collecting such data consistently, although they often differ on how firms should be organized. Finally, I think providing useful information back to respondents is an important way to encourage cooperation. There is an opportunity to do a much better marketing job for data collection from the private sector.

MR. WINDHAM: The Bureau of Economic Analysis has done a lot of work in this area, and we are pleased to have Ned Howenstine here.

NED HOWENSTINE: I am in the international investment division at the Bureau of Economic Analysis, and unlike the rest of the Bureau, my division actually collects the data that we produce. The rest of the Bureau basically is a user of data from the other statistical agencies in the federal government, but I am here as a data producer. Our program mentions that there are some international data on R&D available from the current NSF-Census RD-1 survey. I am here to discuss some other international data that my agency collects on multinational company (MNC) R&D activity. These may not be quite as familiar to this audience as the RD-1 data. I also want to talk about a new proposed project that we hope will improve the international R&D data.

BEA collects data for both U.S. MNCs and foreign MNCs in annual and benchmark surveys. We collect U.S. MNC data in our surveys of U.S. direct investment abroad, and in those surveys we collect data on U.S. parent activity in the United States and on foreign affiliates' activity abroad. For foreign direct investment in the United States we collect data on the U.S. activities of foreign MNCs. Our benchmark surveys are carried out every 5 years. They are comprehensive, covering the universe of foreign direct investment in the United States and U.S. direct investment abroad. In the years that we don't do a benchmark survey we do an annual survey, which is a sample survey that collects less data. Both types of surveys are mandatory and conducted on an enterprise basis. Of course our authority does not extend to the foreign operations of the foreign MNCs. Annually we collect total R&D spending. In our benchmarks we have some additional detail -- for whom the R&D is performed, whether it is performed by the parent or affiliate for themselves or for others, and in the case of U.S. parent companies whether it is performed for the U.S. government. We also have some information on R&D performed for the affiliates and parents by others outside the organization, and in the benchmark we get data on R&D employment. For foreign MNCs we have annual total spending on a performance basis and total R&D employment. And in the benchmark we have additional information on for whom the R&D is performed and on R&D performed for others for the affiliate.

Our most recent data are for 2000, admittedly a bit out of date. U.S. parent companies in 2000 spent about $132 billion on R&D, representing about 66 percent of total U.S. R&D spending. That share is high relative to the U.S. MNCs' share of GDP, which is about 21 percent. The 66 percent share, by the way, was 77 percent in 1994. The decline does not appear to be the result of parent companies' shifting R&D abroad. Foreign affiliates spent $20 billion on R&D in 2000. If you add those two numbers together you get about 87 percent of total U.S. R&D, about the same share as in 1994. Turning to foreign MNCs, they spent about $26 billion in the United States in 2000, about 13 percent of the U.S. total. That share has increased slightly, from about 11 percent, as a consequence of the surge of foreign investment in the United States over the past few years. I just need to point out some overlap in the U.S. parent and the U.S. affiliate data. A U.S. parent company can be a U.S. affiliate. If Honda buys General Motors, General Motors becomes a U.S. affiliate of a foreign company. But because General Motors has its own foreign operations or foreign affiliates, it is also a U.S. parent company. In our R&D data, there is about a 14 percent overlap.

There are some drawbacks with regard to the BEA data, including questions about their comparability to the data produced by NSF.

OCR for page 1
Research and Development Data Needs: Proceedings of a Workshop proposal to try to enhance the international R&D data available. Specifically, we would attempt to link our data that we collect in the direct investment surveys to the data from the RD-1. At this point it is simply a feasibility study. It will cover our most recent benchmark survey years which were 1999 for U.S. direct investment abroad and 1997 for foreign direct investment in the United States. The project is being sponsored by the NSF. It was prompted by an NSF request to add some questions to our 2002 survey of inward foreign investment in the United States. We realized that those questions would impose an additional burden on respondents and further that some of the information being sought is available in the RD-1. It occurred to us to try to link our surveys together and avoid additional costs to the companies that report to us. If successful, we hope to update the results annually; and the linkage could be extended to other data sets. This could be done under two legal authorities, a 1990 act that allows BEA to link its data on foreign direct investment in the United States to establishment data collected in the economic censuses and a new 2002 statute on data sharing We are going to attempt the linkage first by doing a computer match of employer identification numbers that are reported to both BEA and Census. If we are not successful on the computer match then we will go to other information such as names and addresses. The first report of the project by our friends at the Census Bureau will discuss how successful we were in actually linking the data. It will talk about the numbers of tables that we think we can produce, and the types of tables that we think we can produce. Of course, here, confidentiality is a major consideration. It will discuss the feasibility of moving the link forward in time and if we think it’s feasible we hope to produce the methodology for doing that. Finally we hope to produce a number of tabulations on the data that we get out of the project. We hope eventually to have a better understanding of the international features of R&D, including the dimension of ownership and the funding. Specifically, for operations in the U.S., for U.S. parent companies and U.S. affiliates, we will have the core data that is available from the RD-1, such as a number of R&D performing companies, total R&D spending, R&D employment, sales of R&D performing companies, total employment of R&D performing companies, and state location of R&D activity. We should be able to tabulate this data by industry and for the U.S. affiliate data we will be able to tabulate it by country of foreign owner. For the overseas activities of U.S. MNCs we will be able to get counts of number of parent companies with R&D performing affiliates and various data for the R&D affiliates themselves --R&D spending, R&D employment, and sales of R&D performing companies and employment of R&D performing companies. We should be able to do tabulations by industry and by country of foreign affiliate. In addition to the core items we should be able to get various types of data that are available in the RD-1 that BEA doesn’t currently collect for MNCs such as biotechnology research. We do have some source of funding data in our surveys but not as detailed as in the RD-1, again by U.S. location, type of expense, and type of organization for non-company R&D performers. None of this data is currently available. 
By doing this matching exercise we expect to improve the quality of our data and of the NSF-Census Bureau data. We should be able to uncover erroneous or missing data, issues of industry classification, and issues affecting reporting. There are issues of definition -- ours follow the NSF definitions fairly closely but not exactly -- and of consolidation: at what level of consolidation do enterprises report on each of the surveys? BEA surveys are conducted on a fiscal year basis. We don't think that has a very
significant impact on the data, because most of our respondent companies' fiscal years end at the end of the calendar year, but by comparing our results to those of the RD-1, which is on a calendar year basis, we will learn more about that question. We sample in our annual surveys, so we have concerns about whether we might be missing R&D data in some industries or countries; we will learn more about the effects of sampling on coverage. We expect to be able to improve both our sample frame and the Census Bureau's sample frame. We expect to find cases where companies that are reporting to us are not reporting to the Census Bureau even though they should be, and vice versa.

PATRICK WINDHAM: For our final speaker we are fortunate to have Robert McGuckin, senior economist at The Conference Board.

ROBERT MCGUCKIN: I first want to commend the NSF-Census and BEA data linking initiative. It is a great idea. In the years I was at the Census Bureau's Center for Economic Studies we did a lot of linking of surveys and concluded that it generally yielded both methodological and analytical benefits. This project seems to me a good one to start with.

I am going to discuss a work in progress, so these are not final conclusions but initial impressions. We anticipate a report in October. We are coming at the international R&D question in two ways. First, we are conducting very detailed company interviews. Second, we are developing R&D purchasing power parities (PPPs) to enable more accurate comparisons across national currencies. We are doing this at the country and industry level. We started out with 1997, but we soon pushed back to 1987, ensnaring us in some serious statistical issues with PPPs calculated under different methodologies over time. We have completed interviews with 25 R&D executives, typically CTOs, and we will probably do 10 more, in four industries -- drugs, telecommunications, computers, and autos. We have also conducted five interviews outside those sectors as benchmarks. For a few companies, for example IBM and Ford, we spoke to both U.S. and European executives.

First, a couple of observations. Research and development are quite different activities, with different drivers, different inputs, production functions, and outcomes, and different uncertainties. Second, we are observing enormous changes in business structures and management technology. Organizations are seeking to improve the productivity of R&D. Basic research is a non-starter for the business firm; they simply aren't doing it anymore. The day of the regulated monopoly is gone. If firms need basic research they do it through an alliance with a university or an individual professor.

PPP adjustments are very important. I'm often amused when I read that Japan is the world's second largest economy; on a PPP-adjusted basis, China moves ahead of Japan. And it is also true in our numbers: PPPs make a large difference.

Applied research generally entails considerable technical risk and uncertainty about application. On the development side, the intended commercial application is generally known. There is a fuzzy line between this kind of research and development -- call it early-stage development or applied research -- and it is hard to draw. But market risk is the large risk in development, not risk about whether you will achieve a useful application.
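
As a purely arithmetic aside on the PPP point above: converting the same local-currency R&D outlay at a market exchange rate and at an R&D-specific PPP can yield quite different dollar figures. The sketch below only illustrates the mechanics; the rates and the spending figure are placeholders, not published statistics.

```python
# Illustration of why the choice of conversion rate matters.
# All numbers below are hypothetical placeholders, not real PPPs or outlays.

def to_dollars(spending_local, rate_local_per_usd):
    """Convert local-currency spending to dollars at a given conversion rate."""
    return spending_local / rate_local_per_usd

rd_spending_yen = 500_000_000_000   # hypothetical R&D outlay in yen
market_rate = 120.0                 # hypothetical market rate, yen per dollar
rd_ppp_rate = 160.0                 # hypothetical R&D PPP, yen per dollar

print(to_dollars(rd_spending_yen, market_rate))  # about 4.17 billion dollars
print(to_dollars(rd_spending_yen, rd_ppp_rate))  # about 3.13 billion dollars
# The same yen outlay looks roughly a quarter smaller once R&D-specific price
# levels are used, which is why the PPP choice matters for cross-country
# comparisons of R&D spending.
```
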
Also, the time horizon once you get to the development stage is generally much shorter -- perhaps three years, but two years on average. It is a very short process. The scale of resources required is also quite different; most development is much more materials intensive. The key organizational change that emerged in almost all of the interviews was the shifting of development to the business unit, to the operations unit. This reflects a focus on a commercial objective and the need to have the marketing people involved, as well as to take account
of the manufacturing process. So what are the implications? We expect these organizational changes to continue and to proceed rapidly. That has to influence how we deal with data collection -- the importance of the business unit, the emergence of matrix management, the involvement of non-R&D executives in decision making, the shift of resources from the research unit downstream, and the growth of outsourcing. The question that is hard to get a handle on is what is happening to productivity. If we think these organizational changes are productivity enhancing, which is my bias from studying other kinds of organization, we may be seeing some real improvements in R&D. And if information technology is driving research as it has been driving other parts of the business enterprise, we are seeing enormous productivity benefits. But the work so far is all in manufacturing firms, so we need to study the service sector.

Every business said that cost does not matter very much as a factor in location. But in the United States relative costs have been declining, and we are seeing R&D shifting here. It has been shifting away from Japan, Germany, and France, where costs are higher, into Britain and into China and India, although in the latter two it is more development than research. So cost may be an important factor, and that is one of the things we are trying to determine. Another factor in the expansion of U.S. R&D is that business has been expanding and, with it, the overall intensity of R&D. With research there is a clear home-country bias at the level of the centralized research laboratory. Location also has to do with proximity to universities and non-corporate centers of research capability; a lot of companies are coming into the United States to access computer technology, especially in Silicon Valley. We found that U.S. firms' acquisition of foreign research laboratories is generally associated with mergers: companies pick up a laboratory in connection with a big merger and decide to integrate it.

The location of development activities has a lot to do with local regulations, proximity to marketing expertise and to the suppliers for that marketing operation, and manufacturing capacity. There are country-specific standards in telecommunications, local regulation of drug marketing, and safety regulations in every country. Often there is a preference for local models of computers and automobiles. These factors tend to run in the same direction as cost differentials, which are larger if you use R&D PPPs rather than the conventional GDP PPPs. Whether they are big enough to make a difference in policy remains to be seen. This is a brief overview of what we are being told by executives about R&D performance.

ANDREW WYCKOFF: First, a comment. The problem of timeliness that you face in the United States is not much different than in some of the countries we deal with at the OECD. There may be some tricks you can pick up from the Swedes and the Finns. One advantage they have is that their economies are dominated by a few large firms; they pay attention to those firms' annual reports and get a fair amount of information out of them. I also wanted to ask a question. When we look at multinational enterprises and their global reach, one of the striking findings of BEA is the amount of intra-firm trade.
I am wondering if you have a handle on the amount of intra-firm technology flow, which could theoretically be picked up in royalties or license fees going back and forth.

NED HOWENSTINE: We do, of course, have data that show up in the balance of payments on international transactions in royalties and fees. I think the problem is that charges for technology transfers are probably not always explicit and may be embodied in products and
in transfers of science and engineering personnel abroad to help with the R&D -- transfers that are not measurable in the sort of aggregate statistics that we collect.

ANDREW WYCKOFF: I read in a Wall Street Journal article that some of these patent portfolios are actually kept offshore as well, and I wonder what that does to your country location figures.

NED HOWENSTINE: That is a problem. We're concerned about inversions and related R&D activity, and we're actually doing some studies of that ourselves right now. I'm not sure what the effect is. My colleague here, Obie Whichard, who specializes in services and those kinds of transactions, may have some insights.

OBIE WHICHARD: It occurs to me that sometimes companies are required to allocate certain charges for tax purposes, and that may be what is reflected.

PARTICIPANT: There has been a lot of research on the tax question; you might take a look at it because, at least until fairly recently, firms had quite a bit of discretion over whether they repatriated the returns from their R&D. That means annual figures on transfers back to the United States are not very informative, because you can affect your taxable revenue streams simply by where you move them.

WALTER KUEMMERLE: I don't think that firms generally make strategic decisions about this for tax purposes. This is managed.

CHARLES DUKE: There are many different arrangements when you purchase R&D or when you have corporate organizations that have R&D, and the notion that a firm does one thing is a flawed notion. These arrangements are very specific to the value chains of the product lines involved, even within a firm.

NED HOWENSTINE: The RD-1 has a question about total foreign spending for R&D without distinguishing whether it is R&D by an affiliated company or not. By combining that RD-1 data with our data on affiliated relationships, we might be able to learn a bit more about MNC activities with regard to arm's-length versus affiliated R&D spending.

BILL LONG: If state governments are promoting technology in order to have economic growth, why in 2003 are they slashing technology supports along with other programs?

WALTER KUEMMERLE: The budget crisis will have various effects. New Jersey just eliminated or dramatically downsized its efforts, but other states, such as Michigan, are talking about increasing theirs even in the midst of budget deficits. I think we will see a net reduction in state R&D investments, but the real question is just how long the state fiscal problem is going to last. Regardless, some states believe they have to keep up their investments for competitive reasons.

PARTICIPANT: I have heard that state spending on research, particularly industry-related research, is cyclical: when times are good spending is up, and when times are bad spending is down, which is exactly what you don't want to have happen. In contrast, federal spending of the same sort, limited though it is, tends either to be constant or actually a little bit counter-cyclical. It seems to me that timely data on states' own spending is genuinely important.

WALTER KUEMMERLE: NSF doesn't collect it because of funding limitations. The closest we have is a six- or seven-year-old study by the State Science and Technology Institute (SSTI). I guess it depends on what state you're in and how bad its budget crisis is, but I can point to a number of states that are actually increasing funding right now, not cutting it.
States are not going to go back to where they were. They have more or less embraced this new innovation-driven model, although funding will fluctuate with budget cycles.

JOHN JANKOWSKI: NSF actually funded the SSTI survey, and how often to sponsor such a study is one of the questions on which the Academy committee can give us advice. It should be an activity that we undertake on a more frequent or consistent basis.

PARTICIPANT: This is more of a comment than a question, and it follows from what you have just been discussing. In the absence of really good data at the MSA level, we have regions coming to us on practically a weekly basis asking how to replicate San Diego in their neck of the woods. Governors and mayors and regional authorities all want to do biotechnology in their backyards. It is my contention that you cannot grow biotechnology in everybody's backyard, but absent good data from NSF, regions are really in the dark. If they had better measures they would have less difficulty justifying their strategies.

CONCLUDING OBSERVATIONS

BRONWYN HALL: Before we conclude I want to recognize Al Johnson from Corning and from the Industrial Research Institute.

AL JOHNSON: We have a strong interest in improving the reporting of industrial R&D spending. What sort of action does that mean? What level of disaggregation? I am not entirely sure, but at the next meeting of the IRI Finance Directors' Network we will try to get opinions about this issue out onto the table and generate some options for moving forward in a cooperative way -- providing more useful data for private and government planning while ensuring that companies' proprietary data are not exposed to their competitive disadvantage.

Bob McGuckin mentioned that companies don't do fundamental research. That's not quite true. What I've observed in participating in the IRI is that U.S. companies that work in materials still do a substantial amount of fundamental research. Is the sample biased? I don't know, but I do know from personal experience that a number of companies are doing fundamental R&D.

What kinds of R&D data are now generated in companies? People track the kind of data that you would expect for financial and managerial accounting; this is not a secret. The problem is that, quite legitimately, different organizations organize those data differently; their taxonomies differ. So marching in and declaring that you want data in one particular way, shape, or form might not get you the results you are anticipating -- not because they are trying to dodge you, although they may be, but because they stack it up differently in house. If an organization manages its R&D portfolio using a stage-gate process -- meaning that there is an inception stage, some design and engineering, some prototyping, and so forth, moving through toward production -- it may in fact track spending by project stage, and the disaggregation may get quite difficult. If the Census Bureau asked for further classification, should the level be line of business or some other category? The answer is I don't know, but my subcommittee chairs have agreed to talk about it and to work with you toward a productive result.

BRONWYN HALL: The next speaker is Fred Gault from Statistics Canada, a member of the CNSTAT study panel.

FRED GAULT: Something I have learned today is that the R&D enterprise does not operate in isolation.
What is important is not just measuring R&D expenditure but understanding the sources of funding -- where the money to do R&D comes from in a firm, whether from government, from the firm itself, from other firms, perhaps classified to other industries, or from abroad. Now we heard about the memorandum of
understanding to deal with foreign affiliates and the flow of funds, and I think that is an important step forward. What it doesn't address, but John tells me is in hand, is foreign payments for the performance of R&D in U.S. firms. That is a number on which you cannot lay your hands at the moment. In Canada, one-third of business enterprise R&D comes from abroad, which makes it a very interesting number, because if it disappeared it would have implications for highly qualified human resources in the R&D enterprise. So those linkages are important, and so are the payments that firms make for R&D to other firms, to private nonprofit organizations, even to governments. People rent time on wind tunnels and other interesting objects as part of doing their business. When you trace those linkages you begin to build a picture of the R&D enterprise. The same applies to federal spending on R&D -- grants and contracts, extramural spending, and internal spending. Now, people like our colleagues at RAND can take those numbers and produce state or sub-state results, which I'm sure are both timely and useful. We produce them in Canada, and the first thing that happens when those numbers are released is that people complain about not getting a fair share of federal spending and want to know why. There ensues a debate in the House, which mercifully goes away after a while.

The unit of observation is clearly important. It is a firm-level unit of observation for the RD-1, and that works for small firms, but it raises complications for large firms. We have tried to struggle with this, and it is very difficult. We have a lot of R&D in wholesale trade -- a large amount of it is pharmaceuticals, some is software -- and you could argue that it is misclassified. Well, that is where those firms happen to land when they come out of the business register, and following my rules and guidelines I have to put them there. So there will be, for some time to come, a lot of R&D in wholesale trade; watch it carefully.

Timeliness. Everybody wants the data immediately, because they want to do their thing, whatever their thing is. And that raises an interesting question. There are ways of improving timeliness. In business R&D you could, because of the high concentration of research and development here and everywhere else, survey the top 50 firms with a reduced set of variables and get the results out within a month or so of the reference period. R&D tends to be an annual thing because that is the way people do budgets. But before you contemplate that, I suggest sitting down with the policy makers to ask them exactly what they are going to do with these numbers, which variables are important to them, and how much they are willing to pay for the accelerated service. That will focus the discussion a good deal, and it will also focus the work of the data takers. From long and bitter experience, it is my view that if you don't have the policy makers with their checkbooks sitting at the table, it is not a meaningful dialogue: they have to pay for the numbers, and once they have paid for them they will use them, because they have to justify that expenditure. A meeting like this is amusing -- we sit around and say that we would like more of this and more of that, but we are not paying for it.

Outcomes of R&D are absolutely important. Notice I haven't used the word innovation yet.
And we have talked a lot about the value chain and understanding how R&D fits into it, but not until a recent session did I hear a discussion of size. Let me tell you a story. We do innovation surveying in Canada, and one thing that preoccupies us is the source of ideas for innovation. One item on that list is government laboratories and government programs, but virtually no one reports using government laboratories or government programs as an input to innovation. How depressing for the minister. But re-cut the data and look at large firms: they all use government laboratories and government programs, because they all have
R&D facilities and they all have the capacity to suck every bit of intellectual property out of everything they can find, inside or outside the country. So size matters, and a lot of what we were talking about in our value chain discussions applies to large firms. And there are policy differences with respect to how you deal with large firms and small firms by way of tax credits or tax benefits.

That brings me to the issue of links, which has been a theme of our discussion. Intellectual property commercialization from government labs and from university labs has been the subject of very good surveys. We could make a greater effort to understand how government R&D gets out into small firms, gets licensed to large firms, or moves around in the system. Other topics deserving more attention include alliances, networks, and partnerships. There are outcomes to all of these activities. We do the R&D; the R&D has some sort of impact; it gets commercialized as we move along the value chain; and then things change. Perhaps the labor force in the firm has to be upgraded to deal with the new activity, or downgraded, or replaced -- who knows -- but there will be social and economic outcomes. Profits will change, market share will change. That is something that is difficult to see until you have good data linkage, which brings us back to the MOU.

We mentioned clusters. It is very hard for statistical offices to produce data at the level of a metropolitan center, very difficult indeed. We are experimenting with it, but we have confidentiality problems. We would publish tables for a city with lots of X's in them; they are not very useful when we have suppressed all of the data. But we have tried drawing maps, and we are trying to apply the confidentiality rules to geographical cells: if you need three observations in a cell in order to publish a number in a table, perhaps we can keep expanding the geography until we have three observations from somewhere. This is all very experimental, and it is driving us crazy, but it does allow you to get some interesting sub-province information.

Structure is an issue. Every OECD country is a service economy, and for most of them that has been true since the 1950s. We seem to keep discovering this fact, but it has been with us for a very long time. It is not just a consequence of the development of information and communication technology, but that is a major factor. We have laid out that infrastructure, and now people are doing interesting things with it. They are developing knowledge products which are bought and sold and cannot be dropped on your foot. Have you bought any interesting financial instruments lately? Health diagnostics can be done using the internet. There are learning packages of which you could not have dreamed five years ago. And this raises a question about practices, not just technologies. Large firms are using a set of practices to manage their knowledge about their clients, about their suppliers, and about the transformation process that converts inputs into outputs, and they are doing this at a very high level in a business environment that is changing hourly. But we are not addressing those issues in official statistics. There will be an OECD book on the subject coming out in the next couple of months.
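
The geographic cell-expansion rule described above, suppressing a cell unless it contains at least three observations and widening the geography until it does, can be sketched in a few lines. This is only an illustration: the three-observation threshold comes from the discussion, but the geography hierarchy, field names, and the decision to require every cell at a level to pass are assumptions for the example, not Statistics Canada's actual disclosure rules.

```python
# Illustrative sketch of a cell-suppression rule with geographic expansion.
# Assumed input: observations are dicts with geography codes and a "value",
# e.g. {"city": "X", "region": "Y", "province": "Z", "value": 1.2}.

MIN_CELL_COUNT = 3  # minimum respondents per published cell (from the discussion)

def publishable_cells(observations, hierarchy):
    """Walk up a geographic hierarchy until every cell meets the threshold.

    hierarchy lists geography levels finest first, e.g.
    ["city", "region", "province"]. Returns (level, totals_by_code) for the
    finest publishable level, or (None, None) if even the coarsest level fails.
    """
    for level in hierarchy:
        # Group observation values by their code at this geographic level.
        groups = {}
        for obs in observations:
            groups.setdefault(obs[level], []).append(obs["value"])
        # Publish at this level only if every cell clears the threshold.
        if groups and all(len(vals) >= MIN_CELL_COUNT for vals in groups.values()):
            return level, {code: sum(vals) for code, vals in groups.items()}
    return None, None
```

Real disclosure control also involves dominance rules and complementary suppression, which this sketch deliberately ignores.
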
But what this underscores is not just the importance of linkages and the transforming effect of alliances but also the importance of the social sciences and humanities, of which we have spoken very little today. That is an area that we are all going to have to address.

JOHN ALIC: Let me pose a question to Fred Gault, because he brought up the practices question. My colleagues and I have done some fieldwork in service companies, and it is apparent that they don't think of themselves as doing R&D, yet they use technology in some sense -- not just infrastructural technology but technology in the sense of new and evolving knowledge -- to modify, alter, and redesign their business practices, what the people in their organizations do on a day-to-day basis. They are reshaping how their organizations function, and
S&T is a critical input to that, but not the kind of S&T that we associate with laboratories and scientists and so on. As somebody who deals with the sources of the data, do you see a way to get a handle on that statistically?

FRED GAULT: Thank you for that question. We had an interesting problem when the banking sector claimed an enormous tax benefit for doing research and development, and a lot of that was software. Software R&D is fine; we want to work that out. The issue of organizational change and whether or not it involves R&D takes us to the edge of the Frascati definition. But so long as there is uncertainty in the undertaking -- you don't necessarily know the outcome until you have a development project you can move forward with, and even then you only think you know the outcome and hope it works -- I see no reason why it could not be classified as research and development. As it is, I come from a country where our RD-1 equivalent deals only with the natural sciences and engineering, and the R&D tax benefit goes only to firms engaged in natural sciences and engineering. So all the social science R&D in industry, which is really what you are describing, is not counted, and I think that is a serious gap in our statistical system. Perhaps my distinguished colleagues at NSF could discuss how it could be addressed in this statistical system. I am trying to get out of the hot seat.

CHARLES DUKE: Let me give you an example of how we use traditional R&D for the development of service products. Xerox has a multi-billion dollar business running print shops for service manuals. One of the major innovations now in process is called lean document production, which is based on applying manufacturing technology concepts and modeling to the structure and scheduling of print shops. It doubles and triples their output. That is a classic example of how research and development of this sort -- statistical analysis that first came out of MIT -- is used routinely to refine a set of offerings that generates huge sums of money on a regular basis. At Xerox, over half of our revenue and almost all of our profit is in services. We have the same conceptual phase and development process for our service offerings that we have for our product offerings; we developed our facilities management services by exactly the same process by which we developed a new copier or a new printer. So from our undoubtedly parochial point of view, we apply very much the same discipline-oriented research to the development of new services as to the development of new tangible products, and this will become an increasingly important source of Xerox Corporation's profitability.

BILL LONG: In the manufacturing context there has always been product R&D and process R&D, and we have never had a problem calling both of them R&D. But then we get into services, and it seems as if we have lost our minds with respect to the product/process distinction. Yet the U.S. Patent and Trademark Office issues patents on something called business methods or business processes. If you can get a patent on a business method, doesn't that imply that you can do research and development to create the technology on which you got the patent? There is a lot of controversy over whether there ought to be business method patents, as there is about software patent rights, but they are technologies.
JOHN JANKOWSKI: I don't want to address the question of whether Xerox should or should not be including that in its RD-1 response, but we do give guidance to specifically exclude social science R&D from the industrial research totals. We had a serious concern that companies would start including market research in their R&D totals, and we felt pretty confident that we did not want that reported on this survey. We do collect and include in our statistics social science research when it is performed by the university sector or within the government sector, and on occasion we estimate it for nonprofit organizations.

JOHN ALIC: Most of what we are talking about here is not really social science R&D. There is no more social science R&D in the services than there is in manufacturing. This is industrial engineering and modeling and simulation; the analogy is to factory production systems, just-in-time logistics management, and so on.

CHARLES DUKE: My understanding is that if you want to get this straight, you need to go back and get your value chain straight, because what you are really saying is that activity at the front end of the value chain -- how big is the market, would this product be successful if offered, and so on -- does not fit your scheme, but the response to those market demands is allowable. That is my understanding of what has been said at this meeting.

FRED GAULT: That is certainly consistent with the way in which we capture the data at Statistics Canada.

PARTICIPANT: I want to address the confidentiality issue. In RADIUS we have what we call restricted but unclassified data, and only certain people can get to it. Have you considered introducing this kind of concept in your collections from industry?

PARTICIPANT: I recently retired from the National Center for Health Statistics, a federal statistical agency. What it has been doing, and what the Census Bureau has been doing, is setting up what are called research data centers, which allow qualified researchers access to data that are not released for confidentiality reasons. That model is now considered fairly successful.

BILL LONG: It is my understanding that the IRI CIMS database works a little bit like that. No company that contributes data to it can go in and get access to the raw data, because it would then have access to competitors' proprietary data. But almost any academic researcher can get access, subject to restrictions.

BRONWYN HALL: Our thanks to the planning committee and to all of the speakers and participants for their contributions to the discussion today. The workshop is adjourned.