FIG. 1. Research input and output indicators I. All United States academic institutions (1980–93, log scale) (1). R&D is given in 1987 dollars. Paper numbers are based on more than 3500 journals, interpolated for even years.

focused on the measurement of the contribution of individual scientists or departments within specific fields [see Stigler (22) and Stephan (3) in economics and Cole and Cole (23) in science more generally]. Very few have ventured to use bibliometrics as a measure of output for a field as a whole. [Price (24) and Adams (25) at the world level and Pardey (26) for agricultural research are some of the exceptions.] The latter approach is bedeviled by changing patterns of scientific production, shifting field boundaries, and the substantive problems of interpretation posed by the growing size of the scientific literature, some of which we discuss below.

The Aggregate Story

Returning to the aggregate story depicted in Fig. 1, we note that the number of scientific papers originating in United States universities given in (S&EI) grew significantly more slowly during 1981–1991 than the associated R&D numbers. But reading the footnote in (S&EI) raises a clear warning signal: the paper numbers given in this source are for a constant set of journals! If science expands but the number of journals is kept constant, the total number of papers cannot really change much (unless papers get shorter). United States academic papers could also expand in numbers if they “crowded out” other paper sources, such as industry and foreign research establishments. But in fact the quality and quantity of foreign science were rising over time, leading to another source of downward pressure on the visible tip of the science output iceberg, the number of published papers. If this is true, then the average published paper has gotten better, or at least more expensive, in the sense that the resources required to achieve a certain threshold of results must have been rising in the face of increased competition for scarce journal space. Another response has been to expand the set of relevant journals, a process that has been happening in most fields of science but is not directly reflected in the published numbers. (The published numbers do have the virtue of keeping one dimension of average paper quality constant, by holding the base-period set of journals fixed. This issue of the unknown and changing quality of papers will continue to haunt us throughout this exercise.)

FIG. 2. Publications and citations, growth of components, 1980–1994, all “science” fields; 1980 = 1.0 (30).

We have been fortunate in being able to acquire a new set of data (INST100) assembled by ISI (Institute for Scientific Information), the producer of the Science Citation Index, based on a more or less “complete” and growing number of journals, though the number of indexed journals did not grow as fast as one might think (Fig. 2). The INST100 data set gives the number of papers published by researchers from 110 major United States research universities, by major field of science and by university, for the years 1981–1993. (See Appendix A for a somewhat more detailed description of these and related data.) It also gives total citation numbers to these papers for the period as a whole and for a moving 5-year window (i.e., total citations during 1981–1985 to all papers published during that same period). This is not exactly the measure we would want, especially since citation counts may have been inflated over time by improvements in the technology of citing and by expansion in the number of those doing the citing; but it is the best we have.
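To make the structure of this measure concrete, the following is a minimal sketch, in Python, of how a moving 5-year publication and citation window of the kind just described could be tabulated from paper-level records; the record layout and the numbers in it are hypothetical illustrations, not the actual INST100 data.

```python
from collections import defaultdict

# Hypothetical paper-level records: (university, publication_year, citations_by_year),
# where citations_by_year maps the year a citation was made to how many citations
# the paper received in that year.
papers = [
    ("Univ A", 1982, {1982: 1, 1983: 4, 1984: 2, 1986: 3}),
    ("Univ A", 1984, {1985: 2, 1987: 1}),
    ("Univ B", 1981, {1983: 5, 1990: 2}),
]

def window_counts(papers, start, end):
    """Count papers published in [start, end] and the citations they received in [start, end]."""
    pubs = defaultdict(int)
    cites = defaultdict(int)
    for univ, pub_year, cites_by_year in papers:
        if start <= pub_year <= end:
            pubs[univ] += 1
            cites[univ] += sum(n for year, n in cites_by_year.items() if start <= year <= end)
    return pubs, cites

# Moving 5-year windows: 1981-1985, 1982-1986, and so on.
for start in range(1981, 1990):
    pubs, cites = window_counts(papers, start, start + 4)
    print(start, start + 4, dict(pubs), dict(cites))
```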

There are also a number of other problems with these data. In particular, papers are double counted if their authors are at different universities, and the number of journals is not kept constant, raising questions about the changing quality of citations as measures of paper quality. The first problem we can adjust for at the aggregate and field level (but not at the university level); the second will be discussed further below. Table 2 shows that when we use the new, “expanding journal set” numbers, they grow about 2.2% per year faster in the aggregate. Hence, if one accepts these numbers as relevant, they dispose of about one-half of the puzzle.
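The back-of-the-envelope arithmetic behind “dispose of about one-half of the puzzle” can be sketched as follows; only the 2.2-percentage-point difference is taken from the text, while the R&D and constant-set paper growth rates below are placeholder assumptions chosen purely for illustration.

```python
# Illustrative only: the assumed growth rates are hypothetical placeholders,
# not the paper's data; the 2.2-point bonus is the figure quoted in the text.
rd_growth = 0.060            # assumed real academic R&D growth, per year
papers_constant_set = 0.016  # assumed growth of papers in the constant journal set
expanding_set_bonus = 0.022  # from the text: expanding-set counts grow ~2.2%/yr faster

gap_before = rd_growth - papers_constant_set
gap_after = rd_growth - (papers_constant_set + expanding_set_bonus)

print(f"gap before: {gap_before:.3f} per year")
print(f"gap after:  {gap_after:.3f} per year")
print(f"share of the gap removed: {expanding_set_bonus / gap_before:.0%}")
# With these placeholder rates the switch removes roughly half of the gap,
# which is the sense in which it "disposes of about one-half of the puzzle".
```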

Another major unknown is the price index that should be used in deflating academic R&D expenditures. NSF has used the gross domestic product implicit deflator in the Science and Engineering Indicators and its other publications. Recently, the Bureau of Economic Analysis (BEA) produced a new set of “satellite accounts” for R&D (27), and a new implicit deflator (actually deflators) for academic R&D (separately for private and state and local universities).§ This deflator grew significantly faster than the implicit gross domestic product deflator during 1981–1991, 6.6% per year versus 4.1%. It grew even faster relative to the BEA implicit deflator for R&D performed in industry, which grew at only 3.6% per year during this period. It implies that doing R&D in universities rather than in industry became more expensive at the rate of 3% per year! This is a very large discrepancy, presumably produced by rising fringe benefits and overhead rates, but it is not fully believable, especially since one’s impression is that there has been only modest growth in real compensation per researcher in the academy during the last 2 decades. But that is what the published numbers say! They imply that if we switch to counting papers in the “expanding set” of journals and allow for the rising relative cost of doing R&D in universities, there is no puzzle left. The two series grow roughly in parallel. But
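A similarly hedged sketch of the deflator arithmetic: the three deflator growth rates (6.6%, 4.1%, and 3.6% per year) come from the text above, while the nominal academic R&D growth rate is a hypothetical placeholder used only to show how the choice of deflator shifts measured real input growth.

```python
# Deflator growth rates are taken from the text; nominal growth is a placeholder.
nominal_rd_growth = 0.095      # assumed nominal academic R&D growth, per year
gdp_deflator = 0.041           # implicit GDP deflator growth, from the text
bea_academic_deflator = 0.066  # BEA academic R&D deflator growth, from the text
bea_industry_deflator = 0.036  # BEA industry R&D deflator growth, from the text

# Real growth is (approximately) nominal growth minus deflator growth.
real_with_gdp = nominal_rd_growth - gdp_deflator
real_with_bea = nominal_rd_growth - bea_academic_deflator

print(f"real growth, GDP deflator:       {real_with_gdp:.3f} per year")
print(f"real growth, BEA academic defl.: {real_with_bea:.3f} per year")

# Switching deflators alone lowers measured real input growth by 2.5 points per year,
# and academic R&D becomes ~3% per year more expensive relative to industry R&D.
print(f"deflator switch effect: {bea_academic_deflator - gdp_deflator:.3f} per year")
print(f"university vs. industry relative cost: "
      f"{bea_academic_deflator - bea_industry_deflator:.3f} per year")
```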

§ See also National Institutes of Health Biomedical Research and Development Price Index (1993) (unpublished report) and Jankowski (28).


