roughly 4000 journals in the sciences and 1500 journals in the social sciences. A second difference from the CHI data is that ISI counts a paper with authors at different universities as a whole paper for each of those universities, up to 15 times, whereas CHI divides the paper into equal author shares and assigns each university the shares of its own authors.
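To make the two counting conventions concrete, here is a minimal sketch in Python; the data and function names are ours, and neither agency's actual processing code is described in the source.

```python
from collections import defaultdict

def whole_counts(author_affiliations, cap=15):
    # ISI-style counting: every university with at least one author on the
    # paper receives the whole paper, for up to `cap` distinct universities.
    counts = defaultdict(float)
    for univ in sorted(set(author_affiliations))[:cap]:
        counts[univ] += 1.0
    return dict(counts)

def fractional_counts(author_affiliations):
    # CHI-style counting: the paper is divided into equal author shares and
    # each university receives the shares of its own authors.
    counts = defaultdict(float)
    share = 1.0 / len(author_affiliations)
    for univ in author_affiliations:
        counts[univ] += share
    return dict(counts)

# One paper with three authors at university A and one at university B:
affils = ["A", "A", "A", "B"]
print(whole_counts(affils))       # {'A': 1.0, 'B': 1.0}
print(fractional_counts(affils))  # {'A': 0.75, 'B': 0.25}
```

Note that whole counting inflates totals when collaborations span universities (the single paper above counts twice), whereas fractional counting preserves the total at one paper.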

A final difference is that the CHI data follow the CASPAR fields to the letter, whereas the ISI data on papers and citations by university and field arrive in a more disaggregated form than the biology and medicine fields of our regressions. We combined “biology and biochemistry” and “molecular biology and genetics” to form biology. We combined “clinical medicine,” “immunology,” “neuroscience,” and “pharmacology” to form medicine.
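The aggregation just described amounts to a fixed mapping from ISI subfields to the two regression fields. A minimal sketch, assuming the subfield labels quoted above; the surrounding code is illustrative rather than the authors' own:

```python
# Roll ISI subfields up into the two aggregate fields used in the
# regressions; the groupings follow the text above.
FIELD_MAP = {
    "biology and biochemistry":       "biology",
    "molecular biology and genetics": "biology",
    "clinical medicine":              "medicine",
    "immunology":                     "medicine",
    "neuroscience":                   "medicine",
    "pharmacology":                   "medicine",
}

def aggregate(records):
    # `records` is an iterable of (university, isi_subfield, paper_count)
    # tuples; counts are summed over each (university, aggregate field) cell.
    totals = {}
    for univ, subfield, papers in records:
        key = (univ, FIELD_MAP[subfield])
        totals[key] = totals.get(key, 0.0) + papers
    return totals
```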

The later ISI data contain more measures of scientific output in the universities and fields than the CHI data. There are two measures of numbers of papers: the number published in a particular year, and the number published over a 5-year moving window. Added to these are two measures of citations to the papers: cumulative total citations through 1993 to papers published in a particular year, and total citations received over a 5-year moving window by papers published in that same window. Each of these output measures has limitations that stem from the concept and the interval of time involved in the measurement. Numbers of papers do not take into account the importance of papers, whereas total citations do; especially in the larger research programs it is total impact that matters, not the sheer number of papers. Turning to citations, cumulative cites through 1993 suffer from truncation bias when papers from different years are compared. A paper published in 1991 has received by 1993 only a small part of the citations it will ever get, whereas a paper published in 1981 has accumulated most of them. The time series profile of cites will therefore show a general decline in citations, especially in short panels, merely because successive vintages of papers have less and less time in which to draw cites. The second measure available to us, the 5-year moving window of cites to papers published in the same window, is free of this trended truncation bias. However, a truncation bias remains in the cross-section, because better papers are cited over a longer period. Thus, total cites over the 5 years understate the better programs to some extent relative to the weaker programs. This problem could be avoided by using a 10–12-year window on the cites, but then we would be left with a single year's worth of data and would be unable to study trends.
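The two citation measures can be written compactly; the notation below is ours rather than the source's. Let c_{v,t} denote the citations received in year t by the papers a program published in year v:

```latex
% Cumulative citations through 1993 to papers of vintage v:
C^{\mathrm{cum}}_{v} = \sum_{t=v}^{1993} c_{v,t}

% Five-year moving window ending in year T: citations received within the
% window by papers also published within the window:
C^{\mathrm{win}}_{T} = \sum_{v=T-4}^{T} \sum_{t=v}^{T} c_{v,t}
```

The first sum clips recent vintages (a 1991 paper contributes only its 1991–1993 cites), which produces the downward trend described above; the second fixes the citing interval structure across T, removing the trend, but it still clips the long citation tails of the best papers, which is the cross-sectional bias.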

Another point about the data used in the regressions, as opposed to the descriptive statistics, is that they cover an elite sample of top United States universities that perform a great deal of R&D. The number of universities is 54 in biology, 55 in chemistry, 53 in mathematics, 47 in medicine, and 52 in physics. These universities are generally the more successful programs in their fields among all universities, and their expenditures constitute roughly one-half of all academic R&D in each of these areas of research. For the much larger set of universities that we do not include, the data are often missing, or else the fields are not represented in these smaller schools in any substantive way. The majority of high-impact academic research in the United States is thus represented by the schools in our samples. Remarkably, and as if to underscore the skewness of the distribution of academic R&D, the research programs in our sample still display an enormous size range.

We are indebted to JianMao Wang for excellent research assistance and to the Mellon Foundation for financial support. We are also indebted to Lawrence W. Kenny for encouraging us to investigate the role of S&Es as well as real R&D.

1. National Science Board (1993) Science and Engineering Indicators: 1993 (GPO, Washington, DC).

2. Adams, J.D. (1990) J. Political Econ. 98, 673–702.

3. Stephan, P.E. (1996) J. Econ. Lit., in press.

4. Griliches, Z. (1994) Am. Econ. Rev. 84 (1), 1–23.

5. Jorgenson, D.W. & Fraumeni, B.M. (1992) in Output Measurement in the Service Sectors, NBER Studies in Income and Wealth, ed. Griliches, Z. (Univ. Chicago Press, Chicago), Vol. 55, pp. 303–338.

6. Rosenberg, N. & Nelson, R.R. (1993) American Universities and Technical Advance in Industry, CEPR Publication No. 342 (Center for Economic Policy Research, Stanford, CA).

7. Henderson, R., Jaffe, A.B. & Trajtenberg, M. (1995) Universities as a Source of Commercial Technology: A Detailed Analysis of University Patenting 1965–1988, NBER Working Paper 5068 (Natl. Bureau of Econ. Res., Cambridge, MA).

8. Katz, S., Hicks, D., Sharp, M. & Martin, B. (1995) The Changing Shape of British Science (Science Policy Research Unit, Univ. of Sussex, Brighton, England).

9. Levin, R., Klevorick, A., Nelson, R. & Winter, S. (1987) in Brookings Papers on Economic Activity, Special Issue on Microeconomics, eds. Baily, M. & Winston, C. (Brookings Inst., Washington, DC), pp. 783–820.

10. Mansfield, E. (1991) Res. Policy 20, 1–12.

11. Mansfield, E. (1995) Rev. Econ. Stat. 77 (1), 55–65.

12. Griliches, Z. (1958) J. Political Econ. 66 (5), 419–431.

13. Nelson, R.R. (1962) in The Rate and Direction of Inventive Activity: Economic and Social Factors, ed. Nelson, R.R. (Princeton Univ. Press, Princeton), pp. 549–583.

14. Weisbrod, B.A. (1971) J. Political Econ. 79 (3), 527–544.

15. Mushkin, S.J. (1979) Biomedical Research: Costs and Benefits (Ballinger, Cambridge, MA).

16. Griliches, Z. (1964) Am. Econ. Rev. 54 (6), 961–974.

17. Evenson, R.E. & Kislev, Y. (1975) Agricultural Research and Productivity (Yale Univ. Press, New Haven, CT).

18. Huffman, W.E. & Evenson, R.E. (1994) Science for Agriculture (Iowa State Univ. Press, Ames, IA).

19. Griliches, Z. (1979) Bell J. Econ. 10 (1), 92–116.

20. Van Raan, A.F.J. (1988) Handbook of Quantitative Studies of Science and Technology (North-Holland, Amsterdam).

21. Elkana, Y., Lederberg, J., Merton, R.K., Thackray, A. & Zuckerman, H. (1978) Toward a Metric of Science: The Advent of Science Indicators (Wiley, New York).

22. Stigler, G.J. (1979) Hist. Political Econ. 11, 1–20.

23. Cole, J.R. & Cole, S. (1973) Social Stratification in Science (Univ. Chicago Press, Chicago).

24. Price, D.J. de S. (1963) Little Science, Big Science (Columbia Univ. Press, New York).

25. Adams, J.D. (1993) Am. Econ. Rev. Papers Proc. 83 (2), 458–462.

26. Pardey, P.G. (1989) Rev. Econ. Stat. 71 (3), 453–461.

27. Bureau of Economic Analysis, U.S. Department of Commerce (1994) Surv. Curr. Bus. 74 (11), 37–71.

28. Jankowski, J. (1993) Res. Policy 22 (3), 195–205.

29. Griliches, Z. (1987) Science 237, 31–35.

30. ISI (1995) Science Citation Index (ISI, Philadelphia).

31. Quantum Research Corp. (1994) CASPAR, CD-ROM Version 4.4 (Quantum Res., Bethesda, MD).


