performance across institutions on these measures.93 These scholars cite a number of contributing variables, including the presence or absence of a medical school, the public or private status of the institution, the structure of incentives for faculty to participate in the system, and technology transfer personnel compensation (e.g., the presence or absence of incentive pay). But there is very little consistency in results across studies apart from the very strong correlations between various output measures and the scale of universities’ research portfolios. For their sample of institutions, for example, Feldman and Bercovitz reported Pearson correlation coefficients between total research expenditures and invention disclosures (0.97), patent grants and applications (0.94), licenses (0.55), and start-ups (0.84). By contrast, office age, a proxy for experience, correlated with invention disclosures at only 0.33, and the presence or absence of a medical or engineering school was not significantly correlated with any performance measure.
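For readers unfamiliar with the statistic, the Pearson coefficients cited above measure the strength of linear association between two variables on a scale from −1 to 1. The sketch below computes one from scratch; the university figures are entirely hypothetical and are not Feldman and Bercovitz's data.

```python
from math import sqrt

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))   # covariance numerator
    sx = sqrt(sum((a - mx) ** 2 for a in x))               # spread of x
    sy = sqrt(sum((b - my) ** 2 for b in y))               # spread of y
    return cov / (sx * sy)

# Hypothetical research expenditures (millions USD) and invention
# disclosure counts for five fictitious universities, chosen so that
# disclosures rise roughly in proportion to expenditures.
expenditures = [120, 340, 85, 510, 230]
disclosures = [45, 130, 30, 190, 88]

r = pearson(expenditures, disclosures)  # close to 1: near-perfect linear scaling
```

A coefficient near 0.97, as reported for disclosures, means scale alone accounts for most of the variation in output, which is why raw volume comparisons across institutions of different sizes are so uninformative.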

Kordal and Guice94 argued persuasively that it is “inappropriate to compare institutions with widely varying sizes” of research portfolios and that “institutions should be compared to their peers.” Grouping institutions into three categories—large, medium, and small—Kordal and Guice found differences in revenue, invention disclosure and patenting rates, licensing, and start-up company activity just as large within each of the three tiers as across them, suggesting that finer-grained analysis could reveal ways to improve technology transfer performance based on the current set of metrics.

It would be most useful to know the extent to which such disparities among universities reflect differences in the organizational structure, staffing, and funding sources of technology transfer offices and their relations with research faculty, centers of entrepreneurial education, and other controllable variables as distinct from structural factors that are hard or impossible to change (e.g., scale and specialization of research portfolios, public versus private status, presence of certain academic units, historical reputations, mission or niche, and geographical proximity to potential investors and industrial partners). But this work for the most part remains to be done.

A more serious and challenging problem with the data regularly reported on university technology transfer activities is that they draw attention to the volume of technology transfer activity and away from its quality and efficiency (e.g., timeliness, extent of marketing outreach, character of relations with faculty


93. Inter alia, D.S. Siegel, M. Wright, and A. Lockett. 2007. The rise of entrepreneurial activity at universities: Organizational and societal implications. Industrial and Corporate Change 16(4):489-504; J.G. Thursby and S. Kemp. 2002. Growth and productive efficiency of university intellectual property licensing. Research Policy 31(1):109-124; R. Kordal and L. Guice, op. cit.; D. Siegel, D. Waldman, J. Silberman, and A. Link. 1999. Assessing the Impact of Organizational Practices on the Performance of University Technology Transfer Offices: Quantitative and Qualitative Evidence. Paper presented to the NBER Conference on Organizational Change and Performance Improvement, Santa Rosa, CA; R. DeVol and A. Bedroussian. 2006. Mind to Market: A Global Analysis of University Biotechnology Transfer and Commercialization. Santa Monica, CA: Milken Institute; S. Belenzon and M. Schneiderman. 2007. Harnessing Success: Determinants of University Technology Licensing Performance. Centre for Economic Performance Discussion Paper No. 779; and M. Feldman and J. Bercovitz, op. cit.


94. Kordal and Guice, op. cit.

The National Academies | 500 Fifth St. N.W. | Washington, D.C. 20001
Copyright © National Academy of Sciences. All rights reserved.