
The Positive Sum Strategy: Harnessing Technology for Economic Growth (1986)

Chapter: Basic Research in the Universities: How Much Utility?

Suggested Citation: "Basic Research in the Universities: How Much Utility?" National Research Council. 1986. The Positive Sum Strategy: Harnessing Technology for Economic Growth. Washington, DC: The National Academies Press. doi: 10.17226/612.

Basic Research in the Universities: How Much Utility?

DONALD KENNEDY

The United States has placed on its universities a responsibility for basic research larger than that imposed in any other society. The result is a unique venture which tightly couples research and research training, improving the quality of both, and is heavily dependent on public funding. Now, because of the drop in government support of the capital infrastructure of university research and because of the need to spread technology transfer, the relationship between quality and utility in basic research is being explored anew, and new relationships between universities and industry are being tested. This renewed emphasis on utility is not without promise, but it should not be permitted to drain off the energies of the best scientists or to sap the vigor of the university laboratories in which journeymen and apprentices work side by side at the bench.

In his splendid chapter on innovation and science policy in this volume, Harvey Brooks has said much of what needs saying. His characterization of the venture of American science spans its entire range, from the publicly funded basic research that begins the trajectory of innovation to the risk capital financing of product development at its end. He makes a point worth amplifying: the dramatic growth of public funding for science after World War II placed most of the responsibility for fundamental research on the nation's universities. The extent of that responsibility, in fact, exceeds what can be found in any other industrial democracy. (In the United States, less than 15 percent of government research and development expenditures are made in government-run laboratories; the vast majority of the rest, including about two-thirds of the basic research done in the nation, is spent in the research universities.) Things might well have taken a different course: the government could have formed a consortium with leading industries to develop independent, jointly funded research units, or it could have evolved a set of in-house, government-run research institutes. But it did not.

What is the most significant outcome of that self-denying ordinance? Surely it is the collocation of research and research training. Most of the basic science in America today is done by mixed groups of journeymen and apprentices; the result is that the nation's research trainees are being developed alongside the best scientists. That is the singular feature of our pattern of government support for basic science in the universities; to it, our most thoughtful European colleagues usually attribute our special success.

In 1977 Sune Bergstrom, then president of the Swedish Academy, pondered why Americans had just swept all of the Nobel science awards. He decided that it was because of the "democracy of American science," by which he meant the fellowship of the laboratory bench.

WHY NEW UNIVERSITY-INDUSTRY RELATIONSHIPS ARE DEVELOPING

During the periods of vigorous growth in the 1950s and 1960s, there was an adaptive mixing of objectives in the expenditure of federal funds. The primary objective was the support of research programs, but two important secondary goals were the support of graduate training and the funding of a stable capital infrastructure to underlie the university-based programs. The high-water mark for this consolidated approach was probably reached between 1965 and 1967. After that, the gradual cutting back of the fellowship and training-grant programs began the decline in graduate support, and the end of the Health Research Facilities Act in 1968 signaled the onset of capital wasting. These two events have brought us to a very serious situation.

Of the developments just mentioned, the capital cost disease is surely the more worrisome. Its several ramifications include the following: (1) Graduate students and postdoctoral fellows in many fields of science are working under severe equipment constraints and are emerging from their student days less able than they should be to work at the most creative edge of their disciplines. (2) The vigor of the research effort itself is attenuated, as scientists either make do with what they have or spend more and more time searching for alternative ways to finance and equip their laboratories.

There are collateral problems as well. As deficiencies in the infrastructure for university research worsen, strains emerge in odd and unexpected places. For example, equipment and buildings once paid for by the government are now paid for by private sources instead; this change accounts for the most significant element in the recent rise in the indirect cost rates at major universities. Under the rules by which universities are reimbursed for research costs, depreciation and use charges on such facilities and equipment may be recovered through the indirect cost rate. At universities like Stanford, indirect costs associated with such capital facilities have been by far the fastest-rising component of that rate over the past decade. Because that argument has been set out in greater detail elsewhere,1 I shall not pursue it here.

There are two major reasons for seeking to enhance and improve the linkages between the research university and industry. The first is the need to fill the void created when the government abandoned its support of capital facilities and major equipment in the research universities. Turning to another source of capital assistance when the one failed, many institutions have been developing new relationships with industry. The second reason is the need, now broadly perceived, to spread the process of technology transfer. While we have built a strong fundamental research base by establishing publicly supported basic science in the universities, many observers believe that our record for transferring discoveries from the laboratory bench into human service has been disappointing. It is hoped that new kinds of institutions built at universities with help from industry will improve technology transfer. At Stanford, we have used that argument in persuading 20 corporations to contribute $750,000 each to fund the Center for Integrated Systems, a research facility for the development of large-scale integrated microelectronic circuits. There are a number of other examples of such centers in biotechnology as well as in microelectronics.

These undertakings, engendered by the capital cost dilemma in the research universities as well as by impatience with the rate of technology transfer, are full of promise. But they also resurrect an old debate among those concerned with science policy—a debate concerned with the proper balance between discovery and application, that is, between quality and utility. The rest of this chapter returns to some of those considerations and reexamines them in light of the modern developments in university-industry relations.

THE QUALITY-UTILITY DEBATE

Most of us in the university sector have believed firmly that as long as quality is kept high, as long as principal investigators are decently supported and permitted to follow their own noses, quality will beget discovery, and utility will probably follow. That notion, sometimes called the Columbus theory of research, is actually much older than most people think it is. The eighteenth-century mathematician and physicist d'Alembert says in the introduction to Diderot's Encyclopedia of Science: "Another motive serves to keep us at such work: utility, which, though it may not be the true aim, can at least serve as a pretext. The mere fact that we have occasionally found concrete advantages in certain fragments of knowledge, when they were hitherto unsuspected, authorizes us to regard all investigations begun out of pure curiosity as being potentially useful to us." He understood grantsmanship before there were grants.

Nowhere is the quality-utility issue more clearly encountered than in health research. In that sector, we have seen a rising political consciousness of the cost of curative medical technology and increasing impatience about the long diffusion time between well-advertised fundamental science breakthroughs and the availability of clinical benefits. Other important elements include a new and growing scientific focus on preventive health and the disciplines relevant to its practice, and the recent appearance of strong commercial incentives for the application of new discoveries in molecular genetics.

In 1976 the President's Panel on Biomedical Research, a group of scientists and medical administrators, presented President Ford with a report the Congress had commissioned two years earlier. Among its recommendations, the report strongly urged the continuation of federal funding for basic research in increasing amounts and with greater stability, arguing in a style perhaps best captured by the following example: "The remarkable science base of our nation . . . is an indispensable national resource; this science base provides the only social basis for learning how to prevent and control diseases."2 This part of the report was significant not because it was novel, but because the time was ripe for it to usher in a sharp debate over the strategy and social purposes of medical research. At hearings held in 1976 by the Senate Subcommittee on Health and Scientific Research, a parade of distinguished academicians testified on behalf of the report and its conclusions. But other witnesses with equally sound credentials presented a different view. Kerr White, an epidemiologist then at Johns Hopkins University, argued that the emphasis on the "science base" might be too heavy; he pointed to the need to apply existing knowledge more effectively in the health care system, especially in the interest of preventive health:

Are this country's academic medical centers to be concerned only with the provision of "advanced medical care" for the major diseases that are a small segment of the burden of illness? What about the other eighty percent of the ills that beset mankind? Who is to undertake the research, education and services that the public seem to demand or expect for these problems? On whose list of health problems are the behavioral and biomedical scientists of the country to work? Who draws up the list and on what is it to be based—the perceived needs of the public, the curiosity of the investigators, or a sensible balance between the two?3

The differences of interpretation that surfaced before the subcommittee presented the first serious challenge to a view of the utility of fundamental science that had dominated research policy in this country for three decades. The dichotomy of these views is captured in a brief passage from the hearing in which Senator Kennedy pressed the panel members on how funds should be allocated between basic and clinical research. He said to panel chairman Murphy:

In your page 3, you say: "The primary mission of the NIH as constituted today is fostering and supporting and conducting laboratory and clinical research to the ultimate end of better understanding of disease." The Public Health Service Act seems to describe the ultimate end of the work not to be better understanding of disease, but to be diagnosis, treatment, and control and prevention of disease. The Act, itself, is quite clear in this area.4

That fragment of history set the stage for a new political drama, one that could not have played a decade earlier when public faith in the capacity of science was still almost unrestricted. The failure of the War on Cancer began to erode public confidence in biomedical research, making it, for the first time, susceptible to political challenge.

The testimony also illustrates the different views of the state of science that were held by those having different relationships to it. Those who do science are, in general, convinced that it is damaged and made less effective by external direction. But, however impressive the accomplishments of unguided basic science, one searches in vain for objective support of the view that it "provides the only . . . basis for learning how to prevent and control diseases." In contrast, those who have specific institutional responsibilities to the health care system, especially through political roles, are apt to demand more accountability from research and to be concerned that it be managed to produce specific ends. The difference between these two views is widening and becoming more public.

CONSIDERATIONS IN FORMULATING RESEARCH POLICIES

The issue of the relationship between quality and utility in basic research is a difficult one, chiefly because it involves attempting to define policy for a realm of activity that no one understands. Science has produced enormous gains for this society, but even when we employ so restrictive a definition of scientific progress as to measure only intellectual (and not technological) outcomes, we have great difficulty in discovering what makes it work. For example, does progress depend primarily on the contributions of a few extraordinary individuals, or is it the cumulative result of smaller efforts by a larger number of workers? Even so basic a question is hard to answer. The formal analysis of research productivity seems to show disproportionate contributions by a relatively small number of scientists, and the histories of disciplines always focus on a few giants.5 But retrospective examinations of many modern advances reveal a complex web of precursor influences in which dozens of workers have played essential roles. I do not believe that it is possible at this time to generate a hypothesis about the distribution of significant work that would be of much use in formulating research policy.

Nor do we know how the presence of directive forces affects the research enterprise. Does utilitarian influence have a negative impact on quality? It is widely believed among basic researchers today that it does; but in the last century splendid science flourished under industrial sponsorship. Indeed, we do not even understand much about what motivates scientists to do science. Is it the opportunity to provide some direct benefit to better the human condition? Is it the search for solutions to a major intellectual puzzle that impedes human understanding? With so little knowledge about why scientists do science and about what kind of guidance for research will therefore work best, what principles can be brought to the design of research policies that optimize quality and utility? Obviously I cannot supply a fully formed strategy, but following are some questions that will be important in developing that strategy.

What Growth and Cost Features Must Be Considered?

Science is an extraordinary growth enterprise, and always has been, even when it was on tight rations. Well before the "golden age" of the 1960s, the rate of increase in the U.S. research and development budget was above 10 percent per year in real terms. For at least two centuries before that, the literature of science had been growing exponentially, at a rate of about 5 percent per year.6 Obviously, the commitment of new assets to science cannot indefinitely undergo proportional increases. But there are good reasons for believing that the growth rates we have observed are driven by more than the expansion of resource opportunities. Max Planck observed that "with every advance in science the difficulty of the task is increased"; not only are the easier problems solved first, but new discoveries generate new questions that are inherently more difficult—and more expensive—to answer. For a fixed unit of meaningful output, then, there is a steady increase in cost. This principle has been recognized, implicitly or explicitly, in every modern analysis of the status of the major scientific disciplines. Estimates of the real value of this escalation range from 3.5 to 7.5 percent per year.

Against that background, the "quality structure" of scientific production needs to be considered.7 A relatively small number of scientists produce a disproportionately large share of the work, and an even smaller number dominate the quality statistics. When the entire enterprise is growing, the highest-quality results will increase at an inherently lower rate than the average for science as a whole. Developing a national research strategy that took these forces into account would be a complicated business. It would require cognizance of complex interactions among size, cost, and growth rates; and, because the distribution of quality across participants in the enterprise changes with size, any formula developed for blending quality and quantity would have to change with growth.
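As a rough illustration of that compounding squeeze, the short sketch below combines the rates quoted above: real budget growth of about 10 percent per year against cost escalation of 3.5 to 7.5 percent per year for a fixed unit of meaningful output. The 20-year horizon and the assumption that the rates stay constant are illustrative choices, not figures from the chapter.

```python
# Illustrative sketch only: how escalation in the cost of a unit of
# "meaningful output" erodes real growth in research support.
# The 10 percent budget growth and the 3.5-7.5 percent escalation band
# are the rates quoted in the text; the 20-year horizon is assumed.

def net_output_growth(budget_growth: float, cost_escalation: float) -> float:
    """Annual growth in units of meaningful output when the budget grows at
    budget_growth while the cost of each unit escalates at cost_escalation."""
    return (1 + budget_growth) / (1 + cost_escalation) - 1

BUDGET_GROWTH = 0.10               # real R&D budget growth cited in the chapter
ESCALATION_BAND = (0.035, 0.075)   # per-unit cost escalation estimates cited
YEARS = 20                         # assumed horizon for the compounding example

for escalation in ESCALATION_BAND:
    g = net_output_growth(BUDGET_GROWTH, escalation)
    cumulative = (1 + g) ** YEARS
    print(f"escalation {escalation:.1%}: net output growth {g:.1%}/yr, "
          f"x{cumulative:.1f} output after {YEARS} years")

# With 3.5 percent escalation, output grows only about 6.3 percent a year;
# with 7.5 percent, about 2.3 percent -- much of the nominal budget growth
# is absorbed before it buys any additional results.
```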

How Is Quality To Be Recognized and Measured?

In the end, history with the longest possible view is the most reliable judge of scientific quality. But the policymaker is seldom in a position to take advantage of that perspective. The time of interest is the present and the future, and the past is useful only for its general lessons about how quality is recognized and about how to determine the level of quality of an individual work. The task of evaluating quality is made more difficult by our failure to agree on what criteria should be used in judging it. It is relatively easy, for example, to establish a consensus that a piece of work is elegant, but much harder to decide whether the problem itself or the avenue of approach is important.

One of the authentic successes of modern science policy is the process of peer review, in which—to employ the term literally—scientists examine and evaluate the research proposed by other scientists in their own quality cohort. Ironically, during the early days of "peer review," when it received the most active and enthusiastic support from the scientific community, the process probably did not fit that definition. Members of the early National Institutes of Health study sections and National Science Foundation panels were, for the most part, extraordinarily accomplished scientists, drawn from the very top of the quality spectrum; their judgments may have been respected in significant part because these scientists were viewed not as peers but as the very best. Now that peer review has become, more literally, review by peers, it is, perhaps not accidentally, being subjected to much sharper challenge from within the scientific community. The populist criticism of peer review—that it reinforces tradition even when it is maladaptive to do so and leads to growth in elegance at the expense of both importance and utility—contains elements of truth. Nevertheless, some system of peer review is the only means the scientific enterprise has yet found that permits contemporary judgment of the quality of a particular piece of research—as opposed to the quality of the researcher, which can (at least in principle) be judged historically.

How Is Utility To Be Recognized and Measured?

We need to know much more than we do about how the research process works—in particular, about how different kinds of research interact and about what propositions and relations ought to be established between them. It is not easy, however, to distinguish "basic" research from the rest. Basic research is usually described as "seeking an understanding of the laws of nature without initial regard for utilitarian value" or as being undertaken "with no predetermined use in mind." In these and all other definitions of the term that I know, the intentions of the researcher play a significant role.

It is easy to recognize some important social values in such work. There is a value attached to increasing human understanding and dispelling ignorance. Extraordinary scientific accomplishments, irrespective of application, lift the imagination and provide important points of intellectual contact and consensus for societies that often have too little of both. Because research activity contributes to the intellectual skills of persons who are often doing other things (e.g., teaching) that have social utility of their own, the research may have "overhead" value.

Although all these arguments have been advanced as rationale for the social support of research, never has such an argument played a significant political role in determining this support. Instead, in this society and in all others like it, the allocation of public funds has been based on the prospective social utility of research outcomes. Thus, the accumulated result of research initiated by independent investigators is viewed as a "knowledge bank" against which society may draw for useful applications. It is in these terms that basic research has always had to justify itself by showing, in effect, how quality begets utility. The traditional keystone of the argument for basic research is a version of the aforementioned Columbus theory: we must proceed on all possible fronts, because (to quote Derek Bok's argument for basic research) "it is so difficult to perceive in advance what particular knowledge will prove important to the solution of a particular practical problem."8

The difficulty is that, although the Columbus theory has widespread support, the evidence for it is almost entirely anecdotal and usually concentrates on a very few historic examples. For a long time, it was accorded almost theological respect by the Congress, especially when offered by distinguished scientists; but, as indicated earlier, that attitude has changed. Perhaps in response to the political harbingers of that change, there has been a growing tendency to cite more analytical or quantitative approaches. These are very few in number, but—despite conspicuous inadequacies—they have had a striking influence on the politics of research policy.

The first was a 1969 study of weapons systems done by the Department of Defense in an effort to satisfy the Congress about the value of research and exploratory development. The study, called Project Hindsight, examined the development of 20 weapons systems and concluded that the critical events identified by the Department of Defense participants were primarily the result of work in applied areas having specific systems requirements as objectives.9 The systems were not selected using criteria established in advance, nor was the evaluation of critical events done by persons unconcerned with the outcome of the study. The result nevertheless had an important impact on defense research policy in the late 1960s and early 1970s.

Comroe and Dripps, in an effort to improve the objectivity of such historical analyses, studied innovations in medicine that related to diseases accounting for over half the yearly deaths in the United States.10 Groups of physicians and specialists nominated and then evaluated the top clinical advances in cardiovascular and pulmonary medicine and surgery in the preceding 30 years and selected 10; an independent group of consultants then identified the "bodies of knowledge" essential for their development. Finally, a bibliography of articles contributing to these advances was narrowed to 529 key papers that were then categorized by goal and type of research. About 62 percent of this underlying scientific work was classified as basic research, and in over 40 percent of the work there was no evidence of clinical interest on the part of the investigator at the time the research was done.

The Comroe-Dripps study contains a number of features that should be followed in the design of future evaluations of basic research. The sample of important advances is generated by practitioners, not by the investigators or people concerned with demonstrating a connection to research. An extraordinarily large sample of possible precursor events was examined, again by expert observers disinterested in the outcome. These ought to be the minimal standards for any such design. Further improvements could probably be made, but even without them the Comroe-Dripps design provides a means through which an objective assessment of the contribution of basic research to socially useful application can be judged. It deserves much wider application, but, probably because it is extremely expensive and time-consuming, it has scarcely been applied at all.

Are Commercial Incentives Good Devices for Generating Utility From Quality?

Whatever the status of the "science base" or "knowledge bank," it is clear from studies like the one by Comroe and Dripps that the time delay between laboratory discovery and first practical application is often disturbingly long. Both government agencies and the universities have been urged repeatedly to reduce such applications delays, and much recent legislative attention has been given to incentives of commercialization, including revisions of the tax treatment of industrial contributions to university research.

However laudable these efforts may have been, the emphasis on commercialization incentives is producing some farther-reaching institutional innovations that should be examined carefully for side effects. No more vivid example can be found than in the fevered corporate activity surrounding genetic technology.

To an unexpected degree, the commercial push behind that activity involves the scientists who are themselves responsible for the basic discoveries, and often the academic institutions to which they belong. That has raised problems both for the scientists and for their universities.

Most institutions retain the rights to patents resulting from inventions made by faculty on university-compensated time in university laboratories. A few places give these rights to the faculty member; usually, as at Stanford, incentives are created to encourage the reassignment of these rights to the university through individual patent agreements.

The university may then license them, usually nonexclusively if federal funds also contributed to the support of the research. But neither tradition nor rules at most universities prevent the investigators from joining with others in a venture entirely outside the university—or the university from participating in that endeavor at the urging of the investigators. And, of course, individual scientists are also involved in less formal relationships with the commercial sector via consulting and collegial interaction, which may stimulate the movement of ideas from the laboratory toward application.

In the early phases of this new opportunity, most major research universities adopted institutional arrangements to help them support continuing research activity by retrieving some of the rewards generated by the successful efforts of their faculty in the laboratory. The arguments in favor of this position are strong: the financial return is there and someone is going to get it; the universities have sponsored the research and nurtured the climate in which it took place, so a share should go to them in order to replenish their capacity to do more; and donors and trustees, who characteristically press hard for sound and aggressive financial management, insist that legitimate sources of income for these purposes be tapped.

The spectrum of possible institutional solutions, beginning with the simplest, could be represented as follows:

1. University as licenser, collecting royalties directly.
2. Separate corporation as licenser, developer, and supporter of research; no relation to university except through agreed sharing of royalty income.
3. Separate corporation as licenser, developer, and supporter of research; university faculty or administrators involved in governance.
4. Separate corporation as licenser, developer, and supporter of research; might also engage in final production. University faculty or administrators involved in governance; university has equity position.

Nearly every major research university has a patent office and is active at level 1. A number have proposed or helped form special institutions, like that at level 2, through which research support could be undertaken on a venture basis and royalty income received by the university. At level 3 a measure of university control is added through participation of university faculty members (the researchers) or administrators in the governance of the corporation. The latter's work would stop at the stage of development; there might be feasibility tests of production at the pilot-plant level, but no income related to product sales. At level 4 there is a full-fledged production company with university participation in equity.

Most universities have decided that levels 3 and 4 present problems of equity and conflict of interest that loom unacceptably large. But, particularly at level 2, there has been some interesting institutional innovation. For example, some nonprofit corporations have been created as independent research organizations with profit-making spin-offs, generating royalties that support basic research programs at one or a group of several universities. The governance of such entities can be clearly separated from that of the university or universities that benefit, so that real or perceived conflict of interest can be avoided.

In addition, consortium efforts by companies have increasingly recognized the desirability of supporting more applied research in on-campus locations. That recognition has given rise to such ventures as the Center for Integrated Systems at Stanford. The support of on-campus university research programs by corporations is also increasing, and research-intensive firms from the energy, chemical, and pharmaceutical industries have all established capital and program support for laboratories at research universities.

The combined impact of these new commercial incentives has been considerable. It has increased, though not by a great proportion, the total participation of private resources in fundamental research. It has provided some possible models for overcoming the impediments to rapid diffusion of basic research advances into human use. Thus, although I continue to worry about the variety of individual commercial arrangements being made by university scientists in biotechnology, I believe that most of the institutional responses to the new commercial incentives have been encouraging steps. Potentially, then, the answer to the question that opened this section—Are commercial incentives good devices for generating utility from quality?—is a qualified yes.

CONCLUSION

In concluding, let me return to a point emphasized at the beginning of this chapter. The great strength of American basic science is the tight coupling of research and research training. The main threat posed by overemphasis on utility is to the integrity of that linkage: a set of utilitarian incentives can drain off the energies of the best scientists and sap the vigor of the university laboratories in which journeymen and apprentices work side by side at the bench. The universities should be especially vigilant guardians of the union between research and research training because they are its proprietors. But they are not its ultimate beneficiary; society is.

NOTES

1. Donald Kennedy, Government policies and the cost of doing research, Science 227(1985):480-484.
2. U.S. Department of Health, Education and Welfare, President's Biomedical Research Panel Report. Pub. No. (OS) 76-500 (Washington, D.C.: U.S. Government Printing Office, 1976).
3. Basic Issues in Biomedical and Behavioral Research. Hearings before the Subcommittee on Health and Scientific Research, Committee on Labor and Public Welfare, U.S. Senate, June 16 and 17, 1976. Committee Print, p. 161.

4. Ibid., p. 16.
5. A thorough account of this matter can be found in N. Rescher, Scientific Progress (Pittsburgh: University of Pittsburgh Press, 1978). Much of the original analysis is due to Derek J. de Solla Price.
6. Derek J. de Solla Price, Science Since Babylon (New Haven: Yale University Press, 1961).
7. Rescher, Scientific Progress.
8. Derek Bok, The critical role of basic research, Advancement of Science and Technology (Washington, D.C.: May 1976).
9. Office of the Director of Defense Research and Engineering, Project Hindsight: Final Report (Washington, D.C.: U.S. Department of Defense, 1969).
10. Julius H. Comroe, Jr., and Robert D. Dripps, Scientific approach to a national biomedical science policy, Science 192(1976):105-111.


