Lessons from History
The federal government has made significant contributions to the research base for computing technology. As detailed in Chapter 3, federal support has accounted for a substantial fraction of the total funding for computing research in the United States and the vast majority of all university research funds in the field. Such funding has supported both the development of new technologies and the training of students. The federal government has also paid for public research infrastructure, providing most of the funds for research equipment in university departments of computer science and electrical engineering, and has sponsored programs to provide access to and infrastructure for high-performance computing and networking. Such contributions did not single-handedly drive subsequent development of the nation's computing industry; rather, they formed part of the larger innovation system that combined the efforts of government, universities, and industry. They nevertheless played an important role in the industry's development.
What have been the results of federal investments? How can future federal programs be designed to enhance their effectiveness? The history described in this report can aid in answering these questions. History demonstrates by select examples the kinds of effects federal research funding has had on the innovation process in computing, and it illustrates some of the principles of sound project management. This chapter synthesizes the major lessons of this report. It attempts to characterize the effects of federal investments in computing research and to discuss the programmatic considerations that appear to have contributed to the success of the field. In doing so, the chapter draws on case studies from other sections of the report as needed for examples. Readers are referred to Chapters 6 through 10 and Chapter 4 for a more complete elaboration of the case studies.
The Benefits of Federal Research Investments
Government research funding has had a profound influence on the development of the computing industry in the United States. Federal research support has provided a proving ground for testing new concepts, designs, and architectures in computing, and it has helped hasten the commercialization of technology developed in industry laboratories. This influence has manifested itself in a variety of ways: (1) in the creation of new products, services, companies, and billion-dollar industries that are based on federally funded research; (2) in the expansion of university research capabilities in computer science and electrical engineering; (3) in the formation of human resources that have driven the computing revolution; and (4) in the ability of federal agencies to better accomplish their public missions.
Quantifying the benefits of federal research support is a difficult, if not impossible, task for several reasons. First, the output of research is often intangible. Most of the benefit takes the form of new knowledge that subsequently may be instantiated in new hardware, software, or systems, but is itself difficult to measure. At other times, the benefits take the form of educated people who bring new ideas or a fresh perspective to an organization. Second, the delays between the time a research program is conducted and the time the products incorporating the research results are sold make measurement even more difficult. Often, the delays run into decades, making it difficult to tell midcourse how effective a particular program has been. Third, the benefits of a particular research program may not become visible until other technological advances are made. For example, advances in computer graphics did not have widespread effect until suitable hardware was more broadly available for producing three-dimensional graphical images. Finally, projects that are perceived as failures often provide valuable lessons that can guide or improve future research. Even if they fail to reach their original objectives, research projects can make lasting contributions to the knowledge base.
Despite these difficulties, several observations can be made that provide a qualified understanding of the influence of federal research programs on industry, government, and universities. They demonstrate the effect federal funding has had on computing and, by extension, on society.
Providing the Technology Base for Growing Industries
Federal research funding has helped build the technology base on which the computing industry has grown. A number of important computer-related products trace their technological roots to federally sponsored research programs. Early mainframe computers received a significant boost from federally funded computing systems of the 1950s, such as the U.S. Air Force's Semi-Automatic Ground Environment (SAGE) project. Although SAGE was a command-and-control system designed to warn of attacks by Soviet bombers, it pioneered developments in real-time digital computing and core memory (among other advances) that rapidly spread throughout the fledgling computer industry. Time-shared minicomputers, which dominated the market in the 1970s and early 1980s, exploited time-sharing research conducted in the 1960s under the Defense Advanced Research Projects Agency's (DARPA's)1 Project MAC and earlier work sponsored by the National Science Foundation (NSF) on the Compatible Time-Sharing System at the Massachusetts Institute of Technology (MIT) (see Chapter 4). The Internet, which came of age in the early 1990s, was derived from DARPA's ARPANET program of the early 1970s, which created a packet-switching system to link research centers across the country, as well as from subsequent programs managed by NSF to expand and improve its NSFNET (see Chapter 7). Federal funding for relational databases helped move that technology out of corporate laboratories to become the basis of a multibillion-dollar U.S. database industry. The graphical user interface, which became commonplace on personal computers in the 1990s, incorporates research conducted at SRI International under a DARPA contract some 30 years earlier (Chapter 4).
The economic impact of federally funded research in computing is evident in the many companies that have successfully commercialized technologies developed under federal contracts. Examples include Sun Microsystems, Inc., Silicon Graphics, Inc., Informix Corporation, Digital Equipment Corporation, and Netscape Communications Corporation. Established companies, such as International Business Machines Corporation (IBM) and American Telephone and Telegraph Corporation (AT&T), also commercialized technologies developed with federal sponsorship, such as core memories and time-sharing operating systems. Clearly, federally sponsored research was only one element in the success of these companies. Private firms had to dedicate tremendous resources to bring these technologies successfully to market, investing in their research and development, establishing manufacturing capacity, and setting up marketing and distribution channels. But new technology created the seed for continued innovation.
Maintaining University Research Capabilities
Federal funding has also maintained university research capabilities in computing. Universities depend largely on federal support for research programs in computer science and electrical engineering, the two academic disciplines most closely aligned with computing and communications. Since 1973, federal agencies have provided roughly 70 percent of all funding for university research in computer science. In electrical engineering, federal funding has declined from its peak of 75 percent of total university research support in the early 1970s, but still represented 65 percent of such funding in 1995.2 Additional support has come in the form of research equipment. Universities need access to state-of-the-art equipment in order to conduct research and train students. Although industry contributes some equipment, funding for university research equipment has come largely from federal sources since the 1960s. Between 1981 and 1995, the federal government provided between 59 and 76 percent of annual research equipment expenditures in computer science and between 64 and 83 percent of annual research equipment expenditures in electrical engineering.3 Such investments have helped ensure that researchers have access to modern computing facilities and have enabled them to further expand the capabilities of computing and communications systems.
Universities play an important role in the innovation process. They tend to concentrate on research with broad applicability across companies and product lines and to share new knowledge openly.4 Because they are not usually subject to commercial pressures, university researchers often have greater ability than their industrial counterparts to explore ideas with uncertain long-term payoffs. Although it would be difficult to determine how much university research contributes directly to industrial innovation, it is telling that each of the case studies and other major examples examined in this report—relational databases, the Internet, theoretical computer science, artificial intelligence, virtual reality, SAGE, computer time-sharing, very large scale integrated circuits, and the personal computer—involved the participation of university researchers. Universities play an especially effective role in disseminating new knowledge by promoting open publication of research results. They have also served as a training ground for students who have taken new ideas with them to existing companies or started their own companies. Diffusion of knowledge about relational databases, for instance, was accelerated by researchers at the University of California at Berkeley who published the source code for their Ingres system and made it available free of charge. Several of the lead researchers in this project established companies to commercialize the technology or brought it back to existing firms where they championed its use (see Chapter 6).
Creating Human Resources
In addition to supporting the creation of new technology, federal funding for research has also helped create the human resources that have driven the computer revolution. Many industry researchers and research managers claim that the most valuable result of university research programs is educated students—by and large, an outcome enabled by federal support of university research. Federal support for university research in computer science grew from $65 million to $350 million between 1976 and 1995, while federal support for university research in electrical engineering grew from $74 million to $177 million (in constant 1995 dollars).5 Much of this funding was used to support graduate students. Especially at the nation's top research universities, a large percentage of graduate students have been supported by federal research contracts. Graduates of these programs, and faculty researchers who received federal funding, have gone on to form a number of companies, including Sun Microsystems, Inc. (which grew out of research conducted by Forest Baskett and Andy Bechtolsheim with sponsorship from DARPA) and Digital Equipment Corporation (founded by Ken Olsen, who participated in the SAGE project). Graduates also staff academic faculties that continue to conduct research and educate future generations of researchers.
Furthermore, the availability of federal research funding has enabled the growth and expansion of computer science and computer engineering departments at U.S. universities, which increased in number from 6 in 1965 to 56 in 1975 and to 148 in 1995 (Andrews, 1997, p. 5). The number of graduate students in computer science also grew dramatically, expanding more than 40-fold from 257 in 1966 to 11,500 in 1995, with the number of Ph.D. degrees awarded in computer science increasing from 19 in 1966 to over 900 in 1995 (NSF, 1997b, Table 46). Even with this growth in Ph.D. production, demand for computing researchers still outstrips the supply in both industry and academia (U.S. Department of Commerce, 1997).
Beyond supporting student education and training, federal funding has also been important in creating networks of researchers in particular fields—developing communities of researchers who could share ideas and build on each other's strengths. Despite its defense orientation, DARPA historically encouraged open dissemination of the results of sponsored research, as did other federal agencies. In addition, DARPA and other federal agencies funded large projects with multiple participants from different organizations. These projects helped create entire communities of researchers who continued to refine, adopt, and diffuse new technology throughout the broader computing research community. Development of the Internet demonstrates the benefits of this approach: by funding groups of researchers in an open environment, DARPA created an entire community of users who had a common understanding of the technology, adopted a common set of standards, and encouraged their use broadly. Early users of the ARPANET created a critical mass of people who helped to disseminate the technology, giving the Internet Protocol an important early lead over competing approaches to packet switching (see Chapter 7).
Scientific societies have also played a significant role in this respect. Groups such as the Association for Computing Machinery (ACM) and the Institute of Electrical and Electronics Engineers (IEEE) and their subgroups have helped create communities of researchers and facilitated communication among them. The development of virtual reality, for example, benefited enormously from the creation of SIGGRAPH, the ACM's special interest group for computer graphics. This organization brought together university and industry researchers, as well as users of computer graphics, from a variety of fields (e.g., arts, entertainment, medicine, and manufacturing). Its annual conferences have become a showcase of new technology and a primary forum for exchanging new ideas.
Accomplishing Federal Missions
In addition to supporting industrial innovation and the economic benefits that it brings, federal support for computing research has enabled government agencies to accomplish their missions. Investments in computing research by the Department of Energy (DOE), the National Aeronautics and Space Administration (NASA), and the National Institutes of Health (NIH), as well as the Department of Defense (DOD), are ultimately based on agency needs. Many of the missions these agencies must fulfill depend on computing technologies. DOD, for example, has maintained a policy of achieving military superiority over potential adversaries not through numerical superiority (i.e., having more soldiers) but through better technology. Computing has become a central part of information gathering, management, and analysis for commanders and soldiers alike (High Performance Computing Modernization Office, 1995).
Similarly, DOE and its predecessors would have been unable to support their mission of designing nuclear weapons without the simulation capabilities of large supercomputers. Such computers have retained their value to DOE as its mission has shifted toward stewardship of the nuclear stockpile in an era of restricted nuclear testing. Its Accelerated Strategic Computing Initiative builds on DOE's earlier success by attempting to
support development of simulation technologies needed to assess nuclear weapons, analyze their performance, predict their safety and reliability, and certify their functionality without testing them.6 In addition, NASA could not have accomplished its space exploration or its Earth observation and monitoring missions without reliable computers for controlling spacecraft and managing data. New computing capabilities, including the World Wide Web, have enabled the National Library of Medicine to expand access to medical information and have provided tools for researchers who are sequencing the human genome.
Characteristics of Effective Federal Support
The success of federal funding in computing research derives both from the kinds of programs and projects it has supported and from the ways it has structured those programs. By funding a mix of fundamental research and system development activities, for example, government was able to promote the long-term health of the field and to demonstrate new technologies. By funding a mix of work in universities and industry, it was able to marry long-term objectives to real-world problems. And, by channeling its funding through a variety of federal agencies, it was able to ensure broad-based coverage of many technological approaches and to address a range of technical problems. This section examines some of the key factors that have led to the success of federal research investments in the past and attempts to provide guidance for structuring future research programs.
Support for Long-range, Fundamental Research
A strength of federal research funding is that it complements, rather than competes with, private research investments. Successful government research programs have supported research that private industry has had little incentive or ability to support because the commercial applications of the research were too distant and too uncertain, or because the research itself was so fundamental that individual firms could not expect to capture the benefits themselves while preventing others from doing so (see Chapter 2). Private industry is generally not able to assume the risks inherent in such projects, nor does it continue funding research in a particular field over extended periods if the payback is unclear. In many such instances, federally sponsored research has laid the groundwork for new technologies that ultimately created not only new products, processes, and services, but also entire industries. Such investments were typically made years—if not decades—before practical applications became feasible; they helped advance knowledge of the field sufficiently so
that firms could begin to make appropriate investments. This pattern has been repeated in numerous cases:
- In artificial intelligence (AI), early funding came mostly from federal sources, primarily DARPA. Although large computing and communications companies, such as IBM and AT&T, established small programs for artificial intelligence research in the 1950s, these efforts were scaled back and redirected toward more practical topics (such as speech recognition) when it became evident that more fundamental research might not produce marketable results for more than a decade. The federal government, too, cut back on some programs that failed to show initial progress (such as machine translation and, for a time, speech recognition), but it continued to make strong investments in AI research to explore both fundamental research questions and applications of AI technology. These investments, combined with industry efforts, enabled sufficient progress for a number of AI-based products to begin entering the marketplace. Based on pioneering efforts such as DENDRAL, an expert (or rule-based reasoning) system for deducing the likely molecular structure of organic compounds, a number of firms began creating rule-based reasoning systems for engineering and medical applications in the mid-1970s and the 1980s. Building on work conducted with industry and federal funding, several companies, including IBM, Dragon Systems, and Lucent Technologies, introduced in the 1990s robust, continuous speech-recognition packages for use with personal computers. A range of other AI technologies began to appear as integral parts of other systems, such as grammar checkers in word processors, decision aids for troubleshooting software, and software agents for finding information on the World Wide Web (see Chapter 9).
- Pioneering work in virtual reality was conducted by Ivan Sutherland, then at Harvard University, with support from several defense agencies. A handful of private firms, such as General Electric, established research programs to build on this work but soon realized that products incorporating such technology lay many years in the future. Subsequent research—funded by agencies such as DARPA and NSF and conducted at universities such as the University of Utah, California Institute of Technology, and the University of North Carolina at Chapel Hill—created a number of advances in hardware and software for rendering two- and three-dimensional computer graphics that have since been used widely in medicine, entertainment, and engineering applications. Federally funded research in these areas succeeded in developing the technology to the point that private companies could both develop products and invest in productive research. In virtual reality, for example, the entertainment industry has built on early university research to create systems for producing computer-animated films. More recently, Microsoft established a large research group in computer graphics to help improve graphics for desktop computers. Its interest is now driven largely by the video game industry and the search for improved user interfaces (see Chapter 10 for a case study of the federal role in virtual reality research).
In both of these cases, industry had limited incentives to invest in research. The time needed for such programs to yield tangible results was often measured in decades, far beyond the planning horizons of many companies. Furthermore, early progress in these fields required fundamental advances that were applicable to a range of potential applications and were difficult for any single company fully to appropriate (or control). Because few mechanisms exist for companies to collectively fund fundamental research of mutual interest,7 federal funding has often been the most appropriate mechanism for supporting research, especially if the research is applicable to government missions.
This is not to say that industry will not support long-term research. Many larger companies conduct fundamental research with broad applicability. IBM and AT&T are the most prominent examples. The ability of such companies to support fundamental research is closely linked to their ability to recoup their investments in these areas (see Chapter 2) and, hence, to their overall profitability and dominance in the marketplace. AT&T, for example, conducted long-term research at Bell Laboratories and for many years had a government-granted monopoly on the telephone industry. Its research expenditures were in effect a tax on consumers because they were paid for by AT&T's regulated rates for telephone service. Since divestiture, Bell Laboratories (now part of Lucent Technologies) has continued to fund long-term research, but a more conscious effort has been made to link that research to corporate needs and to capture the benefits of the research investment (Buderi, 1998). IBM maintained long-term research at its T.J. Watson Research Center and its other laboratories, and, given its market dominance, was able to appropriate many of the results of that research. However, as the computer industry has become more competitive and IBM's market dominance has declined, IBM's research has been reined in somewhat and redirected to specific strategic areas (Markoff, 1996). Long-term fundamental research is still conducted, but it has greater relevance to IBM's interests. In contrast, as Microsoft has grown and its dominance has increased in the software industry, it has begun to fund more long-term research. Although Microsoft researchers have considerable flexibility in choosing research topics, they must demonstrate the relevance of the research to Microsoft's interests (Ziegler, 1997).
Support for Efforts to Build Large Systems
Although support for fundamental research is an important part of the government's research portfolio, many advances in computing have stemmed from projects aimed at building operational systems. Systems developed to meet the government's needs often resulted in pioneering advances that were subsequently incorporated into a range of commercial applications. Such system-building programs not only created new technology and know-how but also established networks of people that helped to rapidly disseminate knowledge broadly throughout the technical community. For example:
- The development of SAGE stemmed from the needs of DOD for improved early warning capabilities against Soviet bomber attacks. It built on Project Whirlwind, an effort funded by the Office of Naval Research (ONR) to develop a general-purpose aircraft simulator, which pioneered real-time digital computing. Despite the fact that SAGE was almost obsolete when it was finished, it provided invaluable learning experiences for the engineers and scientists designing and developing the communications and computing technology. Countless graduate students and postdoctoral engineers and scientists, for instance, had their first hands-on experiences with computers while working on these projects (see Chapter 4).
- The development of packet-switched networks and internetworking (the interconnection of multiple networks) can be traced to federal funding from DARPA and NSF. Packet switching was conceptualized by Paul Baran (then at RAND Corporation) in 1961 and independently by Donald Davies at the National Physical Laboratory in England in 1965. DARPA saw the technology as a means of allowing more efficient use of geographically separated computing resources and funded development of the first packet-switched network. Large telecommunications companies, such as AT&T, did not participate in DARPA's subsequent program to build a packet-switched network, the ARPANET, although they did conduct in-house research on packet-switched networks (AT&T's work on asynchronous transfer mode—or ATM switching—is an example). Instead, DARPA contracted with Bolt, Beranek, and Newman, which had been started by MIT professors in 1948 and performed much of the work on the ARPANET in association with a handful of universities and private companies.8 The ARPANET demonstrated the capabilities of packet switching and became a source for innovations such as e-mail. The protocols that allowed the flows of information packets through interconnected networks (internets) were developed jointly by Vinton Cerf, then at DARPA, and Robert Kahn (see Chapter 7). Continued efforts, sponsored by NSF, to develop CSNET, and later NSFNET, demonstrated the value of internetworked communication systems and led to the eventual commercialization of the Internet.
The value of system-building efforts derives from the close linkages between research in computer engineering (as opposed to computer science) and the development of specific artifacts. Theory and practice are closely linked, and innovation tends to proceed in a highly nonlinear fashion, with attempts to build operational systems stimulating identification of new problems for further research. Development of new products or services can precede the development of the underlying science, pointing out potentially fruitful avenues of inquiry. For example, development of magnetic core memory for computers did not flow directly from advances in materials research (although it certainly drew upon such research), but from the need to develop a memory system with short enough access times and high enough reliability to support the real-time digital computing demanded by Project Whirlwind (see Chapter 4). Similarly, attempts to develop techniques for virtual surgery (see Chapter 10) motivated and accelerated research in areas such as high-resolution graphics, haptic interfaces, force-feedback systems, robotics, and control techniques.
Building on Industrial Research
Even in areas in which industry has a well-defined interest, government-sponsored research has been able to hasten the commercialization of new technology developed in industry laboratories. Some technologies, such as relational databases and reduced instruction set computing (RISC) processors, were invented by industry researchers but were not commercialized immediately because they either competed with existing product lines or were considered too risky for further development. In these cases, government funding has supported an independent community of technical experts who validated these technologies and provided a pool of talent that helped exploit the idea both in the corporation of origin and in competing corporations.
Early work on relational databases, for example, was conducted by Ted Codd at IBM, but IBM saw the technology as a threat to its established line of database products. Codd publicized the results of his work, seeding efforts in relational databases by several university researchers, including Michael Stonebraker and Eugene Wong at the University of California at Berkeley (UC-Berkeley). With subsequent funding from NSF, Stonebraker and Wong were able to develop a relational database system called Ingres (interactive graphics and retrieval system). To commercialize the technology, Stonebraker started Ingres Corporation, which demonstrated the viability of the relational approach and helped disseminate knowledge about it, building a community of researchers who further developed the relational database technology. This work, and efforts by other large database vendors, helped stimulate continued development of IBM's System R, which created the dominant query language for relational databases.
The development of RISC processing followed a similar history. John Cocke at IBM invented a RISC processor for IBM's Stretch computer, but IBM did not use the technology more widely because it might detract from sales of existing products. DARPA funding enabled university researchers to continue working on RISC. David Patterson at UC-Berkeley and John Hennessy at Stanford University developed RISC processor designs that were commercialized by Sun Microsystems, Inc., and MIPS Computer Systems, respectively. Other designs were offered by competing firms, such as Hewlett-Packard Company and IBM (see Chapter 4).
To some extent, this phenomenon is not unexpected. Large industry research groups produce more ideas than they can possibly exploit given time and financial constraints. These ideas sometimes find their way directly into the marketplace through start-up companies; at other times, however, the amount of research needed to demonstrate the feasibility and benefits of a technology is beyond the capabilities of start-up companies and direct commercialization is unlikely. In these cases, federal funding of university research can be an effective mechanism for helping bring new technology to the marketplace.
Not all pathbreaking research requires government assistance. For example, the development of the personal computer—which represented a significant departure from dominant modes of computing at the time—took place mostly in industry, with the Xerox Palo Alto Research Center and Apple Computer playing prominent roles (see Chapter 4). This work demonstrated the viability of personal computers, especially in the business marketplace, and IBM subsequently developed its own personal computer. Nevertheless, federal funding was important in supporting some of the early ideas on human-computer interaction (such as the computer mouse) that contributed to developing the personal computer.
Diverse Sources of Government Support
Between 1945 and 1995, federal support for computing was provided by a range of organizations, including DARPA, NSF, DOE, NASA, and NIH. This diversity of funding sources has had a salutary effect on computing research. Federal funding agencies differ widely in their cultures, goals, resources, and perspectives, and thus in the kinds of research
projects they support. The result has been a federal research establishment that has nurtured diverse approaches to research. DARPA, for example, has tended to award contracts for large programs involving multiple researchers and research organizations. It has concentrated its funding for computer research on a limited number of centers of excellence, such as MIT, Stanford University, Carnegie Mellon University, and UC-Berkeley. Program managers have generally been given significant discretion in selecting and shaping new research initiatives. NSF, in contrast, has primarily supported individual investigators, with considerably smaller awards. Its funding has been purposely spread among researchers at a wide range of institutions, generally universities, and project selection has been based largely on peer review. NSF has also funded projects intended to support the broad educational and research missions of universities. Other agencies, such as NASA, DOE, and NIH, mostly concentrate their resources on research more directly applicable to their missions: space, energy, and health, respectively. As a result, federal funding agencies complement one another rather than compete in funding research, with each supporting work best suited to its particular needs. In the end, no single approach can support a vibrant research base; all are needed to play different roles.
The most obvious benefit of diverse sources of funding is the opportunity for researchers to seek support from multiple potential sponsors of their work. If a particular agency cannot support a worthy research project for any of several reasons—limited resources, poor match with agency objectives, or the judgments of individual program managers—another agency may still sponsor potentially fruitful lines of inquiry. For example, DARPA and ONR declined to fund Michael Stonebraker's work on relational databases because DOD was already supporting other database research (see Chapter 6). NSF, the Air Force Office of Scientific Research, and the Navy Electronic Systems Command, however, viewed the program as fitting well into their research portfolios and subsequently funded Stonebraker's Ingres project. With this funding, Stonebraker was able to demonstrate the merits of the relational approach, which later garnered much industry support and became a dominant way to design databases. The process of revising and resubmitting proposals for consideration by multiple sponsors also provides an opportunity for more fully exploring the applications of a technology and the different approaches that can be pursued. It is unlikely that any single agency has the expertise required to understand the varied needs and interests of potential users of new computing and communications technology in government and in industry.
Diverse modes of support for research (i.e., research funding vs. procurement contracts) have also been valuable in ensuring a balance between open-ended research and research directed toward specific sponsors' needs. In the late 1940s, the Office of Naval Research began to doubt the relevance of the Whirlwind computer to its mission of supporting computing for scientists and mathematicians. As the project evolved from a programmable flight simulator to a real-time digital computer, ONR was not convinced it could continue to support the work. At about the same time, the Air Force decided that Whirlwind was appropriate for its SAGE command-and-control project and maintained support for it. Subsequently, the SAGE project pioneered many advances in computing, from real-time computing to core memories to computer graphics. In the end, it also trained a generation of hardware and software engineers (Redmond and Smith, 1980).
Diversity in funding for research also widens the range of applications for new technology and the technological approaches taken. As Chapter 9 demonstrates, the majority of federal support for artificial intelligence research, for example, came from DARPA, but other agencies such as NSF, NASA, and ONR funded projects to pursue particular applications of interest to them. NASA supported development of the pioneering expert system DENDRAL, to deduce the likely structure of organic compounds from known chemical analyses and spectrometry data. The same is true in virtual reality research (see Chapter 10). DARPA, NSF, and NIH have all sponsored relevant research, but each with specific mission interests to motivate their investments: DARPA in helmet-mounted displays and applications for training and simulation, NSF in scientific visualization, and NIH in molecular design and manipulation of biomedical images. Such diversity of funding is important in the early stages of technological development, when the uncertainty associated with any particular approach is high. Furthermore, some technologies become reliable or viable only if used in multiple applications, and funding agencies with different needs can help foster the pursuit of diverse, complementary approaches to a problem.
In addition, support from different agencies can be effective at different points in the innovation process. For example, work pioneered by one agency can lead to follow-up work supported by other agencies that allows the technology to mature. In some cases, small-scale efforts funded by NSF, ONR, or other agencies planted the seeds of larger DARPA-sponsored programs, as occurred in the development of computer time-sharing. NSF funded early work at MIT on its first time-sharing system, CTSS (Compatible Time-Sharing System), which by 1964 had connected 24 terminals across the MIT campus. The success of CTSS demonstrated the viability of time-sharing and created a nexus of researchers with expertise in developing and using time-shared systems. It also raised additional questions about the ability to scale up such systems to support a
larger number of terminals and to provide adequate security to prevent users from corrupting each other's programs or data. DARPA built on the CTSS effort with Project MAC, a much larger program that received $25 million between 1963 and 1970. Project MAC had ambitious goals for exploring interactive computing, including time-sharing. By its end, the project not only had produced the MULTICS system, which eventually supported 1,000 users, but also had given impetus to the fledgling time-shared computer industry and helped bring computers out of the laboratory. A program of this scope was beyond the capabilities of NSF at the time, yet NSF played an important role in demonstrating, on a smaller scale, the viability of time-sharing.
At other times, DARPA has transferred programs to other agencies once they reached a certain level of maturity. In the case of the Internet, for example, DARPA supported development of hardware and software (e.g., network routers and transmission protocols) for the ARPANET. By 1975, seven years after DARPA awarded the first contract for work on the system, the project had reached a sufficient level of maturity for DARPA to transfer management of the network to the Defense Communications Agency. By the early 1980s, NSF was developing packet-switched networks to link university researchers, first through the CSNET (for computer science researchers) and later through the NSFNET. These networks were seen as a means of supporting the research community by providing a shared medium for exchanging information. In 1989, the ARPANET was absorbed by the NSFNET. Other discipline-specific networks that had been constructed by NASA, DOE, and other agencies were also linked to the NSFNET, and NSF became the government's primary supporter of networking infrastructure. It assumed responsibility for upgrading and expanding the network, which eventually became the backbone of the Internet.
Strong Program Managers and Flexible Management Structures
Scientific and technological research explores the unknown; hence, its outcomes cannot be predicted at the start—even if a clear, practical goal motivates the work. In fact, the outcomes anticipated at the start of a research project can differ from those eventually achieved or that prove to be most important. The Internet is a case in point. DARPA's early interest in packet-switched networks (such as the ARPANET) grew from a desire to use more efficiently the computing capabilities that were distributed among its many contractor sites. By allowing remote access to these disparate computers in a seamless fashion, DARPA program managers hoped to expand the number of researchers who could use them and increase their utilization rates. These results were achieved in the end,
but, as the ARPANET was subsumed into the NSFNET, which later evolved into the Internet, the range of applications for packet-switched networks expanded in a number of unanticipated directions. Few could have predicted the popularity of electronic mail as a means of communicating among computer users; still fewer could have anticipated the emergence of the World Wide Web as a means for sharing information and conducting business. Although visions of expansive computer networks for public and private use existed, they were not part of DARPA's original plan, nor did they receive much attention then within the research community.
Moreover, even research projects that do not achieve their original objectives can produce meaningful results or generate valuable knowledge for guiding future research efforts. By some measures, Project Whirlwind and SAGE were failures (see Chapter 4). Planned as a computer-driven aircraft simulator, Whirlwind cost far more than expected and never produced a simulator; rather, the attempt to develop the simulator resulted in a real-time digital computer eventually used as part of the Air Force's SAGE command-and-control system. By the time SAGE was deployed in the late 1950s and into the 1960s, its mission was largely obsolete, as intercontinental ballistic missiles were seen as a greater threat than Soviet bombers. Yet both projects made tremendous contributions to computing that have paid handsome dividends over time, far exceeding the costs of research and development.
Other projects show meaningful returns only after a long time because their applications are not immediately recognized or other technological advances are needed to make their usefulness evident. Work on the mathematics of one-way functions, for example, was not fully appreciated until it was realized that it provided a basis for public-key cryptography (see Chapter 8). Twenty years passed before work on the mathematics of hidden Markov models was incorporated into general-purpose speech-recognition systems for PCs: only after continued increases in processing power and memory capacity did hidden Markov models become feasible for recognizing continuous speech on PCs.
Such difficulties frustrate attempts to meaningfully measure the performance of research and also highlight the need for ensuring flexibility in the management and oversight of federally funded research programs. Researchers need sufficient intellectual freedom to follow their intuition and to modify research plans based on preliminary results. Constraining research too narrowly can limit their ability and willingness to take risks in choosing new research directions. Building such flexibility into federal structures for managing research requires both skilled program managers—who understand, articulate, and promote the visions of researchers—and an organizational culture that accepts and promotes exploratory efforts.
These two elements complement one another: organizations that promote exploration and allow program managers to exercise their own discretion in selecting new directions for research tend to attract individuals who are effective program managers and who earn the respect of the research community.
DARPA and NSF both have incorporated these principles into their institutional structures. Especially in the 1960s and 1970s, DARPA gave program managers sufficient funds to shape coherent research programs, and program budgets required only two levels of approval: one by the office director and one by the DARPA director. The organization as a whole aimed to generate order-of-magnitude improvements in computing technology by funding a combination of fundamental research and large system-building efforts. It was able to attract visionary leaders, such as J.C.R. Licklider, Ivan Sutherland, Robert Taylor, Lawrence Roberts, Vinton Cerf, and Robert Kahn. Many of these leaders were drawn from the research community for short tours of duty. They brought to DARPA an understanding of current research challenges and a vision of the future. They were attracted to DARPA by the promise of being able to help implement a vision and lead the field. They maintained an interactive relationship with the research community, taking ideas from researchers and turning them into strategic directions, rather than trying to force their own agendas. They managed with a light touch, giving researchers room to pursue open-ended projects.
Clearly, there are limits to the flexibility that researchers and program managers can be allowed. In development-oriented programs, for example, program managers must ensure that specific objectives are met. In exploratory research, program managers must ensure that research funds are used prudently. But such accountability must be balanced against the unpredictability of research. Structures for managing and overseeing federally funded research need to allow program managers to alter programs midcourse in response to preliminary results and need to recognize that research projects can produce valuable results even if they do not achieve their original objectives. Failing to do so risks stifling creativity and innovation. The history of computing demonstrates the benefits of a flexible approach. By giving program managers greater discretion, federal agencies such as DARPA and NSF were able to support the development of the numerous innovations identified in this report.
Collaboration among researchers from academia and industry has often been a successful way of linking practical goals with technical capabilities. Although tensions can arise from the differing time horizons of academic research and industry development cycles, collaboration between researchers and product developers has had salutary effects on computing research, helping to ensure the relevance of academic work and helping industry take advantage of new academic research. Such collaboration allows government program managers to better leverage their resources by attracting industry contributions. Similarly, government funding can act as a "seal of approval" that encourages greater private investment. In this way, government funding, on average, spurs rather than displaces private research investment.
Collaboration between industry and universities builds communities of researchers who pursue a particular field and share a common vocabulary. Rapid advances in computing technology have resulted from the pace at which information has been exchanged between researchers and disseminated throughout the research and product development communities. IBM's ability to commercialize core memories rapidly, for example, was related to its participation in the SAGE project, which pioneered the innovation. Overall, the computing community has an impressive track record of transferring technology and knowledge successfully between the academic and industrial communities. As a number of researchers note, however, fruitful collaborations tend to evolve from research projects as the necessary skills to conduct a research or development program are assembled and as information about a research topic spreads throughout the research community. Researchers themselves often serve as the best means of technology transfer, taking knowledge with them as they move among posts in government, industry, and universities or as they start new companies to commercialize research results. Attempts to deliberately bring together university and industry researchers in collaborative projects can also be successful, but considerable flexibility must be allowed in specifying the nature of the collaboration.
Organizational Innovation and Adaptation
The history of computing is characterized by frequent modification of the structures for federal research support. As discussed in Chapter 4, new organizations have been created, and existing ones have been modified to better adapt to changing technology, political influences, and, most important, changing national needs.
Early work in computing, for example, was driven largely by defense interests. The ENIAC, the nation's first digital electronic computer, was developed with funding from the Army Ballistic Research Laboratory and produced its first operational calculations as part of the effort to develop the hydrogen bomb. Subsequently, DOD became the largest federal supporter of research in computer science and electrical engineering. New organizations were needed to manage these defense-related investments in computing. Immediately after World War II, the individual services established research offices (the Army Research Office, the Office of Naval Research, and the Air Force Office of Scientific Research) to manage their research portfolios. But the desire to prevent another technological surprise like Sputnik, and to insulate defense research from interservice rivalry and near-term operational considerations, prompted the establishment of a separate agency, DARPA. As the importance of computing for defense applications became increasingly apparent, DARPA established the Information Processing Techniques Office to manage computing research. This office has changed names and structure over the past 30 years to better reflect changes in the technology, and it has continued to invest in an ever-changing array of computer-related technologies.
The founding of NSF in 1950 also followed from national imperatives, as policymakers and researchers alike tried to institutionalize and build on the many successes the nation had in mobilizing the research community during World War II (marked by the rapid development and introduction of innovations like the atomic bomb and radar). NSF established an Office of Computing Activities in 1967 to support research, education, and computing facilities. The components of this office were later dispersed among other NSF directorates. Recognizing the emergence of computing as an independent discipline with its own research needs, NSF established the Computer and Information Sciences and Engineering (CISE) Directorate in 1986. CISE, and its predecessors, carried out multiple missions: funding computing research, supporting educational initiatives, and maintaining computing and communications infrastructure for the research community.
Growing concerns over the competitiveness of U.S. industry in the 1980s and early 1990s produced a shift in federal policy for computing and a resultant shift in the organization of federal support for computing research. Greater emphasis was placed on partnerships among government, universities, and industry to facilitate more rapid transfer of technology into the marketplace and to tie research more closely to industrial needs. As a result, NSF established a number of Engineering Research Centers (ERCs) to better link academic research to industrial needs, and the National Institute of Standards and Technology began its Advanced Technology Program, which funded consortia working on precompetitive research projects of mutual interest. Loss of market share in memory chips and semiconductor manufacturing equipment prompted the government to invest $100 million annually for 7 years in SEMATECH, the semiconductor manufacturing technology consortium, which brought together 12 of the nation's largest semiconductor manufacturers to conduct
precompetitive research that would strengthen the U.S. semiconductor industry.
The end of the Cold War and the dominance of U.S. firms in the global market for computing in the 1990s significantly altered the political environment for research funding after 1990. It is likely that computing research will be redirected to new missions, whether improving health, providing government benefits (social security, food stamps, and so forth), or supporting economic growth. DOD and other federal agencies will continue to demand advances in information technology to support their missions, but new organizational structures may also be needed to ensure that the research enterprise is well matched to research needs.
Given the importance of computing to the nation's economy, security, and health, it is important to ensure that the United States maintains its leadership in the field. Doing so will require the concerted efforts of industry, universities, and government. As the lessons above suggest, each sector has an important role to play in the overall innovation process. While the information technology industry as a whole has evolved considerably over the past 50 years, opportunities for significant innovation continue to exist. Expanding and exploiting information infrastructure for a range of social, business, and personal needs will require continued research and development to make computing and communications systems more capable, more useful, and more reliable. The success of such efforts will depend in large part on resolving ongoing debates about the scope and direction of federal support for science and technology.