At a time when the U.S. style of competitive market capitalism attracts the world's attention—even its envy—and U.S. computer firms dominate the global marketplace, it is difficult to recall and acknowledge that the federal government has played a major role in launching and giving momentum to the computer revolution, which now takes pride of place among the nation's recent technological achievements. Federal funding not only financed development of most of the nation's early digital computers, but also has continued to enable breakthroughs in areas as wide ranging as computer time-sharing, the Internet, artificial intelligence, and virtual reality as the industry has matured. Federal investment also has supported the building of physical infrastructure needed for leading-edge research and the education of undergraduate and graduate students who now work in industry and at academic research centers.
The computer revolution is not simply a technical change; it is a sociotechnical revolution comparable to an industrial revolution. The British Industrial Revolution of the late 18th century not only brought with it steam and factories, but also ushered in a modern era characterized by the rise of industrial cities, a politically powerful urban middle class, and a new working class. So, too, the sociotechnical aspects of the computer revolution are now becoming clear. Millions of workers are flocking to computing-related industries. Firms producing microprocessors and software are challenging the economic power of firms manufacturing automobiles and producing oil. Detroit is no longer the symbolic center of the U.S. industrial empire; Silicon Valley now conjures up visions of enormous entrepreneurial vigor. Men in boardrooms and gray flannel suits are giving way to the casually dressed young founders of start-up computer and Internet companies. Many of these entrepreneurs had their early hands-on computer experience as graduate students conducting federally funded university research.
As the computer revolution proceeds and private companies increasingly fund innovative activities, the federal government continues to play a major role, especially by funding research. Given the successful history of federal involvement, several questions arise: Are there lessons to be drawn from past successes that can inform future policy making in this area? What future roles might the government play in sustaining the information revolution and helping to initiate other technological developments? This report reviews the history of innovation in computing (and related communications technologies) to elucidate the role the federal government has played by funding computing research and to identify factors that have contributed to the nation's success in this field.1 It draws on a series of case studies that trace the lineage of innovations in particular subdisciplines of computing and on a more general historical review of the industry since World War II. The lessons derived from this examination are intended to guide ongoing efforts to shape federal policy in this field (Box ES.1).
Innovation in computing stems from a complementary relationship among government, industry, and universities. In this complex arrangement, government agencies and private companies fund research that is conducted primarily in university and industry research laboratories and is incorporated into myriad new products, processes, and services. While the contributions of industry to the computing revolution are manifest in the range of new products, processes, and services offered, those of the federal government are harder to discern. Nevertheless, federal funding of major computing initiatives has often contributed substantially to the development and deployment of commercial technologies. Commercial developments, similarly, have contributed to government endeavors.
The federal government has played a critical role in supporting the research that underlies computer-based products and services. From less than $10 million in 1960, federal funding for research in computer science climbed to almost $1 billion in 1995. Federal expenditures on research in electrical engineering (which includes semiconductor and communications technologies—necessary underpinnings for computing) have fluctuated between $800 million and $1 billion since the 1970s. Such funding has constituted a significant fraction of all research funds in the computing field (Figure ES.1). The vast majority of this funding has been awarded to industry and university researchers, where it has supported innovative work in computing and, to a larger extent, communications (see Chapter 3 for detailed information on spending patterns).

BOX ES.1 Why a Historical Approach?

Science and technology policy issues are usually approached in an analytical and quantitative way that projects the future from the present by extrapolating from quantitative data. A historical approach, as used in this report, provides a different perspective. History offers empirical evidence of the success and failure of past policies and allows patterns to be discovered that can inform future decisions. It allows analogies to be drawn between events that occurred decades apart but that may be applicable in the future. Furthermore, historical narrative can accommodate messy complexity more easily than can a tightly structured analytical essay, and it allows reflection on long-term process development and evolution. The case studies in this report present finely nuanced accounts that convey the ambiguities and contradictions common to real-life experiences.

Of course, history is limited in its ability to serve as a guide to the future. History cannot suggest what would have happened if circumstances had changed in the past. For example, history can show the influence of federal funding on historical innovations in computing, but it cannot suggest what directions might have been taken without federal support. In addition, teasing out lessons from history that can inform the future is a difficult task. Past outcomes are often tied to specific circumstances. The success or failure of specific research programs, for example, may be influenced as much by the particular people involved as by the amount of funding available. The case studies presented in this report attempt to overcome some of the limitations of history as a guide by examining events that occurred at various points in time and identifying lessons that many, if not all, of the cases offer. In this way, they can contribute to judgments about basic policies that are effective in different contexts.
Federal research funding plays an important role in supporting university efforts in computing. Federal support has constituted roughly 70 percent of total university research funding in computer science and electrical engineering since 1976. This funding has had several effects. First, it has promoted advances in fields such as computer graphics, artificial intelligence, and computer architecture: algorithms for rendering three-dimensional graphics images, expert systems for assisting in drug design, and time-shared computing systems all derive from federally funded university research. Beyond these direct contributions to the technology base, federal funding for universities has had other benefits as well. It has played a critical role in educating students in the computing field. In computer science departments at universities such as the Massachusetts Institute of Technology (MIT), Stanford University, the University of California at Berkeley (UC-Berkeley), and Carnegie Mellon University, over half of all graduate students receive financial support from the federal government, mostly in the form of research assistantships. In addition, most of the funding used by academic computer science and electrical engineering departments to purchase research equipment comes from federal agencies. By placing computing equipment in engineering schools and universities, the government has made possible hands-on learning experiences for countless young engineers and scientists and has enabled university researchers to continue their work.
The effects of federal support for computing research are difficult to quantify but pervasive. Patent data, although a limited indicator of innovation, provide strong evidence of the links between government-supported research and innovation in computing. More than half of the papers cited in computing patent applications acknowledge government funding (see Chapter 3).2 More specific evidence of the value of federally funded research derives from a close examination of particular innovations. Each of the major areas examined in the five case studies presented in Part II of this report—relational databases, the Internet, theoretical computer science, artificial intelligence, and virtual reality—benefited from federal research funding (Box ES.2). Such funding provided a means for sustaining research in universities and industry and complemented research expenditures by industry.
Lessons from History
Why has federal support been so effective in stimulating innovation in computing? Although much has depended on the unique characteristics of individual research programs and their participants, several common factors have played an important part. Primary among them is that federal support for research has tended to complement, rather than preempt, industry investments in research. Effective federal research has concentrated on work that industry has limited incentive to pursue: long-term, fundamental research; large system-building efforts that require the talents of diverse communities of scientists and engineers; and work that might displace existing, entrenched technologies. Furthermore, successful federal programs have tended to be organized in ways that accommodate the uncertainties in scientific and technological research. Support for computing research has come from a diversity of funding agencies; program managers have formulated projects broadly where possible, modifying them in response to preliminary results; and projects have fostered productive collaboration between universities and industry. The lessons below expand on these factors. The first three lessons address the complementary nature of government- and industry-sponsored research; the final four highlight elements of the organizational structure and management of effective federally funded research programs. Greater elaboration of these lessons is provided in Chapter 5 of this report.
1. Government supports long-range, fundamental research that industry cannot sustain.
Federally funded programs have been successful in supporting long-term research into fundamental aspects of computing, such as computer graphics and artificial intelligence, whose practical benefits often take years to demonstrate. Work on speech recognition, for example, which began in the early 1970s (and in some cases even earlier), took until 1997 to generate a successful product for enabling personal computers to recognize continuous speech. Similarly, fundamental algorithms for shading three-dimensional graphics images, which were developed with defense funding in the 1960s, entered consumer products only in the 1990s, though they were available in higher-performance machines much earlier. These algorithms are now used in a range of products in the health care, entertainment, and defense industries.

BOX ES.2 Case Studies of Innovation in Computing

The case studies contained in Chapters 6 through 10 of this report provide detailed accounts of innovation in particular areas of computing: relational databases, the Internet, theoretical computer science, artificial intelligence, and virtual reality. Representing a range of technologies and time frames, the cases demonstrate significant interaction among industry, universities, and government in developing and commercializing new computing technology. The lessons learned from these cases highlight the variation and similarities in the interactions, as well as key elements of the innovation process. The following brief summary of the case studies includes limited examples of the results of federal investments in research. Readers are directed to the full case studies for a more complete description of federal involvement in these areas.

Relational Databases

Development of relational database technology—now a billion-dollar industry dominated by U.S. companies such as Informix, Sybase, IBM, and Oracle—relied on the complementary efforts of industry and government-sponsored academics. Although originating within IBM, relational database technology was not rapidly commercialized because it competed with IBM's existing database products. The National Science Foundation (NSF) funded the Ingres project at the University of California at Berkeley, which refined and promulgated the technology, thus spreading expertise and rekindling market interest in relational databases. Many of the companies now producing relational databases have on their staffs—or were founded by—participants in Ingres.

The Internet

Development of the Internet grew largely out of government-sponsored research, development, and deployment programs. Building on research conducted by Paul Baran and Donald Davies, the Defense Advanced Research Projects Agency (DARPA, during certain periods called ARPA) funded the development of a packet-switched network, the ARPANET, by industry and academia. It subsequently supported creation of the protocols used for interconnecting networks across the Internet. To further its goals of supporting research and educational infrastructure, NSF funded development of networks for research and educational uses and, in effect, laid the groundwork for today's Internet. The World Wide Web and browser technology currently used to navigate the Internet were devised by Timothy Berners-Lee at CERN and Marc Andreessen, then a student at the NSF-sponsored National Center for Supercomputing Applications at the University of Illinois at Urbana-Champaign.

Theoretical Computer Science

Typically viewed as the province of academia, theoretical computer science has benefited from the efforts of both industry and university researchers. Although some advances—such as number theory and cryptology—have translated directly into practice, many others (such as finite state machines and complexity theory) have entered engineering practice and education more subtly, influencing the way researchers and product developers approach and think about problems. Progress in theory has both informed practice and been driven by practical developments that have challenged or outpaced existing concepts.

Artificial Intelligence

Work in artificial intelligence broadly addresses capabilities for enabling machines (computers) to exhibit characteristics of human intelligence, such as understanding language, learning, and problem solving. Support for research in artificial intelligence (AI) over the past three decades has come largely from government agencies, such as DARPA, NSF, and the Office of Naval Research (ONR). Firms that initiated AI research programs in the 1960s scaled back their programs once they realized that commercial applications lay many years in the future. Continued federal investments allowed a number of advances in areas such as expert systems, speech recognition, and image processing. For example, speech recognition systems, which had been the focus of DARPA funding in the early 1970s, finally entered the marketplace in the mid-1990s. Many other AI technologies have been commercialized and embedded into a range of new products.

Virtual Reality

Research in virtual reality attempts to develop technologies for creating computer-generated environments that are indistinguishable from real ones. Innovation in virtual reality stems from the convergence of advances in numerous interrelated fields, such as computer graphics, psychology, computer networking, robotics, and computer hardware. It has been both pushed by technological advances in these underlying areas and pulled by creative attempts to devise particular applications, such as flight simulators, entertainment, virtual surgery, engineering design, and tools for molecular modeling. Much of the underlying research has been conducted by universities, with federal support from agencies such as DARPA, NSF, and the National Aeronautics and Space Administration, but industry has played an important role in commercializing technologies and identifying key research needs. Interdisciplinary research efforts have been the norm in this field, as exemplified by the collaborative research effort between the computer graphics laboratory at the University of North Carolina at Chapel Hill and Hewlett-Packard.
Industry does fund some long-range work, but the benefits of fundamental research are generally too distant and too uncertain to receive significant industry support. Moreover, the results of such work are generally so broad that it is difficult for any one firm to capture them for its own benefit and also prevent competitors from doing so (see Chapter 2). Not surprisingly, companies that have tended to support the most fundamental research have been those, like AT&T Corporation and IBM Corporation, that are large and have enjoyed a dominant position in their respective markets. As the computing industry has become more competitive, even these firms have begun to link their research more closely with corporate objectives and product development activities. Companies that have become more dominant, such as Microsoft Corporation and Intel Corporation, have increased their support for fundamental research.
2. Government supports large system-building efforts that have advanced technology and created large communities of researchers.
In addition to funding long-term fundamental research, federal programs have been effective in supporting the construction of large systems that have both motivated research and demonstrated the feasibility of new technological approaches. The Defense Advanced Research Projects Agency's (DARPA's) decision to construct a packet-switched network (called the ARPANET) to link computers at its many contractor sites prompted considerable research on networking protocols and the design of packet switches and routers. It also led to the development of structures for managing large networks, such as the domain name system, and development of useful applications, such as e-mail. Moreover, by constructing a successful system, DARPA demonstrated the value of large-scale packet-switched networks, motivating subsequent deployment of other networks, like the National Science Foundation's NSFNET, which formed the basis of the Internet.
Efforts to build large systems demonstrate that, especially in computing, innovation does not flow simply and directly from research, through development, to deployment. Development often precedes research, and research rationalizes, or explains, technology developed earlier through experimentation. Hence attempts to build large systems can identify new problems that need to be solved. Electronic telecommunications systems were in use long before Claude Shannon developed modern communications theory in the late 1940s, and the engineers who developed the first packet switches for routing messages through the ARPANET advanced empirically beyond theory. Building large systems generated questions for research, and the answers, in turn, facilitated more development.
Much of the success of major system-building efforts derives from their ability to bring together large groups of researchers from academia and industry who develop a common vocabulary, share ideas, and create a critical mass of people who subsequently extend the technology. Examples include the ARPANET and the development of the Air Force's Semi-Automatic Ground Environment (SAGE) project in the 1950s. Involving researchers from MIT, IBM, and other research laboratories, the SAGE project sparked innovations ranging from real-time computing to core memories that found widespread acceptance throughout the computer industry. Many of the pioneers in computing learned through hands-on experimentation with SAGE in the 1950s and early 1960s.3 They subsequently staffed the companies and laboratories of the nascent computing and communications revolution. The impact of SAGE was felt over the course of several decades.
3. Federal research funding has expanded on earlier industrial research.
In several cases, federal research funding has been important in advancing a technology to the point of commercialization after it was first explored in an industrial research laboratory. For example, IBM pioneered the concept of relational databases but did not commercialize the technology because of its perceived potential to compete with more-established IBM products. National Science Foundation (NSF)-sponsored research at UC-Berkeley allowed continued exploration of this concept and brought the technology to the point that it could be commercialized by several start-up companies—and more-established database companies (including IBM). This pattern was also evident in the development of reduced instruction set computing (RISC). Though developed at IBM, RISC was not commercialized until DARPA funded additional research at UC-Berkeley and Stanford University as part of its Very Large Scale Integrated Circuit (VLSI) program of the late 1970s and early 1980s. A variety of companies subsequently brought RISC-based products to the marketplace, including IBM, the Hewlett-Packard Company, the newly formed Sun Microsystems, Inc., and another start-up, MIPS Computer Systems. For both relational databases and VLSI, federal funding helped create a community of researchers who validated and improved on the initial work. They rapidly diffused the technology throughout the community, leading to greater competition and more rapid commercialization.
4. Computing research has benefited from diverse sources of government support.
Research in computing has been supported by multiple federal agencies, including the Department of Defense (DOD)—most notably the Defense Advanced Research Projects Agency and the military services—the National Science Foundation, National Aeronautics and Space Administration (NASA), Department of Energy (DOE), and National Institutes of Health (NIH). Each has its own mission and means of supporting research. DARPA has tended to concentrate large research grants in so-called centers of excellence, many of which over time have matured into some of the country's leading academic computer science departments. The Office of Naval Research (ONR) and NSF, in contrast, have supported individual researchers at a more diverse set of institutions. They have awarded numerous peer-reviewed grants to individual researchers, especially in universities. NSF has also been active in supporting educational and research needs more broadly, awarding graduate student fellowships and providing funding for research equipment and infrastructure. Each of these organizations employs a different set of mechanisms to support research, from fundamental research to mission-oriented research and development projects, to procurement of hardware and software.
Such diversity offers many benefits. It not only provides researchers with many potential sources of support, but also helps ensure exploration of a diverse set of research topics and consideration of a range of applications. DARPA, NASA, and NIH have all supported work in expert systems, for example, but because the systems have had different applications—decision aids for pilots, tools for determining the structure of molecules on other planets, and medical diagnostics—each agency has supported different groups of researchers who tried different approaches.
Perhaps more importantly, no single approach to investing in research is by itself a sufficient means of stimulating innovation; each plays a role in the larger system of innovation. Different approaches work in concert, ensuring continued support for research areas as they pass through subsequent stages of development. Organizations such as NSF and ONR often funded seed work in areas that DARPA, with its larger contract awards, later magnified and expanded. DARPA's Project MAC, which gave momentum to time-shared computing in the 1960s, for example, built on earlier NSF-sponsored work on MIT's Compatible Time-Sharing System. Conversely, NSF has provided continued support for projects that DARPA pioneered but was unwilling to sustain after the major research challenges were resolved. For example, NSF funds the Metal Oxide Semiconductor Implementation Service (MOSIS)—a system developed at Xerox PARC and institutionalized by DARPA that provides university researchers with access to fast-turnaround semiconductor manufacturing services. Once established, this program no longer matched DARPA's mission to develop leading-edge technologies, but it did match NSF's mission to support university education and research infrastructure. Similarly, NSF built on DARPA's pioneering research on packet-switched networks to construct the NSFNET, a precursor to today's Internet.
5. Strong program managers and flexible management structures have enhanced the effectiveness of computing research.
Research in computing, as in other fields, is a highly unpredictable endeavor. The results of research are not evident at the start, and their most important contributions often differ from those originally envisioned. Few expected that the Navy's attempt to build a programmable aircraft simulator in the late 1940s would result in the development of the first real-time digital computer (the Whirlwind); nor could DARPA program managers have anticipated that their early experiments on packet switching would evolve into the Internet and later the World Wide Web.
The potential for unanticipated outcomes of research has two implications for federal policy. First, it suggests that measuring the results of federally funded research programs is extremely difficult. Projects that appear to have failed often make significant contributions to later technology development or achieve other objectives not originally envisioned. Furthermore, research creates many intangible products, such as knowledge and educated researchers, whose value is hard to quantify. Second, it implies that federal mechanisms for funding and managing research need to recognize the uncertainties inherent in computing research and to build in sufficient flexibility to accommodate mid-course changes and respond to unanticipated results.
A key element in agencies' ability to maintain flexibility in the past has been their program managers, who have responsibility for initiating, funding, and overseeing research programs. The funding and management styles of program managers at DARPA during the 1960s and 1970s, for example, reflected an ability to marry visions for technological progress with strong technical expertise and an understanding of the uncertainties of the research process. Many of these program managers and office directors were recruited from academic and industry research laboratories for limited tours of duty. They tended to lay down broad guidelines for new research areas and to draw specific project proposals from principal investigators, or researchers, in academic computer centers. This style of funding and management resulted in the government stimulating innovation with a light touch, allowing researchers room to pursue new avenues of inquiry. In turn, it helped attract top-notch program managers to federal agencies. With close ties to the field and its leading researchers, they were trusted by—and trusted in—the research community.4
This funding style resulted in great advances in areas as diverse as computer graphics, artificial intelligence, networking, and computer architectures. Although mechanisms are clearly needed to ensure accountability and oversight in government-sponsored research, history demonstrates the benefits of instilling these values in program managers and providing them adequate support to pursue promising research directions.
6. Collaboration between industry and university researchers has facilitated the commercialization of computing research and maintained its relevance.
Innovation in computing requires the combined talents of university and industry researchers. Bringing them together has helped ensure that industry taps into new academic research and that university researchers understand the challenges facing industry. Such collaboration also helps facilitate the commercialization of technology developed in a university setting. All of the areas described in this report's case studies—relational databases, the Internet, theoretical computer science, artificial intelligence, and virtual reality—involved university and industry participants. Other projects examined, such as SAGE, Project MAC, and very large scale integrated circuits, demonstrate the same phenomenon.
Collaboration between industry and universities can take many forms. Some projects combine researchers from both sectors on the same project team. Other projects involve a transition from academic research laboratories to industry (via either the licensing of key patents or the creation of new start-up companies) once the technology matures sufficiently. As the case studies demonstrate, effective linkages between industry and universities tended to emerge from projects, rather than being thrust upon them. Project teams assembled to build large systems included the range of skills needed for a particular project. University researchers often sought out productive avenues for transferring research results to industry, whether linking with existing companies or starting new ones. Such techniques have often been more effective than explicit attempts to encourage collaboration, many of which have foundered due to the often conflicting time horizons of university and industry researchers.
7. Organizational innovation and adaptation are necessary elements of federal research support.
Over time, new government organizations have formed to support computing research, and organizations have continually evolved in order to better match their structure to the needs of the research and policymaking communities. In response to proposals by Vannevar Bush and others that the country needed an organization to fund basic research, especially in the universities, for example, Congress established the National Science Foundation in 1950. A few years earlier, the Navy had founded the Office of Naval Research to draw on science and engineering resources in the universities. In the early 1950s, during an intense phase of the Cold War, the military services became the preeminent funders of computing and communications. The Soviet Union's launching of Sputnik in 1957 raised fears in Congress and the country that the Soviets had forged ahead of the United States in advanced technology. In response, the U.S. Department of Defense, pressured by the Eisenhower administration, established the Advanced Research Projects Agency (ARPA, now DARPA) to fund technological projects with military implications. In 1962 ARPA created the Information Processing Techniques Office (IPTO), whose initial research agenda gave priority to further development of computers for command-and-control systems.
With the passage of time, new organizations have emerged, and old ones have often been reformed or reinvented to respond to new national imperatives and counter bureaucratic trends. DARPA's IPTO has transformed itself several times to bring greater coherence to its research efforts and to respond to technological developments. NSF in 1967 established the Office of Computing Activities and in 1986 formed the Computer and Information Sciences and Engineering (CISE) Directorate to couple and coordinate support for research, education, and infrastructure in computer science. In the 1980s NSF, which customarily has focused on basic research in universities, also began to encourage joint academic-industrial research centers through its Engineering Research Centers program. With the relative increase in industrial support of research and development in recent years, federal agencies such as NSF have rationalized their funding policies to complement short-term industrial R&D. Federal funding of long-term, high-risk initiatives continues to have a high priority.
As this history suggests, federal funding agencies will need to continue to adjust their strategies and tactics as national needs and imperatives change. The Cold War imperative shaped technological history during much of the last half-century. International competitiveness served as a driver of government funding of computing and communications during the late 1980s and early 1990s. With the end of the Cold War and the globalization of industry, the U.S. computing industries need to maintain their high rates of innovation, and federal structures for managing computing research may need to change to ensure that they are appropriate for this new environment.
As this report demonstrates, the federal government has played a significant role in the development of the computing industry. Although difficult to quantify precisely, the returns from federal investments in computing and communications have been tremendous. Many of the leading concepts being exploited today—from virtual reality to the Internet—derive from research funded by federal agencies. As the industry has grown, the role of the government has evolved, but it has remained essential in supporting long-term research and efforts to build large systems. The computing industry has advanced at an astonishing rate, driven by competition and commercial reward. Research—funded by the government and privately—has made that remarkable progress possible.
Policymakers attempting to develop sound science and technology policies and promote the continued vitality of the computing industry can find useful guidance in history. The explorations of Meriwether Lewis and William Clark suggest an analogy. They drew on numerous stories told by others, including Native Americans and fur traders, who had tentatively explored the lands west of the Mississippi. From these histories they imaginatively created, in broad brush strokes, a picture of the frontier and prepared for the host of contingencies they might encounter. So, too, can the stories contained in the case studies in this report help policymakers address the challenges they face as computing enters the next millennium.