Hardware and Networks

Much of this progress was built on rapid advances in integrated circuit technology. By the late 1970s, integrated circuit design and manufacturing had become sophisticated enough that an entire computer could be built on a single chip, leading to the development of the microprocessor. Integrated circuits had also become faster, cheaper, and more reliable than the discrete technologies they replaced. Both industrial and federally funded research on the design and manufacture of very large scale integrated (VLSI) silicon chips had contributed to this progress. The 1980s saw astounding advances in the power and speed of these microprocessors; for most of the decade, the number of transistors per dollar, or the computing power per dollar, doubled approximately every 18 months.4

At the same time, digital networking on a truly global scale emerged from the packet-switching network experiments directed and funded by ARPA in the late 1960s and early 1970s. The early experiments were aimed at producing a robust, fault-tolerant network that would have no single point of failure. The resulting ARPANET was the prototype for the development of the larger Internet, simultaneously serving as a testbed and linking network researchers. In 1985 the National Science Foundation (NSF) began to link its supercomputer centers via the Internet, and in 1987 it established NSFNET as a high-speed backbone for the network.

Over time, and often in partnership with the private sector, numerous federal agencies began to support networking, moving what started as a defense technology into the general civilian research community, the higher-education community, and eventually into an expanding portion of the nation’s general population. In both computing and networking, federally funded research in partnership with private efforts has played a significant role in the development of technologies that have successfully migrated into the business sector and provided broad benefits for the whole nation.

Architectural and Programming Issues

The same underlying technology that advanced the microprocessor and made sophisticated networking possible also formed the roots of the High Performance Computing and Communications Initiative. It seemed clear to a few farsighted individuals in the computing community that as VLSI technology developed, it would favor computing structures that relied on replication of smaller computing units, as opposed to monolithic computers that relied primarily on very high speed circuits that were expensive to design, produce, maintain, and operate.

This vision of high-performance computing brought two major technical challenges.

  • Interconnection and memory architecture: how to unite large numbers of slower and cheaper processors or computers into systems capable of delivering truly high performance.

  • Programming: how to program such collections of devices to solve large and complex problems.
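The programming challenge can be glimpsed even in a toy example: a problem must be divided into independent pieces, computed concurrently, and the partial results combined. The sketch below is a purely illustrative modern Python fragment (no such code appears in the original report, and `partial_sum` and `parallel_sum` are hypothetical names); threads stand in here for the separate processors of a real parallel machine.

```python
# Illustrative sketch only: the core idea of parallel processing --
# split a problem into independent pieces, compute partial results
# concurrently, then combine (reduce) them.
from concurrent.futures import ThreadPoolExecutor

def partial_sum(chunk):
    # Each "processor" computes its share of the work independently.
    return sum(chunk)

def parallel_sum(data, workers=4):
    # Divide the data into roughly equal chunks, one per worker.
    size = (len(data) + workers - 1) // workers
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        # Map chunks onto workers, then reduce the partial results.
        return sum(pool.map(partial_sum, chunks))
```

For example, `parallel_sum(list(range(1000)))` returns 499500, the same answer as the serial `sum(range(1000))`; the difficulty that occupied researchers was achieving this kind of decomposition, with correct coordination, for large and irregular problems on thousands of processors.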

Architecture—Parallel computing. Generically, the approach of using multiple processors came to be called parallel computing or parallel processing. While the basic concept of parallel processing is not new, the development in the past 10 years of faster microprocessors and networks has led many in the computer community to believe that

The National Academies | 500 Fifth St. N.W. | Washington, D.C. 20001
Copyright © National Academy of Sciences. All rights reserved.