contain hundreds of millions of transistors, David Nagel, senior vice president and general manager of AppleSoft, said.

In a very real sense, convergence has been enabled by advances in digital hardware technology, which have in turn been driven by the development of engineering processes that make it possible to etch more and finer lines on crystals of silicon. Better lithography and more lines per millimeter enable the creation of ever greater numbers of the active elements of digital signal processing—transistors—on any given area of silicon. One of the early pioneers of the integrated circuit, Gordon Moore (now chairman of Intel Corporation) coined a heuristic that describes the doubling in transistor density that has occurred almost like clockwork for the past 25 years, a heuristic now known as Moore's Law (Karlgaard, 1994). Having more and smaller transistors on a single chip of silicon allows those transistors to be operated at lower voltages and with lower switching currents. This in turn allows the transistors to be operated faster and at lower electrical power levels, with the net result that the microprocessors that have evolved from the earliest 10-transistor integrated circuits now have computing speeds measured in hundreds of millions of instructions per second. At these speeds, real-time multimedia computing becomes possible; advanced compression algorithms that can reduce the required bandwidth of communication systems can be implemented with circuits at price points compatible with mass markets.
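The compounding described by Moore's Law can be sketched in a few lines. The starting point (the 2,300-transistor Intel 4004 of 1971) and the 18-month doubling period used below are illustrative assumptions, not figures taken from the text:

```python
def transistors(year, base_year=1971, base_count=2300, doubling_months=18):
    """Projected transistor count per chip under Moore's Law.

    Assumes exponential growth: the count doubles once every
    `doubling_months` months starting from `base_count` in `base_year`.
    """
    doublings = (year - base_year) * 12 / doubling_months
    return base_count * 2 ** doublings

for y in (1971, 1980, 1990, 1995):
    print(y, f"{transistors(y):,.0f}")
```

Under these assumptions the projection reaches roughly 150 million transistors per chip by 1995, which is consistent with the "hundreds of millions" figure cited above.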

Although physical laws limit how small lines can be made using current optical lithographic technologies (a limit that would be reached shortly after the end of this century), new technologies using shorter-wavelength etching beams (e.g., x-ray, plasma) that allow significant progress below the limits imposed by visible-light lithography are now in early prototyping stages. However, new tools of all sorts are also needed to manage the incredible complexity of circuits that contain tens or hundreds of millions of switch elements. While design tools are being developed that can automate much of the design, a new problem has arisen for which no immediate solution is apparent: How does one adequately test these circuits? But while the problems of designing and testing massively complex chips representing the very fastest hardware systems may begin to slow progress by the end of the decade, it certainly will be possible by then to put a complete computer—with the computing performance of today's fastest chips—on a single chip. These general-purpose computing systems—complemented by more limited-purpose logic for performing tasks such as high-speed compression, decompression, and even recognition of audio signals—will enable engineers to build digital systems with the overall computing power of today's supercomputers in products priced like commodity consumer electronics. In fact, the term "computing," which implies numerical computation—arithmetic—no longer adequately describes products currently being designed and sold based on high-performance digital technology.
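A back-of-the-envelope calculation shows why the compression hardware mentioned above matters for mass-market delivery. All figures here are illustrative assumptions, not numbers from the text: a 640×480 frame at 24 bits per pixel and 30 frames per second, and a roughly 50:1 reduction typical of MPEG-style video coding:

```python
# Illustrative raw vs. compressed video bandwidth (assumed parameters).
width, height = 640, 480        # pixels per frame
bits_per_pixel = 24             # full-color sampling
frames_per_second = 30

raw_bps = width * height * bits_per_pixel * frames_per_second
compressed_bps = raw_bps / 50   # assumed 50:1 compression ratio

print(f"raw:        {raw_bps / 1e6:.1f} Mbit/s")
print(f"compressed: {compressed_bps / 1e6:.1f} Mbit/s")
```

Uncompressed, the assumed video stream needs about 221 Mbit/s; at 50:1 it fits in roughly 4.4 Mbit/s, a rate plausible for consumer-grade delivery channels, which is why implementing such algorithms in commodity-priced silicon is decisive.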



The National Academies | 500 Fifth St. N.W. | Washington, D.C. 20001
Copyright © National Academy of Sciences. All rights reserved.