Getting Up to Speed: The Future of Supercomputing
identified at least 10 defense applications that rely on high-performance computing (p. 22): comprehensive aerospace vehicle design, signals intelligence, operational weather/ocean forecasting, stealthy ship design, nuclear weapons stockpile stewardship, signal and image processing, the Army’s future combat system, electromagnetic weapons, geospatial intelligence, and threat weapon systems characterization.
Advanced computer research programs have had major payoffs in technologies that enriched the computer and communications industries. The DARPA VLSI program of the 1970s, for example, contributed to timesharing, computer networking, workstations, computer graphics, the windows-and-mouse user interface, very large scale integrated circuit design, reduced instruction set computers, redundant arrays of inexpensive disks, parallel computing, and digital libraries.42 Today’s personal computers, e-mail, networking, and data storage all reflect these advances. Many of the benefits were unanticipated.
Closer to home, one can list many technologies that were initially developed for supercomputers and that, over time, migrated to mainstream architectures. For example, vector processing and multithreading, which were initially developed for supercomputers (ILLIAC IV/STAR-100/TI ASC and CDC 6600, respectively), are now used on PC chips. Instruction pipelining, prefetching, and memory interleaving appeared in early IBM supercomputers and have become universal in today’s microprocessors. In the software area, program analysis techniques such as dependence analysis and instruction scheduling, which were initially developed for supercomputer compilers, are now used in most mainstream compilers. High-performance I/O needs on supercomputers, particularly parallel machines, were one of the motivations for Redundant Array of Inexpensive Disks (RAID)43 storage, now widely used for servers. Scientific visualization was developed in large part to help scientists interpret the results of their supercomputer calculations; today, even spreadsheets can display three-dimensional data plots. Scientific software libraries such as LAPACK that were originally designed for high-performance platforms are now widely used in commercial packages running on a large range of
NRC. 1995. Evolving the High Performance Computing and Communications Initiative to Support the Nation’s Information Infrastructure. Washington, D.C.: National Academy Press, pp. 17-18.
RAID is a disk subsystem consisting of many disks that increases performance and/or provides fault tolerance.
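The fault-tolerance property mentioned in the footnote rests on a simple idea: in parity-based RAID levels (RAID 4 and 5), each stripe stores the bytewise XOR of its data blocks, so the contents of any single failed disk can be reconstructed from the surviving blocks and the parity. A minimal sketch in Python (the disk contents and the `parity` helper are invented for illustration):

```python
from functools import reduce

def parity(blocks):
    """XOR the given blocks together byte by byte (RAID 4/5 parity)."""
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

# Three data "disks", each holding one block of a stripe (contents made up).
disks = [b"\x01\x02\x03", b"\x10\x20\x30", b"\x0a\x0b\x0c"]
p = parity(disks)

# Suppose disk 1 fails: XOR-ing the parity with the surviving
# blocks recovers the lost block, because x ^ x = 0.
recovered = parity([disks[0], disks[2], p])
assert recovered == disks[1]
```

The same XOR trick underlies the performance side as well: writes update the parity incrementally, and reads can be striped across all disks in parallel.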