FIGURE 2-1 The ENIAC, the first general-purpose electronic computer, was completed in 1945 and used principally to solve design problems for the hydrogen bomb. SOURCE: Image courtesy of Los Alamos National Laboratory/Science Photo Library.

THE PATH TO EXASCALE COMPUTING

The Evolution of Computing Architectures: From ENIAC to Multicore

There has been enormous growth in computing capability over the past 60 years, with an overall performance increase of 14 orders of magnitude.1 Since the inception of the first general-purpose electronic computer, the Electronic Numerical Integrator and Computer (ENIAC), capable of tens of FLOPs (see Figure 2-1), the U.S. computer architecture research agenda has been driven by applications that are critical to national security and national scientific competitiveness. The most dramatic increase has occurred over the past 20 years with the advent of massively parallel computers and associated programming paradigms and algorithms.2 Computing capability growth over the past 30 years and projections for the next 20 years are shown in Figure 2-2. Through the late 1970s and into the early 1990s, supercomputing was dominated by vector computers. In 1987 a seminal paper on the use of massively parallel computing marked an inflection point for supercomputing (Gustafson et al., 1988). Instead of the very expensive, special-purpose hardware found in vector platforms, commercial off-the-shelf parts could be connected with networks to create supercomputers (so-called Beowulf clusters).
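To put this growth in perspective, the following back-of-the-envelope calculation (an illustrative sketch added here, not drawn from the report's data) converts an increase of 14 orders of magnitude over roughly 60 years into an implied performance doubling time of about 1.3 years, broadly consistent with the Moore's-law trend discussed below. The 60-year span and the factor of 10^14 are taken from the preceding sentence; everything else is assumed for illustration.

```c
/* Illustrative arithmetic only: 14 orders of magnitude of performance growth
 * over ~60 years implies a doubling time of roughly 60 * log10(2) / 14,
 * i.e., about 1.3 years. */
#include <math.h>
#include <stdio.h>

int main(void)
{
    double years = 60.0;                   /* span of the growth estimate */
    double orders_of_magnitude = 14.0;     /* overall performance increase */
    double doublings = orders_of_magnitude / log10(2.0);  /* ~46.5 doublings */
    double doubling_time = years / doublings;             /* ~1.29 years */
    printf("Implied doubling time: %.2f years\n", doubling_time);
    return 0;
}
```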

Although the programming paradigm for these new parallel platforms posed a significant challenge, it also offered enormous potential. From the mid-1990s through today, massively parallel computers have ridden Moore’s law (Mollick, 2006) to deliver more performance for less capital cost.
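On Beowulf-class clusters, that paradigm is most commonly expressed as message passing, standardized in the Message Passing Interface (MPI). The sketch below illustrates the basic pattern, in which each process computes on its own portion of the data and the partial results are combined with a collective operation; the example problem, sizes, and variable names are assumed for illustration, while the MPI calls themselves are standard.

```c
/* Minimal message-passing sketch: each MPI process computes a partial sum
 * over its own slice of work, and MPI_Reduce combines the pieces on rank 0.
 * The workload here is purely illustrative. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, nprocs;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* this process's identifier */
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs); /* total number of processes */

    /* Each rank sums 1,000 values from its own portion of the index space. */
    double local_sum = 0.0;
    for (int i = 0; i < 1000; i++)
        local_sum += (double)(rank * 1000 + i);

    /* Combine the partial sums on rank 0. */
    double global_sum = 0.0;
    MPI_Reduce(&local_sum, &global_sum, 1, MPI_DOUBLE, MPI_SUM, 0,
               MPI_COMM_WORLD);

    if (rank == 0)
        printf("Global sum across %d processes: %f\n", nprocs, global_sum);

    MPI_Finalize();
    return 0;
}
```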

Simulations using more than 10,000 processors have become routine at national laboratories and supercomputer centers, while simulations using dozens and even hundreds of processors are now routine on university campuses. However, the computing future presents new challenges. The high-performance computing (HPC) community is now looking at several departures from the practice of the past 15 years in order to leverage the increasingly common use of multicore central processing units (CPUs) and accelerators, as projected in Figure 2-2. Exascale initiatives are being developed by several federal agencies, and contracts for computers exceeding 10 petaflops have been awarded. Notable examples include the National Science Foundation’s (NSF) Blue Waters system at the National Center for Supercomputing Applications and the National Nuclear Security Administration’s (NNSA) Sequoia system at Lawrence Livermore National Laboratory. IBM is developing the two systems within its Power 7 and Blue Gene product lines, respectively.
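Leveraging multicore CPUs typically means adding thread-level parallelism within each node, for example with OpenMP, alongside message passing between nodes. The sketch below is illustrative only: the loop, array, and sizes are assumed for the example, while the OpenMP directive and runtime call are standard.

```c
/* Minimal on-node multicore sketch using OpenMP: loop iterations are divided
 * among the available cores, and the reduction clause combines each thread's
 * partial sum safely. Compile with an OpenMP-capable compiler (e.g., -fopenmp). */
#include <omp.h>
#include <stdio.h>

#define N 1000000

int main(void)
{
    static double a[N];   /* static storage keeps the large array off the stack */
    double sum = 0.0;

    #pragma omp parallel for reduction(+:sum)
    for (int i = 0; i < N; i++) {
        a[i] = 0.5 * i;
        sum += a[i];
    }

    printf("Maximum threads available: %d, sum = %f\n",
           omp_get_max_threads(), sum);
    return 0;
}
```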

1. Available from http://en.wikipedia.org/wiki/Supercomputing. Last accessed May 27, 2009.

2. Available from http://en.wikipedia.org/wiki/Supercomputing. Last accessed May 27, 2009.


