• The current computing capabilities available today for the four fields investigated and specific projections about the computations that would be enabled by a petascale machine.

  • The need for computing resources beyond today’s emerging capabilities and desirable features of future balanced high-end systems.

  • Pros and cons associated with the use of community codes (although they are discussed briefly in Chapter 6).

  • The policies of various U.S. government funding agencies with respect to HECC.

  • The ability of the academic community to build and manage computational infrastructure.

  • Policies for archiving and storing data.


The federal government has been a prime supporter of science and engineering research in the United States since the 1940s. Over the subsequent decades, it established a number of federal laboratories (primarily oriented toward specific government missions) and many intramural and extramural research programs, including an extensive system for supporting basic research in academia for the common good. Until the middle decades of the twentieth century, most of this research could be classified as either theoretical or experimental.

By the 1960s, as digital computing evolved and matured, it became widely appreciated that computational approaches to scientific discovery would become a third mode of inquiry. That idea had, of course, already been held for a number of years—at least as early as L.F. Richardson’s experiment with numerical weather prediction in 1922 (Richardson, 1922) and certainly with the use of the ENIAC in the 1940s for performing ballistics calculations (Goldstine and Goldstine, 1946). By the 1970s, the confluence of computing power, robust mathematical algorithms, skilled users, and adequate resources enabled computational science and engineering to begin contributing more broadly to research progress (see, for instance, Lax, 1982).

In its role of furthering science and engineering for the national interest, the federal government has long accepted responsibility for supporting high-end computing, beginning with the ENIAC. Clearly, the ENIAC would not be considered a supercomputer today (nor would the Cray-1, a cutting-edge technology of the late 1970s), but high-end computing is commonly defined as whatever caliber of computing is pushing the state of the art at any given time. Similarly, today’s teraflop2 computing is becoming fairly routine within the supercomputing community, and some would no longer consider computing at a few teraflops to be at the high end. Petascale computing will be the next step,3 and some are beginning to think about exascale computing, which would represent a further thousandfold increase in capability beyond the petascale.
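The thousandfold steps between these performance tiers follow directly from the SI prefixes. A minimal sketch (the names and values below are standard SI definitions, not figures from this report) makes the arithmetic explicit:

```python
# SI prefixes used for supercomputer performance, in floating-point
# operations per second (flop/s): tera = 10^12, peta = 10^15, exa = 10^18.
scales = {
    "teraflop/s": 10**12,
    "petaflop/s": 10**15,
    "exaflop/s": 10**18,
}

# Each tier is a thousandfold increase over the one before it.
assert scales["petaflop/s"] // scales["teraflop/s"] == 1000
assert scales["exaflop/s"] // scales["petaflop/s"] == 1000

for name, rate in scales.items():
    print(f"1 {name} = {rate:.0e} flop/s")
```

The same relationship holds at every step: an exascale machine stands to a petascale machine exactly as a petascale machine stands to a terascale one.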

Science and engineering progress over many years has been accompanied by the development of new tools for examining natural phenomena. The invention of microscopes and telescopes four centuries ago enabled great progress in observational capabilities, and the resulting observations have altered our views of nature in profound ways. Much more recently, techniques such as neutron scattering, atomic force microscopy, and others have been built on a base of theory to enable investigations that were otherwise impossible. The fact that theory underpins these tools is key: Scientists needed a good


2. The prefix “tera-” connotes a trillion, and “flop” is an acronym for “floating point operation.”


3. Los Alamos National Laboratory announced in June 2008 that it had achieved processing speeds of over 1 petaflop/s for one type of calculation; see the news release at http://www.lanl.gov/news/index.php/fuseaction/home.story/story_id/13602. Accessed July 18, 2008.

The National Academies | 500 Fifth St. N.W. | Washington, D.C. 20001
Copyright © National Academy of Sciences. All rights reserved.