Getting Up to Speed: The Future of Supercomputing (2005)

4
The Demand for Supercomputing

The committee now turns to a discussion of some of the key application areas that are using supercomputing and are expected to continue to do so. As each of these areas is reviewed, it is important to recognize that many of the areas are themselves supported by government research funds and contribute to broader societal objectives, ranging from national defense to the ability to make more informed decisions on climate policy. Also, as is discussed further in Chapter 5, the precise technology requirements for these different application areas differ. Additionally, several of them are subject to at least some degree of secrecy. As will be discussed in Chapter 8, a key issue in the effective management of and policy toward supercomputing involves understanding and choosing the degree of commitment, the degree of diversification, and the degree of secrecy associated with the technology.

Supercomputers are tools that allow scientists and engineers to solve computational problems whose size and complexity make them otherwise intractable. Such problems arise in almost all fields of science and engineering. Although Moore’s law and new architectural innovations enable the computational power of supercomputers to grow, there is no foreseeable end to the need for ever larger and more powerful systems.

In most cases, the problem being solved on a supercomputer is derived from a model of the physical world. An example is predicting changes that Earth’s climate might experience centuries into the future. Approximations are made when scientists use partial differential equations to model a physical phenomenon. To make the solution feasible, compromises must be made in the resolution of the grids used to discretize the equations. The coefficients of the matrices are represented as finite-precision numbers expressed in scientific notation.1 Therefore, the computation does not precisely emulate the real phenomenon but, rather, simulates it with enough fidelity to stimulate human scientific imagination or to aid human engineering judgment. As computational power increases, the fidelity of the models can be increased, compromises in the methods can be eliminated, and the accuracy of the computed answers improves. An exact solution is never expected, but as the fidelity increases, the error decreases and results become increasingly useful.
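
A minimal sketch in C, written for this discussion rather than taken from any application described in this chapter, illustrates the tradeoff: an explicit finite-difference discretization of the one-dimensional heat equation. Refining the grid increases fidelity, but halving the spacing roughly quadruples the work, because the stable time step shrinks with the square of the spacing.

```c
/* Illustrative only: explicit finite-difference solution of u_t = u_xx on
 * [0,1] with fixed boundaries.  Halving dx roughly quadruples the total work,
 * because the stable time step scales with dx*dx. */
#include <stdio.h>
#include <stdlib.h>

int main(void) {
    const int n = 1000;                 /* grid points: the "resolution" knob */
    const double dx = 1.0 / (n - 1);
    const double dt = 0.4 * dx * dx;    /* explicit stability limit */
    const int steps = (int)(0.1 / dt);  /* integrate to t = 0.1 */

    double *u = malloc(n * sizeof *u);
    double *unew = malloc(n * sizeof *unew);
    for (int i = 0; i < n; i++)
        u[i] = (i * dx < 0.5) ? 1.0 : 0.0;      /* step-function initial data */

    for (int s = 0; s < steps; s++) {
        for (int i = 1; i < n - 1; i++)         /* three-point stencil */
            unew[i] = u[i] + dt / (dx * dx) * (u[i-1] - 2.0 * u[i] + u[i+1]);
        unew[0] = unew[n-1] = 0.0;              /* boundary conditions */
        double *tmp = u; u = unew; unew = tmp;  /* swap time levels */
    }

    printf("u at the midpoint after %d steps: %f\n", steps, u[n/2]);
    free(u);
    free(unew);
    return 0;
}
```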

This is not to say that exact solutions are never achieved. Many problems with precise answers are also addressed by supercomputing. Examples are found in discrete optimization, cryptography, and mathematical fields such as number theory. Recently a whole new discipline, experimental mathematics, has emerged that relies on algorithms such as integer relation detection. These are precise calculations that require hundreds or even thousands of digits.2,3 At the hardware level, these operations are most efficiently done using integer arithmetic. Floating-point arithmetic is sometimes used, but mostly to perform whole number operations.
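
A few lines of C make the distinction concrete. The example below is illustrative only and assumes the GNU MP (GMP) arbitrary-precision library, which is not a package named in this report; it computes an exact integer with several hundred digits using whole-number arithmetic alone.

```c
/* Illustrative only: exact whole-number arithmetic with hundreds of digits,
 * using the GNU MP (GMP) library.  Compile with: cc example.c -lgmp */
#include <stdio.h>
#include <gmp.h>

int main(void) {
    mpz_t x;
    mpz_init(x);
    mpz_ui_pow_ui(x, 3, 1000);    /* x = 3^1000, computed exactly */

    /* mpz_sizeinbase may overestimate the digit count by one. */
    printf("3^1000 has about %lu decimal digits:\n",
           (unsigned long)mpz_sizeinbase(x, 10));
    gmp_printf("%Zd\n", x);

    mpz_clear(x);
    return 0;
}
```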

By studying the results of computational models, scientists are able to glean an understanding of phenomena that are not otherwise approachable. Often these phenomena are too large and complex or too far away in time and space to be studied by any other means. Scientists model turbulence inside supernovae and material properties at the center of Earth. They look forward in time and try to predict changes in Earth’s climate. They also model problems that are too small and too fast to observe, such as the transient, atomic-scale dynamics of chemical reactions. Material scientists can determine the behavior of compounds not known to exist in nature.

Supercomputers not only allow people to address the biggest and most complex problems, they also allow people to solve problems faster, even those that could fit on servers or clusters of PCs. This rapid time to solution is critical in some aspects of emergency preparedness and national defense, where the solutions produced are only valuable if they can be acted on in a timely manner.

1. IEEE Standard 754, available at <http://cch.loria.fr/documentation/IEEE754/#SGI_man>.

2. Jonathan M. Borwein and David H. Bailey. 2004. Mathematics by Experiment: Plausible Reasoning in the 21st Century. Natick, Mass.: A.K. Peters.

3. Jonathan M. Borwein, David H. Bailey, and Roland Girgensohn. 2004. Experimental Mathematics: Computational Paths to Discovery. Natick, Mass.: A.K. Peters.


For example, predicting the landfall of a hurricane allows evacuation of the coastline that will be impacted (saving lives), while not disturbing the surrounding area (saving money). Rapid time to solution in a commercial arena translates into minimizing the time to market for new products and services. The ability to solve many problems in a reasonable time frame allows engineers to explore design spaces before committing to the time and expense of building prototypes.

An important phenomenon that cannot be underestimated is how the potential for making a scientific discovery can encourage human creativity. Few advances in science and technology are unplanned or unexpected, at least in hindsight. Discoveries almost always come in the wake of work that inspires or enables them. When one discovery opens up the possibility of another, the leading intellects of our time will focus tremendous time and energy on developing the algorithms needed to make a discovery that appears tantalizingly close. Supercomputing expands the space within which such new algorithms can be found by maximizing the resources that can be brought to bear on the problem.

Supercomputing allows pioneering scientists and engineers to invent solutions to problems that were initially beyond human ability to solve. Often, these are problems of great national importance. Dimitri Kusnezov, Director of NNSA’s Office of Advanced Simulation and Computing, put it this way when he testified before the U.S. Senate in June 2004:4 “Simulating the time evolution of the behavior of an exploding nuclear device is not only a mammoth scientific enterprise from a computational perspective, it probably represents the confluence of more physics, chemistry and material science, both equilibrium and non-equilibrium, at multiple length and time scales than almost any other scientific challenge.”

Over time and with increasing experience, the algorithms mature and become more efficient. Furthermore, smaller computing systems such as servers and personal computers become more powerful. These two trends make problems that were once addressable only by nation states now addressable by large research and engineering enterprises and, given enough time, eventually by individual scientists and engineers. Consider an example from mechanical dynamics. Starting in the 1950s, scientists at the nuclear weapons laboratories pioneered the use of explicit finite element programs to simulate the propagation of shocks through the devices they were developing. These codes became available to industrial users in the 1980s.

4. Testimony of Dimitri Kusnezov, Director, Office of Advanced Simulation and Computing, NNSA, U.S. Department of Energy, before the U.S. Senate Committee on Energy and Natural Resources, Subcommittee on Energy, June 22, 2004.


Through the 1980s and into the 1990s, automotive companies ran their occupant safety problems on the same type of supercomputers used by the national laboratories. As the power of servers and PCs continued to increase, many of those engineering problems were able to move to departmental-scale systems in the late 1990s, and even to individual PCs today. Without the development of algorithms and software on supercomputers in the 1980s and 1990s, such codes would not be available for broad use on servers and PCs today.

The example above should not be construed to suggest that there is no longer a need for supercomputing in mechanical engineering. On the contrary, while today’s codes are very useful tools for supporting design and analysis, they are by no means predictive. One software vendor believes that his users in the automotive industry could productively employ computing power at least seven orders of magnitude greater than what they have today. There are many such examples, some of which are given later in the chapter.

The above discussion has focused on supercomputers as tools for research performed in other disciplines. By their very nature, supercomputers push the boundaries of computer engineering in terms of scale. To effectively solve the most challenging problems requires that supercomputers be architected differently than standard PCs and servers. As the underlying technology (semiconductors, optics, etc.) from which they are constructed evolves, the design space for supercomputers changes rapidly, making supercomputers themselves objects of scientific curiosity. This last point will be taken up in Chapter 5.

COMPELLING APPLICATIONS FOR SUPERCOMPUTING

The Committee on the Future of Supercomputing has extensively investigated the nature of supercomputing applications and their present and future needs. Its sources of information have included its own membership as well as the many experts from whom it heard in committee meetings. The committee has talked with the directors of many supercomputing centers and with scientists and engineers who run application programs at those centers. Subcommittees visited DOE weapons laboratories, DOE science laboratories, the National Security Agency, and the Japanese Earth Simulator. In addition, the committee held a 2-day applications workshop in Santa Fe, New Mexico, in September 2003, during which approximately 20 experts discussed their applications and their computing requirements. What follows is a consensus summary of the information from all of those sources.

Many applications areas were discussed either at the Santa Fe workshop or in presentations to the committee. In addition to furthering basic scientific understanding, most of these applications have clear practical benefits. A capsule summary of the areas is given first, followed by a detailed description. This is a far from complete list of supercomputing applications, but it does represent their broad range and complexity. Several other recent reports give excellent summaries of the high-end computational needs of applications. Among those reports are the HECRTF workshop report,5 the Scales report,6 the IHEC Report,7 and the HECRTF final report.8

  • Stockpile stewardship. Several of the most powerful computers in the world are being used as part of DOE’s Advanced Simulation and Computing (ASC) program to ensure the safety and reliability of the nation’s stockpile of nuclear weapons. France’s CEA (Atomic Energy Commission) has a similar project.

  • Intelligence/defense. Very large computing demands are made by the DoD, intelligence community agencies, and related entities in order to enhance the security of the United States and its allies, including anticipating the actions of terrorists and of rogue states.

  • Climate prediction. Many U.S. high-end computational resources and a large part of the Japanese Earth Simulator are devoted to predicting climate variations and anthropogenic climate change, so as to anticipate and be able to mitigate harmful impacts on humanity.

  • Plasma physics. An important goal of plasma physics will be to produce cost-effective, clean, safe electric power from nuclear fusion. Very large simulations of the reactions in advance of building the generating devices are critical to making fusion energy feasible.

  • Transportation. Whether it be an automobile, an airplane, or a spacecraft, large amounts of supercomputer resources can be applied to understanding and improving the vehicle’s airflow dynamics, fuel consumption, structure design, crashworthiness, occupant comfort, and noise reduction, all with potential economic and/or safety benefits.


5. NITRD High End Computing Revitalization Task Force (HECRTF). 2003. Report of the Workshop on the Roadmap for the Revitalization of High-End Computing. Daniel A. Reed, ed. June 16-20, Washington, D.C.

6. DOE, Office of Science. 2003. “A Science-Based Case for Large-Scale Simulation,” Scales Workshop Report, Vol. 1, July.

7. Department of Defense, National Security Agency. 2002. Report on High Performance Computing for the National Security Community. July 1.

8. NITRD HECRTF. 2004. Federal Plan for High End Computing. May.


  • Bioinformatics and computational biology. Biology has huge emerging computational needs, from data-intensive studies in genomics to computationally intensive cellular network simulations and large-scale systems modeling. Applications promise to provide revolutionary treatments of disease.

  • Societal health and safety. Supercomputing enables the simulation of processes and systems that affect the health and safety of our society (for instance, pollution, disaster planning, and detection of terrorist actions against local and national infrastructures), thereby facilitating government and private planning.

  • Earthquakes. Supercomputing simulation of earthquakes shows promise for allowing us to predict earthquakes and to mitigate the risks associated with them.

  • Geophysical exploration and geoscience. Supercomputing in solid-earth geophysics involves a large amount of data handling and simulation for a range of problems in petroleum exploration, with potentially huge economic benefits. Scientific studies of plate tectonics and Earth as a geodynamo require immense supercomputing power.

  • Astrophysics. Supercomputer simulations are fundamental to astrophysics and play the traditional scientific role of controlled experiments in a domain where controlled experiments are extremely rare or impossible. They allow vastly accelerated time scales, so that astronomical evolution can be modeled and theories tested.

  • Materials science and computational nanotechnology. The simulation of matter and energy from first principles is very computationally intensive. It can lead to the discovery of materials and reactions having large economic benefits—for instance, superconductors that minimize transmission loss in power lines and reduce heating in computers.

  • Human/organizational systems studies. The study of macroeconomics and social dynamics is also amenable to supercomputing. For instance, the behavior of large human populations is simulated in terms of the overall effect of decisions by hundreds of millions of individuals.

Common Themes and Synergies Across Applications Areas

The committee was struck by the many similarities across application areas in the importance of supercomputing to each scientific domain, the present use of computational equipment, and projected future supercomputing needs. Most of the applications areas use supercomputer simulations in one of three ways: (1) to extend the realization of complex natural phenomena so that they can be understood scientifically; (2) to test, via simulation, systems that are costly to design or to instrument, saving both time and money; or (3) to replace experiments that are hazardous, illegal, or forbidden by policies and treaties. The use of supercomputing provides information and predictions that are beneficial to the economy, to health, and to society at large. The applications areas all use supercomputing to accomplish tasks that are uneconomical—or even impossible—without it.

Whether the task is cracking a cryptographic code, incorporating new physics into a simulation, or detecting elusive targets, the real value of supercomputing is increased insight and understanding. Time to solution includes getting a new application up and running (the programming time), waiting for it to run (the execution time), and, finally, interpreting the results (the interpretation time). Applications areas have productivity problems because the time to program new supercomputers is increasing. While application codes and supercomputing systems have both become more complex, the compilers and tools that help to map application logic onto the hardware have not improved enough to keep pace with that complexity. The recent DARPA High Productivity Computing Systems (HPCS) initiative, having recognized this problem, has a strong focus on improving the programmability of supercomputers and on developing productivity metrics that will provide a measure of this improvement.9
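
Written as a simple identity, the decomposition above is

\[
T_{\mathrm{solution}} = T_{\mathrm{programming}} + T_{\mathrm{execution}} + T_{\mathrm{interpretation}},
\]

and the productivity concern raised above is that the first term is growing.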

It is well known that computational techniques span application areas. For example, astrophysics, aircraft design, climate modeling, and geophysics all need different models of fluid flow. Computational modeling used in applications that seek fundamental understanding enhances applications that solve real-world needs. Thus, basic understanding of plasma physics and materials facilitates stockpile stewardship, while basic results in weather prediction can facilitate climate modeling. These examples are illustrative, not a complete story.

In July 2003, Raymond Orbach, Director of the DOE Office of Science, testified before the U.S. House of Representatives Committee on Science. He said

The tools for scientific discovery have changed. Previously, science had been limited to experiment and theory as the two pillars for investigation of the laws of nature. With the advent of what many refer to as “Ultra-Scale” computation, a third pillar, simulation, has been added to the foundation of scientific discovery. Modern computational methods are developing at such a rapid rate that computational simulation is possible on a scale that is comparable in importance with experiment and theory. The remarkable power of these facilities is opening new vistas for science and technology. Previously, we used computers to solve sets of equations representing physical laws too complicated to solve analytically. Now we can simulate systems to discover physical laws for which there are no known predictive equations.10

9. For more information on the HPCS program, see <http://www.darpa.mil/ipto/programs/hpcs/index.htm>.

 



Dr. Orbach also remarked that computational modeling and simulation were among the most significant developments in the practice of scientific inquiry in the latter half of the 20th century. Supercomputing has contributed to essentially all scientific research programs and has proved indispensable to DOE’s missions. Computer-based simulation can bridge the gap between experimental data and simple mathematical models, thus providing a means for predicting the behavior of complex systems.

Selected Application Areas

Stockpile Stewardship

In June 2004, Dimitri Kusnezov, Director of the Office of Advanced Simulation and Computing at DOE’s National Nuclear Security Administration, testified before the U.S. Senate Committee on Energy and Natural Resources. He said “Since the dawn of the nuclear age, computation has been an integral part of the weapons program and our national security. With the cessation of testing and the advent of the science-based Stockpile Stewardship Program, ASC simulations have matured to become a critical tool in stockpile assessments and in programs to extend the life of the nation’s nuclear deterrent.”11

Even with simple, low-resolution physics models, weapons simulations have given insight and information that could not be obtained in other ways.12 Thus, the DOE nuclear weapons laboratories have always been at the forefront of supercomputing development and use. The huge challenge of nuclear weapons simulation is to develop the tools (hardware, software, algorithms) and skills necessary for the complex, highly coupled, multiphysics calculations needed for accurate simulations.

10. Testimony of Raymond L. Orbach, Director, Office of Science, U.S. Department of Energy, before the U.S. House of Representatives Committee on Science, July 16, 2003.

11. Testimony of Dimitri Kusnezov, Director, Office of Advanced Simulation and Computing, NNSA, U.S. Department of Energy, before the U.S. Senate Committee on Energy and Natural Resources, Subcommittee on Energy, June 22, 2004.

12. This subsection is based on white papers by Charles F. McMillan et al., LLNL, “Computational Challenges in Nuclear Weapons Simulation,” and by Robert Weaver, LANL, “Computational Challenges to Supercomputing from the Los Alamos Crestone Project: A Personal Perspective.” Both papers were prepared for the committee’s applications workshop at Santa Fe, N.M., in September 2003.


Under the DOE/NNSA Stockpile Stewardship Program, several of the largest supercomputers in the world are being developed and used as part of the NNSA ASC program to ensure the safety and reliability of the nation’s stockpile of nuclear weapons.

One of the fundamental problems that the national laboratories are attempting to solve with extremely complex (and obviously classified) codes is the simulation of the full physical operation of the nuclear weapons in the U.S. stockpile. This problem is important in order to continue to certify to the nation that the nuclear deterrent stockpile is safe and reliable in the absence of testing. Prior to the development of the current generation of National Laboratory codes, weapons designers had to rely on a more empirical solution to the complex, nonlinear coupled physics that occurs in a nuclear weapon. This procedure had to be augmented by an experimental test of the design.

In the absence of nuclear testing, the simulation codes must rely less on empirical results and must therefore be more refined. The simulations have evolved from two-dimensional models and solutions to three-dimensional ones. That evolution has required a more than 1,000-fold increase in computational resources. To achieve that capability, the simulations are developed and run on the most advanced platforms—systems that are prototype machines with few users. These platforms often lack the ideal infrastructure and stability, leading to new and unanticipated challenges, with the largest runs taking many months to a year to complete. Dr. Kusnezov noted that stockpile simulations “currently require heroic, nearly yearlong calculations on thousands of dedicated processors. It is essential that we provide the designers with the computational tools that allow such simulations to be completed in a reasonable time frame for systematic analysis. This is one of the requirements that drive us well into the petascale regime for future platforms.”13

During the last 5 years, ASC has acquired a number of increasingly powerful supercomputing systems and plans to continue such acquisition. The vendors of these systems include Intel, IBM, Silicon Graphics, Cray, and Hewlett-Packard. The number of processors in these systems ranges from about 2,000 to a proposed 131,000, with peak performances ranging from 3 trillion floating-point operations per second (Tflops) to a proposed 360 Tflops. Portability of applications among the systems has become relatively smooth because of commitment in general to standard languages and programming models and avoidance of processor-specific optimizations. These practices have allowed the ASC community to begin taking advantage of new processor technology as it becomes available.

13. Testimony of Dimitri Kusnezov, Director, Office of Advanced Simulation and Computing, U.S. Department of Energy, before the U.S. Senate Committee on Energy and Natural Resources, Subcommittee on Energy, June 22, 2004.
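
A rough calculation, combining the figures above purely for illustration, shows how that growth is distributed:

\[
\frac{3\ \mathrm{Tflops}}{2{,}000\ \mathrm{processors}} \approx 1.5\ \mathrm{Gflops\ per\ processor},
\qquad
\frac{360\ \mathrm{Tflops}}{131{,}000\ \mathrm{processors}} \approx 2.7\ \mathrm{Gflops\ per\ processor}.
\]

Most of the roughly hundredfold increase in peak performance thus comes from greater parallelism rather than from faster individual processors.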


The ASC programming environment stresses software development tools because of the scale of the hardware architecture, software complexity, and the need for compatibility across ASC platforms. A multiphysics application code may take 4 to 6 years to become useful and may then have a lifespan of several decades. Thus, it is important that code development focus on present and future supercomputing systems. Almost all ASC applications, for example, use a combination of three programming models: the serial model, symmetric multiprocessing using OpenMP, and message passing using MPI. Programming is typically done in ANSI C, C++, and Fortran 90. Algorithm development attempts to balance the (often competing) requirements of high-fidelity physics, short execution time, parallel scalability, and algorithmic scalability14; not surprisingly, it is in some ways influenced by target architectures. It is interesting to note that, even with all this effort, codes running on the ASC White system typically attain from 1 percent to 12 percent of theoretical peak performance. It is not uncommon for complex scientific codes run on other platforms to exhibit similarly modest percentages. By contrast, the somewhat misleading Linpack benchmarks run at 59 percent of peak on that system.
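
The combination of programming models can be made concrete with a small, self-contained C sketch, written for this discussion and not drawn from any ASC code: MPI passes messages between processes while OpenMP threads divide the loop within each process.

```c
/* Illustrative hybrid MPI + OpenMP kernel (not an ASC code): each MPI rank
 * sums its slice of a distributed array with OpenMP threads, and the partial
 * sums are then combined across ranks with MPI_Allreduce. */
#include <stdio.h>
#include <stdlib.h>
#include <mpi.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank, nranks;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nranks);

    const long n_local = 1000000;                 /* elements owned by this rank */
    double *x = malloc(n_local * sizeof *x);
    for (long i = 0; i < n_local; i++)
        x[i] = 1.0 / (double)(rank * n_local + i + 1);

    double local_sum = 0.0;
    #pragma omp parallel for reduction(+:local_sum)
    for (long i = 0; i < n_local; i++)            /* threads share this loop */
        local_sum += x[i];

    double global_sum = 0.0;                      /* message passing across ranks */
    MPI_Allreduce(&local_sum, &global_sum, 1, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);

    if (rank == 0)
        printf("global sum over %d ranks = %.12f\n", nranks, global_sum);

    free(x);
    MPI_Finalize();
    return 0;
}
```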

Signals Intelligence

The computational challenges posed by the Signals Intelligence mission of the NSA are enormous.15 The essence of this mission is to intercept and analyze foreign adversaries’ communications signals, many of which are protected by encodings and other complex countermeasures. NSA must collect, process, and disseminate intelligence reports on foreign intelligence targets in response to intelligence requirements set at the highest levels of the government. The Signals Intelligence mission targets capabilities, intentions, and activities of foreign powers, organizations, or persons. It also plays an important counterintelligence role in protecting against espionage, sabotage, or assassinations conducted for or on behalf of foreign powers, organizations, persons, or international terrorist groups or activities.


14. Parallel scalability means near-linear decrease in execution time as an increasing number of processors are used; algorithm scalability means moderate (near-linear) increase in computer time as problem size increases.

15. This subsection is based on excerpts from the white paper “Computational Challenges in Signals Intelligence,” prepared by Gary Hughes, NSA, and William Carlson and Francis Sullivan, Institute for Defense Analyses, Center for Computational Science, for the committee’s Santa Fe, N.M., applications workshop, September 2003.


The context and motivation that the Signals Intelligence mission provides are essential to understanding its demands on supercomputing. Two characteristics are key: problem choice and timeliness of solutions. The highest priority problems to be solved are chosen not by NSA itself but rather by the very entities that pose the greatest danger: foreign adversaries. They do this when they choose communication methods. This single characteristic puts phenomenal demands on both the development of solutions and their deployment on available computing platforms. Solutions must also be timely—the intelligence derived from the communication “attack at dawn” is, to say the least, far less valuable at noon. Timeliness applies to both the development of solutions and their deployment. While these specific mission-driven requirements are unique to the Signals Intelligence mission, their effect is seen across a fairly broad spectrum of mission agencies, both inside and outside the defense community. This is in contrast to computing that targets broad advances in technology and science. In this context, computations are selected more on the basis of their match to available resources and codes.

There are two main uses of supercomputing driven by the Signals Intelligence mission: intelligence processing (IP) and intelligence analysis (IA). Intelligence processing seeks to transform intercepted communications signals into a form in which their meaning can be understood. This may entail overcoming sophisticated cryptographic systems, advanced signal processing, message reconstruction in the presence of partial or corrupted data, or other complex signaling or communications subsystems. Intelligence analysis begins with the output of IP and seeks to transform the blizzard of communication messages into a complete mosaic of knowledge so that adversaries’ intentions can be discerned and actionable intelligence provided to national leadership and others with a need to know.

The key computational characteristics of Signals Intelligence problems differ greatly from those of the other scientific problems discussed in this section. There is extensive use of bit operations and operations in non-standard algebraic systems; floating point is used on only a tiny percentage of problems. A significant portion of the problem space is easily amenable to all forms of parallel processing (e.g., “embarrassingly parallel”) techniques. Yet another significant portion of the problem space uses computations needing random access to extremely large data sets in memory and sustained, but unpredictable, interprocessor communication. In fact, the designers of cryptographic systems do their best to ensure there is no way to segment the code-breaking problem. Additionally, the knowledge discovery problem requires the understanding of extremely large graph networks with a dynamic collection of vertices and edges. The scale of this knowledge discovery problem is significantly larger than the largest commercial data mining operations.
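
A generic, hypothetical kernel in C conveys the bit-oriented flavor of such computations (it is not an actual Signals Intelligence code): two packed bit streams are compared using only XOR and population-count operations, with no floating point at all.

```c
/* Hypothetical illustration of bit-level computation: count the positions at
 * which two 64-bit-packed bit streams agree, using XOR and population count.
 * No floating-point arithmetic is involved. */
#include <stdio.h>
#include <stdint.h>

/* Portable population count: number of 1 bits in a 64-bit word. */
static int popcount64(uint64_t x) {
    int c = 0;
    while (x) { x &= x - 1; c++; }
    return c;
}

int main(void) {
    enum { NWORDS = 4 };
    uint64_t a[NWORDS] = { 0x0123456789abcdefULL, 0xdeadbeefcafef00dULL,
                           0xffffffffffffffffULL, 0x0ULL };
    uint64_t b[NWORDS] = { 0x0123456789abcdeeULL, 0xdeadbeefcafef00dULL,
                           0x0ULL,                0x0ULL };

    long agreements = 0;
    for (int i = 0; i < NWORDS; i++)
        agreements += 64 - popcount64(a[i] ^ b[i]);   /* XOR marks disagreements */

    printf("bits in agreement: %ld of %d\n", agreements, NWORDS * 64);
    return 0;
}
```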


Computational systems for Signals Intelligence include workstations, workstation farms, Beowulf clusters, massively parallel supercomputers, vector supercomputers, “handmade” FPGA-enhanced systems, and others. Operating systems used are mainly UNIX and Linux and programming is done mainly in C and Universal Parallel C (UPC).16 Interprocessor communication is essential for the most demanding computations, yet MPI and related message passing models are not used because the added overhead of message passing systems is much too high a price to pay. Instead, SHMEM, a message-passing library developed for the Cray T3E and related systems, is employed.
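
A minimal sketch of the one-sided style that SHMEM supports is shown below; it is written against the OpenSHMEM descendant of the original Cray library and is illustrative only. Each processing element (PE) writes a value directly into its neighbor's memory, with no matching receive call on the target side, which is precisely the overhead that message passing would add.

```c
/* Illustrative one-sided communication in the SHMEM style (using the
 * OpenSHMEM API, a descendant of the Cray T3E library mentioned above). */
#include <stdio.h>
#include <shmem.h>

int main(void) {
    shmem_init();
    int me = shmem_my_pe();
    int npes = shmem_n_pes();

    /* Symmetric variable: present at the same address on every PE. */
    static long inbox = -1;

    long payload = 100 + me;
    int right = (me + 1) % npes;
    shmem_long_p(&inbox, payload, right);   /* one-sided put into the neighbor */

    shmem_barrier_all();                    /* make all puts visible */
    printf("PE %d of %d received %ld from PE %d\n",
           me, npes, inbox, (me - 1 + npes) % npes);

    shmem_finalize();
    return 0;
}
```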

Defense

A Mitre Corporation survey documented in June 2001 listed 10 DoD applications for supercomputing,17 which are still valid today:

  • Weather and ocean forecasting.

  • Planning for dispersion of airborne/waterborne contaminants.

  • Engineering design of aircraft, ships, and other structures.

  • Weapon (warhead/penetrators) effect studies and improved armor design.

  • Cryptanalysis.

  • Survivability/stealthiness.

  • Operational intelligence, surveillance, and reconnaissance (ISR).

  • Signal and image processing research to develop new exploitation.

  • National missile defense.

  • Test and evaluation.


16. Tarek A. El-Ghazawi, William W. Carlson, and Jesse M. Draper, “UPC Language Specification (V 1.1.1),” <http://www.gwu.edu/~upc/docs/upc_spec_1.1.1.pdf>; Robert W. Numrich and John Reid, 1998, “Co-array Fortran for Parallel Programming,” SIGPLAN Fortran Forum 17(2):1-31; J. Nieplocha, R.J. Harrison, and R.J. Littlefield, 1996, “Global Arrays: A Nonuniform Memory Access Programming Model for High-Performance Computers,” Journal of Supercomputing 10:197-220; Katherine Yelick, Luigi Semenzato, Geoff Pike, Carleton Miyamoto, Ben Liblit, Arvind Krishnamurthy, Paul Hilfinger, Susan Graham, David Gay, Philip Colella, and Alexander Aiken, 1998, “Titanium: A High-Performance Java Dialect,” Concurrency: Practice and Experience 10:825-836.

17. Richard Games. 2001. Survey and Analysis of the National Security High Performance Computing Architectural Requirements. MITRE Corp. June 4.


Many of these defense applications require computational fluid dynamics (CFD), computational structural mechanics (CSM), and computational electromagnetics (CEM) calculations similar to those needed by other supercomputing applications areas discussed in this report. One defense application that relies critically on supercomputing is comprehensive aerospace vehicle design, such as the design of the F-35 Joint Strike Fighter, and this reliance will only accelerate. Future aerospace development programs will involve hypersonic capabilities requiring more comprehensive physics models for accurate simulation in these harsh flight regimes. Two distinct types of computational science are required. CFD is used in the engineering design of complex flow configurations, including external airflow, and for predicting the interactions of chemistry with fluid flow for combustion and propulsion. CEM is used to compute electromagnetic signatures of tactical ground, air, sea, and space vehicles. Currently, we have the capability to model the external airflow, propulsion performance, vehicle signature, and materials properties in vehicle design with reasonable predictive accuracy on current systems, provided that these aspects are computed independently. But what is desired is the ability to combine these independent modeling efforts into an interactive modeling capability that would account for the interplay among model components. For example, engineers could quickly see the effect of proposed changes in the propulsion design on the vehicle’s radar and infrared signature. Exceptional supercomputing performance and exceptional programmability are jointly required to enable a fine-grained, full-air-frame combined CFD and CEM simulation of a vehicle like the Joint Strike Fighter.18

Climate Modeling

Comprehensive three-dimensional modeling of the climate has always required supercomputers.19 To understand the role of supercomputing in climate modeling, it is important to first describe the composition of a climate model. Present-day climate models are made up of several major components of the climate system. In a sense they are now really Earth system models designed to deal with the issue of global change. The standard components are an atmosphere model, an ocean model, a combined land-vegetation-river transport (hydrological) model (which is sometimes a part of the atmospheric model), and a sea ice model.

18. High-End Crusader. 2004. “HEC Analysis: The High-End Computing Productivity Crisis.” HPC Wire 13(15).

19. This subsection is based on white papers by Warren M. Washington, NCAR, “Computer Architectures and Climate Modeling,” and by Richard D. Loft, NCAR, “Supercomputing Challenges for Geoscience Applications,” both prepared for the committee’s applications workshop in Santa Fe, N.M., in September 2003.


Some of the climate models have embedded chemical cycles such as carbon, sulfate, methane, and nitrogen cycles, which are treated as additional aspects of the major components. Indeed, climate modeling is similar to astrophysics and plasma physics in that it is a multiscale and multiphysical discipline. Although all relevant processes ultimately interact at the 10,000-km scale of the planet, the most important and least parameterizable influence on climate change is the response of cloud systems; clouds are best treated by embedding explicit submodels with grid sizes down to 1 km into a coarser climate grid. Similarly, the most important aspect of the oceanic part of climate change deals with changes in the Gulf Stream and the associated thermohaline overturning in the North Atlantic, where horizontal grid spacing in the hydrodynamics is required to be only a few kilometers in order to resolve the fundamental length scales. Southern Ocean processes, which involve both the sea-ice cover as it affects marine biological productivity and the stability of the antarctic ice cap as it affects global sea level, also occur mainly at this small space scale. Land component models should represent the biological properties of multiple types of vegetation and soil at a resolution of 1 km, and models of the global carbon cycle must represent the complex chemical and biological reactions and processes in the free atmosphere, the land surface, and the full-depth ocean. Even then, some processes must be prescribed separately on the basis of laboratory and process studies into such phenomena as cloud microphysics, small-scale ocean mixing, chemical reactions, and biological interactions.
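
A hypothetical skeleton in C (invented component names and numbers, no real physics) shows the coupled structure just described: atmosphere, ocean, land, and sea-ice components are advanced in turn by a coupler that exchanges surface fields at every step.

```c
/* Hypothetical skeleton of a coupled Earth-system model (invented names and
 * numbers, no real physics): component models advance in turn and exchange
 * surface fields through a simple coupling loop. */
#include <stdio.h>

typedef struct {
    double sea_surface_temp;   /* ocean -> atmosphere */
    double ice_fraction;       /* sea ice -> ocean and atmosphere */
    double river_runoff;       /* land -> ocean */
} SurfaceFields;

/* Stand-ins for full component models; each call would hide a large PDE solve. */
static void step_atmosphere(SurfaceFields *s) { (void)s; }
static void step_ocean(SurfaceFields *s)      { (void)s; }
static void step_land(SurfaceFields *s)       { (void)s; }
static void step_sea_ice(SurfaceFields *s)    { (void)s; }

int main(void) {
    SurfaceFields surface = { 288.0, 0.1, 0.0 };   /* illustrative initial state */
    const double dt_seconds = 1800.0;              /* 30-minute coupling step */
    const long steps_per_day = (long)(86400.0 / dt_seconds);
    const long days = 365;                         /* one simulated year */

    for (long d = 0; d < days; d++)
        for (long s = 0; s < steps_per_day; s++) {
            step_atmosphere(&surface);             /* each component reads and   */
            step_ocean(&surface);                  /* updates the shared surface */
            step_land(&surface);                   /* fields on every step       */
            step_sea_ice(&surface);
        }

    printf("simulated %ld days in %ld coupling steps\n",
           days, days * steps_per_day);
    return 0;
}
```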

Even with the highest performing supercomputers available today, climate simulations of 100 to 1,000 years require thousands of computational hours. Climate modeling requires multi-thousand-year simulations to produce equilibrium climate and its signals of natural variability, multi-hundred-year simulations to evaluate climate change beyond equilibrium (including possible abrupt climatic change), many tens of runs to determine the envelope of possible climate changes for a given emission scenario, and a multitude of scenarios for future emissions of greenhouse gases and human responses to climate change. However, these extended simulations require explicit integration of the nonlinear equations using time steps of only seconds to minutes in order to treat important phenomena such as internal waves and convection.
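
A back-of-the-envelope count, assuming a 30-second time step purely for illustration, shows the scale involved:

\[
\frac{1{,}000\ \mathrm{years} \times 3.15 \times 10^{7}\ \mathrm{s/year}}{30\ \mathrm{s\ per\ step}} \approx 10^{9}\ \mathrm{time\ steps},
\]

each of which carries the floating-point work and internal communication described below.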

During each time step of a climate model, there is a sizeable amount of floating-point calculation, as well as a large amount of internal communication within the machine. Much of the spatial communication derives inherently from the continuum formulations of atmospheric and oceanic dynamics, but additional communication may arise from numerical formulations such as atmospheric spectral treatment or oceanic implicit free-surface treatments.


Because of the turbulent nature of the underlying fluids, large volumes of model output must be analyzed to understand the underlying dynamics; this requires large external storage devices and efficient means of communicating with them.

As already indicated, an important aspect of the climate model is the grid resolution, both vertically and horizontally. In particular, the presence of moisture leads to a new class of small-scale fluid motions—namely, moist convection—which requires very high horizontal and vertical resolution (on the order of a kilometer) to resolve numerically. To resolve moist convection, the governing equations must include nonhydrostatic effects. This set of governing equations is considerably more difficult to solve than the hydrostatic primitive equations traditionally used in lower resolution atmospheric models. While direct numerical simulation at a global 1-km grid scale remains impractical for the foreseeable future, even so-called super parameterizations that attempt to realistically capture the sub-grid-scale properties of the underlying moist dynamics are dramatically more computationally expensive than current physics packages in operational models.

Resolution increases in hurricane modeling made possible by supercomputing upgrades since 1998 have improved the ability to forecast hurricane tracks, cutting the track error in half and providing advance information to reduce loss of life and property in threatened areas.20 Resolution increases will improve predictions of climate models, including the statistics of severe events in the face of climatic change.

All of the above considerations point to a massive need for increased computational resources, since current climate models typically have grid sizes of hundreds of kilometers, have few components and oversimplified parameterizations, have rarely reached equilibrium, and have rarely simulated future climate changes beyond a century. Moreover, they are seldom run in ensembles or for multiple-emission scenarios. Today, the climate modeler must make compromises in resolution in order to perform a realistic set of simulations. As advances in technology increase the speed of the supercomputers, history shows that the model complexity grows correspondingly, bringing both improved treatment of physical processes (such as clouds, precipitation, convection, and boundary layer fluxes) and the need for finer grid resolution.


20. CNN. 2004. “Supercomputers Race to Predict Storms.” September 16.

21. See <http://www.ccsm.ucar.edu/> for more information.


Recently, climate models running on the most advanced U.S. supercomputers have approached grid sizes of 100 km. In particular, simulations with the Community Climate System Model (CCSM),21 running mainly at the National Center for Atmospheric Research (NCAR) in support of the important Fourth Assessment of the Intergovernmental Panel on Climate Change (IPCC),22 have used a 100-km ocean grid and an atmospheric grid of about 140 km. Very compute-intensive ocean-only simulations have been carried out at 10 km for simulated periods of only a few decades at several DOE and DoD sites and for longer periods on the Earth Simulator, and the results show a striking increase in the realism of strong currents like the Gulf Stream. Also, the initialization of the ocean component of climate models using four-dimensional data assimilation has used enormous amounts of supercomputer time at NCAR and the San Diego Supercomputer Center while still being carried out at relatively coarse resolution.

Notwithstanding the implied need for high internal bandwidth and effective communication with external storage, the requirement for sustained computational speed can be taken as a measure of computing needs for climate modeling. A 100- to a 1,000-fold increase in compute power over the next 5 to 10 years would be used very effectively to improve climate modeling. For example, the embedding of submodels of cloud systems within climate model grids removes much of the uncertainty in the potential climatic response to increasing greenhouse gases but increases the computing time by a factor of 80. Ocean components of climate models should be run a few thousand years at the desired 10-km resolution to test their ability to simulate long-term equilibrium conditions from first principles. Additional aspects of atmospheric chemistry and oceanic chemistry and biology are needed to move toward a proper treatment of the global carbon cycle and its vulnerability to greenhouse gases and industrial pollutants.

Continuing progress in climate prediction can come from further increases in computing power beyond a factor of 1,000. One detailed study of computational increases needed for various facets of climate modeling has shown the need for an ultimate overall increase in computer power of at least a billion-fold.23 (Such a large increase could also be used for complex systems in plasma physics and astrophysics.) The breakdown of ultimate needs for increased computing power in climate modeling is as follows:

22. More information is available at <http://www.ipcc.ch/about/about.htm>.

23. Robert Malone, John Drake, Philip Jones, and Douglas Rotman. In press. “High-End Computing in Climate Modeling.” A Science-Based Case for Large-Scale Simulation. D. Keyes, ed. Philadelphia, Pa.: SIAM Press.

  • Increase the spatial resolution of the grids of the coupled model components. The resolution targets are about 10 km in both the atmosphere and ocean, but for different reasons. It has been demonstrated that 10-km resolution is needed to resolve oceanic mesoscale eddies. A similar resolution is needed in the atmospheric component to obtain predictions of surface temperature and precipitation in sufficient detail to analyze the regional and local implications of climate change. This increases the total amount of computation by a factor of 1,000.

  • Increase the completeness of the coupled model by adding to each component model important interactive physical, chemical, and biological processes that heretofore have been omitted owing to their computational complexity. Inclusion of atmospheric chemistry, both tropospheric and stratospheric, and biogeochemistry in the ocean are essential for understanding the ecological implications of climate change. This increases computation by a factor of 100.

  • Increase the fidelity of the model by replacing parameterizations of subgrid physical processes by more realistic and accurate treatments as our understanding of the underlying physical processes improves, often as the result of observational field programs. This increases computation by a factor of 100.

  • Increase the length of both control runs and climate-change-scenario runs. Longer control runs will reveal any tendency for the coupled model to drift and will also improve estimates of model variability. Longer climate-change-scenario runs will permit examination of critical issues such as the potential collapse of the global thermohaline circulation that may occur on time scales of centuries in global warming scenarios. Computation increases by a factor of 10.

  • Increase the number of simulations in each ensemble of control runs or climate-change-scenario runs. Increase the number of climate-change scenarios investigated. These issues are both examples of perfectly parallel extensions of present-day simulations: Each instance of another scenario or ensemble member is completely independent of every other instance. Ensemble members are distinguished by small perturbations in their initial conditions, which are quickly amplified by the nonlinearity of the equations. The use of ensembles provides an important measure of the range of variability of the climate system. Computation increases by a factor of 10.24

24. Ibid.
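
Multiplying the factors listed above reproduces the overall estimate cited earlier:

\[
1{,}000 \times 100 \times 100 \times 10 \times 10 = 10^{9},
\]

that is, at least a billion-fold. The resolution factor of 1,000 corresponds roughly to a tenfold refinement in each horizontal direction combined with a tenfold increase in the number of time steps.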

Plasma Physics

A major goal of plasma physics research is to produce cost-effective, clean, safe electric power from nuclear fusion.25 Very large simulations of the reactions in advance of building the generating devices can save billions of equipment dollars. Plasmas comprise over 99 percent of the visible universe and are rich in complex, collective phenomena. Fusion energy, the power source of the Sun and other stars, occurs when forms of the lightest atom, hydrogen, combine to make helium in a very hot (~100 million degrees centigrade) ionized gas, or “plasma.” The development of a secure and reliable energy system that is environmentally and economically sustainable is a truly formidable scientific and technological challenge facing the world in the 21st century. This demands basic scientific understanding that can enable the innovations to make fusion energy practical. Fusion energy science is a computational grand challenge because, in addition to dealing with space and time scales that can span more than 10 orders of magnitude, the fusion-relevant problem involves extreme anisotropy; the interaction between large-scale fluidlike (macroscopic) physics and fine-scale kinetic (microscopic) physics; and the need to account for geometric detail. Moreover, the requirement of causality (inability to parallelize over time) makes this problem among the most challenging in computational physics.

Supercomputing resources can clearly accelerate scientific research critical to progress in plasma science in general and to fusion research in particular. Such capabilities are needed to enable scientific understanding and to cost-effectively augment experimentation by allowing efficient design and interpretation of expensive new experimental devices (in the multi-billion-dollar range). In entering the exciting new physics parameter regimes required to study burning fusion plasmas, the associated challenges include higher spatial resolution, dimensionless parameters characteristic of higher temperature plasmas, longer simulation times, and higher model dimensionality. It will also be necessary to begin integrating these models together to treat nonlinear interactions of different phenomena. Various estimates indicate that increases in combined computational power by factors of 1,000 to 100,000 are needed. Associated challenges include advancing computer technology, developing algorithms, and improving theoretical formulation—all of which will contribute to better overall time-to-solution capabilities.

25. This subsection is based on excerpts from the white paper “Plasma Science,” prepared by W.M. Tang, Princeton University, for the committee’s Santa Fe, N.M., applications workshop, September 2003.

Transportation

High-performance computing contributes to many aspects of transportation product engineering. It provides many benefits, such as reduced time to market, reduced requirements for physical prototypes, the ability to explore a larger design space, and a deeper understanding of vehicle behavior. The main problems addressed by high-performance computing include occupant safety (crash), noise, vibration, and harshness (NVH), durability, airflow, and heat transfer. These problems vary in time to solution from a few hours to days to weeks. The general goal is to achieve overnight turnaround times for all types of problems, which trades off the complexity of the models being run with the ability of engineers to utilize the results. The models need to have sufficient detail to provide a high degree of confidence in the accuracy of the results. Today’s machines are not fast enough to compensate for the scaling limitations of many of these problems.26

Transportation manufacturers drastically reduce their development expenses and time to market by replacing physical models and car crashes with virtual tests run on supercomputers. According to Bob Kruse, GM’s executive director for vehicle integration,27 supercomputing will enable his company to shorten its product development cycle from the 48 months of a few years ago to 15 months. The company is performing fewer rounds of vehicle prototyping, which has reduced engineering costs by 40 percent. Kruse went on to say that GM has eliminated 85 percent of its real-world crash tests since moving to modeling crashes on its supercomputer. In theory, the company could do away with its $500,000 crash tests, but the National Highway Traffic Safety Administration still requires final real-world crash testing.

There is a long history of using high-performance computing in the automotive industry (see Box 4.1). Automotive computer-aided engineering (CAE) may well be the largest private-sector marketplace for such systems. In the 1980s and early 1990s, automotive companies worldwide deployed the same Cray vector supercomputers favored by government laboratories and other mission agencies. This changed in the late 1990s, when government-funded scientists and engineers began migrating to distributed memory systems. The main CAE applications used in the automotive industry contain millions of lines of code and have proven very difficult to port to the distributed memory computational model. As a result, automotive companies have tended not to purchase capability systems in the last decade. Instead they have increased capacity and reduced their costs by replacing vector mainframes with shared memory multiprocessor (SMP) servers and, more recently, clusters of PCs.

26. Based on excerpts from the white paper “High Performance Computing in the Auto Industry,” by Vincent Scarafino, Ford Motors, prepared for the committee’s Santa Fe, N.M., applications workshop, September 2003.

27. John Gartner. 2004. “Supercomputers Speed Car Design.” Wired News. April 26.


BOX 4.1
Automotive Companies and Their Use of High-Performance Computers

One of the main commercial users of supercomputing is the automotive industry. The largest car manufacturers in the United States, Europe, and the Far East all use supercomputing in one form or another for the design and validation cycle. This market segment is called mechanical computer-aided engineering (MCAE).

The use of computing in the automotive industry has come about in response to (1) the need to shorten the design cycle and (2) advances in technology that enable such reduction. One advance is the availability of large-scale high-performance computers. The automotive industry was one of the first commercial segments to use high-performance computers. The other advance is the availability of third-party application software that is optimized to the architecture of high-performance computers. Both advances operate in other industries as well; the existence of third-party application software for MCAE, electrical CAD, chemistry, and geophysics has increased the market for high-performance computers in many industries.

Since the use of supercomputing is integrated within the overall vehicle design, the time to solution must be consistent with the overall design flow. This requirement imposes various time constraints. To be productive, designers need two simulation runs a day (one in the morning and one overnight) or three (one in the morning, one in the afternoon, and one overnight). To meet that need, typical computer runs must complete in 4 to 8 hours or, at most, overnight. In many situations, the fidelity of the input is matched to this requirement. As additional compute power is added, the fidelity of the models is increased and additional design features simulated.

Demand for computing doubles every year. One measure of demand is the size of the model. Current models process 1 million elements. Larger models are not run now for two reasons: (1) processor capability is not powerful enough to process more than 1 million elements with manageable time-to-solution characteristics—that is, the single job takes too long to complete subject to operational requirements—and (2) the companies do not have adequate tools such as visualization to help them understand the outputs from larger simulations.



Almost all software is supplied by third-party, independent software vendors (ISVs). There are several de facto standard codes that are used, among them the following:

  • MSC/NASTRAN (structural analysis). NASTRAN generally runs on one processor and is I/O bound. Large jobs are run with limited parallelization on small SMP systems (four to eight processors).

  • PAMCRASH/LS-DYNA/RADIOSS (crash analysis). These codes use modest degrees of parallelism, ranging from 12 to 100 processors in production automotive calculations today. At that scale, crash codes work well on clusters. While at least one of these codes has run on 1,024 processors,1 load imbalances limit the effective scaling of these codes for today’s automotive calculations.
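The way load imbalance caps the useful processor count can be seen with a simple cost model. The sketch below is not drawn from PAMCRASH, LS-DYNA, or RADIOSS; the serial fraction, imbalance factor, and communication cost it uses are assumptions chosen only to illustrate why efficiency falls off well before 1,024 processors.

```python
# Toy scaling model for an explicit crash-dynamics time step: a serial
# fraction, imperfectly balanced parallel work, and a per-step communication
# term. All constants are illustrative assumptions, not measurements from
# PAMCRASH, LS-DYNA, or RADIOSS.

import math

def step_time(procs, work=1.0, serial_frac=0.01, imbalance=1.3, comm=0.002):
    parallel = (work * (1.0 - serial_frac) / procs) * imbalance
    return work * serial_frac + parallel + comm * math.log2(procs)

reference = step_time(1, imbalance=1.0, comm=0.0)   # one perfectly used processor
for p in (12, 32, 100, 256, 1024):
    t = step_time(p)
    print(f"{p:5d} procs: speedup {reference / t:6.1f}, efficiency {reference / (t * p):5.1%}")
```

Under these assumed parameters, efficiency is already modest at 100 processors and drops to a few percent at 1,024, which is the qualitative behavior described above.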

In the past 10 years there has been considerable evolution in the use of supercomputing in the automotive industry. Ten years ago, CAE was used to simulate a design. The output of the simulation was then compared with the results from physical tests. Simulation modeled only one component of a vehicle—for example, its brake system—and only one “discipline” within that subsystem (for example, temperature, weight, or noise). There has been a transition to the current ability to do design verification—that is, a component is designed by human engineers but the properties of the design can be checked before the component is built and tested. In some cases multidisciplinary verification is possible. The longer-term goal is to automate the design of a vehicle, namely, to move from single subsystems to an integrated model, from single disciplines to multidisciplinary analysis, and from verifying a human design to generating the design computationally. Design definition will require optimization and first-order analysis based on constraints. Attaining this objective will reduce design cycle times and increase the reliability and safety of the overall design.

   

NOTE: The committee is grateful to Vince Scarafino, Ford Motor Company, for his assistance in developing this box. In addition, the committee thanks the several auto manufacturers who kindly provided anonymous input.

1. Roger W. Logan and Cynthia K. Nitta. 2002. "Verification & Validation (V&V) Methodology and Quantitative Reliability at Confidence (QRC): Basis for an Investment Strategy." DOE paper UCRL-ID-150874.

This is not to say that there is no longer a demand for supercomputers in the automobile industry. In March of 2000, Toyota purchased 30 VPP5000 vector processors from Fujitsu. At the time, this was arguably the most powerful privately owned system in the world.


As independent software vendor (ISV) CAE codes have matured to the point where they can effectively exploit hundreds of processors, automotive companies have responded by purchasing larger systems. GM recently announced the purchase of a large IBM system rated at 9 Tflops peak, which would place it within the top 20 systems in the June 2004 TOP500 list.28 Frank Roney, a managing director at IBM, said GM's supercomputer would most likely be the most powerful computer owned by a private company. In May 2004, the Japan Agency for Marine-Earth Science and Technology announced that it would make the Earth Simulator available to the Japanese Automobile Industry Association starting in summer 2004.29 According to a June 2004 report from Top500.org, automotive companies, including Ford, GM, Renault, VW, BMW, Opel, and Daimler Chrysler (three companies are anonymous), own 13 of the 500 fastest supercomputers in the world. The same report indicates that automakers dedicate nearly 50 percent of their supercomputing hours to crash test simulations.

Automotive engineers are continually expanding their computational requirements to exploit both available computing power and advances in software. Finite-element models of automobiles for crash simulation use mesh spacing of about 5 mm, resulting in problems that have as many as 1 million elements. The automotive engineering community would like to reduce the mesh size to 1 mm, resulting in 100 million elements. Today’s crash test models typically include multiple dummies, folded front and side airbags, and fuel in the tanks. Deployment of airbags and sloshing of fuel are modeled with CFD. Engineers in the future will expect CAE tools to automatically explore variations in design parameters in order to optimize their designs. John Hallquist of Livermore Software Technology Corporation believes that fully exploiting these advances in automotive CAE will require a seven-order-of-magnitude increase beyond the computing power brought to bear today.30 This would allow, among other things, much greater attention to occupant safety requirements, including aspects of offset frontal crash, side impact, out-of-position occupants, and more humanlike crash dummies.
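The jump from 5-mm to 1-mm meshes compounds quickly, because the element count and the explicit time-step restriction tighten together. The arithmetic below is a hedged sketch of that scaling argument under simple assumptions (a shell-dominated mesh whose element count grows with the square of the refinement and a stable time step that shrinks linearly with element size); it is not a reproduction of Hallquist's estimate.

```python
# Back-of-the-envelope scaling for crash-model mesh refinement. Assumes a
# shell-dominated mesh (element count ~ 1/h^2) and an explicit integrator
# whose stable time step shrinks linearly with the element size h.
# Illustrative only; real models also contain solids, airbags, and fluids.

def refinement_growth(h_old_mm, h_new_mm):
    ratio = h_old_mm / h_new_mm
    elements = ratio ** 2        # shell elements per unit of surface area
    timesteps = ratio            # Courant-type time-step restriction
    return elements, elements * timesteps

elements_today = 1.0e6           # roughly 1 million elements at 5 mm (from the text)
elem_growth, work_growth = refinement_growth(5.0, 1.0)
print(f"elements from refinement alone: {elements_today * elem_growth:.1e}")
print(f"work per simulation grows roughly {work_growth:.0f}x")
# The 100-million-element target in the text implies added geometric detail
# beyond pure refinement, and automated design-space exploration multiplies
# the total again, which is how estimates reach many orders of magnitude.
```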


28. Ibid.

29. Summary translation of an article from the Nihon Keizai newspaper, May 24, 2004, provided by the NSF Tokyo regional office.

30. Based on excerpts from the white paper "Supercomputing and Mechanical Engineering," by C. Ashcraft, R. Grimes, J. Hallquist, and B. Maker, Livermore Software Technology Corporation, prepared for the committee's Santa Fe, N.M., applications workshop, September 2003.


While the automotive industry has historically been the most aggressive user of supercomputers, supercomputing facilitates engineering in many other aspects of transportation. According to Ray Orbach, the DOE Office of Science's research accomplishments in transportation simulation have received accolades from corporations such as GE and GM.31 When the Office of Science met with the vice presidents for research of these and other member companies of the Industrial Research Institute, it learned, for example, that GE is using simulation very effectively to detect flaws in jet engines. If the engine flaws identified by simulation were to go undetected, the life cycle of those GE engines would be reduced by a factor of 2, causing GE a loss of over $100,000,000. The evaluation of a design alternative to optimize a compressor for a jet engine design at GE would require 3.1 × 10¹⁸ floating-point operations, or over a month at a sustained speed of 1 Tflops, which is near today's state of the art in supercomputing. To do this for the entire jet engine would require sustained computing power of 50 Tflops for the same period. This is to be compared with the many millions of dollars, several years, and many design-and-redesign cycles required for physical prototyping.32
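The time estimates quoted above follow directly from the operation count; a quick consistency check using only the figures given in the text:

```python
# Consistency check on the figures quoted above; the operation count and the
# sustained rates are taken from the text, and nothing else is assumed.

compressor_ops = 3.1e18        # floating-point operations for one compressor study
sustained = 1.0e12             # 1 Tflops sustained

seconds = compressor_ops / sustained
print(f"compressor study: about {seconds / 86400:.0f} days at 1 Tflops")   # ~36 days

full_engine_rate = 50e12       # 50 Tflops sustained for the same period
print(f"full engine: about {full_engine_rate * seconds:.1e} operations")   # ~1.6e20
```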

In summary, transportation companies currently save hundreds of millions of dollars using supercomputing in their new vehicle design and development processes. Supercomputers are used for vehicle crash simulation, safety models, aerodynamics, thermal and combustion analyses, and new materials research. However, the growing need for higher safety standards, greater fuel efficiency, and lighter but stronger materials demands dramatic increases in supercomputing capability that will not be met by existing architectures and technologies. Some of these problems are relatively well understood and would yield to more powerful computing systems. Other problems, such as combustion modeling inside pistons, are still open research challenges. Nevertheless, a supercomputing capability that delivered even 100 Tflops to these applications would save billions of dollars in product design and development costs in the commercial transportation sector.33

Bioinformatics and Computational Biology

The past two decades have witnessed the emergence of computation and information technology as arguably the most important disciplines for future developments in biology and biomedicine.34 The explanation of biological processes in terms of their underlying chemical reactions is one of the great triumphs of modern science and underlies much of contemporary medicine, agriculture, and environmental science. An exciting consequence of this biochemical knowledge is that computational modeling methods developed to study fundamental chemical processes can now, at least in principle, be applied to biology. Many profound biological questions, such as how enzymes exhibit both exquisite selectivity and immense catalytic efficiency, are amenable to study by simulation. Such simulations could ultimately have two goals: (1) to act as a strong validation that all relevant features of a biochemical mechanism have been identified and understood and (2) to provide a powerful tool for probing or reengineering a biochemical process.

31. Testimony of Raymond L. Orbach, Director, Office of Science, U.S. Department of Energy, before the U.S. House of Representatives Committee on Science, July 16, 2003.

32. Ibid.

33. Ibid.



Computation also is essential to molecular biology, which seeks to understand how cells and systems of cells function in order to improve human health, longevity, and the treatment of diseases. The sheer complexity of molecular systems, in terms of both the number of molecules and the types of molecules, demands computation to simulate and codify the logical structure of these systems. There has been a paradigm shift in the nature of computing in biology with the decoding of the human genome and with the technologies this achievement enabled. Equations-of-physics-based computation is now complemented by massive-data-driven computations, combined with heuristic biological knowledge. In addition to the deployment of statistical methods for data processing, myriad data mining and pattern recognition algorithms are being developed and employed. Finding multiple alignments of the sequences of hundreds of bacterial genomes is a computational problem that can be attempted only with a new suite of efficient alignment algorithms on a petaflops supercomputer. Large-scale gene identification, annotation, and the clustering of expressed sequence tags are other large-scale computational problems in genomics.
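To see why genome-scale alignment strains even very large machines, consider the textbook pairwise dynamic-programming alignment sketched below: its cost grows as the product of the two sequence lengths, and exact multiple alignment of k sequences grows exponentially with k. This is a generic Needleman-Wunsch illustration on toy strings, not one of the specialized alignment suites referred to above.

```python
# Textbook global alignment (Needleman-Wunsch) on toy strings. The score
# table has len(a) * len(b) cells, so aligning two 2-megabase bacterial
# genomes this way would need on the order of 4e12 cell updates per pair,
# and exact multiple alignment needs a k-dimensional table -- which is why
# genome-scale studies rely on specialized heuristic algorithms instead.

def align_score(a, b, match=1, mismatch=-1, gap=-2):
    cols = len(b) + 1
    prev = [j * gap for j in range(cols)]            # first row: all gaps
    for i in range(1, len(a) + 1):
        curr = [i * gap] + [0] * (cols - 1)
        for j in range(1, cols):
            diag = prev[j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            curr[j] = max(diag, prev[j] + gap, curr[j - 1] + gap)
        prev = curr
    return prev[-1]

print(align_score("GATTACA", "GCATGCT"))
```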

In essence, computation in biology will provide the framework for understanding the flow of information in living systems. Some of the grand challenges posed by this paradigm are outlined below, along with the associated computational complexity:

34. This subsection is based in part on excerpts from the white papers "Quantum Mechanical Simulations of Biochemical Processes," by Michael Colvin, LLNL, and "Supercomputing in Computational Molecular Biology," by Gene Myers, UC Berkeley, both prepared for the committee's Santa Fe, N.M., applications workshop, September 2003.

  • Deciphering the genome continues to be a challenging computational problem. One of the largest Hewlett-Packard cluster systems ever developed was used to assemble the human genome. In the annotation of the genome (assigning functional roles), the computation can be extraordinarily complex. Multiple genome comparisons, which are practically impossible with current computers, are essential and will constitute a significant challenge in computational biomedicine for the future.

  • There are typically a few hundred cell types in a mammal, and each type of cell has its own repertoire of active genes and gene products. Our understanding of human diseases relies heavily on figuring out the intracellular components and the machinery formed by the components. The advent of DNA microarrays has provided us with a unique ability to rapidly map the gene expression profiles in cells experimentally. While analysis of a single array is not a supercomputing problem, the collective analysis of a large number of arrays across time or across treatment conditions explodes into a significant computational task.

  • Genes translate into proteins, the workhorses of the cell. Mechanistic understanding of the biochemistry of the cell involves intimate knowledge of the structure of these proteins and details of their function. The number of genes from various species is in the millions, and experimental methods have no hope of resolving the structures of the encoded proteins. Computational modeling and prediction of protein structures remain the only hope. This problem, called the protein-folding problem, is regarded as the holy grail of biochemistry. Even when knowledge-based constraints are employed, this problem remains computationally intractable with modern computers.

  • Computer simulations remain the only approach to understanding the dynamics of macromolecules and their assemblies. Early simulations were restricted to small macromolecules. In the past three decades, our ability to compute has helped us to understand large macromolecular assemblies like membranes for up to tens of nanoseconds. These simulations, whose cost scales as N², are still far from capable of calculating the motions of hundreds of thousands of atoms for biologically measurable time scales (a rough cost estimate follows this list).

  • Understanding the characteristics of protein interaction networks and protein-complex networks formed by all the proteins of an organism is another large computational problem. These networks are small-world networks, where the average distance between two vertices in the network is small relative to the number of vertices. Small-world networks also arise in electric power networks, in semantic networks for intelligence analysis, and in models of the Web; understanding the nature of these networks, many with billions of vertices and trillions of edges, is critical to making them invulnerable to attacks. Simulations of small-world networks fall into three categories: topological, constraint-driven, and dynamic. Each of these categories involves complex combinatorial, graph theoretic, and differential equation solver algorithms and challenges any supercomputer. Current algorithmic and computational capabilities will not be able to address the computational needs of even the smallest microorganisms, such as Haemophilus influenzae. There is an imminent need for the development of novel methods and computing technology.



  • The achievement of goals such as a cure for cancer and the prevention of heart diseases and neurovascular disorders continues to drive biomedicine. The problems involved were traditionally regarded as noncomputational or at most minimally computational. However, with today's knowledge of the genome and intracellular circuitry, we are in a position to carry out precise and targeted discovery of drugs that, while curing the pathology, will only minimally perturb normal function. This is rapidly emerging as a serious computational task and will become the preeminent challenge of biomedicine.

  • Much of our knowledge of living systems comes from comparative analysis of living species. Phylogenetics, the reconstruction of historical relationships between species or individuals, is now intensely computational, involving string and graph algorithms. In addition to being an intellectual challenge, this problem has a significant practical bearing on bioterrorism. Computation is the fastest and currently the only approach to rapidly profiling and isolating dangerous microorganisms.
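As a rough illustration of the N² barrier noted in the list above, the sketch below estimates the cost of advancing an all-pairs molecular dynamics model. The operations-per-pair count and the 1-femtosecond time step are assumptions chosen only to show orders of magnitude; production codes use cutoffs and fast summation methods to do better.

```python
# Order-of-magnitude cost of all-pairs (N^2) molecular dynamics. The
# operations-per-pair count and the femtosecond time step are assumptions
# for illustration only.

def md_flops(n_atoms, simulated_seconds, ops_per_pair=50, dt=1e-15):
    steps = simulated_seconds / dt
    pairs = n_atoms * (n_atoms - 1) / 2.0
    return steps * pairs * ops_per_pair

tflops = 1e12                                     # 1 Tflops sustained
ten_ns = md_flops(1e5, 10e-9)                     # 100,000 atoms for 10 ns
one_ms = md_flops(1e5, 1e-3)                      # the same system for 1 ms

print(f"10 ns: {ten_ns:.1e} ops, about {ten_ns / tflops / 86400:.0f} days at 1 Tflops")
print(f" 1 ms: {one_ms:.1e} ops, about {one_ms / tflops / 86400 / 365:.0f} years at 1 Tflops")
```

Reaching tens of nanoseconds already costs weeks at 1 Tflops under these assumptions, while a biologically interesting millisecond is thousands of years away, which is the gap described above.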

In conclusion, we are at the threshold of a capability to perform predictive simulations of biochemical processes that will transform our ability to understand the chemical basis of biological functions. In addition to its value to basic biological research, this will greatly improve our ability to design new therapeutic drugs, treat diseases, and understand the mechanisms of genetic disorders.

Societal Health and Safety

Computational simulation is a critical tool of scientific investigation and engineering design in many areas related to societal health and safety, including aerodynamics; geophysics; structures; manufacturing processes with phase change; and energy conversion processes. Insofar as these mechanical systems can be described by conservation laws expressed as partial differential equations, they may be amenable to analysis using supercomputers. Trillions of dollars of economic output annually and the health and safety of billions of people rest on our ability to simulate such systems.

Incremental improvements in the accuracy and reliability of simulations are important because of huge multipliers. A very small (perhaps even 1 percent) improvement in the efficiency of heat exchangers or gas turbines could have a significant impact on the global environment and economy when aggregated over the lifetime of many such devices.35



The problem of monitoring air, water, and other utility networks has gained prominence in the wake of terrorist events like Tokyo's subway incident and London's poison gas bomb plot. One example of a computational problem of this type is optimizing the placement of sensors in municipal water networks to detect contaminants injected maliciously. Traditionally, this type of problem was studied using numerical simulation tools to see how a water supply network is affected by the introduction of a contaminant at a given point. Recently, combinatorial optimization formulations have been proposed to compute optimal sensor locations. Optimal sensor placement is desirable to ensure adequate coverage of the network's flow for detection and remediation of contaminants. The objective of one model is to minimize the expected fraction of the population that is at risk from an attack. An attack is modeled as a single injection of a large volume of harmful contaminant at a single point in the network. For any particular attack, it is assumed that all points downstream of the release point can be contaminated. In general, one does not know a priori where an attack will occur, so the objective is to place sensors to provide a compromise solution across all possible attack locations. Depending on the size of the water network, the amount of computation needed can be extremely large and can certainly require supercomputing performance for timely results, especially in an emergency.36
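A common way to attack such formulations is a greedy heuristic: repeatedly add the sensor location that protects the most additional people, averaged over the possible attack points. The sketch below is a deliberately tiny version of that idea on a hypothetical five-node network; the network, the populations, and the simplified objective are invented for illustration, and real studies involve thousands of junctions, time-varying hydraulics, and far more sophisticated optimization.

```python
# Tiny greedy illustration of sensor placement in a water network. The
# network, populations, and objective (expected population protected when an
# attack is equally likely at any node and detection protects everyone
# downstream) are hypothetical simplifications for illustration only.

downstream = {                  # node -> the set of nodes its water reaches
    "A": {"A", "B", "C", "D", "E"},
    "B": {"B", "D"},
    "C": {"C", "E"},
    "D": {"D"},
    "E": {"E"},
}
population = {"A": 10, "B": 40, "C": 30, "D": 50, "E": 20}

def protected(sensors):
    """Expected population protected, averaged over equally likely attacks."""
    total = 0.0
    for attack, reach in downstream.items():
        if reach & sensors:                    # some sensor sees this attack
            total += sum(population[v] for v in reach)
    return total / len(downstream)

def greedy_place(k):
    sensors = set()
    for _ in range(k):
        best = max(downstream, key=lambda s: protected(sensors | {s}))
        sensors.add(best)
    return sensors

chosen = greedy_place(2)
print(chosen, protected(chosen))               # e.g., {'D', 'E'} covers every attack node
```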

Earthquakes

An important application in geophysical exploration is earthquake modeling and earthquake risk mitigation. When an earthquake occurs, some areas the size of city blocks are shaken, while other areas remain stable. This effect is caused by the focusing or deflection of seismic waves by underground rock structures. If the underground rock structure of an area in an earthquake-prone region could be simulated or imaged, damage mitigation strategies could include simulating many typical earthquakes, noting which areas are shaken, identifying the dangerous areas, and avoiding building on them.

35. Based on excerpts from the white paper "Supercomputing for PDE-based Simulations in Mechanics," by David Keyes, Columbia University, prepared for the committee's Santa Fe, N.M., applications workshop, September 2003.

36. Based on excerpts from the white paper "Supercomputing and Discrete Algorithms: A Symbiotic Relationship," by William Hart, Bruce Hendrickson, and Cindy Phillips, Sandia National Laboratories, prepared for the committee's Santa Fe, N.M., applications workshop, September 2003.



Using forward simulation, one can match seismic simulation results with observed seismographic data. An image of the underground rock in a region can then be deduced by repeatedly running forward simulations and reducing their mismatch with the observations using adjoint methods.37

Current earthquake simulation codes running at the California Institute of Technology and the Pittsburgh Supercomputing Center use frequencies up to 1 Hz, which equates to a resolution of several miles of rock. Seismographs can collect data up to 20 Hz or more, which yields a resolution of hundreds of feet of rock. This is a useful resolution for risk mitigation, since buildings are hundreds of feet in size. However, the computing power needed to process such data is on the order of 1 exaflops, or 1,000 Pflops (25,000 times the power of the Earth Simulator). For useful earthquake risk mitigation, the algorithms exist, the codes are written and debugged, and the input data exist. The consequence of not proceeding is continued loss of life and extensive property damage in earthquake-prone regions of the world.38
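The exaflops figure follows from how wave-propagation cost grows with frequency. A common rule of thumb, used in the sketch below, is that the grid must resolve the shortest wavelength in all three dimensions and the time step must shrink in proportion, so the work grows roughly as the fourth power of the frequency; the sustained rate assumed for today's 1-Hz runs is an illustrative guess, not a benchmark of the codes named above.

```python
# Rough scaling of seismic wave-propagation cost with source frequency.
# Assumes grid spacing proportional to wavelength (points grow as f^3) and a
# time step that shrinks linearly with f, so work grows as f^4. The baseline
# sustained rate for today's 1-Hz runs is an assumption for illustration.

baseline_hz = 1.0
target_hz = 20.0
baseline_sustained = 5e12                  # assumed ~5 Tflops for current runs

growth = (target_hz / baseline_hz) ** 4    # 160,000x more work
required = baseline_sustained * growth
print(f"work grows {growth:,.0f}x; roughly {required:.0e} flops sustained needed")
# -> on the order of 1e18 flops (an exaflops), consistent with the text
```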

Geophysical Exploration and Geoscience

The simulation of petroleum reservoirs is a large consumer of supercomputing resources in this application area.39 All of the major oil companies simulate petroleum reservoirs to predict future oil and gas production from the subsurface of Earth, where porous sandstone or limestone formations may hold oil and gas. Predictions are made using differential equations that represent flow in porous media in three dimensions. In addition to the simple case of flow of oil, water, and gas in the reservoirs, it is often necessary to include the phase behavior of multicomponent hydrocarbon fluids for enhanced-recovery processes and/or thermal effects for steam injection or in situ combustion recovery techniques. The overall goal of the simulations is to maximize hydrocarbon liquid and gas recovery and net present value.

37. Erik P. DeBenedictus. 2004. "Completing the Journey of Moore's Law." Presentation at the University of Illinois, May 5.

38. Ibid.

39. This subsection is based on excerpts from the white paper "High Performance Computing and Petroleum Reservoir Simulation," by John Killough, Landmark Graphics Corporation, prepared for the committee's Santa Fe, N.M., applications workshop, September 2003.



The motivation for using supercomputing in reservoir simulation has always existed. From the earliest simulation models, computing resources have been severely taxed simply because the level of complexity desired by the engineer almost always exceeded the speed and memory of the hardware. The high-speed vector processors of the late 1970s and early 1980s brought orders-of-magnitude improvements in the speed of computation and enabled production models of several hundred thousand cells. The relief brought by these models was short lived. The desire to capture more of the physics through compositional modeling and the introduction of geostatistically/structurally based geological models increased computational complexity well beyond even the large-scale models of the vector processors. Tens of millions of cells with complete reservoir parameters became available for use by the engineer. Although upscaling, or lumping, provided a tool to dramatically reduce model sizes, the inherent assumptions of the upscaling techniques left the engineer with a strong desire to incorporate all of the available data in studies.

Scientific studies of Earth’s interior are heavily dependent on supercomputer power. Two examples are illustrative. One is the geodynamo—i.e., an understanding of how Earth’s magnetic field is generated by complicated magnetohydrodynamic convection and turbulence in its outer core, a long-standing grand challenge in fluid dynamics. Supercomputer simulations have enabled major breakthroughs in the last decade, including the first self-consistent dynamo solution and the first simulated magnetic reversal, both of which occurred in 1995. However, these simulated dynamos are still many orders of magnitude away from the “correct” parameter range. The second example comes from the need to understand the dynamics of Earth’s plate tectonics and mantle convection, which drives continental drift, mountain building, etc. To do this simulation properly requires incorporating the correct multirheological behavior of rocks (elastic, brittle, viscous, plastic, history-dependent, and so forth), which results in a wide range of length scales and time scales, into a three-dimensional, spherical model of the entire Earth, another grand challenge that will require substantially more computing power to address.40

40. For more information, see <http://sdcd.gsfc.nasa.gov/ESS/olson.finalreport/final_report.html>. A more general article is P.J. Tackley, J.R. Baumgardner, G.A. Glatzmaier, P. Olson, and T. Clune, 1999, "Three-Dimensional Spherical Simulations of Convection in Earth's Mantle and Core Using Massively-Parallel Computers," Advanced Simulations Technologies Conference, San Diego, pp. 95-100.

Astrophysics

Observation has always been fundamental to astronomy, but controlled experiments are extremely rare.41 Thus, astronomical computer simulations have assumed the traditional scientific role of controlled experiments by making it possible to test scenarios when the underlying physical laws are known. Observations still provide a check, but they show the results of processes that cannot be controlled in a laboratory. Furthermore, the evolutionary time scales for most astronomical systems are so long that these systems seem frozen in time. Constructing evolutionary models purely from observation is therefore difficult. By observing many different systems of the same type (e.g., stars or galaxies), we can see many different stages of development and attempt to put them into a logical order, but we cannot watch a single system evolve. A supercomputer simulation is usually required to provide the evolutionary model that ties the different observed stages together using known physical laws and properties of matter.

Stellar evolution theory provides an excellent example of why astrophysicists have been forced to rely on computer simulation. Although one can perform laboratory experiments to determine the properties of the gaseous constituents in a star like the Sun, one cannot build an experimental star in the laboratory and watch it evolve. That must be done by computer simulation. Although one can make some simple arguments and estimates without using a computer, the physics involved in stellar evolution theory is complex and nonlinear, so one does not get very far in developing the theory without a computer.

Supercomputing power can be used to literally add a spatial dimension, turning a two-dimensional simulation of a supernova explosion into a three-dimensional simulation, or it can be used to add treatments of new and important phenomena into a simulation. For example, magnetic fields could be added to global simulations of solar convection to address the operation of the dynamo that drives the sunspot cycle. For some problems, such as the development of large-scale structure in the expanding universe, simply getting more of the system under study into the computational problem domain by dramatically increasing the size of the computational grid should have a significant impact on scientific discovery. Alternatively, one might choose to simulate the same size system, using supercomputing power to treat structures on a much wider range of length and time scales. An excellent example is the cosmological problem, since it contains scales of interest ranging from that of a single star to that of a large cluster of galaxies.

41. This subsection is based on excerpts from the white paper "Future Supercomputing Needs and Opportunities in Astrophysics," by Paul Woodward, University of Minnesota, prepared for the committee's Santa Fe, N.M., applications workshop, September 2003.



Physicists trying to determine whether our universe will continue to expand or eventually collapse have gathered data from dozens of distant supernovae. By analyzing the data and simulating another 10,000 supernovae on supercomputers at NERSC, they have concluded that the universe is expanding—and at an accelerating rate.42

Materials Science and Computational Nanotechnology

The emerging field of computational materials science examines the fundamental behavior of matter at atomic to nanometer length scales and at picosecond to millisecond time scales in order to discover novel properties of bulk matter for numerous important practical uses.

The predictive methods take the form of first-principles electronic structure molecular dynamics (FPMD) and quantum Monte Carlo (QMC) techniques for the simulation of nanomaterials. The QMC methods are highly parallel across multiple processors but require high bandwidth to local memory, whereas the FPMD methods are demanding of both local and global bandwidth. The computational requirements of a materials science problem typically grow as the cube of the number of atoms in a simulation, even when the newest and best computational algorithms are used—making the area an almost unlimited consumer of future increases in computer power. The most beneficial simulations in terms of practical applications require large numbers of atoms and long time scales—far more of both than are presently possible. For example, FPMD simulations are currently limited to a few hundred atoms for a few picoseconds. Realizing the promise of revolutionary materials and processes will routinely require several petaflops of computing power in the not too distant future.
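The cubic growth quoted above compounds quickly. The sketch below takes the present-day baseline from the text (a few hundred atoms for a few picoseconds) and asks what a larger, longer simulation would cost under O(N³) scaling; the target problem size and duration are illustrative assumptions, not benchmarks of any particular code.

```python
# How first-principles molecular dynamics cost grows under O(N^3) scaling.
# The baseline (a few hundred atoms for a few picoseconds) comes from the
# text; the target size and duration are illustrative assumptions.

base_atoms, base_ps = 200, 2.0
target_atoms, target_ps = 10_000, 1_000.0      # 10,000 atoms for 1 nanosecond

per_step = (target_atoms / base_atoms) ** 3    # electronic-structure work
steps = target_ps / base_ps                    # additional time steps to cover
print(f"per-step work: {per_step:,.0f}x, steps: {steps:,.0f}x, "
      f"total: {per_step * steps:.1e}x today's computation")
```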

As the Committee on the Future of Supercomputing heard in numerous presentations during its site visits, computational materials science is now poised to explore a number of areas of practical importance. Algorithms are well tested that will exploit 100 to 1,000 times the computing power available today. Materials scientists in a number of universities as well as in DOE laboratories are already targeting the largest future configurations of Cray X1 and IBM Blue Gene/L in order to advance their applications.

42. Testimony of Raymond L. Orbach, Director, Office of Science, U.S. Department of Energy, before the U.S. House of Representatives Committee on Science, July 16, 2003.



The promise of new materials and processes covers a wide variety of economically important areas. Among the most important are these:

  • Better electronic equipment. Materials with superconducting properties are most useful when they can function at temperatures well above absolute zero. The negligible power loss of superconductors makes them ideal for constructing a range of devices from MRI machines to microprocessors, when cooling can be provided by relatively inexpensive liquid nitrogen (as opposed to more expensive liquid helium systems). A computational search is well under way for superconductors with higher critical temperatures than substances already found in the laboratory.

  • Improved power transmission. It is possible that computational methods will discover synthetic materials with much better conducting properties at room temperatures than those presently available. The possibility of nearly loss-free power transmission has major economic implications. Even supercomputing itself would benefit greatly.

  • High-density data storage. Some supercomputing applications will require magnetic storage densities of terabits per square inch in the relatively near future. The information will need to be stored in nanometer-scale particles or grains. A detailed understanding of the magnetism in nanometer particles will have to come from computational studies that will be validated with selected experiments. This is a new way to approach the science involving magnetic storage and constitutes a major opportunity for petaflops-scale computing.43

  • Photoelectric devices. In selective-light-absorbing materials for solar energy, for photothermal energy conversion, or for optical sensors, the active semiconductor particles will contain millions of atoms to ensure sharp enough lines. With clever techniques exploiting special features to reduce the computational burden, the optical properties of such particles can be accurately evaluated, and even charging effects from electron excitations can be accounted for. Such calculations can now be performed only by using very large allocations of time on the most powerful computers available in the United States. To be useful for designing new structures and devices, such simulations need to be run almost routinely for configurations that do not have the special features currently being exploited.44

43. Thomas Schulthess. 2004. "Ab-initio Monte Carlo for Nanomagnetism." ORNL White Paper.

44. "Accelerating the Revolution in Computational Materials Science," 2002, <http://www.ultrasim.info/doe_docs/acc_mat_sci.pdf>.

  • Electric motors. Scientists have recently achieved breakthrough quantum mechanical simulations of magnetic moments at high temperatures. Such simulations were limited to a few thousand atoms of pure iron. Understanding more complex substances is the key to designing materials for stronger magnets in order to build more efficient and powerful electrical generators and motors. For simulations to accurately model the dynamics of magnetic domains in more complex materials, much larger simulation sizes will be required. Award-winning algorithms of high quality exist, so the issue now is having a computing platform capable of sustaining the level of computation necessary to carry out the science.45

  • Catalysts. The U.S. chemical, biochemical, and pharmaceutical industries are the world's largest producers of chemicals, ranging from wonder drugs to paints to cosmetics to plastics to new, more efficient energy sources. A key ingredient in nearly all such industrial processes is a type of chemical called a catalyst. The true computational design of practical catalysts for industrial and commercial applications will require the ability to predict, at the molecular level, the detailed behavior of the large, complex molecules and materials involved in catalytic processes. This level of detail is not available from experiments, and obtaining it by simulation is not feasible on currently available computer hardware. For example, simulating the platinum catalyst in a car's catalytic converter requires a model with hundreds to tens of thousands of platinum atoms. A realistic simulation of the actual process in a car engine would take decades on today's computer hardware. The design of new catalysts simply cannot wait this long if the U.S. chemical and pharmaceutical industries are to remain competitive. New computational capabilities will revolutionize the chemical industry, turning the art of catalysis creation into the science of catalyst design.46

  • Bioengineering. Within the biology arena, the use of supercomputers will enable microscopic modeling of DNA repair mechanisms and drug/DNA interactions, effectively bringing quantum simulations into the realm of biology. In particular, nearly exact QMC results will represent valuable theoretical benchmarks that may help overcome some of the current limitations of experimental biology.47

45. Ibid.

46. "Computational Design of Catalysts: Building the Science Case for Ultrascale Simulations," 2002, <http://www.ultrasim.info/doe_docs/catalysis_redux2.pdf>.

47. F. Gygi, G. Galli, J.C. Grossman, and V. Bulatov. 2002. "Impact of Earth-Simulator-Class Computers on Computational Nanoscience and Materials Science." DOE Ultrascale Simulation White Paper.


In summary, computational materials science is emerging as an important factor in providing the designer materials and processes that will underlie the economic progress of the nation in the coming decades. Simulating the complexity of large numbers of atoms and molecules over increasingly long time periods will challenge supercomputers of petaflops power and beyond.

Human/Organizational Systems Studies

The study of macroeconomics and social dynamics is amenable to simulation on supercomputers. In such applications, the behavior of large human populations is simulated in terms of the overall effect of decisions by hundreds of millions of individuals. The simulations can model physical or social structures with hundreds of thousands, or maybe even millions, of actors interacting with one another in a complex fashion. Supercomputing makes it possible to test different interactor (or interpersonal) relations to see what macroscopic behaviors can ensue. Simulations can also help determine the nature of the fundamental forces or interactions between actors. Some logistical examples include airline crew scheduling, inventory management, and package delivery scheduling (the FedEx problem).48

Sociotechnical systems of 10⁶ to 10⁹ agents (people, packets, commodities, and so on) with irregular interactions on time scales of seconds to years can be simulated using supercomputers at institutions like Los Alamos National Laboratory. However, the customers for such simulations are often organizations such as metropolitan planning offices, which do not generally have access to sophisticated supercomputing systems and therefore are limited to manipulating the amount of data that can be handled by COTS technology such as Linux clusters. Over the coming years, researchers will expand existing simulations of transportation, electricity distribution and markets, epidemiology, and mobile telecommunications on scales ranging from a city the size of Portland, Oregon (1.6 million people) to the national scale. Sociotechnical simulations in the future will require coupling many large, heterogeneous, irregular simulation systems, which will require advanced supercomputing power to accomplish.49
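To give a flavor of what such agent-based simulations involve, the toy sketch below runs a simple contagion process over a random contact network. It is not one of the Los Alamos simulation systems; the network model, transmission probability, and recovery time are invented solely to suggest why tracking 10⁶ to 10⁹ interacting agents, coupled to transportation and utility models, quickly outgrows commodity clusters.

```python
# Toy contagion on a random contact network. Everything here (network,
# transmission probability, recovery time) is invented for illustration;
# real sociotechnical simulations couple transportation, utilities, and
# behavior for millions to billions of agents.

import random

random.seed(1)
N = 10_000                                    # agents; real studies use 1e6 to 1e9
contacts = [[random.randrange(N) for _ in range(8)] for _ in range(N)]

state = ["S"] * N                             # susceptible / infected / recovered
days_sick = [0] * N
for seed in random.sample(range(N), 5):
    state[seed] = "I"

for day in range(60):
    newly_infected = []
    for person in range(N):
        if state[person] != "I":
            continue
        days_sick[person] += 1
        if days_sick[person] > 7:             # recover after about a week
            state[person] = "R"
            continue
        for other in contacts[person]:
            if state[other] == "S" and random.random() < 0.05:
                newly_infected.append(other)
    for person in newly_infected:
        state[person] = "I"

print(N - state.count("S"), "of", N, "agents were ever infected")
```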

48. Testimony of Raymond L. Orbach, Director, Office of Science, U.S. Department of Energy, before the U.S. House of Representatives Committee on Science, July 16, 2003.

49. Based on excerpts from the white paper "The Future of Supercomputing for Sociotechnical Simulation," by Stephen Eubank, LANL, prepared for the committee's Santa Fe, N.M., applications workshop, September 2003.


PROJECTED COMPUTING NEEDS FOR APPLICATIONS

The scientific and engineering applications that use supercomputing are diverse both in the nature of the problems and in the nature of the solutions. Most of these applications have unsatisfied computational needs. They were described in expert briefings to the committee as computing-limited at present and very much in need of 100 to 1,000 times more computing power over the next 5 to 10 years. Increased computing power would be used in a variety of ways:

  • To cover larger domains, more space scales, and longer time scales;

  • To solve time-critical problems (e.g., national security ones) in shorter times;

  • To include more complete physics and/or biogeochemistry;

  • To use more sophisticated mathematical algorithms with desirable linear scaling; and

  • To add more components to models of complex systems.

Various experts made estimates of the long-range computing power needed for their disciplines in units of petaflops. Most of the application areas discussed would require a minimum sustained performance of 10 Pflops to begin to solve the most ambitious problems and realize practical benefits. To move toward a full solution of these problems would require capabilities of 100 Pflops and beyond.

The overall computing style in important application areas appears to be evolving toward one in which community models are developed and used by large groups. The individual developers may bring diverse backgrounds and expertise to modeling a complex natural system such as the climate system or to a daunting engineering effort like the development of a fusion power generator. In addition, the applications are moving toward first-principles methods, in which basic physical and biochemical relations are used as much as possible instead of ad hoc parameterizations involving approximations and poorly known constants. Both trends will greatly increase the amount of computing power required in various applications.

A common computational characteristic is the demand for both capacity and capability. Typically, each disciplinary area does many smaller simulations and parameter studies using machine capacity prior to large simulations that require machine capability, followed by analysis studies that use capacity. Many application areas could each use at least one large computing center almost continuously to attack multiple problems in this way.

Another computational characteristic is that each application area has a rather high degree of problem complexity. There may be multiple time and space scales, different component submodels (e.g., magnetic, hydrodynamic, or biochemical), different types of equations (e.g., nonlinear partial differential equations and ordinary differential equations), and different algorithms (spectral, finite-difference, finite-element, algebraic) covering a range of problems being studied in each area.



It is clear from the summary above that a 1,000-fold increase in computing power is needed almost immediately and a 1,000,000-fold increase will ultimately be needed by the current major applications. Some of this increase can be expected on the basis of Moore's law and greater numbers of processors per machine. Any increase in raw computing power (flops) will have to be accompanied by larger memories to accommodate larger problems, and internal bandwidth will have to increase dramatically. As problems become more data-oriented, more effective parallel I/O to external storage will be needed, and the external devices will themselves have to be larger than today's disks and mass storage systems.

Table 4.1 summarizes six supercomputing system bottlenecks that often limit performance on important applications and gives examples of the applications. It should be noted that the limitations/bottlenecks in application areas are heavily dependent on the problem-solving strategies and the algorithms used.


TABLE 4.1 Six Limitations of Supercomputing Systems

Limitation/Bottleneck: Typical Areas of Application

  • Floating-point performance: Astrophysics, defense radar cross-sections, climate modeling, plasma physics

  • Memory size: Intelligence, materials science, genomics, automobile noise, vibration, and harshness

  • Memory bandwidth: Intelligence, climate modeling, materials science, astrophysics, biological systems modeling

  • Memory latency: Intelligence, nuclear simulation, climate modeling, astrophysics, biological systems modeling

  • Interconnect bandwidth: Intelligence, climate modeling, materials science, astrophysics, biological systems modeling

  • Interconnect latency: Intelligence, nuclear simulation, climate modeling, astrophysics, biological systems modeling


The ability of applications to be mapped onto hardware effectively is critically dependent on the software of the overall system, including both the operating system and the compilers. Application programmers and users will need software that exploits the features of any given machine without heroic efforts on the programmer's part. Ideally, software should promote effective parallel processor usage and efficient memory use while hiding many of the details. It should also allow portability of well-designed application programs between different machine architectures, handle dynamic load balancing, and provide fault tolerance.

There is also a need to deal better with locality while maintaining some form of global addressing, in a way that compilers and run-time systems can map efficiently onto diverse hardware architectures. For lack of alternatives, many supercomputing applications are written in Fortran 90 and C. The use of High-Performance Fortran (HPF) on the Earth Simulator is one of only a few examples of using higher-level programming languages with better support for parallelism. More versatile, higher-level languages would need to exploit architectures efficiently in order to attract the critical mass of users needed to sustain them and their further development. For memory access beyond an individual processor, most communication between and even within nodes uses MPI and sometimes OpenMP, again because of the lack of other choices. Many of the application areas are hampered by the software overheads of existing methods and would benefit significantly from more efficient tools to maximize parallel utilization with minimal programming effort. Chapter 5 discusses the hardware and software issues from a technology perspective.
