Algorithms

For the simulation of large-scale phenomena, such as cosmology, star formation, or gravitational wave formation and propagation, the use of multiresolution methods for particles (hierarchical N-body methods) or fields (adaptive mesh refinement, AMR) is a core capability on which much of the current success is built. However, multiresolution methods are mature to varying degrees, depending on the level of model complexity. AMR for magnetohydrodynamics or for Einstein's equations of general relativity, for example, is currently undergoing rapid development, while for coupled radiation and matter, or for general relativistic fluid dynamics, such methods are still in their infancy. Radiation is particularly difficult owing to the need to solve time-dependent problems in six-dimensional phase space. Supernova simulations present an additional set of difficulties. For instance, while for much of the simulation the star is spherically symmetric on the largest scales, it has asymmetric three-dimensional motions on the small scales. Preserving that large-scale symmetry requires new gridding methodologies, such as moving multiblock grids combined with local refinement. A second difficulty is stiffness, arising from the reaction kinetics of thermonuclear burning or from low-Mach-number fluid flows. New algorithms will need to be developed to integrate over the fast timescales efficiently, without loss of accuracy or robustness.
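
The stiffness issue mentioned above can be illustrated with a minimal sketch: an implicit (backward Euler) step for a toy single-species burning equation dX/dt = -k*X^2, where an explicit method would be restricted to time steps of order 1/(k*X). The rate law, the rate constant, and the step size below are purely illustrative and are not drawn from any production reaction network.

    def backward_euler_burn(X, dt, rate, drate_dX, tol=1.0e-12, max_iter=50):
        # One implicit (backward Euler) step for the stiff toy equation
        # dX/dt = -rate(X), solved with Newton iteration on the residual
        #   f(Xn) = Xn - X + dt * rate(Xn) = 0.
        X_new = X
        for _ in range(max_iter):
            f = X_new - X + dt * rate(X_new)
            fprime = 1.0 + dt * drate_dX(X_new)
            delta = f / fprime
            X_new -= delta
            if abs(delta) < tol:
                break
        return X_new

    # Toy rate law R(X) = k * X**2 with a large, purely illustrative rate constant,
    # making the problem stiff: an explicit step would need dt << 1/(k*X) ~ 1e-6.
    k = 1.0e6
    rate = lambda X: k * X * X
    drate_dX = lambda X: 2.0 * k * X

    X, dt = 1.0, 1.0e-3          # implicit step ~1000x the explicit stability limit
    for _ in range(10):
        X = backward_euler_burn(X, dt, rate, drate_dX)
    print(f"fuel mass fraction after 10 steps: {X:.3e}")

In a real code the same idea applies to a coupled network of many species, with the scalar Newton update replaced by a linear solve against the network Jacobian.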

Data Analysis and Management

For data-intensive fields like astronomy and astrophysics, the potential impact of HECC is felt not just in the power it provides for simulations but also in the capabilities it provides for managing and making sense of data, whether those data are generated by simulations or collected through observations. The amount, complexity, and rate of generation of scientific data are all increasing exponentially. Major Challenges 1 and 2 in Chapter 2 (the nature of dark matter and dark energy) will probably be addressed most productively through observation. But whether data are collected through observation or generated with a computer, managing and exploiting data sets at this scale depends critically on HECC. The specific problems stemming from massive amounts of data are discussed in Chapter 2.

Software Infrastructure

There are three drivers for the development of software infrastructure for astrophysics in the long term. The first is the expected radical change in computer hardware. Gains in aggregate performance are expected to come mainly from increasing the level of concurrency rather than from a balanced combination of increases in the clock speeds of the constituent processors and increases in the number of processors. Only a small fraction of the algorithms of importance to astrophysics have been shown to scale well to 10^3-10^4 processors. Thus high-end systems requiring the effective use of 10^8 processors and 10^9 threads represent an enormous challenge. The second driver is the nonincremental nature of the model and algorithm changes described above. Development of optimal codes will require an aggressive and nimble exploration of the design space. This exploration will be aimed at a moving target, because it will have to be carried out on very high-end systems whose architectures are themselves evolving. The third driver is the set of data-management problems discussed above.
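
The qualitative difference between scaling to 10^3-10^4 processors and to 10^8 processors can be seen from a simple Amdahl's-law estimate; the serial fraction used below is an illustrative assumption, not a measured value for any astrophysics code.

    # Amdahl's law: speedup S(p) = 1 / (s + (1 - s)/p) for serial fraction s.
    def amdahl_efficiency(p, s):
        # Parallel efficiency S(p)/p on p processors.
        speedup = 1.0 / (s + (1.0 - s) / p)
        return speedup / p

    s = 1.0e-5                   # assume only 0.001 percent of the work is serial
    for p in (1.0e3, 1.0e4, 1.0e6, 1.0e8):
        print(f"p = {p:9.0e}   efficiency = {amdahl_efficiency(p, s):.3f}")

    # A serial fraction that is harmless at 10^3-10^4 processors (efficiency ~0.9)
    # collapses efficiency to roughly 1/(1 + s*p) ~ 0.001 at 10^8 processors.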

The response to these three drivers is roughly the same: Design high-level software tools that hide the low-level details of the problem from the developer and user, without foreclosing important design options. Such tools will include new programming environments to replace MPI/OpenMP for dealing
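
The flavor of such a high-level tool can be conveyed with a toy sketch; the names and interface below are hypothetical and do not correspond to any existing astrophysics framework. The application code says only "apply this update to every patch," while the distribution of work across processors stays hidden inside the driver.

    from concurrent.futures import ProcessPoolExecutor

    def advance_patch(patch):
        # Stand-in for a per-patch physics update; a real solver would go here.
        grid_id, data = patch
        return grid_id, [0.5 * x for x in data]

    def parallel_update(patches, workers=4):
        # High-level driver: callers never see ranks, threads, or messages.
        with ProcessPoolExecutor(max_workers=workers) as pool:
            return list(pool.map(advance_patch, patches))

    if __name__ == "__main__":
        patches = [(i, [float(i)] * 8) for i in range(16)]
        updated = parallel_update(patches)
        print(updated[0])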
