It is often possible to trade resource allocation between C and N, exchanging, for example, the number of atoms for a classical rather than a quantum mechanical description of their interaction. It is seldom possible to dramatically lengthen the duration of the simulation. This allocation rule (Eq. 4.1) also highlights the limited ability of parallel computation to affect the duration t: the inherently serial character of molecular dynamics can be avoided only by Monte Carlo and other intrinsically statistical approaches.

The impact of computing technology on molecular dynamics simulations is suggested by the history of the size and duration of such simulations, shown in Figure 4.2. Although the extremes of time and length scales cannot yet be reached simultaneously, a large-scale molecular dynamics simulation of fracture in a short-range pair potential model has already been carried out at Los Alamos for 10⁸ particles and 250 vibrational periods.

The “food chain” of theoretical descriptions depicted in Figure 4.1, and further summarized in Equation 4.1, provides a useful framework for characterizing the opportunities for NRL in the coming decade. The food chain requires a usefully succinct summarization of calculational output at each link, such as the fitting of classical pair and many-body potentials to the output of quantum mechanical calculations. These summarization links among the members of the food chain are not known in general and represent one of the foremost challenges in computational materials science. NRL enjoys competence across the entire food chain of theoretical descriptions, from the details of many-particle correlations to continuum theories of chemistry and mechanics. The synthesis of these competencies, and the expansion of their scope by exploiting parallel computation, are the dominant themes of this report. Specific opportunities under this umbrella are identified and elaborated in the following sections.
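Before turning to those opportunities, the potential-fitting link mentioned above can be made concrete with a small sketch. The program below fits the two coefficients of a pair potential V(r) = A/r^12 - B/r^6 to a handful of reference energies standing in for quantum mechanical output; because V is linear in A and B, ordinary least squares reduces to a 2 x 2 system. The potential form, the numerical data, and every name in the code are illustrative assumptions chosen to exemplify the report's general point, not an NRL method; realistic fits involve many-body terms and far larger data sets.

/* Hedged sketch: least-squares fit of a classical pair potential
 * V(r) = A/r^12 - B/r^6 to reference energies that would, in
 * practice, come from quantum mechanical calculations.  The
 * numerical values below are placeholders for illustration only. */
#include <stdio.h>
#include <math.h>

int main(void)
{
    /* (r, E) pairs standing in for quantum mechanical output */
    const double r[] = {0.95, 1.00, 1.10, 1.25, 1.50, 2.00};
    const double E[] = {0.83, 0.00, -0.58, -0.63, -0.39, -0.12};
    const int n = sizeof r / sizeof r[0];

    /* Normal equations for V(r) = A*p(r) + B*q(r),
     * with basis functions p = r^-12 and q = -r^-6. */
    double Spp = 0, Spq = 0, Sqq = 0, SpE = 0, SqE = 0;
    for (int i = 0; i < n; i++) {
        double p = pow(r[i], -12.0), q = -pow(r[i], -6.0);
        Spp += p * p;   Spq += p * q;   Sqq += q * q;
        SpE += p * E[i];
        SqE += q * E[i];
    }

    /* Solve the 2x2 system by Cramer's rule. */
    double det = Spp * Sqq - Spq * Spq;
    double A = (SpE * Sqq - Spq * SqE) / det;
    double B = (Spp * SqE - Spq * SpE) / det;

    printf("fitted potential: V(r) = %.4f/r^12 - %.4f/r^6\n", A, B);
    return 0;
}

The point of the sketch is not the arithmetic but the compression it represents: an expensive quantum mechanical data set is summarized by two numbers that a classical simulation can then use at a far lower cost per interaction.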

Figure 4.2 Historical growth of molecular dynamics calculations over four decades. The number of particles correlates with the growth of available memory, whereas simulation times correlate with processor speed. The evolution of the particle number appears to exhibit two breaks: the first, in the early 1980s, reflects the advent of vector supercomputers; the second, in the late 1980s, the advent of parallel computers.

CODE PARALLELIZATION

Opportunity 1: Code parallelization will benefit from the consolidation of existing scalar methods. Contacts with commercial firms to support such codes should be explored.

It must be stated at the outset that parallelizing code is a time-consuming venture; it can be carried out only in an environment with a firm, long-range commitment both to the activity itself and to the research studies that will naturally follow from the code development.

Development of scalable code for massively parallel processors will greatly increase both performance and the size of the system that can be studied. Porting an existing code to run on parallel machines is only the first step; it still offers a computing and memory advantage, even though it does not fully exploit the hardware and software. The more significant challenge is rethinking the algorithms and code structures themselves, as the sketch below suggests. This may require a nontrivial investment of time, but the gains can be large.
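To make the distinction concrete, the sketch below shows the "first step" kind of port: a replicated-data parallelization, in C with MPI, of a Lennard-Jones force loop. Every processor holds all N positions, the pair interactions are dealt out among processors, and the partial forces are combined with a global sum. All constants and names here are illustrative assumptions, not NRL code; a rethought, scalable code would instead distribute the particles by spatial (domain) decomposition to avoid the global communication at every step.

/* Hedged sketch of a replicated-data MPI port of a scalar
 * Lennard-Jones force loop (reduced units).  Illustrative only. */
#include <mpi.h>
#include <stdio.h>

#define N 1024               /* number of particles (illustrative) */

static double x[N][3];       /* positions, replicated on every rank */
static double f[N][3];       /* globally summed forces */

static void lj_forces(int rank, int nranks)
{
    double flocal[N][3] = {{0.0}};   /* this rank's partial forces */

    /* Deal out the outer loop round-robin: each rank computes
     * roughly 1/nranks of the pair interactions. */
    for (int i = rank; i < N; i += nranks) {
        for (int j = i + 1; j < N; j++) {
            double d[3], r2 = 0.0;
            for (int k = 0; k < 3; k++) {
                d[k] = x[i][k] - x[j][k];
                r2 += d[k] * d[k];
            }
            double inv2 = 1.0 / r2;
            double inv6 = inv2 * inv2 * inv2;
            /* Lennard-Jones force magnitude divided by r */
            double fr = 24.0 * inv6 * (2.0 * inv6 - 1.0) * inv2;
            for (int k = 0; k < 3; k++) {
                flocal[i][k] += fr * d[k];
                flocal[j][k] -= fr * d[k];
            }
        }
    }

    /* The price of the easy port: a global O(N) reduction per step. */
    MPI_Allreduce(flocal, f, 3 * N, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);
}

int main(int argc, char **argv)
{
    int rank, nranks;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nranks);

    for (int i = 0; i < N; i++)      /* arbitrary initial positions */
        for (int k = 0; k < 3; k++)
            x[i][k] = (i * 3 + k) * 0.37;

    lj_forces(rank, nranks);
    if (rank == 0)
        printf("force on particle 0: (%g, %g, %g)\n",
               f[0][0], f[0][1], f[0][2]);
    MPI_Finalize();
    return 0;
}

The appeal of such a port is that the scalar force loop survives nearly intact; its costs, the global reduction and the fully replicated coordinate arrays, are exactly what the more significant algorithmic rethinking is meant to eliminate.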


