a balanced system in which computing speed and memory, archival storage and capacity, and network speed and throughput are combined to dramatically increase the performance of simulations. The approach of using commercially available components to the extent possible will facilitate transfer of the new technologies for use in a number of scientific and engineering pursuits without duplicating ASCI's costs.
Messina asserted that the advances in computing power that will become available in the next 5 to 10 years will be so great that they will change the very manner in which we pursue advances in science and technology.
Peter R. Taylor, San Diego Supercomputer Center and University of California, San Diego, spoke on the state of the art in computational chemistry and the extent to which it meets the requirements of the chemical science community. Computational chemistry—whose major activities can be classified as molecular electronic structure (often referred to as quantum chemistry), reaction and molecular dynamics, and statistical mechanics—is one of the great scientific success stories of the past three decades, as evidenced by the award of the 1998 Nobel Prize in chemistry to John Pople and Walter Kohn. Taylor described computational chemistry as a mature and very successful field that nevertheless requires continuing effort to improve theories, methods, algorithms, and implementations. He also pointed to a need for training students in these areas.
More powerful computers will allow current methods to be extended to larger molecules, but new methodologies will be needed to address many of the problems of interest to chemists. Taylor stated that the chemical sciences community needs to encourage the implementation of existing methods on new hardware, as well as the development and implementation of new methods. As new methods are developed, possible advantages offered by new computer architectures can be considered; e.g., approaches previously precluded because of requirements for enormous memory might be perfectly feasible on ASCI-class machines. Use of modern software engineering practices and modern computer languages in implementations can increase ease of maintenance. New methods and implementations can also take advantage of modern storage, retrieval, and data management technologies, as well as interactive environments in which users can steer simulations and visualize their data.
Susan L. Graham, University of California, Berkeley, started by noting that high-performance computing is difficult. She elaborated on the technical issues that must be addressed if we are to take advantage of the exciting opportunities offered by the ongoing revolutionary increases in computing power. She indicated that one way to get more out of computing is by using parallelism—it reduces the elapsed time required for the most demanding computations, keeps the calculation moving along when delays arise in sequential computation, and overcomes fundamental limitations bounding the speed of sequential computation, such as the speed of light. However, advances from parallelism will not come for free. Issues that must be addressed in improving end-to-end performance of a calculation include identifying the work that can be done in parallel, correctly partitioning that work across the processors, and arranging the data so that it resides close to where it is needed (because of communication delays). Even then, at a lower level in the system, the system software (or a programmer) has to describe the details of how the work is actually done. Graham also mentioned issues in addition to performance that are going to become increasingly problematic, such as security and fault tolerance.
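The three issues Graham identified—finding the work that can be done in parallel, partitioning it across processors, and keeping data close to where it is used—can be illustrated with a minimal sketch. The example below is hypothetical (it is not drawn from the talk) and uses Python's standard multiprocessing module to parallelize a simple sum of squares:

```python
# Hypothetical sketch of Graham's three parallelism issues, using Python's
# standard multiprocessing module. Not taken from the talk.
from multiprocessing import Pool


def partial_sum(chunk):
    # Each worker touches only its own slice of the data, so the data it
    # needs "resides close to where it is needed" (no cross-worker traffic).
    return sum(x * x for x in chunk)


def parallel_sum_of_squares(data, n_workers=4):
    # Partition the work: split the input into one contiguous chunk per worker.
    size = (len(data) + n_workers - 1) // n_workers
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    with Pool(n_workers) as pool:
        # The independent chunks are the "work that can be done in parallel";
        # the final sum is the small sequential step that combines the results.
        return sum(pool.map(partial_sum, chunks))


if __name__ == "__main__":
    result = parallel_sum_of_squares(list(range(1000)))
    assert result == sum(x * x for x in range(1000))
```

Even in this toy case, the programmer must decide the chunking strategy and accept a sequential combining step—small-scale versions of the partitioning and coordination costs Graham described for large systems.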
Among the nontechnical issues mentioned were concerns about having enough people with the deep knowledge of both chemistry and information technology required for developing workable problem-solving strategies. In addition, Graham pointed out that the scientific community will have to become