Getting Up to Speed: The Future of Supercomputing
promote effective parallel processor usage and efficient memory use while hiding many of the details. Ideally, software should allow portability of well-designed application programs between different machine architectures, support dynamic load balancing, and provide fault tolerance.
There is also a need for better mechanisms for managing locality while retaining some form of global addressing that compilers and run-time systems can map efficiently onto diverse hardware architectures. For lack of alternatives, many supercomputing applications are written in Fortran 90 and C. The use of High-Performance Fortran (HPF) on the Earth Simulator is one of only a few examples of a higher-level programming language with better support for parallelism being used in production. More versatile, higher-level languages would need to exploit architectures efficiently in order to attract the critical mass of users needed to sustain the language and its further development.

In regard to memory access beyond an individual processor, most communication between (and even within) nodes uses MPI, and sometimes OpenMP, again for lack of other choices. Many application areas are hampered by the software overheads of existing methods and would benefit significantly from more efficient tools that maximize parallel utilization with minimal programming effort. Chapter 5 discusses these hardware and software issues from a technology perspective.