
The National Academies | 500 Fifth St. N.W. | Washington, D.C. 20001
Copyright © National Academy of Sciences. All rights reserved.


OCR for page 9
1 INTRODUCTION

Supercomputers allow the solution of computational problems and the simulation of physical phenomena that may not be possible or economic by any other means. However, supercomputers are expensive to acquire and operate, and their design is undergoing rapid and fundamental change. Both of these circumstances demand reliable evaluation of supercomputer performance--on the one hand, to make the most of costly resources and, on the other, to devise and select even more powerful designs.

THE PROBLEM

The task undertaken here is to survey briefly the current state of affairs in the evaluation of supercomputers, to suggest what improvements might be attainable, and to point broadly to desirable directions of inquiry (Appendix A). The problem underlying this task has to do with supercomputers as a unique and costly resource, limitations on available methods of evaluating existing machines, and difficulties in evaluating radically new multiple- and parallel-processor architectures.

A supercomputer is an instance of the most highly performing computer available at a given time. The concept is a dynamic one, changing as advances in computer science and technology are made. As such, the supercomputer is a unique, expensive, and scarce resource, whose full and efficient use is clearly desired.

Methods for evaluating the performance of supercomputers are essential for the proper acquisition and use of computational resources. Appropriate comparisons of performance also allow the proper match of systems to specific applications, leading to optimization of the allocation of computational and economic resources. Moreover, the design of novel models of supercomputers is based on the performance experience of currently available systems and on expectations of future applications. Thus, performance evaluation methodology and performance measures can play an effective role in the exploitation of computational resources and can make the development of new and novel machine designs more cost effective.

A substantial, well-organized community of experts exists who specialize in many aspects of performance evaluation: analysis, simulation, monitoring, capacity planning, and measurement. Their successes include analytic models, empirical relations between hardware burst rates and actual computational rates, queuing network models, and prediction of throughput and response time. Nevertheless, it is not possible to say that the fully satisfactory characterization of even sequential, single-processor machines and their workloads is well in hand. The study of parallel-processor machines and parallel computation models has only just begun.

The problems we are addressing are fundamentally different from the questions addressed by the expert community just mentioned. The questions here have to do with the measurement of the achievable speed of computation on today's and tomorrow's complex systems, rather than throughput and queuing delays. While it is true that we care about total system performance, it is not so true that we are concerned with maximizing the utilization of each and every system component. Rather, we care that when an application demands a resource, such as an input-output device, that device is available and does not impede the potential performance of the system. Essentially we are most concerned with the computational speed of the processors, and we have a secondary interest in the other system components to the extent that they have an influence on the performance of the processor(s).

While it will be possible to use analytic models to understand the performance of supercomputer systems in the future, the technology does not appear to be available now to apply these methods directly, or with any confidence, to supercomputer systems. The basic parameters that must be understood as input to analytic models have not been defined.
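The gap between hardware burst rates and actual computational rates mentioned above can be made concrete with a small timing sketch. This is an illustration only, not a method from the report: the kernel, the problem size, and the nominal PEAK_MFLOPS figure are all hypothetical.

```python
import time

PEAK_MFLOPS = 100.0  # hypothetical nominal "burst" rating, not any real machine's figure

def triad(a, b, c, s):
    """A simple vector kernel: a[i] = b[i] + s * c[i] (2 flops per element)."""
    for i in range(len(a)):
        a[i] = b[i] + s * c[i]
    return a

n = 200_000
a, b, c = [0.0] * n, [1.0] * n, [2.0] * n

t0 = time.perf_counter()
triad(a, b, c, 3.0)
elapsed = time.perf_counter() - t0

# Achieved rate: flops actually completed per second, in millions
achieved = (2 * n / elapsed) / 1e6
print(f"achieved: {achieved:.1f} MFLOPS "
      f"({100 * achieved / PEAK_MFLOPS:.0f}% of nominal peak)")
```

The measured figure depends on the kernel, the data layout, and the software stack, which is exactly why a single burst rating is a poor predictor of delivered performance.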
Whereas, on conventional sequential uniprocessors, it was possible to estimate, within a reasonable confidence range, the speed of the central processing unit as one resource of a system, today's systems are so application dependent that the range is measurable only in orders of magnitude, not tens of percent. Moreover, there is a fundamental difference between knowing that an application computes on a system for a certain length of time and knowing that that time is well used. It is this distinction that we propose should be measured and understood. Within a class of the workload, there could be some applications that compute results 10 times faster than other applications. If so, it may be that some of these applications would be more suited to a different architecture or that they should be rewritten to exploit more fully the architecture on which they are executing. Without understanding performance within the processor architecture as something more than an input variable to a global network model, this type of subtlety will not be observed and repaired.

Accordingly, the problem contains such questions as: What methods might be used to evaluate the match between an architecture and an application, given that some variation in both is possible? What measures of performance should be used? How should ease of programming and usability of the system be assessed? What experiments allow results to be compared?

APPROACH TO THE STUDY

The task of the committee was limited by its charge to a brief, preliminary effort. The central purpose was to identify problems and opportunities and to recommend directions for further action but not to undertake these. The results of this task, accomplished in a few months, are presented in this report.

Given its task and the status of the field, the committee did not consider it fruitful to undertake an exhaustive review of the broad area of computer performance evaluation; rather, on the basis of the expertise of its own members and of appropriate contacts within the industrial, research, and user communities, the committee undertook to draw broad conclusions, to sketch improvements that seem attainable in performance evaluation, and to outline an agenda, or framework, for research whose results would underpin the development of an appropriate performance evaluation methodology.

The committee examined the current state-of-the-art methods and practices in the evaluation of supercomputer performance, considering the complexities created by a diversity of available architectures and scientific problem areas.
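One way the match between an application and an architecture shows up in practice: two routines can execute exactly the same number of arithmetic operations yet run at different measured rates simply because of their access patterns. The following sketch is illustrative only; in interpreted Python the gap mainly reflects traversal and indexing overhead rather than the memory hierarchy, but the point--identical operation counts, different delivered rates--carries over to real machines, where effects such as stride-induced memory-bank conflicts are far larger.

```python
import time

def sum_rows(m):
    """Sum a square matrix traversing in storage order (unit stride)."""
    total = 0.0
    for row in m:
        for x in row:
            total += x
    return total

def sum_cols(m):
    """Same operation count, but column-by-column traversal (large stride)."""
    n = len(m)
    total = 0.0
    for j in range(n):
        for i in range(n):
            total += m[i][j]
    return total

n = 500
m = [[1.0] * n for _ in range(n)]

for f in (sum_rows, sum_cols):
    t0 = time.perf_counter()
    s = f(m)
    dt = time.perf_counter() - t0
    print(f"{f.__name__}: sum={s:.0f} in {dt * 1e3:.1f} ms "
          f"-> {n * n / dt / 1e6:.2f} M additions/s")
```

Both routines perform n*n additions and produce the same result; only the rate differs. A single figure of merit averages away precisely this kind of distinction.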
It concluded that the classical measures of performance (millions of instructions per second, millions of operations per second, and millions of floating-point operations per second) are simplistic and often misleading. The reason is that supercomputer performance is too complex to be fully characterized by a single figure of merit. The committee also concluded that many of the well-developed methods of computer system performance evaluation, and their scientific base, are not appropriate for supercomputers. In short, we lack a complete theory that tells us what the right set of properties should be. This conclusion led the committee to recommend the development of methods for the evaluation of supercomputer performance applicable to a variety of architectures and applications.

Accordingly, Chapter 2 briefly outlines basic principles, including performance measurement, on which the evaluation of supercomputers ought to be based. Chapter 3 then critiques the current measurement practice on the basis of these principles. Next, Chapter 4 describes some improvements that seem attainable in supercomputer performance evaluation. Finally, Chapter 5 outlines an agenda for research for the development of the scientific base and the measures appropriate for sophisticated evaluation.