
5. Toward the Future
Pages 51-70

Each excerpt below is the most significant passage algorithmically identified on that page of the chapter.


From page 51...
... THE CURRENT STAGE IN SUPERCOMPUTING Supercomputing has come a long way when viewed from many angles: in speed, in central processing units (CPUs), in memory size, in input/output (I/O)
From page 52...
... Hence the data access time from memory becomes slower relative to data compute time. We must now figure out all kinds of tricks to compensate for the gap between the memory chip and the CPU speed.
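One such trick, not named in the text but standard for bridging the memory/CPU speed gap, is loop blocking (tiling): visiting data in small blocks so it is reused while still resident in fast memory. A minimal Python sketch, where the block size B is a hypothetical tuning parameter:

```python
def blocked_transpose_sum(a, n, B=4):
    """Sum a[i][j] + a[j][i] over an n x n matrix, visiting it in
    B x B blocks so each block of a is reused while still in fast
    memory, instead of streaming the whole matrix twice."""
    total = 0
    for ii in range(0, n, B):          # block row start
        for jj in range(0, n, B):      # block column start
            for i in range(ii, min(ii + B, n)):
                for j in range(jj, min(jj + B, n)):
                    total += a[i][j] + a[j][i]
    return total
```

The blocked traversal computes exactly the same result as the naive double loop; only the memory access order changes.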
From page 53...
... In the meantime, we still have to use a solid-state secondary memory device as a buffer to smooth out the speed difference between CPUs and peripherals. Physical Size Not too many people recognize the changes in the physical size of supercomputers.
From page 54...
... The mean time between failures jumped from 10 hours to 100 hours and then to 1,000 hours, making it a viable product for use in commercial industry. After the Cray-XMP was introduced, applications expanded rapidly, from pure laboratory research to various commercial product areas.
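The text does not give the underlying arithmetic, but a standard first-order model shows why those MTBF jumps matter: under independent component failures with constant rates, failure rates add, so a system of many parts is only as reliable as the sum of its parts' rates. A sketch under that assumption:

```python
def system_mtbf(component_mtbfs):
    """System MTBF assuming independent components with constant
    failure rates: rates add, so MTBF_sys = 1 / sum(1 / MTBF_i).
    A machine with many components therefore needs each component's
    MTBF to be far higher than the system target."""
    return 1.0 / sum(1.0 / m for m in component_mtbfs)
```

For example, ten identical components rated at 100 hours each yield a system MTBF of only 10 hours, which is why per-component reliability had to climb by orders of magnitude.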
From page 55...
... Our goal is to move gradually toward more and faster processors, while maintaining a consistent system architecture. This approach will ensure that no users will suffer a degradation of performance in running their existing production codes on the next-generation, more parallel machines when they become available.
From page 56...
... Circuit Density Depending on the device type, today's circuit density is approaching the 1K-gate level for GaAs, the 10K-gate level for bipolar, and the 100K-gate level for CMOS. In the future, we may see even larger-scale integrated circuits.
From page 57...
... Multiple levels of interconnect media, such as printed circuit boards, chip attachments, connectors, backplane wires, and so on, all affect performance. As clock rate increases, component, module, and system packaging becomes a very critical issue for the total system design.
From page 58...
... Fortunately, they have now gone up one notch to use LINPACK, a set of mathematical subroutines for solving linear algebra problems that is, in general, more usable than just the Livermore Loops rate or the peak MFLOPS rate. Even so, the performance numbers on LINPACK are still only an indicator of the computation time for a small part of the total solution process.
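LINPACK itself is a Fortran library; what the benchmark times is a dense linear-system solve. A toy Python analogue of that kernel, Gaussian elimination with partial pivoting, shows the kind of computation being measured:

```python
def solve(A, b):
    """Solve Ax = b by Gaussian elimination with partial pivoting --
    a toy version of the dense solve the LINPACK benchmark times."""
    n = len(A)
    A = [row[:] for row in A]   # work on copies
    b = b[:]
    for k in range(n):
        # Pivot: swap in the row with the largest entry in column k.
        p = max(range(k, n), key=lambda i: abs(A[i][k]))
        A[k], A[p] = A[p], A[k]
        b[k], b[p] = b[p], b[k]
        # Eliminate column k below the diagonal.
        for i in range(k + 1, n):
            f = A[i][k] / A[k][k]
            for j in range(k, n):
                A[i][j] -= f * A[k][j]
            b[i] -= f * b[k]
    # Back-substitution.
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(A[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (b[i] - s) / A[i][i]
    return x
```

The benchmark's roughly 2n³/3 floating-point operations live in the elimination loops; timing them yields the reported MFLOPS figure.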
From page 59...
... The solution time includes all of the following elements: Data acquisition/entry; Data access/storage; Data motion/sharing; Data computation/process; and Data interpretation/visualization. How to capture the raw and digitized design data, how to store it, and how to move it efficiently in and out of the disk, solid-state secondary memory, and main memory during computation are all essential to the solution process.
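To make that breakdown measurable, one can instrument each phase of a run and compare the time spent in pure computation against the total. A minimal sketch (the phase names are hypothetical, matching the list above):

```python
import time
from contextlib import contextmanager

@contextmanager
def phase(name, totals):
    """Accumulate wall-clock time spent in a named solution phase,
    e.g. 'data access', 'computation', 'visualization'."""
    t0 = time.perf_counter()
    try:
        yield
    finally:
        totals[name] = totals.get(name, 0.0) + time.perf_counter() - t0
```

Summing the per-phase totals makes concrete the point that the LINPACK-style compute time is only one term in the total solution time.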
From page 60...
... This includes extending vector detection capability to the detection of parallel processable code. From the top down, we should provide system and applications support in terms of libraries, utilities, and packages, all designed to help users prepare their applications to get the most performance out of the parallelism existing at the highest level.
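The property such compilers must detect can be shown with two loops (an illustrative sketch, not from the text): the first has independent iterations and is vector- or parallel-processable; the second carries a dependence from one iteration to the next, so it cannot be naively parallelized.

```python
def independent(a, b):
    """Each iteration touches only its own elements: a compiler can
    vectorize or parallelize this loop freely."""
    return [a[i] + b[i] for i in range(len(a))]

def carried(a):
    """Each iteration reads the previous iteration's result (a
    loop-carried dependence): a prefix sum, sequential in this form."""
    out = [a[0]]
    for i in range(1, len(a)):
        out.append(out[i - 1] + a[i])
    return out
```

Distinguishing these two cases automatically, and restructuring the second where possible, is exactly the "detection of parallel processable code" the passage describes.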
From page 61...
... Application Technology Development Many examples indicate that supercomputers have proved very useful in various industries: in the defense, petroleum, aerospace, automotive, meteorological, electronic, and chemical segments. Today, all the industrial countries of the world are developing their own application techniques using supercomputers.
From page 62...
... SUMMARY New Directions In summary, I will point out a few new directions that may evolve in supercomputing: comprehensive support for parallel processing; development of open systems that enhance productivity and competition; total system design to minimize solution time; a seamless services environment and distribution of functions; and wider applications in scientific, engineering, and commercial fields. In the future there will be more comprehensive support for parallel processing, from very primitive to very sophisticated levels.
From page 63...
... For example, the basic component technology, parallel architecture concepts, and software and hardware design exploited in the supercomputer arena will trickle down to the mainframe and workstation level; vice versa, the user-interface software and application tools commonly seen at the workstation level will be introduced at the supercomputer level. As a result, supercomputing technology pulls the computer industry upward, creating new market opportunities and enhancing user productivity.
From page 64...
... It is important for us to keep this cooperative development effort moving. In 5 years, we can design a machine that is 100 times faster than today's, but nobody will be able to use it unless we ship it with good software and application tools.
From page 65...
... Beginning now, while users are developing their next-generation applications for a high-performance parallel machine, we can be developing our next-generation system software and application libraries and tools for a high-efficiency user environment. We are entering a new paradigm of supercomputing in which user application (and productivity)
From page 66...
... We need to work with users to design machines that are balanced, while at the same time preparing their future applications to take full advantage of parallel processing. Michael Teter: We from Corning Glass are interacting fairly heavily with the Cornell Supercomputer Facility.
From page 67...
... I wonder if you think that we are facing a major intellectual challenge, a computational mechanics challenge that is even greater than the technical challenge of building faster machines? Steve Chen: Yes, we face a psychological challenge.
From page 68...
... But there is very little discussion of massive parallelism, and many people say from the computer point of view that the future is to get machines that are 1000 or 10,000 times faster. Steve Chen: I can only give you my personal viewpoint.
From page 69...
... Larry Smarr: Critical to the success of that education and training, which I think is issue number one, is having the industrial users live and work in the university environment where, because of the NSF initiative, we have such a vast number of faculty and students who are not having to relearn but are very energetically going directly into using supercomputers. Having them work shoulder to shoulder with the people from industry is proving to be very effective in bringing about that technology transfer.


This material may be derived from roughly machine-read images, and so is provided only to facilitate research.