
4. Existing Conditions
Pages 21-50

The Chapter Skim interface presents the passage algorithmically identified as the most significant on each page of the chapter.


From page 21...
... In fact, current supercomputer prices range from about $1 million to $20 million, but in terms of constant dollars, the $10 million average is a useful rule of thumb over about 3 decades. For example, the IBM 704 in the mid-1950s cost $2 million to $3 million and operated at about 10,000 operations per second.
From page 22...
... The supercomputers (such as the Cray Y-MP and ETA10) are distinguished by their relatively higher execution rates (sustained rates of about 10^8 to 10^9 operations per second)
From page 23...
... The prices of the mainframes (such as the IBM 3090, Amdahl 5890, and Control Data Corporation 990) are about the same as those of the supercomputers, ranging from about $1 million to $20 million, but their performance is typically about a factor of 5 below that of supercomputers; also, their memory capacities are typically lower and their input-output systems are slower.
From page 24...
... Massively parallel computers attempt to achieve high performance by using a very large number of slow processors. Finally, the systolic computers implement an algorithm by pumping data through a series of identical functional units; for example, a systolic computer that implemented a matrix multiply would have an array of identical multiply-add processors that communicate their partial results to one another.
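That systolic dataflow can be sketched in plain C. This is not from the report but a minimal simulation under stated assumptions: a grid of identical multiply-add cells, each keeping its own partial sum, with one column of A and one row of B pumped through the array per beat.

    #include <stdio.h>

    #define N 3  /* the array holds N x N identical multiply-add cells */

    int main(void) {
        double a[N][N] = {{1,2,3},{4,5,6},{7,8,9}};
        double b[N][N] = {{9,8,7},{6,5,4},{3,2,1}};
        double c[N][N] = {{0}};  /* each cell's accumulated partial result */

        /* Beat k pumps column k of A and row k of B through the array;
           every cell (i,j) performs exactly one multiply-add per beat,
           holding its partial result in place. */
        for (int k = 0; k < N; k++)
            for (int i = 0; i < N; i++)
                for (int j = 0; j < N; j++)
                    c[i][j] += a[i][k] * b[k][j];

        for (int i = 0; i < N; i++) {
            for (int j = 0; j < N; j++)
                printf("%7.1f", c[i][j]);
            putchar('\n');
        }
        return 0;
    }

The triple loop is ordered beat-first to mirror the pumping of data; a real systolic array would perform the N^2 multiply-adds of each beat simultaneously rather than in sequence.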
From page 25...
... [Figure 4.2 shows supers, minisupers, workstations, mainframes, superminis, and PCs; common file systems, local disk systems, and floppies and hard disks; site networks, LANs, and WANs; and transparency and visualization at each tier.] FIGURE 4.2 The scientific computing environment.
From page 26...
... Optical storage technology is being developed in both disk and tape formats, but so far there are no recording standards for optical media, and most users are unwilling to record their archives on a nonstandard medium; thus magnetic tape will continue to be used for archival storage by most scientific computing centers until recording standards are developed for optical disks and tapes. An exception will be found in specialized applications where the advantages of optical media exceed their disadvantages.
From page 27...
... The networks to which data communications ports provide access include local-area networks (LANs) that span a building, site networks that span a whole site, and wide-area networks (WANs)
From page 28...
... An important characteristic of the computing environment should be that users have a uniform interface across all three types of computers, so that they can move applications among the types of computers without significant conversion effort. This is the main reason for the growing popularity of the Unix operating system: Unix is available on all three generic types of computing systems and hence can provide a relatively seamless interface among them.
From page 29...
... It is the need to solve problems with ever-larger complexities and ever-shorter response times that is driving the unrelenting demands for higher execution rates in supercomputers. Historians tell us it is inherent in the nature of science, technology, and engineering that they grow in complexity.
From page 30...
... This last example is taken from the requirements at National Aeronautics and Space Administration Ames for the Numerical Aerodynamic Simulator. Figure 4.4 shows that it is the combination of high complexity and short response times that forces the use of high-performance computers.
From page 31...
... [Figure 4.4 plots supercomputers, mainframes, superminis, and micros on scales running from roughly 10^4 to 10^14.] FIGURE 4.4 Nomograph of computational science and engineering.
From page 32...
... Some response times must be only a few seconds, and ideally the delay should be imperceptible to the user for such simple tasks as entering a line of instruction during code development. · Preproduction and postproduction.
From page 33...
... [Figure 4.5 spans response times from 1 year down to 10 seconds against relative execution rates from 10^0 to 10^8.] FIGURE 4.5 Scale of response times.
From page 34...
... · Savings. By using computational science and engineering to guide experimentation, costly and time-consuming experiments can be focused on the most productive areas, thereby economizing on manpower, time, and budgets.
From page 35...
... As electronic computers became available, the most powerful of these were installed at Los Alamos; Figure 4.6 illustrates the approximate sustained execution rate of the fastest of these computers (in units of operations per second normalized to the CDC 7600 for administrative purposes)
From page 36...
... designs, but in 1982 a new trend line began with the installation of the first of the parallel-processor supercomputers, the Cray X-MP/2 with two vector processors, and later models in that line with four and eight vector processors. Future prospects for faster supercomputers will be based not only on improvements in component technology and the architecture of single processors, but also on the increasing number of processors used in supercomputers.
From page 37...
... Both dynamic and static RAM memories follow this pattern, with a quadrupling period of 3 years, whereas the quadrupling period of magnetic disk density is much longer, about 8 years. Trend in Cycle Times. A scatter diagram of the cycle times of leading-edge supercomputers since the mid-1960s is shown in Figure 4.8.
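A quadrupling period of T years means density multiplies by 4^(t/T) after t years; the short C sketch below (the 12-year horizon is an illustrative choice, not from the report) works out how far RAM and disk densities diverge under those two periods.

    #include <stdio.h>
    #include <math.h>

    int main(void) {
        double years = 12.0;                    /* illustrative horizon */
        double ram   = pow(4.0, years / 3.0);   /* quadruples every 3 years */
        double disk  = pow(4.0, years / 8.0);   /* quadruples every 8 years */
        printf("over %.0f years: RAM density x%.0f, disk density x%.1f\n",
               years, ram, disk);               /* prints x256 vs x8.0 */
        return 0;
    }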
From page 38...
... Those computers having cycle times that fall above the leading edge of this trend have attempted to use architectural features to compensate for their slower cycle times. Trend in High-Speed Logic Technologies. The prospects for faster logic technologies are illustrated in Figure 4.9, which shows the gate delay (in nanoseconds)
From page 39...
... CMOS. The CMOS technology being used in the ETA10 has two advantages relative to ECL: lower power dissipation and higher gate density.
From page 40...
... The lower power dissipation implies a potential for increases in packing density without causing heat dissipation problems and therefore even further speed increases. This technology also has two disadvantages: higher cost and lower gate density per chip.
From page 41...
... In the mid-1950s in computers such as the IBM 704, instructions were executed in a sequential scalar mode; that is, they specified only one operation on one pair of operands, and the processing of instructions included a series of sequential steps: fetching the instruction, decoding it, forming the effective address, fetching the operand, and then executing the operation. Beginning in about 1960 in computers like the IBM STRETCH, an instruction lookahead provided the ability to fetch and process instruction N + 1 while executing instruction N
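A toy cycle count makes the benefit of lookahead concrete. The five-step breakdown follows the text; the one-step overlap and the instruction count are simplifying assumptions of this sketch, not a model of any particular machine.

    #include <stdio.h>

    int main(void) {
        const int steps = 5;  /* fetch, decode, effective address,
                                 operand fetch, execute */
        const int n = 100;    /* illustrative instruction count */

        /* Sequential scalar mode (IBM 704 style): every step of every
           instruction waits for the previous one to finish. */
        int sequential = n * steps;

        /* Lookahead (IBM STRETCH style): the fetch of instruction N + 1
           overlaps the execution of instruction N, saving one step for
           each instruction after the first. */
        int lookahead = n * steps - (n - 1);

        printf("sequential: %d steps, with lookahead: %d steps\n",
               sequential, lookahead);
        return 0;
    }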
From page 42...
... Most vector designs today use the register-to-register design. The leading edge of supercomputer architecture today is found in designs that incorporate multiple vector processors, or parallel-vector designs.
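The register-to-register pattern can be sketched as strip-mining in C: long vectors are processed in register-sized pieces that are loaded, operated on, and stored back. The 64-element register length follows Cray practice but is an assumption here, as is the loop structure.

    #include <stdio.h>

    #define VLEN 64   /* assumed vector-register length */
    #define N    200  /* vector longer than one register */

    int main(void) {
        static double a[N], b[N], c[N];
        double va[VLEN], vb[VLEN], vc[VLEN];  /* the "vector registers" */

        for (int i = 0; i < N; i++) { a[i] = i; b[i] = 2.0 * i; }

        /* Strip-mine: load one register-sized strip, operate entirely
           register-to-register, store the result, move to the next strip. */
        for (int s = 0; s < N; s += VLEN) {
            int len = (N - s < VLEN) ? N - s : VLEN;
            for (int i = 0; i < len; i++) { va[i] = a[s + i];    /* vector load  */
                                            vb[i] = b[s + i]; }
            for (int i = 0; i < len; i++) vc[i] = va[i] + vb[i]; /* vector add   */
            for (int i = 0; i < len; i++) c[s + i] = vc[i];      /* vector store */
        }
        printf("c[%d] = %.1f\n", N - 1, c[N - 1]);  /* 199 + 398 = 597 */
        return 0;
    }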
From page 43...
... Parallel Processing In addition to decreasing cycle time and increasing design efficiency, a third approach to increasing computer speed is through the use of multiple processors, and there are many design issues for these so-called parallel computers: Should there be a few fast processors or many slow ones? Should the memory be shared, attached to each processor, or both?
From page 44...
... An Expanded Taxonomy of Architectures. In addition to the types of control concurrency (serial and parallel) included in the Flynn taxonomy, a third type is being used, called clustering. In a clustered design, clusters of multiple-instruction-stream processors are connected under global control, with access to global memory.
From page 45...
... Historically, through the CDC 7600 in about 1970, most supercomputers had serial-scalar designs; that is, they executed one stream of scalar instructions. First-generation vector processors had serial-vector designs; that is, they also executed a single stream of instructions, but the instructions specified vector operations.
From page 46...
... Finally, system libraries are being adapted to use parallel processors. Parallel processing will affect applications through the need to insert parallel control statements, to develop parallel algorithms, and to rethink the mathematical models on which the algorithms are based.
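The parallel control statements of that era were vendor-specific directives; OpenMP, which postdates this report, is their standardized descendant and illustrates the same idea, so the C sketch below is an analogy rather than the report's own notation.

    #include <stdio.h>

    #define N 1000000

    int main(void) {
        static double x[N];
        double sum = 0.0;

        /* The pragma is the parallel control statement: it asserts that the
           iterations are independent and that summing may be done as a
           parallel reduction. Compiled without OpenMP support, the pragma
           is ignored and the loop runs serially, unchanged. */
        #pragma omp parallel for reduction(+:sum)
        for (int i = 0; i < N; i++) {
            x[i] = (double)i;
            sum += x[i];
        }
        printf("sum = %.0f\n", sum);  /* N(N-1)/2 = 499999500000 */
        return 0;
    }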
From page 47...
... Other vendors have followed this design as a guidepost, including the parallel-vector processors offered by Control Data Corporation, IBM Corporation, Convex, Alliant, and the Japanese Super-Speed Computer Project, with introductions of similar designs widely rumored from the Japanese supercomputer vendors. The number of processors in this domain is being expanded from 8 processors to 16 in the near term, and to 32 and 64 in the generations in development for delivery in the early 1990s.
From page 48...
... However, not all algorithms can be so decomposed, so there is a decomposition limit to scalability. When the number of available tasks is just one, this is referred to as the serial bottleneck, which has very serious implications for the effectiveness of massive parallelism.
From page 49...
... Thus as P grows, the domain of applicability becomes inherently more limited, because the parallel fraction α must approach 1.0 and the serial fraction 1 - α must approach 0 very rapidly. This does not imply that it is impossible to use parallel processors effectively, but it does provide a guide to the domain of application, with a smaller number of very fast processors being more useful for general purposes than a larger number of slow processors.
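The arithmetic behind the serial bottleneck is Amdahl's law: with parallel fraction α of the work and P processors, speedup is 1 / ((1 - α) + α/P). The C sketch below tabulates a few illustrative values of α and P (chosen for this example, not taken from the report) to show how quickly speedup saturates.

    #include <stdio.h>

    /* Amdahl's law: speedup with parallel fraction alpha on p processors. */
    static double speedup(double alpha, int p) {
        return 1.0 / ((1.0 - alpha) + alpha / p);
    }

    int main(void) {
        const double alphas[] = {0.90, 0.99};
        const int    procs[]  = {8, 64, 1024};

        for (int i = 0; i < 2; i++)
            for (int j = 0; j < 3; j++)
                printf("alpha = %.2f, P = %4d -> speedup %6.1f\n",
                       alphas[i], procs[j], speedup(alphas[i], procs[j]));
        /* Even with alpha = 0.90, 1024 processors yield under 10x:
           the 10 percent serial fraction dominates. */
        return 0;
    }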
From page 50...
... The university, the industry, or the nation that would be a leader in the modern world of intense international competition must master the information technologies, the leading edge of which are the supercomputing technologies. It is imperative for our future success as a nation that we accept the invitation offered by the supercomputer: "Come, enter into the world of . . ."

