3 Parallel Computation
Pages 30-38

From page 30...
... Other problems described as being "ripe for impact" include full aircraft flow analysis, turbulent flow simulation, composite materials design, understanding phase transitions, polymeric materials design, and analysis of multiphase and particulate flows. For instance, solution of the Euler equations of compressible flow about a complete aircraft would greatly enhance vehicle design capabilities and dramatically reduce the time from design conception to fabrication and testing.
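For reference, the Euler equations mentioned here are the standard conservation-law statement of inviscid compressible flow (a textbook form, not taken from this report):

    \frac{\partial}{\partial t}
    \begin{pmatrix} \rho \\ \rho\mathbf{u} \\ E \end{pmatrix}
    + \nabla\cdot
    \begin{pmatrix} \rho\mathbf{u} \\ \rho\mathbf{u}\otimes\mathbf{u} + p\,\mathbf{I} \\ (E+p)\,\mathbf{u} \end{pmatrix}
    = \mathbf{0},
    \qquad
    p = (\gamma-1)\Bigl(E - \tfrac{1}{2}\rho\,|\mathbf{u}|^{2}\Bigr),

where \rho is density, \mathbf{u} velocity, E total energy per unit volume, p pressure, and \gamma the ratio of specific heats; the inviscid model omits the viscous and heat-conduction terms of the full Navier-Stokes equations, which is what makes whole-aircraft solutions tractable.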
From page 31...
... In principle, each processor has equal access to any data element; however, with current technology, communication delays develop when too many processors are involved. Hierarchical memory systems, in which each processor has a local memory cache, have improved performance somewhat, but it is believed that architectures of this type are limited to tens or hundreds, rather than thousands, of processors.
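The cost of contended shared memory can be sketched in a few lines. The following C fragment (using OpenMP, a modern shared-memory model chosen purely for illustration; it postdates this report) contrasts a sum in which every processor updates one shared location with a sum in which each processor accumulates in its own cache-local variable and communicates only once at the end:

    #include <stdio.h>
    #include <omp.h>

    #define N 10000000

    int main(void) {
        static double x[N];
        for (long i = 0; i < N; i++) x[i] = 1.0;

        /* Contended version: every processor updates one shared location,
           so the cache line holding `sum` ping-pongs between processors. */
        double sum = 0.0;
        #pragma omp parallel for
        for (long i = 0; i < N; i++) {
            #pragma omp atomic
            sum += x[i];
        }

        /* Local-memory version: each processor accumulates in its own
           cache-resident partial sum; communication happens once, at the end. */
        double sum2 = 0.0;
        #pragma omp parallel for reduction(+:sum2)
        for (long i = 0; i < N; i++)
            sum2 += x[i];

        printf("contended: %f  local: %f\n", sum, sum2);
        return 0;
    }

Both loops compute the same result, but on most machines the first slows down as processors are added, which is the communication-delay effect described above.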
From page 32...
... Processors on the Connection Machine operate synchronously, in lock-step or single-instruction, multiple-data (SIMD) fashion, with a separate control unit dictating the instruction to be executed simultaneously on all processors.
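A toy emulation may make the lock-step model concrete. In the sketch below (illustrative C, not actual Connection Machine code; all names are invented), a single control loop broadcasts each "instruction" and every simulated processing element applies it to its own datum, with a per-processor activity mask standing in for the machine's context flags:

    #include <stdio.h>

    #define NPROC 8   /* simulated processing elements */

    int main(void) {
        double data[NPROC] = {3, -1, 4, -1, 5, -9, 2, 6};
        int active[NPROC];

        /* Instruction 1 (broadcast to all PEs): set the activity mask
           where the local datum is negative. */
        for (int p = 0; p < NPROC; p++)
            active[p] = (data[p] < 0);

        /* Instruction 2: active PEs negate their datum; inactive PEs idle,
           since a SIMD instruction stream cannot branch per processor. */
        for (int p = 0; p < NPROC; p++)
            if (active[p]) data[p] = -data[p];

        for (int p = 0; p < NPROC; p++) printf("%g ", data[p]);
        printf("\n");
        return 0;
    }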
From page 33...
... [FIGURE 3.1: Some grand challenges of high-performance computing and their projected computational requirements (from the Federal High-Performance Computing Program).]
From page 34...
... Intimately connected with parallel computation, the HPC Program clearly addresses several issues central to computational mechanics. Listed among its "grand challenges" are climatology, turbulence, combustion, vehicle design, oil recovery, oceanography, and viscous flow.
From page 35...
... Problems using uniform meshes and finite difference approximations are, naturally, simpler to map onto SIMD or MIMD networks than are formulations using unstructured grids with general connectivity. One technique for solving unstructured-grid finite element problems on a SIMD array uses a two-step procedure: an initial mapping of the elements of the mesh onto processors, followed by a second mapping of the nodes onto processors for the solution of the linear system, as sketched below.
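A deliberately simplified sketch of that two-step procedure follows (plain C; the block partition is a placeholder for the more careful, communication-minimizing mappings used in practice, and all sizes are invented):

    #include <stdio.h>

    #define NELEM 10
    #define NNODE 12
    #define NPROC 4

    int main(void) {
        int elem_to_proc[NELEM], node_to_proc[NNODE];

        /* Step 1: map mesh elements onto processors for the element-level
           work (numerical integration, local stiffness assembly). */
        for (int e = 0; e < NELEM; e++)
            elem_to_proc[e] = e * NPROC / NELEM;

        /* Step 2: re-map the nodal unknowns onto processors for the
           solution of the assembled linear system, whose communication
           pattern differs from that of the element loop. */
        for (int n = 0; n < NNODE; n++)
            node_to_proc[n] = n * NPROC / NNODE;

        for (int e = 0; e < NELEM; e++)
            printf("element %d -> processor %d\n", e, elem_to_proc[e]);
        for (int n = 0; n < NNODE; n++)
            printf("node %d -> processor %d\n", n, node_to_proc[n]);
        return 0;
    }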
From page 36...
... Software environments on MIMD computers, however, are less developed than those on SIMD computers due to the greater complexity involved in distributing control structures. Model mechanics problems involving wave propagation, compressible flow, and elastic deformation have been solved with great success; but realistic problems with embedded heterogeneities, such as heterogeneous materials, phase transitions, and elastic-plastic responses, present difficulties similar to those described for parallel adaptive procedures.
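One source of the difficulty can be shown directly: under lock-step execution, each branch of a heterogeneous material response must be broadcast in turn, masked to the elements it applies to, so the cost is the sum of the branch costs rather than the cost of whichever branch each element needs. The sketch below assumes a hypothetical two-branch elastic/perfectly-plastic update; the constants and names are invented for illustration:

    #include <stdio.h>

    #define NPROC 8

    int main(void) {
        double strain[NPROC] = {0.1, 0.9, 0.2, 1.5, 0.3, 2.0, 0.05, 1.1};
        double stress[NPROC];
        const double yield = 1.0, E_mod = 10.0;

        /* Pass 1: elastic branch, masked to elements at or below yield. */
        for (int p = 0; p < NPROC; p++)
            if (strain[p] <= yield) stress[p] = E_mod * strain[p];

        /* Pass 2: plastic branch (perfectly plastic here), masked to the
           rest; every processor sits through both passes. */
        for (int p = 0; p < NPROC; p++)
            if (strain[p] > yield) stress[p] = E_mod * yield;

        for (int p = 0; p < NPROC; p++) printf("%g ", stress[p]);
        printf("\n");
        return 0;
    }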
From page 37...
... New computing languages must have capabilities rich enough to address the complexities commonly found in computational mechanics. Access to supercomputers has been improved considerably by the construction of national and regional networks and by the funding of computational research at the National Science Foundation supercomputing centers.
From page 38...
... They will accelerate learning of the more complicated programming methodology and greatly stimulate interdisciplinary interaction. Large systems at remote centers may be used for production runs and for testing the scalability of parallel algorithms.
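What "testing scalability" means in practice can be sketched as a strong-scaling experiment: the same fixed-size job timed at increasing processor counts, with speedup measured against the one-processor time. A minimal C harness (again using OpenMP purely as an illustrative shared-memory model) might look like:

    #include <stdio.h>
    #include <omp.h>

    #define N 10000000

    int main(void) {
        static double x[N];
        for (long i = 0; i < N; i++) x[i] = (double)i;

        double t1 = 0.0;  /* one-processor baseline time */
        for (int np = 1; np <= omp_get_max_threads(); np *= 2) {
            omp_set_num_threads(np);
            double t0 = omp_get_wtime();

            /* The fixed-size "production" kernel being scaled. */
            double sum = 0.0;
            #pragma omp parallel for reduction(+:sum)
            for (long i = 0; i < N; i++)
                sum += x[i] * x[i];

            double t = omp_get_wtime() - t0;
            if (np == 1) t1 = t;
            printf("%2d processors: %.3f s, speedup %.2f (sum=%g)\n",
                   np, t, t1 / t, sum);
        }
        return 0;
    }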

