It is the opinion of this Panel that, except for very small applications (in which the cost of even a small minicomputer is not justified) or very large applications (in which the jobs are so long that the speed difference between a large vector machine and a minicomputer with array processor is important), the minicomputer, with an array processor if needed, is the most cost-effective way to perform the vast majority of scientific computations. Of course, this Panel is not the first to recognize the advantages of minicomputers. Their proliferation has already started. A large number of astronomers have now had experience with minicomputers, and they are finding wide acceptance in the astronomical community. One cautionary remark is necessary, however. Today there is little experience in the astronomical community with the minicomputer-array processor combination. Although no major problems are anticipated, additional experience should be obtained before it can be definitely stated that this is a viable mode of operation.

IV. THEORETICAL COMPUTING

The Panel has been aided in its investigations of theoretical computing needs by a joint meeting with the Panel on Theoretical and Laboratory Astrophysics (see Chapter 4), the participation of a representative from that Panel, and by a Workshop on Computational Astrophysics held at the NASA/Ames Research Center on the two days preceding the third meeting of the present Panel (which was also held at NASA/Ames). The material that follows is drawn, in part, from all three sources.

Many important insights and breakthroughs in modern astronomy have been obtained through large-scale computation. Astronomical phenomena typically combine complex interplays of several physical processes with strongly nonlinear effects. Hostile or unattainable environments preclude laboratory studies. Large-scale computation provides the only hope for sorting out and understanding such interacting processes.
Conversely, astronomical situations sometimes represent a setting in which certain kinds of physical processes manifest themselves without hopeless entanglement with other effects. The astronomical context often provides the best setting in which to study the physics of these processes; it is, in effect, our laboratory.
The complexities of astronomical phenomena together with greatly improved observational data conspire to broaden the scope of problems that demand attention and to sharpen the detail sought in interpreting observations. Qualitatively new kinds of data (from Space Telescope, high-efficiency imaging detectors, and the Very Large Array, and satellite data from previously unattainable wavelength regions, for example), as well as significantly improved accuracy and greatly increased data rates on more traditional observations, yield a flood of new data and introduce new kinds of problems that demand interpretation or solution. All of this makes access to more computational capability imperative if theorists are to keep abreast of observational data, let alone investigate new problems. Lacking sufficient capability, interpretations offered to explain observational data from the Space Telescope, the Very Large Array, and other sources will necessarily be based on guesses or other shortcuts; only with adequate facilities will we be able to take the full range of physical effects into account and develop theoretical interpretations whose quality matches the quality of the observations.

The last decade has witnessed a tremendous growth in the use of computers in constructing and testing theoretical models of astrophysical systems. Computers were used in the 1960's mainly to construct one-dimensional models, for which the demands on computation time are modest by today's standards. In that decade, the greatest advances were made in the field of stellar evolution. Computer experiments played a key role in connecting our understanding of nuclear physics in stellar cores to the observational data, which necessarily refer to only a thin layer at the stellar surface. Computer simulations have been our only means of testing theories of phenomena in stellar interiors, such as the events leading to a supernova explosion.
During the 1970's, increased computer speed and sophisticated computer programs have permitted much more detailed analysis of supernova models. Such computer calculations have made a major contribution to our understanding of nucleosynthesis.

Computers have played a major role in the study of radiative transfer and the calculation of emission spectra. Computational techniques have been used to study radiative transfer in stellar atmospheres, the spectra of protostars, and the appearance of dense interstellar clouds in molecular lines. These computer models have played an essential role in relating the observational
data to physical models of stellar pulsation (e.g., Cepheid variables), stellar mass loss, star formation, and gravitational collapse of dense clouds. The computation of emission spectra in situations where the radiative transfer is not coupled to a hydrodynamical calculation is less demanding of computer capability, but here the computer is no less essential. Such computations have helped us to understand the physical conditions in a wide variety of astrophysical objects--x-ray sources, quasars, planetary nebulae, and coronas.

Beginning in 1969, computer simulations were applied to models of star formation and over the last decade have increased dramatically in sophistication. Two-dimensional hydrodynamical calculations have been used to study the early stages of star formation. These calculations have focused on the initial compression and the onset of collapse and also on the effects of cloud rotation and magnetic fields on the subsequent evolution of the collapsing clouds. Only one-dimensional calculations have been performed for the later stages of protostellar core formation, but these have become very detailed. Apart from simple arguments based on the virial theorem and similarity solutions of limited applicability, computer modeling has provided our only solid means of interpreting the wealth of observational data obtained in this field over the last decade.

Computer calculations have played an important role in the investigation of the structure and dynamics of galaxies. In the 1960's, many N-body calculations were carried out with small numbers of stars interacting through gravitational forces. These calculations yielded excellent models of star clusters, but it was only at the end of the decade, with the development of the particle-following numerical methods, that galactic systems could be simulated with models capable of producing the complicated structures characteristic of real disk galaxies.
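The direct-summation N-body technique described here can be sketched in a few lines of modern code (an illustration only, not any code of the period; the softening length eps is an assumed regularization to avoid singular forces at close encounters):

```python
import numpy as np

def accelerations(pos, mass, G=1.0, eps=1e-3):
    """Direct-summation gravitational accelerations for N bodies.

    pos  : (N, 3) array of positions
    mass : (N,) array of masses
    eps  : assumed softening length regularizing close encounters
    """
    n = len(mass)
    acc = np.zeros_like(pos)
    for i in range(n):
        dr = pos - pos[i]                        # vectors from body i to all bodies
        dist2 = np.sum(dr * dr, axis=1) + eps**2
        dist2[i] = np.inf                        # exclude self-force
        acc[i] = G * np.sum((mass / dist2**1.5)[:, None] * dr, axis=0)
    return acc

# Two equal masses on the x-axis attract each other symmetrically:
pos = np.array([[-0.5, 0.0, 0.0], [0.5, 0.0, 0.0]])
mass = np.array([1.0, 1.0])
a = accelerations(pos, mass)
# a[0] points toward +x and a[1] toward -x, equal in magnitude
```

The cost of each step grows as the square of the number of bodies, which is why the star-cluster calculations of the 1960's were limited to small numbers of stars.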
These stellar dynamical models have been refined during the 1970's and extended to treat three-dimensional systems. In addition, gas-dynamical simulations have greatly aided the interpretation of the 21-cm radio observations of galaxies.

A new application of computational methods in astronomy has been the simulation of general relativistic systems. This work is now in its infancy; not even simple flows are fully understood. Because of the immense difficulty in obtaining analytic solutions, this is a field in which numerical computations are likely to have a tremendous
impact. The computations of even very simple situations require an enormous amount of computer time, and we may expect more interesting problems to be attacked only as computers become more powerful and more easily available in the future.

Impressive as this list of accomplishments may appear, progress in all these areas has been severely limited by the availability of computational facilities. Many of the projects that have been undertaken only very recently could have been done a decade ago had there been sufficient access to the computers existing at that time. The limitation has rarely been the availability of willing manpower or sufficiently powerful computational techniques. The slow progress over the decade in galaxy modeling is a case in point. The early spiral galaxy models have barely been surpassed, although much more realistic simulations are possible. When so much can be learned from this experimental approach, it is mystifying that the computational facilities necessary for vigorous pursuit of this research program have not been provided. A significant fraction of the spiral-galaxy simulations during the last decade was performed in England, where computational resources were made available through the controlled fusion program. Three-dimensional simulations of galaxies, like the work on disk galaxies, have progressed at a rate determined by the availability of computer time rather than the availability of manpower or computational techniques.

Another example of unnecessarily slow growth in the computer simulation of astrophysical systems is in the area of hydrodynamics. Hydrodynamic computer codes capable of modeling a variety of astrophysical systems in two dimensions have been available for at least a decade. Nevertheless, hydrodynamic computations performed on reasonably fine grids are a rarity even today.
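The kind of explicit grid-based hydrodynamics referred to here can be illustrated in one dimension with a classical finite-difference scheme (a minimal sketch of the general technique, not any particular astrophysical code; real codes of the period solved the full fluid equations in two dimensions):

```python
import numpy as np

def lax_friedrichs_step(u, c, dt, dx):
    """One explicit Lax-Friedrichs step for linear advection u_t + c u_x = 0,
    with periodic boundaries. Illustrative of explicit grid hydrodynamics."""
    up = np.roll(u, -1)   # u[i+1]
    um = np.roll(u, 1)    # u[i-1]
    return 0.5 * (up + um) - c * dt / (2 * dx) * (up - um)

# Advect a Gaussian pulse on a periodic grid (Courant number c*dt/dx = 0.5)
n, c = 200, 1.0
dx = 1.0 / n
dt = 0.5 * dx / c
x = np.arange(n) * dx
u = np.exp(-200 * (x - 0.5) ** 2)
u_init_sum = u.sum()
for _ in range(100):
    u = lax_friedrichs_step(u, c, dt, dx)
# The pulse drifts at speed c while the scheme conserves total "mass";
# the explicit time step is limited by the Courant condition, which is
# why implicit methods (mentioned below) are so much more expensive per step.
```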
Computations that involve the much more complicated and time-consuming algorithms for multifluid or implicit hydrodynamics or that also involve radiative transfer are even rarer. In fact, a major portion of this kind of work is now performed in Germany, where easy access to a powerful vector machine has been arranged through the Max Planck Institute in Munich (a third of the machine time is available for astrophysical calculations). Many important problems, such as the calculation of the nonlinear development of Parker's instability in two dimensions, have gone without solution because the computer facilities are unavailable. Such calculations could easily
have been performed a decade ago; the computers, codes, and experts were there, and only the computation budgets were lacking.

The availability of facilities for theoretical computations has been inadequate in recent years and must be improved if the astronomy program in the United States is to have the proper interplay and balance between theory and observation.

Computation capability is made available through three main sources. For small problems, university computer centers are adequate and are cost-effective when the computer capability required does not justify the purchase of a dedicated minicomputer-array processor system. These minicomputer-based systems are the second main source of computational capability for theoretical computations and are probably powerful enough to meet the needs of the large majority (perhaps 90 percent) of theoretical computations. Finally, there are problems requiring access to the largest and fastest machines available. These problems have traditionally been attacked through cooperative arrangements between astronomers and large laboratories such as the Lawrence Livermore Laboratory, NASA/Ames, NASA/Langley, Los Alamos National Laboratory, and the National Center for Atmospheric Research.

At present, the first and third methods of performing theoretical calculations are dominant, with minicomputer-array processor systems just beginning to play a role. The Panel believes that these three methods will continue to be important in the 1980's, but their relative importance will show a dramatic shift. Because of its cost effectiveness, the minicomputer-array processor configuration should be performing most of the theoretical astronomical computations by the end of the 1980's.
This will occur mostly at the expense of computations performed at university computer centers, which, toward the end of the decade, will be used primarily to support astronomical computations at universities where only small amounts of computation are performed. In addition, some of the problems that are now studied with the biggest machines are amenable to solution with minicomputer-array processor systems. However, there will remain problems--black-hole dynamics, star formation, radio sources and jets, supernovae, galactic chemical evolution, magnetic fields and plasmas, and solar phenomena, for example--that are at the cutting edge of theoretical research and merit attention beyond the fraction of astronomical computing they
represent. Approximations must be made to fit these problems into the largest and fastest machines available today; our confidence in the results is weakened because of these compromises. Larger and faster machines expected in the coming decade may allow improved treatment of these problems and, very probably, attacks on additional problems that cannot be fit into machines available today.

The Panel makes the following recommendations concerning theoretical astronomical computations in the 1980's:

1. The primary recommendation is that the funding agencies [the National Science Foundation (NSF) and the National Aeronautics and Space Administration (NASA)] make available funds to purchase approximately 10 minicomputer-array processor systems for the purpose of theoretical calculations in astronomy. The "canonical" system and its associated costs are described in Appendix A. These funds should be supplied at a steady level of funding in real dollars. This will allow the purchase of 1.7 systems per year for 6 years (a typical useful life for a computer system before it becomes obsolete), after which the oldest systems would be replaced. The funds made available for this purpose should be primarily new funds if theoretical astronomy is to have the increased support that it requires. In addition, funding, perhaps on a cost-sharing basis, is needed to support the maintenance, operations, and software expenses for ten such systems after a steady state is reached. These computers need not be distinct from those used to perform image processing and analysis (see the next section), but an equivalent of 10 such systems should be dedicated to theoretical computations.

The exact number of such systems required is difficult to quantify. The number 10 represents the Panel's best guess at the number that is required and feasible; however, the proposed steady-state funding plan is flexible.
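The steady-state replacement arithmetic behind these figures can be checked with a short calculation (an illustrative check of the report's numbers, not part of the original recommendation):

```python
def purchase_rate(n_systems, lifetime_years):
    """Systems purchased per year to sustain n_systems in steady state."""
    return n_systems / lifetime_years

def replacement_interval(n_systems, rate):
    """Years each system must last if n_systems are sustained at `rate`."""
    return n_systems / rate

rate = purchase_rate(10, 6)
print(round(rate, 2))  # -> 1.67 systems per year, the report's ~1.7

# Same funding level (same purchase rate), but 12 systems instead of 10:
print(round(replacement_interval(12, rate), 1))  # -> 7.2, i.e., just over 7 years
```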
If it turns out that twelve systems are required, they can be purchased with the same level of funding, provided they are replaced at just over 7-year intervals rather than 6-year intervals.

2. The funding that supports computing at university computer centers should be maintained in those cases where the level of astronomical computation at a given university does not warrant a switch to a dedicated system. However, the funding agencies should be alert to those cases where one or two medium-scale users (about $30,000/year) and/or several small users (about $10,000/year) are