data bases should be continued and expanded (see Section VII, Astronomical Data Bases).

III. THE TREND TOWARD DECENTRALIZATION

When computers first appeared on the scene, they were large, costly, temperamental machines. This led to the establishment of centralized computer centers that purchased (or leased) the computer and associated peripherals and provided computer services to a community of users, none of whom could individually afford to purchase the computer. Because computer centers were the only computer facilities available, users had to tailor both the problems they addressed and their working habits to the capabilities and schedules of the computer centers.

As computer technology developed, mainframes became more reliable and more powerful but not appreciably less costly. (To be sure, the price/performance ratio improved dramatically, but this was accomplished by selling more performance at the same price.) Many university computer centers expanded their clientele, so that the original science and engineering users were now competing for resources with administrators, managers, the general student population, computer-science students, word processors, game players, and other nonscience and nonengineering users. Because of this influx of nontechnically oriented users, computer center staffs were expanded in order to provide support services and to make the computer appear easy to use. None of these developments was necessarily bad, but in many cases they had the consequence that the improved price/performance ratios of newer computers were exploited to provide additional system services rather than additional computational power. That is, many computer centers are charging the same amount per computation, in real dollars, today that they charged ten years ago.
In addition, university computer centers have been reluctant to support the video image display and hardcopy capability required for astronomical image processing (and, to a lesser extent, for astronomical theory). While the preceding paints a rather dismal picture of the traditional computer center, it must be emphasized that not all computer centers suffer from these problems; some astronomers are pleased with the performance of their university centers. Computer centers with specific missions, such as those at the Lawrence
Livermore Laboratory or the National Center for Atmospheric Research, have generally been successful in providing high-quality services at reasonable cost.

In the 1960's, the first minicomputers appeared. Minicomputers took advantage of the same technology used in mainframes but used it to decrease costs rather than to increase performance. Since their introduction, the performance of minicomputers has steadily increased, so that today the main difference (from the viewpoint of a scientific user) between a top-of-the-line minicomputer and a typical mainframe is mostly cost. (To be sure, the mainframe is supplied with several simultaneously running operating systems, support for many high-level languages, accounting software, word-processing software, and so on, but these capabilities are seldom used in scientific applications.)

An example should make this clear. The most powerful commercially available scientific computer is a vector machine that accommodates up to 65 Mbytes of main memory, costs about $10 million (with some peripherals), and has a computational capability of about 100 MFLOPS (million floating-point operations per second). Minicomputers (sometimes called superminis) are now available with 32-bit virtual memories, 8 Mbytes of main memory (which will increase to 32 Mbytes when 64-kbit memory chips become widely available), and a performance of 1-2 MFLOPS. Such a machine costs about $220,000 (with some peripherals). Benchmarks indicate that a large vector machine is about 75 times as powerful as a typical supermini and therefore about two times as cost-effective. However, an array processor can be attached to a supermini for about $80,000 and improves its performance by about a factor of 10. Thus a supermini with an array processor can be about four times as cost-effective as, and one eighth as fast as, a powerful vector machine. A note of caution, however: benchmark tests are highly application dependent.
Furthermore, peripherals can be a large component of the cost of a system. Therefore, the price/performance figures given above must be treated as coarse averages and may be expected to vary from application to application. Of course, university computer centers do not have top-of-the-line vector machines, and minicomputers all by themselves (without array processors) provide more cost-effective computing than is available from typical university computer centers.
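The cost-effectiveness arithmetic above can be sketched with the chapter's own round numbers. This is only an illustrative calculation, and the dollar and MFLOPS figures are the chapter's rough circa-1980 estimates, not vendor data; the supermini is taken as 1/75 the speed of the vector machine, consistent with the benchmark ratio cited above.

```python
# Rough price/performance comparison using the chapter's approximate figures.

def cost_effectiveness(price_dollars, mflops):
    """MFLOPS delivered per million dollars spent."""
    return mflops / (price_dollars / 1e6)

# Large vector machine: ~$10 million, ~100 MFLOPS.
vector = cost_effectiveness(10_000_000, 100)

# Supermini alone: ~$220,000; ~1/75 the vector machine's speed.
supermini = cost_effectiveness(220_000, 100 / 75)

# Supermini plus ~$80,000 array processor: ~$300,000 total, ~10x the speed.
supermini_ap = cost_effectiveness(300_000, 10 * 100 / 75)

print(f"vector machine : {vector:5.1f} MFLOPS per $M")
print(f"supermini      : {supermini:5.1f} MFLOPS per $M")
print(f"supermini + AP : {supermini_ap:5.1f} MFLOPS per $M")
print(f"supermini + AP vs vector: {supermini_ap / vector:.1f}x as "
      f"cost-effective, {(10 * 100 / 75) / 100:.2f}x as fast")
```

Working the numbers through reproduces the chapter's conclusion: the supermini with array processor comes out roughly four times as cost-effective as the vector machine while running at about one eighth its speed, and the plain supermini is somewhat less cost-effective than the vector machine (the "about two times" figure above).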