Paper 1

SYSTEMS SOFTWARE: THE PROBLEMS KEEP CHANGING

INTRODUCTION

Much of the history of software systems research reflects the efforts of academic codifiers to find stable, unifying generalization in a field continually revolutionized by changes in hardware technology. The following somewhat oversimplified account of major stages in the development of hardware will emphasize the constantly shifting priorities with which systems researchers have had to contend.

1. Era of Poor Reliability. The earliest computers were of very limited capability by today's standards, and operated correctly only for short intervals. The programming approaches used in this period had to reflect this unreliability, which among other things discouraged all attempts to construct complex operating systems. The computer user's whole emphasis was to squeeze a maximum of useful numerical computation out of a limited system that might fail at any moment. The "systems" that existed were simply subroutine libraries directly supporting the mix of problems to which a given machine was dedicated.

2. Era of Very High Hardware Costs. The first generation of commercially produced computers was substantially more reliable than the early experimental machines described above. These commercially produced machines could be expected to run reliably for periods of a few hours--long enough to run several jobs. However, substantial computing equipment was still very expensive and thus available only at a relatively few favored centers, whose main aim was to ensure efficient use of their high-priced machinery. External data storage equipment, consisting largely of magnetic tape drives, was still rudimentary. This hardware environment encouraged the development of multiprogrammed batch operating systems (like the pioneering University of Manchester ATLAS system, which also introduced the notion of virtual memory) that aimed to facilitate the efficient flow of work through an expensive hardware configuration.
Although the ATLAS system was a university operating system product, such products were rare in this early period. Indeed, the limited availability and high cost of equipment, and also the batch nature of the systems that could most readily be constructed, inhibited university experimentation with operating systems at all but a few favored centers. Accordingly, the batch systems characteristic of this period were generally developed by large industrial users and by computer manufacturers.

3. Era of Large Economies of Scale. Hardware costs had been declining from the first days of computers, but initially these economies could more readily be captured by large than by small computers. Grosch's Law, i.e., the statement that the performance of a machine is proportional to the square of its cost, emphasizes this phenomenon. Thus at first, the time-division multiplexing, or time sharing, of a single large computer was the most economical way to use hardware. Since falling hardware costs had already made programmer productivity an important issue, this favored the development of large, complex interactive time-sharing systems. Universities played an important role in this development, the MIT work on CTSS being particularly significant. The Manchester ATLAS work and the somewhat later MIT work on MULTICS also explored the memory protection and segmentation schemes needed to support highly dynamic shared use of large machines. Semantic structures for parallel processing, including such formal notions as "process," "semaphore," and "monitor," emerged from this period of academic involvement in interactive systems building. Nevertheless, the commercially successful, widely used time-sharing systems were developed by industry, largely by computer manufacturers, experimental university systems being generally not stable enough, well-documented enough, or sufficiently well maintained to support widespread use.

4. The Coming of Large Random Access Storage Devices. Processing of large volumes of commercial data had been central to computer sales from the time of the first commercially available machines. Fairly elaborate libraries and tape-sorting procedures were built up to facilitate the processing of these data on the tape-based systems originally available.
However, the development of large-capacity random access devices (discs) changed the context of systems design radically. Data bases of a hitherto infeasible complexity became possible, and large interactive systems became practical for the first time. Systems of this type spread rapidly to universities as interactive time-sharing systems, and commercially as online reservations systems and other interactive query systems. Data base systems soon became the fastest growing commercial applications of computing. However, for almost the first decade of their development, data base systems failed to attract much research interest. Although data base systems have subsequently become a recognized area of research, it is fair to say that their early development was spurred entirely by applications pressures, and that the initial research contribution to these systems, whether from universities or from industrial research laboratories, was negligible. Developers in large commercial firms, manufacturers, and software houses had brought data base systems to a fairly advanced practical state before most university researchers had become particularly familiar with the meaning of the term "data base." Modern research in data bases includes adoption of notions drawn from set theory, from language theory, from multiprocessing control, and from distributed systems and artificial intelligence, and it is beginning to have a significant impact on industrial practice.
5. Era of Multiprocessors and Minicomputers. Continuing declines in the cost of hardware, especially the fact that the introduction of monolithic memory chips caused the cost of memory to drop substantially, began to make small computers relatively more advantageous than they had been before, thus undermining Grosch's Law. This technological change tended to favor the use of inexpensive minicomputers for small applications, and also the combination of multiple small computers into higher performance systems. A new type of systems software, namely operating systems of simplified structure able to support single users or small groups of users, became appropriate. The UNIX system developed at Bell Laboratories is the best-known system of this type; however, universities also experimented with minicomputer software, producing a few successful systems, of which the UCSD Pascal system is the most widely used. Multiprocessor configurations are favorable for applications, such as air traffic control and airline reservation systems, that must be highly reliable and available without interruption. Manufacturers and software groups concerned with such applications have studied the problems of ultrareliable computing and the way in which multiprocessor configurations could be used to solve them. Since the university computing environment does not demand high reliability and most university researchers are unfamiliar with applications demanding such reliability, universities have thus far played only a small role in these investigations.

6. Era of Distributed Computing. The economic factors favoring the use of smaller computers have continued to gain in strength. Currently, it is possible to produce a reasonably powerful microcomputer, with a megabyte or more of monolithic memory, for just a few thousand dollars.
Systems of this sort provide enough computing power for many small-scale applications and are ideal working tools for the individual university or industrial researcher. Such researchers are generally interested in relatively light experimental computing and have reason to prefer microprocessor-based interactive computing environments decoupled from the load-conditioned shifts in response time to which large time-shared systems are subject. However, microcomputer-based work station systems are most useful if they can also be used to access larger systems, on which major data bases can be stored, and to which heavier computations can occasionally be relegated. Since communication costs have been declining steadily (though by no means as rapidly as computing costs), it has become feasible to provide this kind of computing environment by linking moderately powerful intelligent terminals or personal computers together in a communications network, which also includes a few big data base machines or scientific "number-crunchers." Relatively elaborate communication software and work station software systems emphasizing very high reliability file handling must be combined to support heterogeneous distributed systems of the sort we have described. The communications software needed for this was pioneered by the ARPANET effort, which was largely the work of nonmanufacturer software research laboratories with some participation by university groups. This effort demonstrated many important packet-switching
techniques and concepts that were subsequently adopted in commercial systems. The University of Hawaii's ALOHANET project, which became the basis of the Xerox Corporation's Ethernet product, represents another case in which university concepts in communication software were successfully transferred to industry.

THE ROLE OF RESEARCH

The history that we have recounted emphasizes the fact that the set of problems faced by software systems designers has changed steadily and radically over the whole period examined, owing to constant changes in the underlying technology. Given these changes, the task of the software designer has been to develop systems that could exploit new hardware developments as these appeared; he has had to deal with daily realities, rather than with abstract principles. To make research meaningful in a subject area whose ground rules are constantly changing is difficult. It is all too easy for an intricate piece of systems analysis to "miss the boat" because technological development changes the value of some assumed parameter drastically. Nevertheless, software research has a number of real accomplishments to its credit:

1. Such fundamental systems approaches as multiprogramming, time sharing, and virtual memory were all initially demonstrated at universities. Industrial software research groups originated the important notions of virtual machines, relational data bases, and packet-switching networks.

2. University and industrial research has given us at least some degree of understanding of the factors governing system performance. For example, the notion of "working set" forms the basis of our present understanding of the behavior of demand-paging systems. Performance analysis is an area in which it has even proved fruitful to apply formal mathematical techniques such as probability and queuing theory.
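The "working set" notion just cited has a simple operational reading: the working set of a program at time t is the set of distinct pages it referenced during its last tau memory references. A minimal sketch of that definition follows; the function name and the page-reference trace are invented for illustration.

```python
# Illustrative sketch of the working-set notion: the working set at time
# t is the set of distinct pages referenced in the window of tau
# references ending at t. The trace below is an invented toy example.

def working_set(refs, t, tau):
    """Distinct pages in the window of tau references ending at index t."""
    start = max(0, t - tau + 1)
    return set(refs[start:t + 1])

# A toy page-reference string (indices 0..9 are successive references).
trace = [1, 2, 3, 2, 1, 4, 4, 4, 2, 5]

# With t=8 and tau=4 the window covers references 5 through 8, which
# touch only pages 4 and 2.
print(working_set(trace, 8, 4))
```

A demand-paging policy built on this notion keeps just the working set resident, on the theory that recently referenced pages are the best predictor of imminent references.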
However, our understanding of system performance remains fragmentary, in part because technological changes have steadily changed the aspects of performance to which theoretical attention could most appropriately and usefully be directed.

3. Research has codified and clarified the terminology useful for thinking about the structure of complex systems. Examples of this are the notions semaphore, monitor, process, transaction, and deadlock. Research has also begun to contribute important structural protocols around which complex systems can be organized. Note, however, that many of the system-building techniques that are still most widely used even for constructing complex systems, e.g., the resource preallocation and ordering techniques used to prevent deadlock in operating systems, and the detection of deadlock by location of cycles in a "wait-for" graph, stem from the work of systems developers rather than from formally organized research.

To be successful, software research needs to walk a relatively narrow path. On the one hand, it must aim to precipitate some relatively unitary, technology-independent concept out
of the confusing welter of practical applications. On the other hand, it must not abstract so much or go so far beyond what is immediately practical that contact with current practice is effectively lost. Software research efforts that deviate in either direction from this path may fail to contribute substantially to developing practice. Demonstration systems lacking unitary focus have often failed to leave anything behind. This remark applies especially to systems developed at universities, which lack the sales organization that could bring a state-of-the-art system to wide public notice. Other problems that university software systems face are that their definitive completion and release tend to be long delayed and that ordinarily the developers of these systems do not command large enough resources for the attractiveness of their systems to be enhanced by systematic and substantial additions of functionality. With few exceptions, such systems have failed because they have been competing with much larger industrial developments on unfavorable ground. On the other hand, it is not hard to cite software research efforts that have failed by projecting overabstract notions that reflected reality in a manner that was too distant or one-sided. For example, research prototype systems based upon a concept of hierarchical structuring and data and control abstraction have been constructed, and held up as ideals. Practitioners have remained unconvinced, since many of these attempts have led to unacceptable degradations of performance. Modular decomposition is an idea on which software researchers and practical systems developers have generally tended to agree, but, for efficiency reasons, practical use of this idea has never been as extensive or as rigorous as researchers would advocate. Formal program verification remains a research ideal, but its impact on practice has thus far been close to nil.
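One of the practitioner techniques cited earlier, detection of deadlock by locating cycles in a "wait-for" graph, is easy to state concretely. In the sketch below, an edge from p to q means that p is waiting for a resource held by q, and a cycle means the processes on it are deadlocked; the process names and the dictionary encoding are invented for illustration.

```python
# Deadlock detection as cycle detection in a "wait-for" graph, sketched
# with a standard depth-first search. The graph is a dict mapping each
# process to the processes it is waiting on; names are hypothetical.

def has_deadlock(wait_for):
    WHITE, GRAY, BLACK = 0, 1, 2   # unvisited / on current path / done
    color = {}

    def visit(p):
        color[p] = GRAY
        for q in wait_for.get(p, ()):
            c = color.get(q, WHITE)
            if c == GRAY:          # back edge: a cycle, hence deadlock
                return True
            if c == WHITE and visit(q):
                return True
        color[p] = BLACK
        return False

    return any(color.get(p, WHITE) == WHITE and visit(p) for p in wait_for)

# P1 waits on P2, P2 on P3, P3 on P1: a deadlock cycle.
print(has_deadlock({"P1": ["P2"], "P2": ["P3"], "P3": ["P1"]}))  # True
# The same chain without the closing edge: no deadlock.
print(has_deadlock({"P1": ["P2"], "P2": ["P3"], "P3": []}))      # False
```

The preallocation and ordering techniques also mentioned above sidestep the need for such detection by making wait-for cycles impossible in the first place.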
University work on computer architectures during the last decade has also had limited impact on industrial practice. Various novel architectural suggestions, including architectures directly supporting high-level languages, data flow machines, and capability machines, have been proposed, but commercial development has not followed. While future development of such machines remains a possibility, it should be noted that successful earlier ideas such as time sharing and packet switching were quickly followed up commercially.

WHAT THE UNIVERSITY ROLE SHOULD BE

University computer science departments involved in software systems research have attempted to play several roles. They have tried to

1. demonstrate new and fundamental systems software concepts;
2. contribute by codification and analysis to the conceptual understanding of existing systems;
3. invent improved algorithms and protocols around which improved systems could be structured; and
4. demonstrate systems concepts by building and disseminating new systems.
The recent attempts of university researchers to demonstrate fundamentally new systems concepts (role 1) have been disappointing, especially when compared to earlier periods in which the university impact was large. The increased number of issues that a system design must address may partly account for this. In data base systems, computer networking, and small operating systems, substantial industrial efforts have generally been required to make new ideas and approaches entirely credible. Per Brinch Hansen's work on Concurrent PASCAL illustrates this pessimistic remark. In many ways this work, which built on such fundamental concepts as monitor and process and incorporated them into an interesting parallel processing language, was excellent. However, the small and relatively inefficient system built to illustrate these ideas ignored too many aspects of the overall operating system problem to be convincing. By contrast, UNIX, although less clearly structured, reached a level of efficiency and functionality that allowed it to achieve wide acceptance. The invention of improved systems algorithms and protocols is an area in which universities can be expected to lead. Some quite interesting inventions of this sort, having applications to compiler optimization and data communication schemes, have already been published. Although the practical impact of this work is still limited, it can be expected to grow in the future, especially as such new fields as robotics, which promises to make particularly extensive use of complex algorithms, develop in importance. One can safely predict that conceptualization and codification (role 2) will continue to be a role in which the university contribution will be predominant. However, this role is secondary in the sense that it records, preserves, and disseminates the results of prior research, rather than producing anything fundamentally new.
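The monitor concept on which Concurrent PASCAL built packages shared data together with the only procedures allowed to touch it, executed under mutual exclusion, with condition variables for waiting. A minimal sketch of the idea in modern terms follows; the bounded-buffer class and all its names are illustrative inventions, not drawn from Concurrent PASCAL itself.

```python
# A bounded buffer written in monitor style: all access to the shared
# list goes through entry procedures that hold one common lock, and
# threads wait on condition variables tied to that lock. The class and
# its names are hypothetical illustrations of the monitor concept.
import threading

class BoundedBuffer:
    def __init__(self, capacity):
        self._items = []
        self._capacity = capacity
        lock = threading.Lock()                    # the single monitor lock
        self._not_full = threading.Condition(lock)
        self._not_empty = threading.Condition(lock)

    def put(self, item):
        """Entry procedure: block while the buffer is full."""
        with self._not_full:
            while len(self._items) >= self._capacity:
                self._not_full.wait()
            self._items.append(item)
            self._not_empty.notify()

    def get(self):
        """Entry procedure: block while the buffer is empty."""
        with self._not_empty:
            while not self._items:
                self._not_empty.wait()
            item = self._items.pop(0)
            self._not_full.notify()
            return item
```

Because both condition variables share the one lock, put and get exclude each other, which is the mutual-exclusion guarantee a monitor provides; the while loops re-check the waited-for condition after every wakeup.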
University researchers are not likely to be content with so subsidiary a role, especially since, to play it well, they would have to make far more strenuous efforts to stay in touch with developing field practice than are now common. Roles 2 and 3 are particularly congenial for universities. The efforts they require can be undertaken by individual researchers or small groups, producing results that are often available within a short time and often discrete enough to be immediately suitable for publication. By contrast, university researchers attempting to play role 4 often find themselves on unfavorable ground. Nevertheless, attempts continue, in part because efforts to play role 1 often inspire them. This is only to be expected. New systems ideas and methods require credible demonstration; credible demonstration requires the construction of a system. However, not just any system will do if the work is to be taken seriously. While production of a commercial system should never be taken as the prime criterion of technical success, an effort to demonstrate the viability of a new software concept must at least address itself to the major problems addressed by commercial systems. This is what the MULTICS, System R, and Ethernet developments did, but what most university systems, built with limited resources and perhaps faced with obstacles inherent in institutional sociology, do not do.
These difficulties would seem to confine university systems research efforts to roles 1, 2, and 3. But here there arises the danger of increasing the already undesirable separation of university researchers from essential real-world concerns. Systems research divorced from systems building can easily come to concern itself with unreal problems. Moreover, even if the problems studied are well chosen, a university researcher working in a relatively "costless" environment may lose sight of the cost factors that must constrain realistic solutions and thus may impair the interest of his work as seen by industry. Some encouragement for university systems building efforts does come from the continuing decline in computation, data storage, and communication costs. As already noted, this strengthens the tendency to construct small specialized text processing, data base, process control, and personal computer systems. These systems will generally be simpler than the centralized time-sharing, multiprogramming, multiprocessing systems associated with large computers today. Their relative simplicity will make it somewhat easier for university-sized projects to produce interesting systems. Here, however, the question of contact with real field requirements becomes all-important. As systems grow to be more specialized and application-dependent, high-quality human factors design comes to be a central issue. The sales arms of commercial organizations alert them immediately to field requirements of this sort. University researchers tend to disdain the predilections of applications-oriented end users, which may blind them to the issues that small-systems designers will need to face.
If this tendency can be overcome, the availability of powerful small machines may make it possible for university software developers to affect practice more directly; otherwise commercial software developers and practitioners with backgrounds in a variety of application areas will call the tune. At any rate, it would certainly be unfortunate for university researchers to ignore the views of this latter group.