The National Academies | 500 Fifth St. N.W. | Washington, D.C. 20001
Copyright © National Academy of Sciences. All rights reserved.
THE EVOLUTION OF COMPUTER SYSTEMS

HERBERT SCHORR

For large computers since 1955, the cost, in current dollars, has dropped by a factor of more than 200, while processing speed has increased by a factor of nearly 400. It is expected that this evolution will continue and that large-machine performance will increase 8 times in the next decade. This growth will come both from continued improvements in technology that will reduce cycle times and from improvements in the design of processors that will reduce the number of cycles required per instruction.

The amount of information stored in electronic form is increasing at some 40 percent a year. The management of massive data bases to which many people need quick access requires a great deal of computational power, and once these data bases are on-line, business growth typically drives the demand for workstation access to them even higher. Today's state-of-the-art IBM 3380 stores 6,000 times more information per area of disk surface, at 150 times less cost per character, than did the first disk storage unit announced by IBM in 1956. Thin-film read/write head technology pioneered by IBM makes it possible to transfer data at up to 3 million characters per second. Even so, today's faster computers are restrained by the time it takes to locate information, so new inventions and memory hierarchies will be required in this area.

The need of IBM's large commercial customers for computational power is growing at about 40 percent a year, and this need will continue to be met by multiple-machine environments combining uniprocessors of ever-increasing power. Machines will be combined in a variety of ways: closely coupled, loosely coupled, through shared resources, and over high-speed buses. However, the degree of parallelism is likely to be moderate, in that only 16 or 32 main processors will be combined in single complexes.

Large-scale integration has made possible processors of moderate capability on a single chip. The performance of these microprocessors is growing steadily. Data-flow widths have increased from 4 to 32 bits. Some new microprocessors have on-chip caches, and cycle times can be expected to decrease significantly from the current level of 200 to 400 nanoseconds. Increases in performance of 25 to 50 times can be expected in the coming 10 years.

Microprocessors offer the possibility of alternative architectures to today's configurations, in which most processing is done by the central host. These alternatives involve moving function from the host out to attached devices in the form of intelligent workstations, printer servers, and file servers. Custom and semicustom very large scale integration (VLSI) is also being used to migrate function from software to hardware, to implement specialized functions (such as data compression), and to reduce the amount of random logic in small systems.

The availability of inexpensive, high-performance microprocessors has been responsible for the explosive growth in personal systems today, and this trend is expected to continue. The performance of these microprocessors will enable workstations to handle multimedia data types, such as image and digitized voice.

It should be possible in the future to build computers that accept and "understand" natural human speech and respond in a way that is appropriate, useful, and intelligible. Machine recognition of discrete-word speech by a speaker known to the machine is practical today.
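Discrete-word recognition for a speaker known to the machine amounts, at its simplest, to matching an utterance against stored per-word templates. The sketch below is a deliberately toy illustration of that idea, with invented words and feature vectors; it is not a description of IBM's actual recognizer, which worked on acoustic data of far higher dimension.

```python
import math

# Toy discrete-word recognizer: each known word has a stored "template"
# feature vector. In practice these would be acoustic features learned
# from training utterances by the known speaker; here they are invented.
templates = {
    "yes":  [0.9, 0.1, 0.4],
    "no":   [0.2, 0.8, 0.3],
    "stop": [0.5, 0.5, 0.9],
}

def recognize(utterance):
    """Return the template word closest to the utterance (Euclidean distance)."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(templates, key=lambda w: dist(templates[w], utterance))

print(recognize([0.85, 0.15, 0.35]))  # closest template is "yes"
```

Continuous speech is vastly harder than this one-shot matching, since word boundaries are not given and the search space grows accordingly, which is why the computational cost discussed next remains the obstacle.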
Continuous speech of high complexity has been understood by a computer at IBM with greater than 90 percent accuracy, but the required computational power is still uneconomically large. Increased performance, specialized hardware, and improved techniques will enable speech-recognition functions to be commonly integrated into workstations in the coming decade.

The growing trend toward personal computing parallels a phenomenon that took place with general-purpose minicomputers in the mid-1970s. The so-called minirevolution helped provide much-needed productivity in a hurry, but it eventually resulted in information bottlenecks throughout many enterprises. Because there was no commonality or central planning, information collected or produced on local processors was not compatible with information collected or produced in the rest of the organization. To avoid a repetition of that difficulty, serious thought must be given to interconnecting an organization's information-handling systems, whether word processors, host-attached terminals, or personal computers, with each other and with current host systems.

The term systems interconnection denotes the distribution of function and data among different systems within a network. Systems interconnections comprise several key components, including the network transport facility; a set of higher-level protocols for session and application; and methods for locating, requesting, and managing distributed resources. The goals of systems interconnection include resource and data sharing, modular systems growth, high availability, and growth that reflects the distributed nature of organizational structure. An important model is local clusters of intelligent workstations and servers on local-area networks (LANs) for the purpose of resource sharing. These systems consist of intelligent workstations accessing common storage and printing devices called file servers and printer servers. A file server provides disk space for stations that are not equipped with hard disks and allows storage, retrieval, and sharing of files. One main objective is to allow users at intelligent workstations to access large, shared-host data bases as well as expensive, host-based resources such as high-quality printers.

As computers become a daily part of more people's lives, attention is shifting from the central electronics complex to systems issues: programming, human factors, communications, and applications. The programming and the end-user interface are in many ways the limiting factors that determine how broadly the productivity benefits of information technology can be realized. Over the years computer users have found that an increasing fraction of their total cost lies in programming.
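The file-server model described above, diskless workstations sharing storage through a common server, can be caricatured in a few lines. The class and method names here are invented purely for illustration; a real file server of the period spoke a network protocol rather than an in-process procedure call.

```python
# Minimal sketch of the file-server idea: stations without hard disks
# store and retrieve files through one shared server, so any station
# can read what another has written. (Interface invented for illustration.)

class FileServer:
    def __init__(self):
        self._files = {}          # shared storage for all stations

    def store(self, name, data):
        self._files[name] = data

    def retrieve(self, name):
        return self._files[name]

class Workstation:
    def __init__(self, server):
        self.server = server      # no local disk; all file I/O is remote

    def save(self, name, data):
        self.server.store(name, data)

    def load(self, name):
        return self.server.retrieve(name)

server = FileServer()
a, b = Workstation(server), Workstation(server)
a.save("report.txt", "quarterly figures")
print(b.load("report.txt"))       # another station sees the same file
```

The same division of function applies to printer servers: the shared resource lives behind one interface that every station on the LAN addresses.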
This programming includes both that supplied by the computer manufacturer or other vendors and that done by users themselves as they evolve from batch processing to telecommunications-oriented systems, and finally to distributed complex systems. More circuits and hardware will pay for themselves if they make this programming job more productive.

During the 1980s, support of the world's 30 million or so office principals will become a major focus of effort. Eventually office systems will have to be able to merge today's separately handled data, text, voice, and image information into integrated electronic documents that can be communicated, retrieved, and otherwise dealt with, without the user's being conscious of the form of the data when stored. Artificial distinctions between data, word, and image processing will gradually disappear as document distribution, filing, and retrieval systems for all kinds of documents evolve.

What about the university of the future? Universities across the land are entering an era of information systems experimentation. One prototype is beginning to emerge at Carnegie-Mellon University (CMU) in Pittsburgh, where IBM and the university have just begun the joint development of a unique personal-computing network. One of CMU's primary goals is the full integration of computing into undergraduate and graduate education. Eventually, it is expected that all CMU students, faculty, researchers, and professional staff will have access to personal-computer workstations that are effectively 20 to 100 times more powerful than current home computers. Each will also have access to shared, central data bases through a high-speed, local-area network. By 1986 several thousand of these personal workstations should be in place.

The planned configuration will consist of four system elements: the workstations; local-area networks, or clusters; a backbone communications network with gateways and bridges to the other elements; and a central computing and data base facility. In addition to providing traditional computing capabilities, the personal workstations will enable users to work on several projects simultaneously and to create drawings and diagrams and see these exactly as they will be printed; the workstations will also have provision for audio input and output.

During the 1980s, work on communications system architecture will have to focus on three fronts: (1) the geographically distributed communications network, (2) communications within the establishment, and (3) the gateways between them. Modern data networks will adapt themselves to the needs of the people instead of making the organization conform to the system's structure.

The U.S. Department of Defense made a greater contribution than it knew in establishing ARPANET, a network linking research groups in U.S. universities, in the early 1970s. The original idea was to permit sharing of computers that were not equally loaded.
It turned out that this network was really used to discover the power of electronic mail and of document creation and distribution. This led to a new capability for collaboration by scientists thousands of miles apart. IBM has had the same experience on a much larger scale with an open-ended, peer-connected network called VNET, which contains more than 1,000 host processor nodes in hundreds of cities in 18 countries. VNET grew "bottom-up," so to speak, when two laboratories working on a joint project needed to exchange data. Soon other related sites were added, and the network grew until virtually all of IBM's scientific and engineering locations worldwide were part of it. It is still growing. About 50 university computers are also hooked up, using this capability under the name BITNET (the acronym means "Because It's There"). Such networks have an almost unique ability not only to improve the effectiveness of working groups but also to foster subcommunities of interest of which the host organization is not even aware.

For communications among establishments, computers and their users are already starting to benefit from the many public digital communications services and data networks now offered or planned, and from communications satellites through which data can flow at the same speed at which the computers themselves operate. This external telecommunications environment will, no doubt, continue to evolve, and agreement on and development of new standard interfaces for voice and data by common carriers around the world is another virtual certainty. But the main technical thrust of the 1980s lies elsewhere, within the establishment, where local-area networks (LANs), the private branch exchange (PBX), and host attachments are the key elements in a rapidly changing environment.

A LAN can be defined as any information transportation system that provides high-speed connection between users within a single building or a campus complex, through a common wiring system, a common communications adapter, a common access protocol to allow connection between users, and common shared resources, such as the largest files and the most powerful printers. Several LAN approaches are in contention, including IBM's token ring architecture. Sorting out the technical merits and application domains of the token ring, the contention bus, and competing LAN approaches will be a major activity of the 1980s. However that debate turns out, local-area networks clearly must be integrated into network architectures, and that is driving the further evolution of IBM's Systems Network Architecture (SNA) from its original hierarchical, large-host orientation toward one that increasingly fosters peer-to-peer communications.

The typical 1980s PBX will have a significant and growing role.
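The token-ring access discipline mentioned above can be simulated schematically: stations form a logical ring, and only the station currently holding the circulating token may transmit, so access is orderly and collision-free. The model below is my own toy sketch of that discipline only, not of IBM's token ring product or the frame formats it used.

```python
# Schematic token-ring access: the token passes from station to station
# around the ring; the holder may send one queued frame, then releases
# the token to its neighbor. Contrast with a contention bus, where
# stations transmit at will and must recover from collisions.

def simulate(stations, queued, rounds=2):
    """stations: names in ring order; queued: frames waiting per station.
    Returns the transmissions in the order the token permits them."""
    log = []
    for _ in range(rounds):
        for s in stations:                 # token circulates the ring
            if queued.get(s):
                frame = queued[s].pop(0)   # holder sends one frame...
                log.append((s, frame))     # ...then passes the token on
    return log

stations = ["A", "B", "C"]
queued = {"A": ["a1", "a2"], "C": ["c1"]}
print(simulate(stations, queued))
# [('A', 'a1'), ('C', 'c1'), ('A', 'a2')]
```

Note how station A must wait a full circuit of the token before sending its second frame; bounded access delay of this kind is the token ring's chief selling point against contention schemes.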
The PBX can be expected to provide not only traditional functions, such as attachment and control of conventional telephones and of data devices using modems, but also advanced voice capabilities such as speech messaging. In addition to the attachment of analog telephones and data devices with modems, there will likely be digital telephone attachments and direct digital attachment of terminals to PBXs.

A facility that will continue to have a significant role in establishment and office systems of the future is the host-attached controller. It will evolve to include local-area network functions for workstations and terminals while offering access to the facilities of a central host.

All three of these intraestablishment media will be required to support the full range of applications that will evolve in future establishment systems. Many customers will demand capabilities that can be provided only by having more than one, and this will increase the need for gateways that can bridge both between these different local forms and between local and wide-area communications systems.

As the success of data base/data communications applications spurs the rapid growth of intelligent, programmable workstations, the personal computer and the data terminal are beginning to merge. The wave of the future may well be something that looks very much like an easy-to-program, more powerful version of a personal computer, networked both to similar workstations and to shared input/output devices as well as to shared data on larger host systems.