

The National Academies | 500 Fifth St. N.W. | Washington, D.C. 20001
Copyright © National Academy of Sciences. All rights reserved.




OCR for page 64
APPENDIX

Computer-Related Technologies

The computer-related technologies discussed in this report are applicable, of course, not only to other large data storage and information systems such as the one proposed for the Social Security Administration, but also to other kinds of computer-based applications. The purpose of this appendix is to pull together the discussion found in the body of the report concerning the technologies of storage and memory, alternative design concepts, data base, programming and software, and semiconductor components, including integrated circuits and microprocessors. These last components, especially, find diverse applications in the fields of transportation, communications, and consumer appliances in the home.

Data Base Storage

The design of the Social Security Administration's future process, as conceived by the agency, calls for the primary data base to be stored at a central site--presumably the facility under construction in Baltimore. This central data base would be connected to a group of interactive terminals over a nationwide network. In evaluating this concept, the panel considered alternative solutions for the storage of data.

One alternative is the extensive distribution of both the processing and storage of the information in the system's data base. The distribution would be essentially to the district office level, with 1,200-1,500 data storage and processing sites. Another alternative is a regional separation of the data base into several large data storage and processing centers, a design concept commonly referred to as regional. This would provide access to the information through the same techniques as the centralized design. The major difference turns on the storage points, in this case from two to six, each providing capability for the backup of other portions of the data base. Thus, the three alternatives described more fully in this Appendix are the central, distributed, and regional design concepts.

Central Design Concept

The panel recognizes the advantages of the central design concept in the following ways:

o The SSA's existing system incorporates a central design and, therefore, the transition plan, involving personnel and facilities, could be more readily adapted to this design than to the other concepts.

o The SSA is constructing a central data processing site in Baltimore that could incorporate this data processing design.

o It permits more ready resource allocation toward modification of the basic design and could be adapted more easily to changes in legislative requirements or operating procedures than the two other concepts.

Distributed Design Concept

The panel found the following significant shortcomings in the distributed design concept:

o It lacks the flexibility to accommodate easily any required changes in data format, content, processing capability, or operating procedures.

o While the storage of the information could reasonably be accommodated at the local office where the information would be used frequently, the mobility of the U.S. population makes it likely that a significant percentage of the accesses to a particular file will involve more than one district office. Based on statistical information and the practical experience of SSA field personnel, provided to the panel during the review, the panel has concluded that there is considerable interchange of information among various district offices. This would require significant processing capability in each one.

o The processing required to carry out the SSA process is substantial. The data could either be processed at points within the network or, alternatively, would have to be moved to a substantive processing point on a periodic basis. This would mean either major processing capability at each distributed point, or major communications capability between the distributed storage points and significant processing sites, or both.
o The integrity of this data base is, of course, uppermost in the minds of the SSA planners. To maintain a high degree of security, with uniform policies and procedures, would be more difficult if the information were to be located at a large number

of storage points. The panel recognizes that significant security precautions are in place now and would always be in effect at the local offices. However, at any time only a small portion of the data base is located at a district office in the present system--or in either of the proposed central or regional design concepts. The degree of risk is less, therefore, with a central or regional design. To provide for the security of the distributed system would require that maximum security be maintained at all times at all district offices. While this would not appear to be feasible from an economic standpoint, because of the large number of district offices, it is feasible to provide maximum security at a central site or even at several regional centers. Furthermore, the access to a central or regional data base from a district office could be regulated with strict security procedures at all times, particularly during hours when a district office is not manned.

o The panel found that the potential for fraud against the SSA organization would be greater with the distributed concept because of the larger number of personnel who would have intimate access to the processing capability and the storage media in the district offices than the number with access in the central or regional designs. In the latter alternatives, personnel having access to the data and the processing capability would be more restricted and could be more readily audited.

The Regional Design Concept

The panel has observed during its deliberations that the regional design alternative should be considered for the future SSA process as it evolves. As it has happened, the SSA has relaxed the fixed requirement that the data base be located in a single central site. The regional concept is more expensive than the centralized design, but a major consideration for holding this option open is the opportunities for security and redundancy of the data base.
Although it will complicate the design to provide access from district offices to multiple centers, the panel noted that this design offers great security for information.

Data Base Structure

An examination of the basic structure of the SSA data base is a necessary prelude to some of the fundamental considerations of the panel. The data base is keyed primarily by the social security number (SSN), and each client's record contains data that has an affinity for certain functional requirements, such as the data that identifies the holder of a particular SSN, the earnings data describing the client's earnings history, and claims data, if, in fact, the client has made claims against his social security account. Therefore, the data base may be viewed as a two-dimensional matrix, as shown in Figures 6 through 9. SSN's are

[FIGURES 6 and 7 SSA Data Base - matrices of SSN's against the functional data categories (Identity Data, Earnings Data, Claims Data, Address Data), illustrating symmetric and asymmetric segmentation.]
located on the vertical axis and the functional data categories on the horizontal axis of the Figures. The panel has attempted to itemize the advantages of the symmetric and asymmetric approaches, shown in Figures 6 and 7 respectively, to segmentation of the data base. The advantages of the asymmetric configuration are:

o Improved capability to add new applications without disturbing existing ones, inasmuch as the applications tend to have an affinity primarily for a corresponding part of the data.

o Capability to transfer applications from one processing site to another, or if an application is shifted to another agency, to disengage it from the SSA system.

o As a result of these two capabilities, the system would be better able to respond to external priorities. An example of this might be the implementation of the post-entitlement function initially in the conversion process--one of the desired objectives of the SSA plan.

o Because the current data base is basically structured asymmetrically, conversion to an asymmetric configuration would probably be simpler than to a symmetric configuration.

o An asymmetric data base and configuration would offer great potential to vary the hardware and software configurations. For example, different central processing unit (CPU) architectures could be used to support different applications. Such variability is characteristic of the current SSA process in that there are various CPU architectures supporting subsets of the current SSA process.

o Program distribution and maintenance could be simplified, because the process for one function would be performed primarily in one location, as opposed to the symmetric configuration where the same processing would be done at all centers. The one-site-per-function structure in the process would enable better process specialization and possibly better utilization of personnel by allowing them to specialize.
o Improved capability would be offered, from a cost management standpoint, to isolate the cost for each function.

o By segregating the functions with the asymmetric file, the conversion and development process could be segmented into smaller pieces.
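The function-partitioned (asymmetric) arrangement itemized above can be sketched in modern notation. This is an illustrative sketch only: the site names, the four functions, and the routing table are assumptions for the example, not part of the SSA design.

```python
# Sketch of asymmetric (function-partitioned) segmentation:
# each functional category lives at exactly one site.
# Site names and function names below are hypothetical.
ASYMMETRIC_SITES = {
    "identity": "site_A",
    "earnings": "site_B",
    "claims":   "site_C",
    "address":  "site_D",
}

def route_asymmetric(function: str) -> str:
    """Every request for a given function goes to the one site that owns it."""
    return ASYMMETRIC_SITES[function]

def whole_person_sites(functions):
    """A whole-person view must gather data from several sites."""
    return {f: route_asymmetric(f) for f in functions}

print(route_asymmetric("earnings"))                 # site_B
print(whole_person_sites(["identity", "claims"]))
```

The one-site-per-function property is what simplifies program distribution (each function's code runs in one place) but complicates any query that spans functions.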

The advantages of the symmetric segmentation of the data base are:

o Easier load leveling by splitting the work symmetrically and creating new centers as increased processing demand is identified.

o Better capability for integration and synchronization of the files and, as a result, better support of the "whole-person" design concept.

o Simplified network routing, because interrogation of the data base in most cases is by specific account numbers or SSN's; the network could route inquiries to the specific location that maintains all the information needed for response.

o Better traffic balance for the data communications network.

o Less redundancy of data in the data base. While this is an advantage of symmetric segmentation, either symmetric or asymmetric structuring of the data base in the future process would offer less redundancy than is inherent in the current SSA process.

o Improved economics of the backup process, because all of the centers would be similar. A need would exist for only a generalized backup capability.

o A single point of control at the central facility would result in a more cohesive design.

Given these fundamental characterizations of symmetric and asymmetric segmentation of the data base, the implications of the various modes of executing this segmentation can be evaluated in some detail. While the size of the data base suggests that segmentation is the best format, the method used to effect it is critical to the fundamental design of the system. Despite the attempt to keep the data base independent of the application systems as a design objective, there will always be a strong relationship between the applications processes and the data structures. Therefore, the processing requirements are important in making decisions on data base segmentation.
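The simplified network routing claimed above for symmetric segmentation can be sketched as follows; the number of centers and the hashing scheme are illustrative assumptions, not part of the SSA design.

```python
# Sketch of symmetric (SSN-partitioned) segmentation:
# all of a client's data, across every functional category,
# lives at one center derived from the SSN itself.
# NUM_CENTERS and the modulo scheme are hypothetical.
NUM_CENTERS = 4

def route_symmetric(ssn: str) -> int:
    """One inquiry, one center: the SSN alone determines the location
    that maintains all the information needed for a response."""
    digits = int(ssn.replace("-", ""))
    return digits % NUM_CENTERS

# Identity, earnings, and claims inquiries for one SSN all resolve
# to the same center, so no cross-center gathering is needed.
center = route_symmetric("123-45-6789")
print(center)
```

This is the sense in which symmetric segmentation aligns naturally with the whole-person concept: the routing decision and the record assembly happen at a single location.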
While the panel has concluded that the segmentation of the data base does not need to be entirely symmetric or asymmetric, it has observed that certain portions of the data base will have to be segmented in an asymmetric format. For example, the identity data must be maintained in a single center within the future process so that single-point assignment, control, and management of the identity data can be maintained. Similarly, because of the nature of the earnings data that are filed with the SSA, it may be desirable to maintain the data in an asymmetric way.

If, however, the identity data were determined to be the only information required to support the asymmetric format, then the primary segmentation of the data base could be symmetric, with asymmetric subsegmentation, as illustrated in Figure 8. In contrast to the hybrid segmentation shown in Figure 8, the primary segmentation for all of the data base could be asymmetric, with symmetric subsegmentation, as depicted in Figure 9. The panel has concluded that there might well be a hybrid segmentation similar to those shown in Figures 8 and 9, with the data base segmented both symmetrically and asymmetrically at various levels. For example, in handling claims, the records that are referred to most frequently could be maintained on faster-access storage media than those with a lower rate of recall. In essence, media segmentation of the information within the data base could be carried out as a symmetric subsegmentation within the primary asymmetric segmentation of the data base.

The Whole-Person Concept

The design objective of the whole-person concept has been endorsed by the panel as a desirable objective for the future SSA process. Adoption of this concept will simplify operations and make it possible to support the basically on-line data base needed to meet the future service objectives. While the symmetric segmentation of the data base appears to align itself more readily with the whole-person concept, the data base need not be segmented symmetrically in order to achieve this objective. It is necessary, however, that the information be maintained on-line in order to achieve an effective whole-person concept and to provide the timely access that is a primary objective of the future process.

Data Base Technologies

The panel has encouraged the use of standard technology and the division of processes into discrete processing capabilities as a necessity in the design of the data base.
Future technology, including storage devices, will continue to provide alternatives that will need to be considered in the design effort to improve service levels and reduce service costs. As technology improves, the design concept for direct on-line access to information should be able to accommodate the technological advances. The same consideration holds for software development. The panel also has suggested that, wherever possible, the data base design incorporate software subsystems in wide use and maintained as standard products. The reason for this is to minimize the need to maintain a highly specialized technical staff for the support of specialized software. The large size of the data base imposes some significant limitations on what products are available from the commercial hardware and software markets. As a result, some exceptions may have to be made. The overall design of the future data base calls for an on-line system. However, there will be variations in the speed of access to

[FIGURE 8 SSA Data Base - Hybrid Segmentation (Symmetric/Asymmetric): SSN's against the functional data categories, with symmetric primary segments and asymmetric subsegments.]

[FIGURE 9 SSA Data Base - Hybrid Segmentation (Asymmetric/Symmetric): SSN's against the functional data categories, with asymmetric primary segments and symmetric subsegments.]

this information because of the economics of the storage media and the options that are available for providing lower response speeds for portions of the data base. Because of the size of the data base using hierarchical storage, the ready availability of commercial hardware and software systems may be somewhat limited. Accordingly, a cost tradeoff will need to be made as to the degree of specialization required to support the data base. The programming and maintenance costs of specialized support will need to be evaluated against any additional hardware operating costs that arise from a generalized data base system that is commercially available. Stability of the data base is critical to the successful operation of the entire SSA process. Consideration is often given to the degree of stability that is possible with specialized software as opposed to software having more common usage and, therefore, a potentially higher degree of stability.

Storage (Memory Technology)

At present, most computer systems include the following levels of memory:

o Archival (magnetic tape)
o File (moving head disk)
o Intermediate (drum or fixed head disk)
o High-speed (core or semiconductor)

Buffer or cache memory associated with high-speed memory is not included because it is invisible to the programmer. For archival purposes, magnetic tape is satisfactory and will continue to be used for this purpose during the foreseeable future. File disks are both efficient and reliable, but a non-mechanical replacement for them is desirable if a total reduction of environmental restrictions on computer usage is to be achieved. Paging from drums is very successful when used for batch applications. The extra amount of paging required for time-sharing and the consequent need for a high level of multiprogramming requires large high-speed memories.
The development of such systems has been hampered by problems of efficient and responsive operation under varying conditions, and it is likely that a limit has been reached in the complexity of operating systems. The replacement of the drum or the fixed head disk by a device with similar characteristics, but with greater speed, is necessary for future systems. Thus, the need exists for bubble memories or charge-coupled memories, if they can be produced at suitable levels of performance and cost. A body of thought prevails that computer technology is at a point at which drums or fixed head disks will disappear altogether. This will happen as soon as the cost of high-speed semiconductor memory becomes low enough that the economic advantage of providing the additional capacity outweighs the complications to hardware and software.
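The speed/cost tradeoff that drives this hierarchy can be summarized in a small sketch. The access times and relative costs below are illustrative placeholders chosen only to show the ordering, not figures from the report.

```python
# Sketch of the four-level memory hierarchy described above.
# Access times (seconds) and relative costs per bit are illustrative only.
from collections import namedtuple

Level = namedtuple("Level", "name medium access_seconds relative_cost")

HIERARCHY = [
    Level("archival",     "magnetic tape",          10.0,     1),
    Level("file",         "moving head disk",        0.05,   10),
    Level("intermediate", "drum / fixed head disk",  0.01,   50),
    Level("high-speed",   "core / semiconductor",    1e-6, 1000),
]

# The defining property of the hierarchy: each faster level costs
# more per bit, which is why capacity shrinks as speed rises.
for slower, faster in zip(HIERARCHY, HIERARCHY[1:]):
    assert faster.access_seconds < slower.access_seconds
    assert faster.relative_cost > slower.relative_cost
```

The argument in the text that semiconductor memory may displace drums is exactly a prediction that the cost gap between two adjacent levels will close enough to collapse them into one.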

Main Memory Technology

The same Very Large Scale Integration (VLSI) technology that will bring about improvements in cost/performance in the CPU will also bring about drastic improvements in the cost/performance of the main memory. The problems of sharing the main memory among users, as well as the limitations on its size because of its cost, probably will not arise in a decade. Distributed processing will be encouraged by price and performance improvements in main memory systems.

Auxiliary Storage

Much research is being performed on novel auxiliary storage technologies, including magnetic bubbles, charge-coupled devices, laser-holographic devices, and cryogenic devices. Such research will gradually result in the introduction of new types of auxiliary storage subsystems. Still, the potential for improvement exists in today's conventional magnetic technology. Most of the improvement will be in the form of increased area density of recording--more tracks per inch horizontally across magnetic disk faces and more bits per inch vertically. An area density improvement factor of at least 40 appears theoretically possible. This will result in a lower near-term cost-per-bit for magnetic disks than either bubble or charge-coupled device (CCD) technologies can match. However, access time to magnetic disks will remain a problem even if head-per-track arrangements become general. For this reason, it is likely that a variety of auxiliary storage devices will continue to be used through at least 1985, with the newer technologies appearing first at the high-speed, low-capacity end of the spectrum, then gradually superseding slower technologies as their costs drop. Both magnetic bubbles and charge-coupled devices could be in widespread use in, say, five years.

Summary of Storage Technologies

The SSA has limited its view of storage technologies to those available today.
Although this approach narrows the number of system alternatives, it is practical unless some dramatic change, which is not now foreseen, occurs in the period during which the system will be implemented. The SSA has correctly assessed the state-of-the-art in storage technology by assuming the continuing availability of a range of storage devices based on relative cost/performance relationships. Large capacity library stores are currently available with capacities of 472 gigabits, access times of 15-30 milliseconds, and a cost of 30-50 cents per megabit. Continuing developments in optics and magnetic media offer the potential for significant cost and capacity improvements. In the next five to ten years, potential improvements of three to five times in area density, with a two to three times reduction in cost, are possible. Similar improvements are possible in 1/2-inch tape, but the semi-automatic

operation of libraries and space economics will limit its potential usefulness, except as an interchange medium for compatibility purposes or as a small system library. Optical storage technologies with read-only/write-once capability are available, with the potential for future recording density 100 to 300 times greater than magnetic media at relative costs of 1 to 10 percent. Intermediate storage products such as hard surface and flexible magnetic disks, video disks, electron beam devices, and bubble and charge-coupled devices will be available for use in electronic libraries as intermediate storage for on-line access. A number of factors are stimulating the use of these devices:

o Low cost processors
o Diverse processor architectures
o Data sharing among several processors and applications
o Increasing programming costs
o Transition problems in changing several system elements simultaneously

Hard surface non-removable disks are available with single spindle capacities of 317 megabits (MB) at a cost of $1.81/MB/mo and access time of 25 milliseconds. Improvements of four times in density and access in the next decade, with decreases of three to four times in cost, are potentially possible. The removable disks have capacities of 140 MB at a cost of $5.56/MB/mo. Improvements similar in magnitude to those on hard disks are potentially possible in a similar time frame. Bubble and CCD technologies are already being introduced to the market. These will provide memories at a projected cost of about 30 to 40 millicents per bit in, say, ten years.

Computer Storage

Timely access to data elements (social security number, earnings, etc.) is an important and persistent factor in the design of the SSA system. In the past, programs have been written that access specific records stored on tapes or disks in a unique sequence. When the size of an individual data element is altered or new data elements are added, major programming changes may be required.
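The fragility of fixed-layout record access described above can be illustrated with a sketch; the record layout, field widths, and field names here are hypothetical, not the SSA's record format.

```python
# Sketch of why position-dependent record access forces reprogramming.
# A packed record: SSN (9) + earnings (6) + state (2). Hypothetical layout.
record = "123456789" + "025000" + "NY"

# Positional access: every program hard-codes the offsets. If the
# earnings field is widened, every such slice must be found and changed.
ssn_positional = record[0:9]
earnings_positional = record[9:15]

# Access by named field: only this one table changes when the layout does.
LAYOUT = {"ssn": (0, 9), "earnings": (9, 15), "state": (15, 17)}

def field(rec: str, name: str) -> str:
    """Extract a named field using the central layout table."""
    start, end = LAYOUT[name]
    return rec[start:end]

print(field(record, "ssn"))       # 123456789
print(field(record, "earnings"))  # 025000
```

Centralizing the layout in one table is a small-scale version of the data independence the text advocates: programs name the element they need rather than the place it happens to be stored.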
Frequently, the same data elements may be required in differing groupings and/or different sequences for other programs. The result is the need to synchronize the updating of several different records on a controlled basis. The SSA has recognized this problem for its current set of applications by recommending the whole-person concept. In order to accommodate changes in technology as well as modifications to the system, both within existing and new programs, the panel

o The same basic concept as above, but with two to six locations providing redundancy of processing as well as permanent data storage capability.

o Distribution of the permanent data to the many district offices.

Because the potential exists for innovative approaches leading to decreased on-line storage costs, which can make feasible redundant storage of data, analysis of potential design techniques always needs to be made independently of the central processing complexes.

Programming

Substantial progress is being made in improving the productivity and effectiveness with which computers are being programmed. In addition, the dramatic decreases in computer hardware costs due to large scale integration continue to make problem-solving by computer more practical. As a result, the information processing industry has clearly passed from being capital intensive to being people intensive. The principal cost of using computers will continue to be that of the people associated with these machines. Programming the computer and planning its operations so that it is easy and economical to use will be the technical and operational challenge of the future.

The most successful way of increasing computer programming productivity is through high-level languages. Such languages enable the computer user to perceive and solve the real applications problems without getting entangled and confused with detail. Trends in high-level languages are:

o Building of special purpose high-level languages for data base manipulation and for professional jargon language systems that make them useful for unique applications.

o Facilitating the structuring of the applications problems and their solutions, so that the resultant programs are probably correct when written.

o Employing interactive graphics in both the definition of the application and its programmatic solution.

o Minimizing machine dependencies in the high-level languages.
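The productivity argument for high-level languages can be made concrete by expressing the same computation at two levels of abstraction. This is a modern sketch, not an example from the report, and the data values are invented.

```python
# The same computation at two levels of abstraction.
earnings = [1200, 3400, 900]  # hypothetical quarterly amounts

# Low-level style: explicit loop, index bookkeeping, off-by-one risk.
total = 0
i = 0
while i < len(earnings):
    total = total + earnings[i]
    i = i + 1

# High-level style: the intent, with the mechanical detail hidden.
assert total == sum(earnings)
print(total)  # 5500
```

The shorter form is not merely terser; because there is no index arithmetic to get wrong, it is "probably correct when written" in exactly the sense the trends list above describes.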
Programming, in the meaning used here, is the art of describing the problem to be solved in a form that results in effective computer solutions to the problem, produced economically. Skilled people are the essential ingredient in programming. System design is part of the programming process in this context. One of the reasons that computer programs have been so large and intractable historically is that the programs were a people-built bridge between whatever hardware could be obtained and an inflexible definition of the problem to be solved.

Programming is now evolving so that the system and hardware designs must be done in conjunction with the coding of the designs in order to achieve effective solutions. When the principles of top-down design and high-level language programming are applied to the full power of computers, communications, and interactive display graphics, modern information systems become flexible, easy to use and understand, and economical to program and operate.

The rate of improvement in software technologies is appreciably smaller than that in hardware. In each of the last three decades, programmer or software effectiveness has increased by only a factor of about five. By contrast, for each of the decades, computer hardware as measured by cost/performance has improved by at least a factor of 20. These relative rates of improvement probably will continue for the next decade. Considering the high costs and difficulties now being experienced in the programming and operation of computer systems, significant leverage can be obtained by taking advantage of each new programming technique, even if it only doubles the effectiveness of its users.

System Software

High-Level Languages

The burden for ease of use falls on high-level languages. Such current programming languages as COBOL and FORTRAN will be enhanced to take advantage of new easier-to-use features as they appear in subsystems, such as communications and data base management. However, the 1980's are likely to see a shift away from procedural languages toward those that tend to describe a problem rather than state the solution. There now exist rudimentary problem-definition languages or dialog processors that are used to tailor an applications package--a payroll, for example--to the needs of a specific user. Many more generalized products will appear through the 1980's, which will provide significant advances in ease of use.
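A problem-definition language of the kind described, one that tailors a generalized package such as a payroll to a specific user, can be sketched as a declarative specification interpreted by the package. Everything here is hypothetical: the specification keys, the deduction rates, and the tiny interpreter standing in for a real product.

```python
# Sketch of the problem-definition style: the user states WHAT the
# payroll run should do; a generalized package decides HOW.
# Spec keys, rates, and the interpreter are all hypothetical.
payroll_spec = {
    "pay_period": "monthly",
    "deductions": ["federal_tax", "social_security"],
    "report":     ["name", "gross", "net"],
}

RATES = {"federal_tax": 0.15, "social_security": 0.06}

def run_payroll(spec, employees):
    """A tiny interpreter standing in for a dialog-processor-tailored package."""
    results = []
    for name, gross in employees:
        net = gross * (1 - sum(RATES[d] for d in spec["deductions"]))
        results.append({"name": name, "gross": gross, "net": round(net, 2)})
    return results

print(run_payroll(payroll_spec, [("A. Smith", 1000.0)]))
# net = 1000 * (1 - 0.21) = 790.0
```

Tailoring the package to a different user means editing the specification, not rewriting procedural code, which is the ease-of-use advance the text forecasts.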
Most systems try to be "English-like" (as, for instance, COBOL), so ease of use and intelligence are generally measured by how well the vocabulary and grammar of the product match those of the user and the application. Much more success is being achieved in matching the application than the user. The mid-1980's should see adequate dialog processors for computer operation, data definition, report generation, generalized query-update, job control language definition, and many applications areas. By the late 1980's there may be some really significant English-like language recognition, with the system resolving ambiguities through dialog with a user and memory of that user's characteristics. It may be possible then to contemplate the body of knowledge that would have to be encoded to provide a system with intelligence.

It might be useful for SSA to study in detail which high-level languages would be most appropriate for the long term. In this connection, attention is called to the study underway in the Department of Defense looking toward the standardization of certain high-level languages for particular uses. The SSA might consider a similar study

to decide on the appropriate languages for its use in applications software and systems programming.

Operating Systems

Many of the functions now performed by operating system software are likely by the mid-1980's to be performed by computer microcode. The major functions that remain, such as job scheduling, non-shareable device allocation, error monitoring, and recovery, will be performed by relatively simple monitors dedicated to specific modes of operation (e.g., batch, time-sharing) in some form of virtual machine environment. Evolution to this functional pattern will be slow, but the trend is obvious.

System Management Software

By 1985, say, computer systems should automatically log and report the data needed to control related external activities, including tape and disk library control, external job scheduling, and user accounting and billing. Logging will also be automatic for references to protected files. The file management system will control access symbolically, and the logging system (inaccessible to most users) will record all references. This capability, a subset of the automatic recovery logging process, should provide adequate file access control for multiple users. Measurement facilities for system performance will be needed, in addition to basic logging facilities, so that managers can observe the performance of programs, the balancing of system resources, and so forth. Such measurement facilities probably will interface with the diagnostic and error-detection software. System manufacturers and specialized software firms have developed competent performance measurement software. Little further evolution is needed for adequacy of measurement at an overall level. System simulation software, used to help users predict the behavior of changed systems and configurations, will be based on the results of the measurement software.

Software Costs

The trend toward separate pricing of software is expected to continue.
The operating system may be priced separately, but this strategy is currently in the evolutionary stage among the suppliers and not as clear as with other software components. Other varieties of software will be separately priced. Prices will vary by the function and the level of computer system for which they are designed. For the large multiprocessor system of 1985, the following software prices are forecast (in 1977 dollars):

Data Management System       $60,000
Language Processor (each)    $12,000
System Management Complex    $60,000
Message Control Program      $50,000

These are generally higher than prices for equivalent products today because of their greater value and complexity. The data management and system management software will often dominate a user's involvement with the computer.

Central Processing Units

By 1987, say, the predominant technology used in the design of Central Processing Units (CPU's) is likely to be Very Large Scale Integration (VLSI), based on improvements in integrated circuit design and fabrication. Within a decade, this development will enable from 10,000 to 50,000 logic gates to be placed on a single integrated circuit chip, with performance equivalent to that of today's most powerful types (subnanosecond emitter-coupled logic). VLSI should decrease the cost of today's mid-range mainframe CPU's by one to two orders of magnitude. Upper performance limits are not as clear, but the large decrease in gate-to-gate interconnection distances made possible by placing such large numbers of gates on a single chip should result in hardware performance two to four times greater than that attainable with present technology.

Using VLSI, the physical size of an equivalent mainframe CPU will be reduced drastically. Today's freestanding mainframe computers would be equivalent in size a decade from now to today's desk-top microprocessors. The size of the CPU will be determined by the human interface devices, such as the keyboard input and the cathode ray tube display terminal, and not by the internal CPU logic.

System Architecture

Future architectures will stress direct high-level language-based execution, protection and security of data, data independence, and an increasing trend toward dedication of functions such as input/output control and file management. A major thrust will be toward system organizations that can interface flexibly with formalized communications networks and allocate program execution among various locations.
The large centralized processing complexes of today will be replaced by very loosely coupled and highly distributed systems that will be used in an on-line fashion, as opposed to the batch and time-sharing modes of usage prevalent in current systems. Processing functions that require access to large amounts of shared data, or extremely large amounts of specialized processing power, will be centralized and available through communications links. Examples of such centralized functions are data base management systems requiring access to large amounts of shared data and specialized arrays of scientific processors, which are unique to a specific type of data processing capability.

Distributed processing and parallel processing will have large roles to play in the computer of the future. However, it would be wrong to assume that multi-programming systems of the type in use today are incapable of further development. Many of the problems experienced with

such systems arise from the extreme disparity in speed between their high-speed memories and the fixed head disks or drums used for data storage. Such problems will disappear either when fixed head disks and drums are replaced by bubble or charge-coupled memories, or when high-speed semiconductor memories become large enough for the disks and drums to be dispensed with altogether.

The difficulty in forming a clear view of the direction in which office data processing will develop is that so many options are open. "Office data processing" is used here to cover accounting, inventory control, invoicing, and the like, and to exclude linear programming, economic modeling, and similar applications; even though the latter may be pursued in an office environment, they are considered to be of a scientific nature. Computations to be performed in office data processing may readily be broken down into small packages. There is no compelling need for fast processors with very large memories; many tasks can be accomplished with distributed systems using microprocessors with modest memories. On the other hand, the advantages to an organization of centralizing its data processing operations are also apparent.

Office data processing is concerned with the handling of data, and at this moment the state of data base technology is in rapid flux. This results from the development of large-capacity disk files and the emergence of a need for advances over the filing systems developed since the late 1950's.

Microelectronics will bring the benefits of electronic data processing to small enterprises, including those that employ only a few clerks. It is not hard to imagine machines resembling the multi-register accounting machines of the pre-electronic era but containing powerful microprocessors. In the case of a small office, these would plug into a low-cost disk unit or bubble memory.
The same machines would have applications in larger offices, where they would be connected to an agency-wide network. Computers and computer-based terminals developed for this growing market are likely to become available in quantity and at low cost, and they may be expected, by their very existence, to have a broad impact across the whole computer field.

SEMICONDUCTOR TECHNOLOGY

The increased complexity and improved cost/performance of semiconductor devices have closely paralleled the progress in cost/performance of computer hardware over the past quarter century. Advances in semiconductor technology will continue to make marked contributions to the evolution of computer systems in the decade ahead. The primary impact, though, has shifted from the central processor to the terminal, peripheral, and memory areas.

Integrated Circuits

Because of problems in electronic interconnections, such as power loss and noise, the packing of functions into integrated circuits (IC's) has become the industry's method of making more powerful and

reliable computers. The major costs involved in IC production are (1) making the silicon chips and (2) assembling and testing the devices. In manufacturing, because of yield losses, increasing the number of functions--and chip area--raises the cost of the chip exponentially. In assembly and testing, the cost is relatively independent of the number of functions. The cost per function of IC manufacture therefore follows a U-shaped curve--the sum of the two other curves--with a minimum cost point (Figure 10). As the manufacturing process improves and yields get better, the minimum cost point moves to the right. In general, the complexity of products at the minimum cost point has doubled every year since the introduction of the integrated circuit. If the present rate continues, within 20 years IC's will be available with 1 billion elements.

Attaining higher levels of integration has so far been achieved primarily in three ways:

- Increasing chip size by reducing the random defects that cause yield losses.
- Introducing circuit innovations allowing higher function densities.
- Making individual circuit elements smaller.

Making circuit elements smaller has been the primary method of increasing integration so far and will probably remain so in the near term. Reductions by a factor of two have been occurring every five years. As dimensions are decreased, speed and density increase, but power density does not. This means that circuit densities could increase by a factor of 64, and speeds by a factor of eight, over the next 15 years. Combined density and chip size extrapolations indicate an ultimate increase of functional complexity by a factor of 2,000 during the next 15 years, with costs increasing only slowly from those of today's complex chips--resulting in cost per function drops of 100 (or even 1,000) to 1.

Microprocessors

As semiconductor technology has developed over the last 15 years, it has become the cornerstone of the electronic industry.
In the last five years, the advent of mass-produced microprocessors has accelerated the pace. Figure 11 shows past and future cost trends for transistor-transistor logic (TTL) microprocessors. The cost per active element group (AEG)--a measure equivalent to logic gates and memory cells--of TTL has been reduced by a factor of 60 in the past ten years, while that of assembled TTL has been reduced by a factor of only 15. Cost reductions for interconnection and packaging have not kept pace, but microprocessors have broken much of the cost barrier by decreasing the number of system assembly operations.

[Figure 10: Integrated Circuit Production Cost. Cost per function (dollars, log scale) versus number of functions per circuit: silicon chip cost increases with complexity, assembly and test cost per function decreases, and the total is U-shaped. As technology improves, the component cost curves move to the right and the minimum cost point reaches a higher number of functions per circuit.]

[Figure 11: Cost Per Active Element Group for Transistor-Transistor Logic and Microprocessors, 1967-1985, plotted against cumulative AEG's (millions).]
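The U-shaped curve of Figure 10 can be illustrated with a small numerical sketch. The exponential chip-cost model, the constants, and the function names below are illustrative assumptions, not figures from the report; only the qualitative behavior (chip cost rising with complexity, roughly flat assembly and test cost, and a minimum cost point that moves right as yields improve) comes from the text.

```python
import math

def cost_per_function(n, chip_base=0.50, yield_scale=2000.0, assembly=1.00):
    """Per-function cost of a chip with n functions (hypothetical constants).

    Chip cost grows exponentially with n because yield losses rise with
    chip area; assembly and test cost is roughly independent of n.
    """
    chip_cost = chip_base * math.exp(n / yield_scale)
    return (chip_cost + assembly) / n

def minimum_cost_point(yield_scale):
    """Find the n that minimizes cost per function by coarse search."""
    candidates = range(100, 50001, 100)
    return min(candidates, key=lambda n: cost_per_function(n, yield_scale=yield_scale))

# As the process matures (larger yield_scale), the cheapest chips hold
# more functions -- the "minimum cost point moves to the right."
early = minimum_cost_point(yield_scale=2000.0)
later = minimum_cost_point(yield_scale=8000.0)
assert later > early
```

With these particular constants the minimum sits near 3,000 functions and moves to roughly 12,000 when the yield scale quadruples, mirroring the rightward shift the figure describes.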

The microprocessor first integrates the processor, then memory (the combination now called the microcomputer), and later peripheral functions. Increases in functional capability, shown in Figure 12, result from increasing the component density on a silicon chip by advances in:

- circuit architecture
- device structures
- processing technology
- imaging techniques

Further progress will be made in circuit architecture and device structures, but much of the future improvement in AEG's per chip must come from processing and imaging.

The 4-bit microprocessor appeared in 1971, and the 8-bit microprocessor was in production two years later. The 16-bit microprocessor was introduced in 1975, and a 16-bit microcomputer with 32,000 bits of memory is forecast by 1980. With reasonable confidence, the industry can predict that by the 1980's it will have the technical capability to build a single-chip 32-bit microcomputer with 1 million bits of memory. The rising curve in Figure 13 shows the increasing number of active element groups (AEG's) per chip as a function of time.

The functional equivalent of a medium-scale computer, depicted in Figure 12, cost $30,000 in the early 1960's. Its equivalent now has dropped to $4,000 and is projected to be less than $100 by 1985, putting it in the price range of the personal computer market. As this is accomplished, greater challenges will be encountered in the costs of sales, service, and maintenance, requiring that the industry learn to incorporate self-diagnostic and self-repair functions into systems.

Figures 10, 11, and 12 show the expected cost and complexity improvements of microprocessors:

- During the next decade, cost per circuit element will decrease by a factor of 100.
- Cost of a specific product, such as the medium-scale computer, will decrease by a factor of 50.
- Circuit complexity of microprocessor chips will increase 100 times.
- Functional complexity will increase from a 16-bit microprocessor to a 32-bit microcomputer with 1 million bits of memory.
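These decade-scale factors imply steady compound rates of change. A quick arithmetic check, assuming a smooth exponential decline (the helper function and variable names are illustrative, not from the report):

```python
def annual_rate(start, end, years):
    """Constant annual growth rate that turns `start` into `end` over `years`."""
    return (end / start) ** (1.0 / years) - 1.0

# Medium-scale computer equivalent: $30,000 (early 1960's) to $4,000 (mid-1970's)
past = annual_rate(30_000, 4_000, 15)      # roughly -13% per year
# Projected: $4,000 to about $100 by 1985
projected = annual_rate(4_000, 100, 8)     # roughly -37% per year
# Cost per circuit element falling by a factor of 100 over the next decade
per_element = annual_rate(100.0, 1.0, 10)  # also roughly -37% per year
```

The projected product-price decline and the per-element cost decline work out to nearly the same annual rate, consistent with the product's cost being dominated by its circuitry.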
The most significant change in microprocessors will be the reduced number of chips or devices required in a system. The one-chip microcomputer, having both the memory and the processor on a single chip, is the leading edge of this trend. As circuit density increases, the amount of memory will increase, with peripheral and input/output functions, such as analog-to-digital converters, also included on the chip. This increase in function on the chip will reduce the cost of the

[Figure 12: Semiconductor Chip Complexity. Active element groups per chip, 1960-1990 (log scale), tracing the 1K, 4K, 16K, and 64K RAM, the 1-chip calculator, and the 16-bit microprocessor toward a projected 16-bit microcomputer with 32K-bit memory and a 32-bit microcomputer with 1000K-bit memory; E-beam and X-ray resolution limits are indicated.]

[Figure 13: Distributed Semiconductor Power. Growth in active element groups per chip, 1960-1985, spanning circuits, microprocessors, microcomputers, and systems.]

end product and open new markets for microcomputers unavailable today because of their costs.

[Figure 14: Microprocessor Application Evolution, 1970-1985: applications progress from semiconductor logic replacement through electromechanical logic replacement to new applications.]

The product and application impacts of microprocessors have been very profound. As shown in Figure 14, the first microprocessor applications were as replacements for semiconductor logic devices. The data processing industry was the leading user of microprocessors. As microprocessors have become faster and more powerful, data processing products such as minicomputers, small business computers, terminals, and peripheral devices now use large numbers of them.

By 1975, microprocessor prices had declined enough to start replacing electromechanical devices. The appliance industry is now progressing rapidly toward microprocessor-controlled products. The transportation industry will probably be the next major microprocessor user; by 1980, it is likely that automobiles will use millions of microprocessors to control engine functions.

The "smart" telephone will be an important future application; the installed U.S. base of telephones is 150 million units, with yearly additions or replacements of about 6 million units. Other large potential microprocessor applications are the control of TV's, tape recorders, and record players. A current entertainment product, the programmable video game, is likely to evolve into the home computer. These and other potential applications are summarized in the following table.

Microprocessor Application Potential

                                   MILLIONS OF MICROPROCESSORS
                                         USED ANNUALLY*
    PRODUCT                             LOW        HIGH

    Data Processing Equipment             8          10
    Business Equipment                    3           4
    Consumer Equipment
        Appliances                       20          30
        Audio-Visual Equipment           20          30
        Phones                            2           6
        Other                             0           4
    Transportation Equipment             10          15
    Communication Equipment               1           2
    Industrial Equipment                  1           2
    Miscellaneous                         1           1
                                         --         ---
                                         66         104

    *Estimated for 1980.

As microprocessors penetrate the energy-consuming equipment markets, more advanced features will be implemented. Energy conservation features will be particularly important. Microprocessors will be able to save energy through accurate sensing and control of energy-consuming equipment.
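The column totals can be checked against the individual rows. The values below are transcribed from the table; note that the low value for the "Other" consumer row (0) is inferred from the printed totals, since the OCR'd source shows it displaced.

```python
# (product, low, high) in millions of microprocessors used annually,
# transcribed from the table above; estimated for 1980.
rows = [
    ("Data Processing Equipment", 8, 10),
    ("Business Equipment", 3, 4),
    ("Appliances", 20, 30),
    ("Audio-Visual Equipment", 20, 30),
    ("Phones", 2, 6),
    ("Other Consumer Equipment", 0, 4),   # low value inferred from totals
    ("Transportation Equipment", 10, 15),
    ("Communication Equipment", 1, 2),
    ("Industrial Equipment", 1, 2),
    ("Miscellaneous", 1, 1),
]

low_total = sum(low for _, low, _ in rows)
high_total = sum(high for _, _, high in rows)
assert (low_total, high_total) == (66, 104)  # matches the printed totals
```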