The National Academies | 500 Fifth St. N.W. | Washington, D.C. 20001
Copyright © National Academy of Sciences. All rights reserved.
Paper 2

INTEGRATED CIRCUITS: A DEVELOPMENTAL AREA DOMINATED BY INDUSTRY

The fabrication of electronic circuits is the best-known example of innovation and industrial effectiveness today. Our ability to put tens or even hundreds of thousands of computing and logic elements onto small chips that can be produced in large quantities by highly specialized but moderately sized laboratory-factories has opened many new markets for electronics. This very successful line of work furnishes a prime example of a heavily development-oriented area of technological innovation that has been dominated by industry. Industrial organizations have the capital necessary for activity in this area, have had an immediate interest in its outcome, and were the first to confront its problems. More recently, however, the problems faced in perfecting complex chip designs have encouraged universities to build up VLSI design automation activities.

THE PROCESS TECHNOLOGY

A brief review of the process by which integrated circuits are produced will help us understand why this vital area of the computer industry has been so strongly development-oriented. Integrated circuits are manufactured by introducing microscopic quantities of carefully chosen impurities into a pure crystal of a semiconductor, typically silicon. The impurities introduced, and their macroscopic geometric pattern, determine the electrical properties and the switching behavior of the circuits that result. The manufacturing process is largely a photolithographic one, in which a silicon wafer 4 to 5 inches in diameter is coated with a photosensitive mask. Exposed areas are then etched, and carefully controlled quantities of impurities are introduced into them; unexposed areas are unperturbed. Through a series of steps involving exposure to several successive masks, followed by processing related to each mask, numerous microscopic transistors are formed in small rectangular regions on the wafers.
Each quarter-inch-square region comes to contain 10,000 to 300,000 transistors. On completion of the wafer processing, the wafer is cut along the region boundaries to form separate "chips." One wafer can yield about 100 to 200 such chips. Since a fixed number of masking steps is involved (roughly 5 to 10, depending on the precise technology used), the cost of a chip is more or less independent of the number of transistors it contains. Hence one aims to put the greatest possible number of transistors on a chip. The success of this effort, which has doubled circuit densities each year for over a decade, has resulted in tremendous increases in processing capability per chip, while the cost per chip in volume production has remained more or less constant. "Think small" has been the guiding slogan of the astonishing series of technological triumphs that have characterized the microelectronics industry.

Major advances in photolithography have been required to sustain this development. Channel widths have dropped to 1 or 2 microns, from 10 to 12 microns a decade ago. New improvements, which it is hoped will lead to a still more refined submicron technology, promise further progress. However, since the physics of visible and even of ultraviolet light constrains minimum feature sizes to something in the range of 0.1 to 0.5 microns, the industry is approaching the limits of optical techniques. In attempts to attain higher density, some manufacturers are experimenting with electron beam machining in place of exposure to light through a mask. The line widths that can be attained in this way are much finer than those attainable by conventional means, but processing is a good deal more costly, since the electron beam exposes just one point on a wafer at a time and thus must draw a pattern serially.

Currently fabricated chips generally use a MOS (metal oxide semiconductor) technology, though limited but significant use is also made of other integrated circuit technologies. MOS has moved into the forefront because it is generally denser and simpler to fabricate than other processes, characteristics that are crucial to the success of VLSI.
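The figures above can be checked with a little arithmetic. The sketch below (Python, with illustrative numbers; the usable-area fraction and the per-chip cost are assumptions for illustration, not figures from the text) estimates die count per wafer from wafer and die geometry, and shows why yearly density doubling at roughly constant chip cost halves the cost per transistor each year.

```python
import math

def chips_per_wafer(wafer_diameter_in, chip_side_in, usable_fraction=0.7):
    """Geometry-only estimate of dice per wafer.

    usable_fraction (an assumed value) discounts edge loss and scribe
    lines; defect-related yield loss is ignored entirely.
    """
    wafer_area = math.pi * (wafer_diameter_in / 2) ** 2
    return int(usable_fraction * wafer_area / chip_side_in ** 2)

def cost_per_transistor(years_of_doubling, chip_cost=10.0, initial_transistors=10_000):
    """Cost per transistor when density doubles yearly at constant chip cost.

    chip_cost and initial_transistors are illustrative assumptions.
    """
    return chip_cost / (initial_transistors * 2 ** years_of_doubling)

# A 4-inch wafer cut into quarter-inch-square dice lands in the
# 100-to-200-chip range the text quotes:
print(chips_per_wafer(4.0, 0.25))
# A decade of doubling cuts per-transistor cost by a factor of 2**10 = 1024:
print(cost_per_transistor(0) / cost_per_transistor(10))
```

The point of the second function is the one the text makes: because the number of masking steps, not the transistor count, drives chip cost, every density doubling translates directly into cheaper logic.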
The speed disadvantage that MOS suffers with respect to other processes is not significant for low-end computing devices, and MOS speeds are improving relative to those of other technologies. The competing, faster bipolar microprocessors tend to be more expensive and to require more power, and they are used primarily in more demanding applications, such as real-time control, or in large scientific mainframes, where speed is of the essence. The dominance of MOS is noteworthy because MOS developed several years later than some of the technologies that compete with it. Its success contradicts the common observation that in high technology the later developments often fail to dominate, or even to survive, because investment in earlier developments faces them with an overwhelming competitive handicap.

DEVELOPMENTS LEADING TO VLSI

Both the bipolar and the MOS technologies derive from the original invention of the transistor by Shockley, Bardeen, and Brattain of Bell Laboratories, which carried forward lines of development originating in projects (ca. 1930) of the inventors Lilienfeld and Heil, both expatriates from Germany. Shockley's 1939 notes show a grasp of the principles of MOS field-effect transistors (FETs), some 19 years before Teszner demonstrated a working FET at CFTH in France. However, although Shockley and Brattain were collaborating at Bell Laboratories as early as 1939, the war interrupted their work on transistors, and they did not return to intensive work on the transistor until 1945, when they were joined by Bardeen, Pearson, and others in a solid-state laboratory at Bell Laboratories. During the war, notable research on semiconductors had also been performed at Purdue by Lark-Horovitz, but this research terminated with the end of the war before yielding a solid-state amplifier.

The invention of the transistor conventionally dates from December 23, 1947, when a point-contact transistor amplifier was successfully constructed at Bell Laboratories. Bipolar transistor technology developed rapidly in the following decade. The bipolar transistor was joined by the fledgling field-effect transistor in 1958. Two major breakthroughs occurred as the 1950s ended and the 1960s began. In 1958-1959, Noyce invented the planar transistor, which could be fabricated by a simple sequence of diffusion and masking steps. In 1958 and 1959, Kilby at Texas Instruments demonstrated that a complete integrated device could be fabricated from planar transistors, resistors, and capacitors formed together out of semiconductor materials on a common substrate. These two inventions opened the path to what is now the integrated circuit.

Today a series of about 10 to 20 steps involving masking, diffusion, and implantation suffices to produce about 200 chips on a single wafer, each chip having upwards of 10,000 transistors. Formerly, the fabrication of a complete device of this complexity would have meant that each transistor had to be individually and tediously connected by wires running from point to point, with only limited application of automation speeding up the wiring process.
Between 1959 and the construction of the first microprocessor, the Intel 4004, many more developments in integrated circuit technology occurred, but none of these was so fundamental as the original inventions of the transistor, the planar transistor, and the integrated circuit, all three of which are achievements of industry.

THE DESIGN AUTOMATION BOTTLENECK

In the 1960s it became apparent that rapidly improving integrated circuit technology would soon permit several hundred to several thousand circuits to be fabricated on a single chip. As noted, the interconnections between chips that were previously made on printed circuit boards could now be made on the chip itself. This promised great reductions in cost but gave rise to a serious debugging problem. A design error on a printed circuit board can be repaired by patching in a discrete wire, after which debugging can proceed immediately. By contrast, if an error occurs in an integrated circuit chip design, the whole chip has to be refabricated after the design error is removed, and this can take from days to months (depending on the availability of a processing line, the priority of the various efforts sharing it, any special requirements that a given chip may impose, etc.). Thus successful integrated circuit chip fabrication calls for a first-class design automation system, with which a designer can debug his circuits before he commits the design to silicon fabrication. If many different chips are needed, a design automation system efficient enough to handle a large number of such designs is required.

The semiconductor industry has invested large sums of money in attempts to develop such design automation systems, but these efforts have so far met with only modest success. Some microelectronics producers have reacted to this difficulty by surrendering some of the density and performance otherwise possible, specifically by restricting the chips to a few allowed diffusion patterns and allowing designers to specify only the subsequent metallization layers. This so-called "gate array" approach is still the only one that can be used to turn out large numbers of functionally distinct chips without manual intervention. Another approach is to sidestep the design automation bottleneck by producing only a few carefully designed microprocessor chips; customization is then attained by changing the program that these microprocessors execute.

THE FIRST MICROPROCESSOR

The first microprocessor was an evolutionary device rather than one involving any totally new concept. The first such processor is generally considered to be the Intel 4004. Marcian E. Hoff is credited with leading the development effort that led to this very significant device; Hoff and others at Intel hold patents on it.

In the years immediately preceding the development of the 4004, several companies had moved toward large-scale integration of control functions in computer-like devices. Typical LSI products were single-chip devices for very simple calculators and a few multichip sets for somewhat more complex calculators. Of course, the history of these calculator chips showed the usual technical difficulties and abortive starts.
For example, an early effort in 1964 and 1965 by Victor Comptometer Corporation developed an LSI type of calculator that was never put into production because of problems encountered in building satisfactory chips. Between 1965 and 1970, several companies did succeed in producing workable LSI devices that found their way into handheld and desktop calculators. The Sharp Company made important advances in device design during this period and took an early lead in electronic calculators. The commercial success of these calculator chips set the stage for the development of true microprocessors.

When Hoff initiated work on the Intel 4004 in 1969, intelligent controller chip designs generally involved an embedded, unchangeable program, usually fabricated using an on-chip ROM (read-only memory) or a logic array. The Busicom Company had just contracted with Intel to produce an intelligent controller for a calculator family that it intended to market. Hoff's response was to design a microprocessor chip set having a very broad capability rather than the narrow set of capabilities that had typically been associated with calculator chips. The instruction set chosen for Hoff's microprocessor may have been influenced by many sources, including several outside of Intel, but the design of this chip is clearly original to Intel. Hoff's team created working breadboards of the 4004 chip set in 1969 and 1970 and was able to convince Busicom of the viability of this new product approach before 4004 chips actually became available in quantity. By mid-1971, Intel was marketing the 4004 chip set as a product.

The motivation for putting together a microprocessor instead of following the more restricted approach conventional in the late 1960s is interesting in itself. The market for microprocessors did not exist in 1970, and its potential could at best be estimated crudely. A good deal of risk was involved in committing major resources to chip development for a microprocessor having limited computational power. While the new chip was clearly going to be useful for building hand calculators, the market there was highly competitive and volatile. Beyond the calculator market, the uses of the 4004 were highly speculative. Designers of electronic control equipment, which represented another potential market for the 4004 chip, were used to thinking in terms of discrete devices, including mechanical or analog devices, for control purposes. Most designers were not trained to design with microprocessors and generally did not understand how to use software to accomplish functions that they were accustomed to realizing in other ways. Nor was it clear at first that the 4004 would be cost-effective in control applications, because its initial price was rather high. Nevertheless, Hoff believed that there was a very large potential for a primitive device that was itself a computer, provided the cost was low enough and the computational power was great enough. He succeeded in getting the 4004 into production, and the resulting demand showed that the development was well worth the risk.
Without detracting from his credit as a pioneer, it is also fair to remark that if Hoff had not developed the 4004, someone else would probably have constructed some other microprocessor within a few years: the capability of doing so was becoming widespread, and the idea behind the 4004 grew naturally out of prior technological advances.

Subsequent to its introduction of the 4004, Intel enhanced this product greatly to create a line of microprocessors constituting a major product family within the company. By 1975, Intel's product line was dominated by the 8080, a third-generation 8-bit micro that had several times the computational power of the 4004. Although initially the price of the 8080 was upwards of $2000, by 1980 the price had dropped to less than $10. Total sales of this device, which is currently supplied by several producers, now number in the tens of millions.

In retrospect, we can remark that the microprocessor development required mainly a strong development team and an integrated circuit facility. Though substantial, its research aspect represents only a small fraction of the total effort that was required to make the product a practical success. A similar study carried out in a university or in an industrial research laboratory might well have stopped at the paper design or at the breadboard level. The key to the success of the 4004 project is that a product actually appeared and that through its availability and low cost this product created a market that had not previously existed.
THE NEW VLSI BOTTLENECK AND THE AWAKENING OF UNIVERSITY INTERESTS

Up to the middle of the 1970s, most chips were designed with little computer assistance. (Memory chips, because of their highly repetitive patterns, are an exception to this statement.) The typical industrial electronic chip designer drafted a design by hand, checked his design by hand, and then digitized the design manually for computer input and generation of photographic masks. Many of the most successful products of the mid-1970s were produced in this relatively primitive fashion. However, since chip complexity has increased tenfold in the last few years, hand design has become infeasible, and extensive computer-aided design and layout aids have become necessary. University contributions to this increasingly important area supplement the large industrial effort that has been devoted to these tools.

Design automation aids in common use today allow full-chip designs to be constructed as compositions of prestored subdesigns. These aids are able to elaborate preliminary design sketches into fully detailed drawings, to route interconnections between specified points, to replicate predesigned cells in specified areas of a chip, and to produce logic arrays automatically from high-level specifications of a control function. Some of the design tools currently being used in industry reflect university contributions. In particular, the computerized graphics systems used to replace hand sketching techniques were stimulated in significant ways by university research, notably by experimental graphics systems developed at Utah and MIT. The pioneers in the application of this graphics software to VLSI design systems were either academics or industrial researchers who had access to the work done in universities. (However, as the economic importance of these tools has grown, commercial systems, into which large investments have gone, have taken the lead, and university work has fallen behind.)
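Routing interconnections between specified points, one of the design automation aids mentioned above, is classically done with a breadth-first wavefront search over a grid (Lee's maze-routing algorithm, a standard technique not named in the text). A minimal sketch, assuming a rectangular routing grid in which some cells are already occupied:

```python
from collections import deque

def route(grid, start, goal):
    """Breadth-first maze routing (Lee's algorithm) on a grid.

    grid[r][c] is True where a cell is blocked (already occupied).
    Returns a shortest rectilinear path from start to goal as a list
    of (row, col) cells, or None if the two points cannot be joined.
    """
    rows, cols = len(grid), len(grid[0])
    prev = {start: None}          # wavefront expansion: cell -> parent
    frontier = deque([start])
    while frontier:
        cell = frontier.popleft()
        if cell == goal:
            path = []             # retrace: walk parents back to start
            while cell is not None:
                path.append(cell)
                cell = prev[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < rows and 0 <= nc < cols \
                    and not grid[nr][nc] and (nr, nc) not in prev:
                prev[(nr, nc)] = cell
                frontier.append((nr, nc))
    return None

# A 3x4 grid with obstacles at (1, 1) and (1, 2) forcing a detour:
blocked = [[False, False, False, False],
           [False, True,  True,  False],
           [False, False, False, False]]
print(route(blocked, (1, 0), (1, 3)))
# -> [(1, 0), (0, 0), (0, 1), (0, 2), (0, 3), (1, 3)]
```

Production routers of the period layered many refinements (net ordering, rip-up and reroute, multiple wiring layers) on top of this basic wavefront idea, which is what makes the area a natural target for the algorithms research discussed below.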
Until quite recently, virtually no universities had state-of-the-art, on-campus facilities for manufacturing integrated circuits. This meant that design engineers emerging from universities generally had little practice in chip design. Stanford and Berkeley were exceptions, but even at these institutions chip fabrication specialists were not computer designers, and education in computer hardware design was organized on a conventional basis, academic designers developing systems from commercially available chips but not designing new chips themselves.

This situation is now changing. An important and active VLSI design center developed by Carver Mead at CalTech has shown the feasibility of university involvement in VLSI education. Mead got his start in VLSI research by working closely with semiconductor companies such as Intel and observing that their essentially manual methods were about to run out of steam. He therefore began to study more systematic ways of producing chips and also began to sensitize other university researchers to the importance of this problem. In connection with his other efforts, he co-authored a significant text presenting VLSI techniques to university computer designers, thereby furthering university access to this area. Mead's demonstration of feasibility, his useful book, and a greatly increased flow of government and industrial funding into university VLSI laboratories have all helped to increase university interest in this field of research dramatically.

Several universities are now in the process of creating semiconductor fabrication facilities, and still more are teaching chip design but having student projects fabricated off-campus by cooperating industrial establishments. Student papers on VLSI design, once totally unknown, are becoming a regular part of professional meetings. However, significant adverse factors still constrain the ability of universities to make significant contributions (other than educational contributions) to this area. Universities still do not have easy access to the most advanced technology; disciplined functioning of a chip fabrication line requires large numbers of technicians, as well as the commitment of substantial resources to the development and maintenance of software. (However, some of this access may be available through cooperation with design automation groups in industry, including the major industrial research laboratories.)

Universities have traditionally been strong in theoretical areas and in areas spanning theory and application. These may turn out to be the aspects of VLSI technology where they can most readily make important contributions. For example, universities have already contributed significantly to the modeling of semiconductors and are more likely to continue productive work in this area than are most industrial laboratories. Though Bell Laboratories and IBM are exceptions, most semiconductor firms sponsor very little of this kind of research. Work on algorithms and data structures for VLSI design is another area in which university efforts can easily surpass those of industry. Little work of this kind has been done thus far at universities, largely because university researchers have not generally been aware of the problems that need solving, but such awareness is now growing very rapidly.
Activity in this area has already begun at MIT, Stanford, Carnegie-Mellon, and other schools, and this work should swiftly become a valuable supplement to the more pragmatic efforts of industry.

To summarize, universities have done best and are likely to continue to do best in areas that lie within reach of the talents of their faculty and students and that permit work within the constraints imposed by the relatively modest capital investments that universities can justify. Universities can initiate important streams of work, but when specific lines of research and development come to stand at the focus of industrial interest, universities cannot expect to match the manpower and capital investment that industry is able to mobilize. The university's real advantage is that it can pursue basic questions more readily and consistently than industry can. University researchers are free to examine many different lines of investigation in an environment protected from deadline-generated pressures. However, university groups lack the focus that product concerns give to industry.

A desirable ingredient, conspicuously missing in the past, has been greater involvement of the university community in the practical activities of the VLSI industry. Today there is almost a rush for involvement. This may be both healthy and unhealthy: overall, valuable new research perspectives should result, but some institutions may prematurely commit to on-site semiconductor fabrication, and serious problems may develop if this financially substantial commitment becomes too burdensome.