
Panel II
Computer Hardware and Components

INTRODUCTION

William J. Spencer

International SEMATECH, retired


Dr. Spencer, invoking the Silicon Valley saw that “the cost of every integrated circuit will ultimately be $5 except for those that cost less,” observed that rapid decline in semiconductor cost and consequent growth in the speed and density of processors and memory have allowed software and systems designers “to get very sloppy.” Therefore, as a technology wall looms for hardware, productivity can be expected to increase in software and systems, where a great deal of capability remains untapped.

Endorsing Dr. Jorgenson’s proposed road map for computers, software, and communications, Dr. Spencer recalled a 1991 Dallas meeting at which 250–300 engineers assembled for two or three days under SEMATECH’s leadership to get the Semiconductor Road Map off the ground. Today the project is run by the consortium’s successor, International SEMATECH, with its original editor, Linda Wilson, remaining in charge, but participation by engineers and scientists has increased by an order of magnitude, to around 3,000. And whereas the Roadmap started out in a paper version that was revised every two years, it is now in electronic form and receives continuous updates. Placing SEMATECH’s expenditure for data collection in the early 1990s at $1 million annually, Dr. Spencer remarked that the Roadmap is not an inexpensive endeavor.

He then promised the audience that the following two presentations, on microprocessors and magnetic storage, would go to the heart of computer performance. These important technologies, including LCDs [liquid-crystal displays], have provided productivity advances that have “driven the rest of the electronics revolution.” Recalling a past prediction that a clock speed of 2 GHz would make possible relatively good voice-recognition capability, he noted that the industry was getting close to this and called upon William Siegle of Advanced Micro Devices to chart the future of processors. Thereafter Robert Whitmore of Seagate would talk about magnetic storage, which, Dr. Spencer said, has been progressing even more rapidly than semiconductor capability as measured in cost per bit.

PROCESSOR EVOLUTION

William T. Siegle

Advanced Micro Devices

Dr. Siegle began by crediting the Semiconductor Roadmap for the speed of the information technology industry’s recent advance. He offered two reasons for what he regarded as a direct causal connection between the road-mapping process and the acceleration that had taken place in the decline of logic cost:

  • Making meaningful improvements in capability requires the coordination of many different pieces of technology, and the Roadmap has made very visible both what those pieces are and what advances are required in different sectors of the industry to achieve that coordination.

  • As companies believe that staying ahead of the Roadmap is a component of success and strive to do so, the existence of a published Roadmap heightens competition.

If the industry feels it is moving ahead too rapidly, he jested, “we should just stop publishing the Roadmap for a little while and descend back into chaos.”

The aim of his talk, Dr. Siegle announced, was to survey the evolution of the microprocessor and to pin down the factors responsible for it in order to determine what must be nurtured and sustained so that similar progress might be achieved in the future. Focusing on the previous 10 years—which he considered a representative period, and which coincided with his direct involvement in the area at Advanced Micro Devices (AMD)—he posited that advance had resulted from improvements on four fronts: the architecture of microprocessors; the tools that are used to translate architecture into a physical design that can be implemented; fabrication technology; and the equipment and materials sector, the vast infrastructure supporting chip makers. Before launching into a discussion of microprocessors, however, he warned that computing progress depends on achieving equally significant gains in other areas of hardware and in software as well: “While microprocessors are important, you can’t make meaningful systems and applications if there are advances in just the microprocessor.” He then pointed to a chicken-or-egg question concerning the relationship between hardware and software: Have advances in hardware been driven by applications; have the latter expanded to make use of the hardware capability available; or have the two evolved together?

To dramatize the extent to which microprocessors have developed in little more than a decade, Dr. Siegle compared two AMD-made chips of the sort found in the IBM PC: the Am386, a 386-class product introduced in 1991, and the Opteron, which was scheduled to go onto the market in April 2003. He cited operating frequency, although in his opinion it is an imperfect measure of performance, as having increased more than 50 times, from around 33 MHz to 2 GHz, while the transistor count was jumping 500-fold, from 200,000 to 100 million. The Am386’s transistor gate length of 800 nm made it AMD’s first logic device to feature a submicron transistor; the chip, rather small by 2003 standards at 46 mm², was the company’s first with 32-bit data handling as well. The Opteron, in contrast, specifies 60-nm transistor gates and a 180-mm² die size while supporting 64-bit processing of both instructions and data. Dr. Siegle noted that the Opteron’s gate length is far smaller than 130 nm, the nominal dimension for its technology generation.
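
As a rough check on the pace these figures imply, the compound annual growth rates can be computed directly; the minimal sketch below assumes the roughly 12-year interval between the Am386’s 1991 introduction and the Opteron’s 2003 launch, with the other numbers taken from the comparison above.

```python
# Rough compound-annual-growth check on the Am386-to-Opteron comparison.
# The 12-year interval (1991 introduction to 2003 launch) is an assumption
# taken from the dates cited above; the other figures are as quoted.

def cagr(start, end, years):
    """Compound annual growth rate implied by moving from start to end in `years` years."""
    return (end / start) ** (1.0 / years) - 1.0

years = 2003 - 1991                                 # roughly 12 years between the two chips
freq_growth = cagr(33e6, 2e9, years)                # ~33 MHz -> 2 GHz
count_growth = cagr(200_000, 100_000_000, years)    # 200,000 -> 100 million transistors

print(f"Clock frequency: ~{2e9 / 33e6:.0f}x overall, ~{freq_growth:.0%} per year")
print(f"Transistor count: 500x overall, ~{count_growth:.0%} per year")
```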

Turning to architecture and design, he said significant progress on these fronts has been a necessary accompaniment to improvements occurring in fabrication technology. Analogizing to the high-level concept that a building architect works out based on the needs and wishes of a prospective homeowner, he defined computer architecture as a high-level model of data flow and data processing that enables a prescribed series of instructions to be executed; although not a blueprint from which a device can be built, it is still a necessary starting point. He described the design process as “a matter of translating that architecture into something that a ‘fab grunt’ can build.” Designers, he said, see a concept in the form of a set of mask patterns that will build multiple layers of a semiconductor process; their task is to translate that architecture into electrical models and an actual physical layout of the shapes that will be implemented.

FIGURE 8 Architecture.

During the interval separating the Am386 and the Opteron, improvements occurred in a number of areas detailed in Figure 8: the efficiency of instruction processing; memory hierarchy; branch prediction, which Dr. Siegle described as a way of dealing with deviations in the orderly flow of executing instructions; and functional integration, which he explained as “ways of spending the increased integration available by putting more and more function on the processor chip.” These advances in architecture—most of which have taken place in the United States—he credited not only to computer firms, “whose business it is to make advancements of this nature,” but also to early-stage work by creative thinkers in university computer science departments. In addition, there have been a number of concepts applied in the microprocessor business, among which he singled out memory hierarchy, that in some cases had been implemented 20 years earlier in mainframes and were borrowed from them.
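
Branch prediction is described only at this conceptual level in the talk; as a purely illustrative sketch of the general idea, the textbook two-bit saturating-counter scheme below guesses each branch outcome from its recent history. It is a teaching example, not a description of any AMD design.

```python
# Textbook two-bit saturating-counter branch predictor, shown purely to
# illustrate the concept mentioned above; it does not describe any AMD design.
# States 0-1 predict "not taken", states 2-3 predict "taken"; each observed
# outcome nudges the counter one step, so a single surprise does not flip a
# strongly held prediction.

class TwoBitPredictor:
    def __init__(self):
        self.state = 2                      # start in "weakly taken"

    def predict(self):
        return self.state >= 2              # True means "predict taken"

    def update(self, taken):
        self.state = min(3, self.state + 1) if taken else max(0, self.state - 1)

predictor = TwoBitPredictor()
outcomes = [True] * 9 + [False]             # a loop branch taken nine times, then exiting
hits = 0
for taken in outcomes:
    hits += predictor.predict() == taken
    predictor.update(taken)
print(f"{hits}/{len(outcomes)} predictions correct")   # 9/10: only the loop exit is missed
```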

One change in the design process most notable from AMD’s point of view came in the move from what had basically been transistor-level designs to high-level models that describe a design and are employed in tandem with a variety of tools that excel in providing functional verification that those models will in fact do what is intended. Dr. Siegle characterized several other advances as startling in their effectiveness as well: improvements in tools used in electrical timing and “for the placement of functions so they can be wired up”; increased ability to put the burden on the fabrication area to provide more layers of wiring to hook up all the transistors in place; and the use of higher-performance computers to do the very heavy-duty simulation and computation work that is required to support such design efforts. Looking back on the first version of Opteron, he recalled being “almost dumbfounded that first silicon was good enough to give samples to Microsoft to begin playing with.”

Joining with chip makers and academic institutions in moving design capability forward have been electronic design automation (EDA) companies that supply device makers with tools. Dr. Siegle stressed, however, that in-house capability—“the ability to optimize the technology, to exploit it”—is necessary to get a truly competitive microprocessor. So rather than relying entirely on EDA companies, AMD and, he speculated, most other device manufacturers develop hand-honed design tools of their own that work their way to the general community after a period during which hardening up makes them accessible to “a perhaps less sophisticated set of users.” Similarly, in the fabrication end of the business, working with captive capability offers advantages unavailable through an arm’s-length relationship with a foundry. And just as foundries, which flourish in Taiwan, are situated largely offshore, there is an increasing tendency for software and design centers to be set up in various parts of the world to take advantage of talent and costs that are more advantageous than they might be in the U.S.—where, nonetheless, most new technology continues to be generated.

Dr. Siegle then turned to fabrication technology, which he described as the realization of the designer’s intent through a sequence of wafer-processing steps that is split into two stages. In the first, the development process, a recipe is developed, formulated, and perfected. After a number of trials comes the second, production phase, at which replication of a process is required at high volume. Testing and assembly of the finished wafer, a significant step in its own right, ensues; technical sophistication is growing in this area of manufacturing as well.

Returning to the comparison of the Am386 and the Opteron, Dr. Siegle pointed out that the older chip, a two-level metal structure, used a technique called “wire bonding” in which discrete wires ran from the periphery of the chip out to the package terminals and the subsequent pins. The Opteron, in contrast, has nine levels of metal interconnect that allow hooking up all of its 100 million transistors and can accommodate one megabyte of cache on the device itself. A result of the recent evolution in design and architecture, this change was made possible by a number of breakthroughs, one of the most significant being fine-line patterning. This was permitted in turn by dramatic changes in the equipment used in lithography, a process for putting patterns onto wafers, as well as in resist technology, a term covering both the photosensitive film placed on the wafer to be patterned and etching methods that transfer the pattern. One of the primary lithography tools, the projection stepper, moved in a decade from 436 nm and 365 nm illumination sources down to excimer-laser-driven sources at wavelengths of 248 nm and 193 nm. This improvement was accompanied by a rise in the stepper price from less than $2 million to more than $20 million, he noted, adding that today’s price for the lens exceeds the 1991 price of a whole system. As an indication of the magnitude of the progress that has resulted from these and other changes, Dr. Siegle offered a remark he made at the beginning of the current decade: “If you had shown me a picture in 1990 of what we would be building at the turn of the century, I would have thought it was a page from science fiction.” Underestimation of the rate of progress is not uncommon among technologists, who, he suggested, are not very good at seeing beyond the next two to three years and tend to err on the side of caution. “Those red bricks in the red brick wall that shows up in the road maps,” he noted, “have a tendency of going away as we get closer to them and understand a little bit more.”

Dr. Siegle then took up interconnect planarization, a technical breakthrough he rated as equally significant. The thought that this technology, which calls for putting the wafer into a machine and rubbing on it to make it planar, could be used for the fine patterning of wafers initially left people in the field “aghast,” he recalled. In fact, chemical-mechanical planarization, in commercial use at AMD from the mid-1990s, enabled an almost unlimited number of layers of wiring to be constructed, whereas the process employed just a few years earlier for doing a mere two and three levels had been relatively painful. While aluminum continued to be used for wiring, tungsten contacts were employed to make connections between the layers, and the planarization method ensured flatness between levels. This approach allowed designers to add layers to the chip, something that Dr. Siegle speculated may well continue almost ad infinitum.

With the move to planarization, the stage was set for switching from aluminum to copper, which further unblocked the ability to miniaturize. Pointing to photographs of such copper-interconnected AMD devices as the Opteron and its predecessor, the Athlon, Dr. Siegle called the audience’s attention to the planar repetitive structure, featuring connections between layers that make the layers largely indistinguishable from one another. In the miniaturization of devices, he stated, the smaller the design grid at the transistor level, the more levels of wire will be needed to make the connections, because the number of transistors to be connected grows as the square of the shrink while the wiring capacity of any one layer grows only linearly, creating a fundamental imbalance that requires more and more levels of wiring. So, without this breakthrough, it would have been very difficult to exploit the fine-patterning capability that has resulted in improving transistor density.
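
A first-order sketch of that imbalance, treating the gate lengths quoted earlier as a stand-in for the overall design grid (a simplification made only for illustration):

```python
# First-order picture of the imbalance: shrink the design grid by a factor s
# and devices per unit area grow as s**2, while the wiring tracks available
# in any single metal layer grow only as s. Treating the quoted gate lengths
# as a stand-in for the overall design grid is a simplification for
# illustration only.

shrink = 800 / 60                  # linear shrink between the Am386 and Opteron generations
device_growth = shrink ** 2        # devices per unit area scale quadratically
track_growth = shrink              # wiring tracks per layer scale linearly

print(f"Linear shrink: {shrink:.1f}x")
print(f"Devices per unit area: ~{device_growth:.0f}x; tracks per layer: ~{track_growth:.1f}x")
print(f"Wiring demand per layer, absent extra layers: ~{device_growth / track_growth:.1f}x")
```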

Changing the basic materials used in building devices has been critical as well, because the transistor does not just go faster automatically as it is made smaller. In order to keep resistance as low as possible at the transistor-gate level, changes have been made repeatedly: The tungsten silicide material used in the Am386 had evolved into titanium silicide by the mid-1990s and into cobalt silicide by the late 1990s, and it was expected to evolve again, into nickel silicide, over the next few years. The material that insulates one wire from another in the interconnect space has for a long time been “rather plain” silicon dioxide, but in the last few years a significant amount of energy has been poured into reducing the rather high dielectric constant of that material. First came a transition to a fluorine-doped oxide and then, more recently, to a more complex carbon-based organic material; both were aimed at driving down the dielectric constant and driving up performance by reducing the RC characteristics of the network.
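
The RC argument can be made concrete with a back-of-the-envelope comparison: for a wire of fixed geometry, delay scales roughly with the product of conductor resistivity and insulator dielectric constant. The sketch below combines the copper-for-aluminum change described above with the dielectric transitions in this paragraph, using approximate textbook material values rather than numbers from the talk.

```python
# For a wire of fixed geometry, interconnect delay scales roughly with the
# product of conductor resistivity and insulator dielectric constant (RC).
# The values below are approximate textbook figures, used only to show the
# direction of the material changes described above.

resistivity = {"aluminum": 2.7, "copper": 1.7}          # micro-ohm-cm, approximate
dielectric_k = {"SiO2": 3.9, "fluorinated oxide": 3.6, "carbon-doped oxide": 2.9}  # approximate

baseline = resistivity["aluminum"] * dielectric_k["SiO2"]
for metal, rho in resistivity.items():
    for insulator, k in dielectric_k.items():
        print(f"{metal:8s} + {insulator:18s}: relative RC = {rho * k / baseline:.2f}")
```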

Additional changes are on the way. With no more than a few atomic layers separating the gate from the silicon substrate, leakage of current from the gate to the substrate has become a serious issue—one that, according to Dr. Siegle, “has the potential to really represent a spanner in the works”—and needs to be accounted for in the circuit design. Since the materials in current use had been pushed about as far as they could go, researchers were working with more exotic alternatives, such as hafnium-based oxides, looking for a way to continue improving electric field at the silicon surface without the material becoming so thin that it would totally break down.

And progress has been required in other areas: in junction engineering, a term that applies to tailoring the actual profiles of dopants in the silicon so as to ensure acceptable electrical properties as conducting channels grow shorter; in scaling down operating voltage, which has gone from 5 volts to around 1 volt; and in obtaining performance at lower levels of power, which has been accomplished by the introduction of silicon-on-insulator techniques.
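
The benefit of voltage scaling can be put in numbers with the standard first-order model of dynamic CMOS power, in which power grows with the square of the supply voltage. The sketch below holds capacitance and switching activity fixed purely for illustration and uses only the 5-volt and 1-volt endpoints quoted here and the 50-fold frequency increase quoted earlier.

```python
# Standard first-order model of dynamic CMOS power, P ~ a*C*V^2*f, with the
# activity factor a and capacitance C held fixed purely for illustration.
# Only the 5 V and 1 V endpoints and the 50x frequency increase quoted
# earlier are used.

def relative_dynamic_power(v, f, v0=5.0, f0=1.0):
    """Dynamic power relative to a baseline running at v0 volts and frequency f0."""
    return (v / v0) ** 2 * (f / f0)

print(f"Energy per switching event at 1 V vs 5 V: {relative_dynamic_power(1.0, 1.0):.2f}x")
print(f"Power with the voltage drop and a 50x clock increase: {relative_dynamic_power(1.0, 50.0):.1f}x")
```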

Dr. Siegle then turned to infrastructure, which he divided into two areas, equipment and electronic materials. He contrasted the current industry landscape, which includes an extensive network of equipment suppliers that device makers have come to depend on, with that existing when IBM built in-house most of the equipment used to manufacture its System 360; the transition, he said, dated to the 1970s. Besides being very mature and sophisticated, this network is global to the point that, with the possible exception of Japan, no region in the world has a set of equipment suppliers sufficiently complete that it can supply a fab. The network—which, he said, “every chip maker in the world is dependent on in one form or another”—spans the United States, Europe, and Japan.

Nowhere near as big in revenue, but equally critical to the device makers’ progress, is the network of materials suppliers. They furnish starting materials such as silicon and silicon-on-insulator substrates, high-purity gases and chemicals for fabs, as well as the photoresists needed with reductions in wavelength and feature size and, one of the more fragile elements of the infrastructure, precision photomasks. “If you look at the basic mechanics of the photomask business, it’s a rotten business, even worse than the semiconductor business,” he said, while adding that, without photomasks, it would be impossible to translate designs into fabrication.

FIGURE 9 Factory investment issues.

SOURCE: Gartner, McKinsey analysis.

A significant rise in the cost of manufacturing facilities has been a predictable consequence of the escalation of the cost of equipment, and of control and automation technology required to reduce variability, that has accompanied the overall increase of sophistication in the industry. The price of a chip plant has evolved from a few hundred million dollars at the time AMD was building the Am386 to well in excess of $2 billion for a modern 300-mm fab. And other factors have driven up the price of manufacturing as well. The process sequence that in the early 1990s comprised 15 masking levels and was turned in manufacturing cycles of about 30 days had, by 2003, reached 30-plus masking levels and cycle times in the neighborhood of 70 days. As a consequence, “any kind of a blip” requiring a design change can set a manufacturer back by a quarter or two: the time to make a mask, crank it through, verify it, and ship the production. Also driving up cost has been the necessity of driving down the density of manufacturing defects in order to achieve economic yields. Defect density, defined as the number of die-killer defects per square centimeter—with a die killer being anything from a piece of dirt to an imperfection in a photo pattern that causes a circuit to be nonyielding—had to be reduced by around 200 times during a decade in which transistor density increased about 100 times and die sizes grew significantly as well.
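
Why defect density has to fall even faster than die size and complexity grow can be seen from the classic Poisson yield model, in which the fraction of good die is roughly exp(−A·D) for die area A and killer-defect density D. The sketch below uses the die areas quoted above for the Am386 and the Opteron; the defect densities are hypothetical, chosen only to show the shape of the relationship.

```python
import math

# Classic Poisson yield model: the fraction of good die is roughly exp(-A*D),
# with A the die area and D the density of die-killing defects. Die areas are
# the ones quoted above for the Am386 (46 mm^2) and Opteron (180 mm^2); the
# defect densities are hypothetical, chosen only to show the shape of the curve.

def poisson_yield(area_cm2, defects_per_cm2):
    return math.exp(-area_cm2 * defects_per_cm2)

am386_area, opteron_area = 0.46, 1.80               # cm^2
for d in (1.0, 0.1, 0.01):                          # hypothetical killer defects per cm^2
    print(f"D = {d:4.2f}/cm^2: Am386-size yield {poisson_yield(am386_area, d):5.1%}, "
          f"Opteron-size yield {poisson_yield(opteron_area, d):5.1%}")
```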

The level of investment is increasingly becoming a barrier to entry into semiconductor manufacturing, providing a contrast to conditions in the industry’s architectural and design sectors, where nothing carries a price tag equivalent to that of a fab. “If you want to own your own fab, you’d better have a revenue on the order of $7 billion [per year] to make the economics all work out,” Dr. Siegle declared. “If you’re willing to partner, you can get away with about $3 billion of revenue.” Displaying a chart illustrating how much annual revenue a company now needs to be able to afford a megafab, he noted that only Intel among the world’s major device makers would be placed in the former category based on 2001 revenues; if revenues for 2000 were used as the basis, the category would include another seven of the largest firms (see Figure 9). While numerous others would be able to afford such an investment in partnership, many more will have to keep older fabs in operation or rely on foundries if they are to remain in manufacturing. (For those entries in Figure 9 that are captives of larger systems companies, another option, of course, is for the system business to serve as “banker” to the captive semiconductor division.) AMD ranks among those who would need to partner on a new fab, “so the burden and the challenge of ‘how do we get to this next level of fabrication plant?’ is a nontrivial exercise for us,” Dr. Siegle said, adding: “It’s a pretty sobering picture.”

But where have these “key enabling enhancements” come from? Attributing many of the conceptual advances to what are now largely regarded as old-line industrial research labs, he stressed the importance of acknowledging that major advances often have their roots in research done even a decade before they are applied. One example of this long latency is copper interconnect, on which work was well under way at IBM when Dr. Siegle left that company in 1990 but which did not reach the stage of commercial production until around 1997; similarly, work on cobalt silicide, a technology that didn’t make it into production until the mid- to late 1990s, was taking place in the 1980s. “University research and research funded by our government colleagues has also been important,” he said, “not only in supporting early work in new technologies, but also in creating—at least [in the case of] the university-funded research—human personnel feedstock for our business.”

In addition, he praised the level of preparation for work on industry-relevant problems shown by young researchers who have gained experience at the Semiconductor Research Corporation (SRC) and, more recently, the Microelectronics Advanced Research Corporation (MARCO). “New graduate hires who have finished Ph.D. work in some of these areas are real plug-in-and-run kind of talent that has been indispensable to enabling not only the growth of the industry but the increasingly challenging work that’s being done,” he stated. The national laboratories, with intense areas of specialization in such fields as modeling and plasma physics, as well as consortia—not the least of which being SEMATECH, widely credited with helping the turnaround of the U.S. semiconductor industry in the early 1990s—have also played a key role. The Defense Advanced Research Projects Agency (DARPA) has been a particular supporter of lithography, providing funds over a long period for many lithography-based programs that have helped make progress in lithography and patterning possible. And the large, highly experienced supplier base has brought equipment advances as it has evolved into a widely dispersed, global resource maintaining a close relationship with leading chip producers.

Dr. Siegle then turned to technology integration, which he specified as referring to the combining of the individual elements he had been describing into a modern silicon device. A substantial task requiring a leading-edge fab, technology integration has generally been the province of the major chip makers; it usually has a development cycle of two to three years “after the long lead research is far enough along for integration.” AMD and IBM had in recent months announced a joint development arrangement, a way for the companies to deal with the cost, which would be excessive for either on its own, of having an R&D facility to do the work needed to get ready for production.

Summing up, Dr. Siegle cited improvement in microprocessor capability over the previous decade of at least two orders of magnitude—but at comparable prices—in performance, integration level, and density. Those improvements resulted from parallel advances in architecture, design methods and tools, fabrication technology, and industry infrastructure. “Other than because of IP constraints, global dispersal of architecture, design, and infrastructure has already occurred or is likely to occur,” he stated. And fabrication technology—whose cost of entry, already high, was growing—would potentially be open only to the largest companies, to multicompany arrangements, and to subsidized ventures. Chip makers’ taking advantage of opportunities to participate in subsidized ventures and to contract with foundries in which they do not need to invest is behind the fact that such a preponderance of fabs is being built outside the United States. “Foundry arrangements are very compelling for many suppliers,” he concluded.

STORAGE

Robert Whitmore

Seagate


Mr. Whitmore began by noting that, being responsible for all Seagate products in the final three years of their development up through production launch, he had a vested interest in the direction in which technology is taking the industry. He promised the audience a high-level view of a variety of topics, but he said he would begin by introducing the basics of the hard drive and offering an appraisal of the state of the business. Following that would be a review of storage trends; a discussion of the metrics used in the magnetic storage sector and of how they might be translated into a model that would be of benefit in assessing the path of computer productivity; and, finally, a look at what the future might have in store.

First, however, Mr. Whitmore introduced his company and provided a brief overview of the business of hard disk drives, which he described as “kind of the low item on the food chain.” Founded in 1979, some two decades after IBM invented the disk drive, Seagate employed 49,000 in 25 countries in 2003 and posted revenue of $6.1 billion in fiscal year 2002. In that period, the company devoted about 10.8 percent of revenue to R&D and another $535 million to capital expenditure in order to sustain the strategy of total vertical integration under which it does everything from making its own wafers to fabricating the product and selling it to the final user. Manufacturing around 200,000 drives per day, a production level that requires world-class manufacturing and design capabilities, Seagate is the world’s leading shipper of disk drives: It totaled around 18 million units in the final quarter of FY2002. The company handles development mainly in the United States, with one development center in Singapore, and manufactures mainly in the Far East, although it also has factories in the United States, Mexico, and Northern Ireland.

Turning to the basics of the hard disk, Mr. Whitmore said that, seen from above with its cover removed, a drive would appear as a number of rigid disks stacked one on top of the other with a small arm suspended over them. The fundamental technology that the industry has been scaling for the past half-century is the physics of sending a current to this arm that generates an electromagnetic field and changes the magnetics on the disk, a result he described as “pretty simple—basically, the 1 and the 0—but pretty tricky” to achieve. For, although the technology may appear macroscopic in photos, it is in fact very microscopic. The arm culminates in a head small enough that it would take six to cover a dime; it is able to read tracks equivalent in width to one-twelfth of the edge of a sheet of paper while literally flying above the disk at a distance of 100 atoms of air, or less than one micro-inch. To illustrate such spatial relationships and performance at a more easily pictured scale, he said that an equivalent head projected to the size of a Boeing 747 would be flying at Mach 800 less than an inch off the ground and have the ability to count the blades of grass as it went by. “Kind of a mind-boggling technology,” he observed.
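
As an order-of-magnitude check on those fly-height figures (a micro-inch is 25.4 nm; the effective atomic diameter below is an assumed round number, not a figure from the talk):

```python
# Order-of-magnitude check on the quoted fly height. One micro-inch is 25.4 nm;
# the effective atomic diameter used below is an assumed round figure, not a
# number from the talk.

MICROINCH_IN_NM = 25.4
atom_diameter_nm = 0.25          # assumed effective diameter of an air atom

gap_nm = 100 * atom_diameter_nm
print(f"100 atoms ~ {gap_nm:.0f} nm ~ {gap_nm / MICROINCH_IN_NM:.1f} micro-inch")
```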

Highlighting the competitiveness of the business, Mr. Whitmore noted that the number of hard disk suppliers in the world had dropped from 62 in 1985 to seven by 2003; Seagate and two others are based in the United States, three are based in Japan, and one is based in South Korea. Similarly dramatic consolidation occurred among independent component suppliers over the same period. Amid this “craziness,” as he termed it, success has required a large and highly qualified staff such as that of Seagate, which has 3,300 employees in R&D, a quarter of whom hold an M.S. or a Ph.D.; significant outlays for both R&D and capital expenditure; state-of-the-art manufacturing capability; acute focus on OEM relationships; and the ability to contend with both short product life-cycles and huge pricing pressure. Such characteristics have prompted Seagate’s chairman and CEO, Steve Luczo, to call the disk drive industry “the extreme sport of the business world” and Clayton Christensen, author of The Innovator’s Dilemma, to describe Seagate and its competitors as “the closest things to fruit flies that the business world will ever see.”

This shakeout has nonetheless brought some stability to the sector, in Mr. Whitmore’s opinion, and pricing had begun to stabilize somewhat in the previous year. Market share over the 61.3 million drives shipped in the final quarter of 2002 went 69 percent to U.S.-based companies: 30 percent to Seagate, 22 percent to Maxtor, and 17 percent to WDC. Of the remainder, Hitachi had 17 percent, Toshiba 5 percent, and Fujitsu 4 percent—giving Japan 26 percent in all—while South Korea’s Samsung accounted for 5 percent. U.S.-based companies had 85 percent of the desktop computer market, which accounted for around three-quarters of the more than 200 million drives shipped in 2002, but their lead was only 59–41 in the enterprise (or business) market, and they were entirely absent from the mobile market, which he called “one of the biggest growing markets there is.” And even as he signaled that his company would later in 2003 announce steps meant to redress the imbalance in the market for mobile-computer disk drives, he added that Seagate, owing to vertical integration, was the only component supplier left in the United States at a time when non-U.S. companies were investing heavily in components. Here he specified the market segments as ranging from high-end server products like high-transaction Web-based storage to such mass-market products as camera microdrives or the iPod digital music player, the consumer segment being “the up-and-comer.” The requirements for the two segments differ, with high-end products focused on performance and reliability, consumer products on ruggedness, size, and power.

FIGURE 10 Capacity: Past storage growth.

Moving to metrics, Mr. Whitmore pointed to capacity, price, performance, and reliability as the main factors for measurement, while saying that other metrics are appearing on the horizon. Beginning with capacity, he displayed a chart tracing the total number of bytes shipped annually by the industry from 1994 through 2002, which showed an evenly paced yet spectacular increase from around 1 million terabytes (TB) in 1998 to between 8 and 9 million TB in 2002 (see Figure 10). A subsequent chart, calibrated in petabytes (PB), projected a compound annual growth rate for storage capacity shipments of 62 percent between 2002 and 2006, which would take the total from between 8,000 and 9,000 PB in 2002 to nearly 60,000 PB in 2006; the number of storage units shipped was projected to post a 13 percent compound annual growth rate during the same five-year period (see Figure 11). “In the 1990s we just couldn’t seem to satisfy the need for capacity,” he said. “There is continued growth and need for storage, so it’s not going away. It’s really a question of how do we do it and what are the economics of it.”

FIGURE 11 Capacity: Future storage growth.

FIGURE 12 Cost: $/GB trends for desktop.

The next chart showed that the price of rotating magnetic memory on a dollar-per-gigabyte basis eroded at a compound annual rate of −45 percent between 1995 and 2002, with only a slightly gentler downward slope projected for 2003–2007 (see Figure 12). Silicon storage has been on a nearly parallel downward curve, but its curve began at a price between one and two orders of magnitude higher than that of rotating magnetic memory; the two curves have diverged rather than converged since the mid-1990s—a trend that, according to Seagate’s projections, will continue.10 While Mr. Whitmore acknowledged that using silicon storage will be appropriate at particular price and capacity points, he attributed the consolidation of the disk drive industry to the massive erosion in the price of its product, saying: “It’s kind of ‘the strong will survive’ here, because we’re going at a pace that’s just burning people out.”

10  The difference in the costs of magnetic storage and semiconductor memory is explained by a vast difference in speed of access to the information.
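
As a quick consistency check on the two compound rates quoted for capacity and price (the 8,500 PB starting point below simply splits the quoted range of 8,000 to 9,000 PB):

```python
# Consistency check on the two compound rates quoted for storage: capacity
# shipments growing 62 percent per year from 2002 to 2006, and desktop $/GB
# eroding at 45 percent per year from 1995 to 2002. The 8,500 PB starting
# point simply splits the quoted "between 8,000 and 9,000" range.

capacity_2002_pb = 8_500
capacity_2006_pb = capacity_2002_pb * 1.62 ** (2006 - 2002)
print(f"Projected 2006 shipments: ~{capacity_2006_pb:,.0f} PB (text: nearly 60,000)")

price_share_2002 = (1 - 0.45) ** (2002 - 1995)
print(f"2002 $/GB as a share of the 1995 price: {price_share_2002:.1%} "
      f"(a ~{1 - price_share_2002:.0%} cumulative decline)")
```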

FIGURE 13 Performance: Enterprise growth.

Another chart indicated that input/output transactions per second (IOPS) had shown accelerated growth between the late 1980s and the late 1990s, when improvement leveled off (see Figure 13). Referring to the improvements made in this era as significant, Mr. Whitmore said: “Not only are you paying less, but it’s going faster due to spindle speeds and seek times and a lot of different engineering techniques.” Finally, coming to the important subject of reliability, he displayed a chart indicating that mean time between failures (MTBF) had grown at a compound annual rate of 25 percent from 1977 through 2001—which he called phenomenal—with progress accelerating in the late 1980s and leveling off around 2000 (see Figure 14). Product currently being shipped is specified at 1.2 million hours MTBF or higher, and the future may well see an MTBF of 10 million hours.

FIGURE 14 Reliability: Enterprise growth.

To introduce his discussion of the future of the industry, Mr. Whitmore displayed what he called “the bible for hard-disk storage and growth”: a chart illustrating the history of areal density11 (see Figure 15). Beginning in 1957, when IBM produced the first hard-disk drive, areal density has grown at 42 percent per annum, although growth was limited to 9 percent per year during the period 1975–1990, after which it abruptly assumed the pace of 100 percent per year that continued through 2002. He said the plot is made up of a series of S-curves traced as advances in physics and materials processing took the industry through a number of technology transitions, from ferrite heads to thin-film heads and then on to MR heads and to TGMR. The flattening out of the 1970s and 1980s was attributed to lack of demand rather than lack of innovation by Mr. Whitmore, who said that the advent of the PC drove invention and the “whole competitive madness” of the past 15 years.

11  Areal density is the amount of data that can be packed onto a storage medium. Areal densities are usually measured in gigabits per square inch. The term is useful for comparing different types of media, such as magnetic discs and optical discs. Current magnetic disks and optical disks have areal densities of several gigabits per square inch.

FIGURE 15 45 years of areal density growth.

FIGURE 16 Recording technology transition.

The question now before the sector is whether growth is going to continue at 100 percent per year. At the moment the industry is in transition from the longitudinal orientation of the bit on the disk to the perpendicular. While the growth of longitudinal technology has flattened out, Mr. Whitmore said perpendicular technology had been demonstrated to work and production of drives incorporating it would begin shortly. He predicted that, with the growth phase of another S-curve coming up, the next few years looked “pretty good”—and that once this new technology started to run out of steam, another would be invented (see Figure 16). Already in sight as a successor to perpendicular is heat-assisted magnetic recording (HAMR), in which a laser beam will shine through the head, heating up the media to several hundred degrees Celsius. This new technology, which has been demonstrated in the lab with the help of funding from the National Institute of Standards and Technology (NIST) and is considered five years from commercial production, will theoretically extend the density of recording beyond 10 Tbit per square inch while leaving the process of reading the data from the disk drive unchanged. “The beauty of this technology,” Mr. Whitmore stated, “is that it allows you to take all the stuff you’ve been doing before and go another round.” After an expected five years of HAMR, the industry hopes to deploy a technology known as SOMA for “self-ordered magnetic arrays,” which is seen as extending recording density to around 50 Tbit per square inch.

Turning to applications, he acknowledged that the PC market is saturated, although there is room for growth in some developing-country markets, and that while enterprise storage is an excellent business with very good margins, it has its limits. He looked, however, for significant growth to come from broad applications in smaller-capacity, more consumer-related products: in mobile PCs, a market that is “just starting”; in various handheld appliances, from PDAs and personal audio devices to cameras and multimedia cell phones; and in external storage devices. In addition, computerizing the infrastructure of the home—having a server driving all the PCs and other electronics in the house, as well as putting disk drives in televisions and other devices—is in its infancy but is “becoming real and will continue to grow.”

In conclusion, Mr. Whitmore expressed the hope that consolidation has reduced the number of competitors in the “brutal” disk drive business to a reasonable and sustainable level. “We are worried about non-U.S. based companies and their involvement,” he said, “but feel we’re strategically positioned to handle that.” Continuing to invest heavily in R&D is key to Seagate’s strategy, but the improved technology that results will enable growth only if it is employed in marketable products and applications for its use can be found.


DISCUSSION

John Gardinier, a self-described “retired science junkie,” noted that he had seen an ad in a recent PC Magazine offering a $1,000 computer with RAID technology in the storage and asked whether Mr. Whitmore saw any sensible reason to employ RAID in a personal computer—and, if so, what it might be.

Mr. Whitmore explained that RAID [a redundant array of independent disks] is an architecture for arranging disk drives so that there is redundancy in data protection and pointed to different schemes, from a higher-end enterprise system to a method of packaging exemplified by Google that calls for arranging drives in a low-cost system. While acknowledging that the technology was very hot at that moment, he said he did not know why it would be advantageous for use in a personal computer. What those using it were really after was an inexpensive integration scheme that could compete with products offered by companies like EMC and Sun, which are more expensive.
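
The redundancy idea can be made concrete with a toy sketch of parity striping in the style of one common RAID level; it is a teaching example only, not a description of any product or configuration discussed at the symposium.

```python
# Toy sketch of parity striping in the style of one common RAID level, shown
# only to illustrate the redundancy idea; it is not a description of any
# product or configuration discussed at the symposium.

def xor_blocks(blocks):
    """Byte-wise XOR of equal-length blocks."""
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, byte in enumerate(block):
            out[i] ^= byte
    return bytes(out)

data = [b"disk", b"RAID", b"demo"]      # data blocks striped across three drives
parity = xor_blocks(data)               # parity block stored on a fourth drive

# Lose the second drive and rebuild its block from the survivors plus parity:
rebuilt = xor_blocks([data[0], data[2], parity])
assert rebuilt == data[1]
print("Rebuilt block:", rebuilt)
```

Losing any single data block leaves enough information in the surviving blocks and the parity to reconstruct it exactly, which is the sense in which such an array tolerates a single drive failure.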

Kenneth Flamm of the University of Texas at Austin, a member of the Steering Committee for Measuring and Sustaining the New Economy, prefaced his question by asking how many of those in attendance had backed up their home computer in the prior 60 days. He cited the lack of hands as a possible explanation for a market for RAID.

Taking the point of view of an economist attempting to ascertain the contribution of the PC’s components to its functionality, Dr. Flamm noted that the technical details, while interesting in that they reveal something about where the industry is going in the future and how industry insiders think it is going to get there, are utterly irrelevant from the point of view of the user, who does not care how many layers of interconnect are on the chip. He therefore asked Dr. Siegle to speak more about the functionality that has been added to the microprocessor above and beyond the pure improvements deriving from fabrication technology. He then put the same question to Mr. Whitmore regarding the disk drive, observing that when prospective buyers look at disk drives, besides the size of the disk they are interested in rotational speed and the amount of cache included on the disk—proxies for IOPS, which vendors calculate but do not supply to users.

Mr. Whitmore responded by outlining contrasting business strategies, both of which Seagate is planning to pursue: making the disk drive more powerful by putting such features as multiple performance interfaces and cache onto it, and making it cheaper by taking functions off. By way of example, he said that Linux or the logic of the MP3 player might be placed on the hard drive. “These are things that are very available,” he noted, adding that “the difficulty in getting them is more the relationship with the OEM suppliers we deal with than the technology to do it.”

Dr. Siegle observed that quantifying true performance is no simple matter, even at the processor level, and that it is probably best done by using a variety of benchmarks. Noting that frequency increased 50-fold over the period he had addressed, he said that about a third of the improvement was attributable to the architecture and the other two-thirds to transistor speed. But, stressing that that is merely a description of frequency, he said that application benchmarks are needed to take into account how much work gets done with each click of the clock. “AMD has started to use a performance metric for marketing processors in order to get away from merely the speed-related issue because of the difference between processors and how much gets done in a given clock cycle,” he said. In his judgment, the subjects of measuring performance and of where the various contributors to performance improvement come from need additional attention.

Michael Borrus of the Petkevich Group pointed to earlier remarks that only 5 percent or so of a PC’s capability is employed in most applications and that Google’s processing capability comes not from high-priced, high-performance components but from less sophisticated components cobbled together to achieve high functionality. He then asked the speakers: “Is there, in your industries, a different technological and market trajectory associated with the lagging edge which those at the leading edge are not paying attention to, but which the [STEP Board’s Measuring and Sustaining the New Economy] project ought to be paying attention to because of the impact it could have on the economy?”12

12  Technical and market trajectory means a line of technical advance that delivers products with specifiable cost/performance parameters that are sold to a specifiable market of customers. Most technology industries are characterized by conventional, accepted lines of technical advance and most firms in the industry produce products premised on that line of development. Leading edge typically then refers to the latest generation of products delivering the latest advance in cost and performance (usually high performance at an initial high cost). In that context, “lagging edge” refers to cost/performance characteristics that are far away from the conventional leading edge—e.g., potentially much cheaper or with quite different performance characteristics—and thus typically used for completely different purposes.

The simplest example is the line of technical advance that Moore’s law characterizes, in which processor speed doubles every 18 months or so, resulting in a well-established technical and market trajectory for microprocessors with specifiable performance and features whose relatively high initial costs decline with the scale of production and that are sold at predictable, declining price points over time to PC makers and other customers. Intel’s newest, most advanced microprocessors would then characterize the leading edge, typically produced with the latest, most expensive process technology, and capable of outstanding performance.

By contrast, one example of a lagging-edge trajectory can be found at Berkeley and other places in work on simple semiconductors that can be printed on plastic using cheap laser or reel-to-reel printing techniques rather than the very high cost, capital-intensive process used to produce microprocessors. These are potentially very low cost and low performance but usable for simple sensor networks embedded in structures, toys, product tagging, and other applications that could never afford a leading-edge microprocessor. A lagging-edge trajectory can still be quite innovative as plastic semiconductor concepts surely are—it is lagging only in the sense that it is aiming for very different cost-performance points than the leading edge of the accepted line of technical advance.

The lagging edge can lead to whole new industries with profound economic impacts, or can disrupt established industries. In this sense, the scrap-iron processing minimills were a lagging-edge technical trajectory 25 years ago, capable of producing only a very limited range of steel products with inferior quality compared to the huge, scale-intensive basic oxygen furnace steel making of leading Japanese producers. But the minimill trajectory evolved, becoming more and more capable and competitive with traditional steel-making techniques, eventually disrupting a large chunk of the steel market.


In response, Mr. Whitmore returned to his comment that emphasis is being placed on taking features off the hard drive. Sticking to the minimum sophistication needed could boost manufacturing efficiency, because backing off on the technology can get the yields up and the cost down at a faster rate. But while the need for ever lower cost drives the removal of features on the one hand, on the other hand applications still exist that require higher processor speed. This “bifurcation,” he said, indicated that there was a business model for both paths.

Dr. Siegle added that, if enough bandwidth becomes available to link the systems that are being used only 5 percent of the time, the potential computing resource will be enormous. He saw the gating issues as getting adequate bandwidth to those systems and people being comfortable with others using their unused cycles.

Dr. Spencer observed that manufacturing of hard drives had moved almost entirely out of the United States and that semiconductor manufacturing was rapidly following along the same path, with foundries, most of which are abroad, taking more of the business. He asked whether, as that occurs, American universities will attract people to work in those areas who will be able to provide the kind of capability that Dr. Siegle described and that Mr. Whitmore indicated is already available in the magnetic storage area.

Mr. Whitmore answered in the affirmative, saying that although Seagate moved its manufacturing offshore long ago, he had not seen any lack of need in the United States for technologists in design, research, or manufacturing, and he did not anticipate that changing. While the need for higher-level skill sets in the magnetic storage industry had been flat, it had by no means tapered off.

In contrast, Dr. Siegle called attracting enough U.S. students into university programs that are relevant to the semiconductor industry a 20-year-old problem. “Somehow we’ve managed to deal with that adequately,” he said, but he added that “the hazard here has been that we have become dependent on foreign nationals who are coming to our universities, being trained, joining our work force, and it’s becoming increasingly attractive for them to go back home.” A certain level of capability needs to be retained in the U.S. if its firms are to remain on the leading edge of the business.

   

