Productivity Trends in the Semiconductor Industry
W. Clark McFadden
Mr. McFadden introduced Robert Doering of Texas Instruments as a man “with a long and distinguished history as a technologist in the semiconductor industry.” Dr. Doering is the U.S. Delegate to the International Technology Roadmap Committee for Semiconductors; he has also been very active in the Semiconductor Research Corporation (SRC) and in SEMATECH, the semiconductor industry consortium. He was a central figure in the development and maintenance of the technology roadmap for semiconductors.
PHYSICAL LIMITS OF SILICON CMOS AND SEMICONDUCTOR ROADMAP PREDICTIONS
Robert R. Doering
Dr. Doering began by describing the basic features of CMOS, a technology based on a Complementary Metal Oxide Silicon capacitor structure. Through patterning and associated processes, the capacitors are transformed into transistors. Today, the metal layer has been largely replaced by a polysilicon top plate
on the capacitor, acting as a gate. Beneath the top plate is an insulating layer—which is normally silicon dioxide, sometimes doped with nitrogen—and below that is the “bulk” silicon crystal.
He showed an illustration of the basic features of the transistor superimposed on a transmission electron micrograph to indicate the scale used in building transistors today. Small dots visible in the silicon substrate represented individual atoms of the transistor. This device, still in development, showed a separation between the source and the drain, called the “channel length,” which was only about 30 nanometers, or billionths of a meter. The smallest transistors in production today have channels about twice that long. Along the channel of the research transistor were only about 80 columns of silicon atoms, again about half as many as in products being made today. Given these tiny dimensions, said Dr. Doering, “we are rapidly approaching a scale where it is feasible and appropriate to talk about transistor structure in terms of counting the number of atoms.”
When a positive (for NMOS) voltage is placed on the gate, some electrons are attracted out of the substrate into a relatively thin layer near the surface called an “inversion layer.” This creates a conductive path between the source and the drain. If another voltage is then applied between the source and drain, a current is pulled through the device; when the gate voltage is off, there is (ideally) no current for any drain-to-source voltage. In this way, the transistor acts as an on-off control switch.
Decreasing Size, Increasing Sales
He suggested that one way to summarize the history of the industry was to track the continual diminution of the transistors and the wires that connect them into integrated circuits—a process that has been under way since the integrated circuit (IC) was invented in 1958. The IC feature sizes in 1962 were about a mil, or 25 microns, a micron equaling one-millionth of a meter. Today the feature sizes are described in nanometers, or billionths of a meter. We have currently reached interconnect and transistor feature sizes of about 130 nm and 70 nm, respectively, he said. IC makers hope to continue along this path within the next decade or so, toward feature sizes approaching 10 nm. Such infinitesimally small sizes mean that engineers are working with a small number of atoms, something that will soon present problems associated with quantum effects.
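The long-run pace implied by these figures can be checked with a quick back-of-the-envelope calculation. The two data points below (roughly 25 microns in 1962, roughly 130 nm interconnect half-pitch around 2001) are taken from the text; the implied rate is an illustration, not a roadmap figure.

```python
# Implied compound annual shrink rate of linear feature size,
# from ~25 um (1962) to ~130 nm interconnect half-pitch (~2001).
start_nm, end_nm = 25_000.0, 130.0
years = 2001 - 1962
annual_shrink = (end_nm / start_nm) ** (1 / years) - 1
print(f"implied linear-feature shrink: {annual_shrink:.1%} per year")
```

A steady shrink of roughly 12–13 percent per year in linear dimensions, sustained for four decades, is what carries features from the micron regime into the nanometer regime.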
Integrated Circuit Sales
A parallel and related trend, he said, is one that economists are familiar with: the growth in integrated circuit sales. Once the cyclicality of the semiconductor industry is “smoothed” on a plot of sales against time, the annual growth of sales is seen to be roughly 15 percent over the long term. He said that total sales were
currently in the range of $200 billion a year and estimated that they would move to “an appreciable fraction of a trillion dollars” in the next 10 years.
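The arithmetic behind that projection is straightforward compounding; the sketch below simply applies the long-run growth rate cited above to the current sales figure.

```python
# $200B in annual sales compounding at the ~15 percent long-term rate.
sales = 200e9
for _ in range(10):
    sales *= 1.15
print(f"projected annual sales after 10 years: ${sales / 1e9:.0f}B")
```

Ten years of 15 percent growth roughly quadruples sales, which is indeed “an appreciable fraction of a trillion dollars.”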
This decades-long, rapid rise in revenues has been made possible to a large extent by the ability to make features smaller and thereby to place more transistors on a given area of silicon. The manufacturing cost of a square centimeter of silicon for a large-volume product such as Dynamic RAM (DRAM) memory does rise slowly, but its rate of increase has so far remained small in comparison to the rate at which engineers have been able to gain efficiency and economy by “scaling down” the feature size of transistors, diodes, capacitors, thyristors, and other individual components built into integrated circuits. For example, transistors are often used in groups of four, corresponding to logic gates.
With feature size—specifically metal half-pitch, as defined by the International Technology Roadmap for Semiconductors (ITRS)—expected to reach 100 nm by 2003, transistor density will have reached about 100 million per square centimeter of silicon, or 25 million logic gates. Firms will benefit from these trends by the ability to market these huge gains in density and performance with very little increase in their own manufacturing costs per the same output of silicon area for high-volume products.
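The density and gate counts quoted above follow directly from the four-transistors-per-gate convention mentioned earlier:

```python
# Density arithmetic at the projected 100-nm half-pitch node.
transistors_per_cm2 = 100e6   # ~100 million transistors per cm^2 of silicon
transistors_per_gate = 4      # typical logic gate, per the text
gates_per_cm2 = transistors_per_cm2 / transistors_per_gate
print(f"{gates_per_cm2 / 1e6:.0f} million logic gates per cm^2")
```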
However, low-volume integrated circuits are now deriving less advantage, since the cost of photomasks, which must be amortized over the volume of production for a particular product, has been rapidly rising as feature scaling continues into the deep-sub-micron regime. This factor is becoming increasingly significant. Unless a technical breakthrough (e.g., some relatively low-cost form of “maskless lithography”) emerges which solves this issue, Dr. Doering predicted, low-volume integrated circuits at state-of-the-art feature sizes will eventually become more expensive than their predecessors. To some extent, this can be addressed by sharing mask costs via multiple-product masks, but this is obviously less effective for large-area chips.
Costs and Productivity
He then discussed a projection of the cost of transistors based primarily on the semiconductor roadmap. This projection shows unit costs in dollars per square centimeter of silicon increasing gradually from 1999, when the last complete update of the roadmap was done, to the roadmap horizon in 2014. The increase in cost/area is more than offset by the density increase, yielding a decrease in cost per transistor of roughly 18 percent per year. In the revised roadmap for 2001, scheduled to be published shortly, there was to be a more up-to-date projection indicating an even faster descent in cost per transistor.
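The mechanics of an 18 percent annual decline can be sketched with assumed inputs. The density-doubling interval and the cost/area growth rate below are illustrative assumptions chosen to match the cited figure, not numbers from the roadmap itself.

```python
# Illustrative decomposition: if transistor density doubles every
# ~3 years (~26%/yr) while cost per cm^2 of silicon rises ~3%/yr,
# cost per transistor falls roughly 18%/yr.
density_growth = 2 ** (1 / 3) - 1        # assumed ~26% per year
area_cost_growth = 0.03                  # assumed slow cost/area rise
cost_per_transistor_change = (1 + area_cost_growth) / (1 + density_growth) - 1
print(f"cost per transistor: {cost_per_transistor_change:.1%} per year")
```

The point of the decomposition is that the density term dominates: even a steady rise in the cost of processed silicon is swamped by the rate at which transistors shrink.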
He summarized the preceding trends in terms of overall productivity, which showed an exponential trend in technology, such as the ability to make transistors smaller and place more of them on a square centimeter, and the ability to operate faster at lower power. Those trends generate the ability to market new products,
or to market the same products at greatly reduced cost, and to create an expanding market. As fuel to power these trends the semiconductor industry invests about 15 percent of the value of sales in research and development.
When Will the Trend End?
It is natural to ask when this favorable trend might end, he said, and the answers vary widely. The reason for this variation is the complexity of the technology. The semiconductor industry has a history of overcoming limitations with new solutions. For example, said Dr. Doering, when he first came to Texas Instruments, the feature size of integrated circuits was about 2 microns; engineers were then predicting that the effectiveness of optical lithography would end when a feature size of about 1 micron was reached. Since that time, feature size has shrunk to about 0.1 micron and, as a result of many ingenious solutions to technical problems, the industry is still using optical lithography.
Hybridization with other technologies in system-on-chip fashion will also extend the useful life of CMOS. For this reason, CMOS itself will not disappear quickly even as new, miniature system technologies are developed. It will probably support other mechanical, optical, biological, and “nanoelectronic” components placed on the chip or in the package of the same system for a long time.
Yet another factor that complicates predictions is affordability. It is entirely likely, he said, that progress on CMOS technology will end not because engineers run out of ways to make it still smaller or faster, but because the cost of manufacturing outstrips the advantages.
The Future of CMOS
Optimizing the System
CMOS itself represents a vast array of different products: microprocessors, memories, digital signal processors, and other kinds of chips, some of which are optimized more for low power and others more for high performance. Within the overall technology are many parameters and complicated tradeoffs for optimizing the system. A complicating factor is that designers are just beginning to seriously contemplate how to better optimize the technologies at both the circuit and system levels.
Many possible tradeoffs and new ideas at these high levels could mitigate the physical barriers that threaten to slow progress at the fundamental level of transistors and interconnects. For example, until recently few circuits were able to power themselves off when they were not being used; this capacity requires some technique to “wake up” the system. This simple design feature, which could save a great deal of power, is now being increasingly used as one way of optimizing the whole system as we begin to approach the limits of CMOS scaling.
A Question of Tradeoffs
Dr. Doering reviewed the question of tradeoffs in more detail. Basically, he said, CMOS scaling comes down to a set of relationships that have been studied for over 20 years as researchers have tried to make features smaller. Scaling applies to all the simple linear dimensions of the device: the thickness of the oxide, the length of the gate, the width of the transistor and the wire, and so on. Each of these dimensions drops by half with each full scaling step, a halving that has typically occurred approximately every 4 to 6 years for decades.
In order for the rate of scaling to continue at this pace, each component must function optimally as it shrinks or runs faster. For example, switching speed scales well; as transistors get smaller, they switch faster. This is known as “good scaling behavior.” Good scaling behavior also holds true for transistor capacitance, current, and switching power. Voltage scaling introduces some challenges, however; engineers are less confident that lower voltages can continue to be used effectively as thermal noise levels and other voltage limits are approached.
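The “good scaling behavior” described above corresponds to classical constant-field scaling, in which dimensions and voltages shrink together. A minimal sketch of the first-order relationships, assuming ideal constant-field scaling by a factor k:

```python
# First-order constant-field ("Dennard") scaling by a factor k:
# all dimensions and voltages shrink by k.
def constant_field_scale(k):
    """Return the multiplicative change in each quantity, to first order."""
    return {
        "gate length":     1 / k,
        "oxide thickness": 1 / k,
        "voltage":         1 / k,
        "capacitance":     1 / k,      # C ~ area / thickness
        "current":         1 / k,
        "delay (CV/I)":    1 / k,      # switching gets faster
        "switching power": 1 / k**2,   # P ~ C * V^2 * f, per device
        "power density":   1.0,        # ideally unchanged across the chip
    }

for quantity, factor in constant_field_scale(2).items():
    print(f"{quantity:16s} x {factor:.2f}")
```

Note how the voltage row is exactly where the hedge in the text applies: if voltage cannot keep scaling (because of thermal noise and other limits), the delay and power rows no longer improve as shown.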
A more serious scaling problem is presented by interconnects—the wires that run between transistors and other components. The speed of interconnects tends to be constant. For chips of the same size, which have wires extending from one side of the chip to the other, speed does not scale well because the resistance of the wires rises as the cross-sectional area falls. This problem can be addressed through designs that arrange the interconnects in hierarchical fashion. That is, wires that must run at high speeds can be very short, while longer wires can be those where high speeds are not as important. This is one simple example of potential tradeoffs and design innovation opportunities.
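The resistance argument above comes straight from the formula R = ρL/A. A short sketch, with illustrative dimensions (the wire sizes below are assumptions for the example, not figures from the talk):

```python
# Why chip-spanning wires scale badly: shrink the cross-section by 2x
# in each dimension while the length stays fixed, and resistance rises 4x.
def wire_resistance(rho, length, width, height):
    """R = rho * L / A for a rectangular wire, all units SI."""
    return rho * length / (width * height)

RHO_CU = 1.7e-8                                          # ohm*m, copper
r_old = wire_resistance(RHO_CU, 0.01, 500e-9, 500e-9)    # 1 cm run, 500 nm wire
r_new = wire_resistance(RHO_CU, 0.01, 250e-9, 250e-9)    # cross-section halved
print(f"resistance rises {r_new / r_old:.0f}x")
```

This is why the hierarchical arrangement described above works: only the short wires need to be thin, while the long chip-spanning runs can stay fat and slow-scaling.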
How a Tradeoff Can Work
A tradeoff at the device level can be seen in the case of a simple circuit called a ring oscillator, which is used to measure the speed of transistors. When a transistor is turned off, some current still leaks from the source to the drain, and that leakage worsens as the size of the transistor shrinks. For a particular technology node, each chip manufacturer will typically design a family of transistors, and within any one family is a tradeoff curve. If lower leakage is desired, the transistor speed must be lower as well. In the same way, if transistor operating voltage or another feature is changed, other parameters must be adjusted to accommodate it. Most companies that sell integrated circuits have processes that are aimed at a number of points along such curves. A customer making a device that runs off wall current, for example, may be interested in high performance but not in whether the standby power is kept low. Another customer, making hand-held devices, does need to worry about power efficiency to prolong battery life.
The Tunneling Problem
There are several ways to increase the performance and reduce the standby power of transistors, and a common one is to reduce the thickness of the gate insulator. This technique has already been pushed to extremes, as in a demonstration by Intel last year of an experimental device with a channel length of 30 nm and a gate-oxide thickness of only 0.8 nm. This is a thickness of less than three atomic layers, which is clearly approaching a limit, especially for an amorphous material such as silicon dioxide. The downside of making a gate oxide so thin is that it loses most of its insulating ability and becomes a new leakage path through the transistor. This current flow is dominated by quantum mechanical “tunneling” of electrons through the barrier, which cannot be prevented in extremely thin layers of any insulator. A possible solution to this problem is to use a material with a higher dielectric constant, such as particular metal oxides or silicates, which can be thicker than silicon dioxide for the same electrical performance. But whether any such material has the reliability and other desirable features of silicon dioxide is not yet known.
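The appeal of a high-dielectric-constant material can be seen in the equivalent-oxide-thickness arithmetic: for the same gate capacitance, the physical layer can be thicker in proportion to the dielectric constant, which suppresses tunneling. The k = 20 value below is an illustrative assumption (in the range discussed for metal oxides such as hafnium oxide), not a figure from the talk.

```python
# Equivalent oxide thickness (EOT): a high-k gate insulator can be
# physically thicker than SiO2 (k ~ 3.9) at the same capacitance per area.
K_SIO2 = 3.9

def physical_thickness(eot_nm, k_high):
    """Physical thickness of a high-k layer matching a given SiO2 EOT."""
    return eot_nm * k_high / K_SIO2

t = physical_thickness(0.8, 20.0)   # the 0.8-nm oxide cited above, at k = 20
print(f"physical thickness: {t:.1f} nm")
```

A layer roughly five times thicker at the same electrical performance is the difference between a few atomic layers and a comfortably manufacturable film, which is why these materials attract so much attention despite the open reliability questions.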
Another uncertainty has to do with device structure. Several laboratories are building devices with silicon-on-insulator (SOI) technology, which places the transistor into a thin layer of silicon above a much thicker layer of silicon dioxide to reduce parasitic capacitance. Most technologists, said Dr. Doering, believe that CMOS will go a little farther than basic SOI with double-gate devices that have a gate on the bottom as well as on the top. Simulations with these devices seem to suggest that they may be candidates for feature sizes as small as 10 nm, although they will still require a high dielectric constant gate insulator.
He then illustrated how hard it is to predict limits for these devices. Even for double-gate devices, one limit is a drop in the threshold voltage as a function of the gate length. With a given structure, as the gate length becomes smaller and smaller, the transistor must maintain a certain threshold voltage at which it turns on. This voltage should be stable and independent of small variations in the gate length. As a certain gate length is approached, however, the threshold voltage begins to drop rapidly. With one set of materials and thicknesses, that “roll-off” occurs with a gate length of about 50 nm; for another set that behaves more favorably, the system works well until the gate length is about 20 nm. Researchers must discover how much instability can be tolerated and what techniques can offset this kind of roll-off in a parameter that should be well controlled.
On the other side of the coin, some advances in technology are beneficial but require little in the way of a breakthrough. One class of relatively obvious but cost-effective improvements is called “optical proximity correction,” which involves small pattern adjustments in the lithography process. Adding small features or tabs at the corners or other places of the patterns may squeeze small improvements out of the performance of the optical system at relatively little cost. This is another reason why it is so difficult to predict the exact limits of lithography.
The Semiconductor Roadmap
Dr. Doering turned then to a discussion of the semiconductor roadmap, which is now organized around five regions of the world: the United States, Taiwan, Japan, Korea, and Europe. Most major producers of integrated circuits, as well as their suppliers, universities, and government research agencies, send representatives to roadmap committees. These working groups, hierarchically organized at the international and regional levels, actually make the charts and write the chapters of the roadmap itself.
History of the Roadmap
The first edition of the roadmap was drawn up in 1992—a “pretty rushed job compared to how we do it now,” said Dr. Doering. In part, it was written in response to the government’s request for information about the most urgent research and development needs of the industry. Lithography was a contentious issue then, with competing suggestions about the kind of technology that would succeed the incumbent technology. The Semiconductor Industry Association (SIA) organized the National Technology Roadmap for Semiconductors, and the format was basically shaped during a two-day workshop in Dallas. This format was followed for an update in 1994 and a second update in 1997.
After that, the group decided to open its membership to international companies, beginning with a partial step in 1998 when the first ones were invited both to attend as observers and to critique the process. In 1999 the name was changed to the International Technology Roadmap for Semiconductors and full international participation began. The ITRS has adopted a schedule of biennial full revisions alternating with interim updates in the intervening years. The previous full revision was done in 2001, following a mid-term update in 2000. The ITRS had about 200 participants in the United States—almost half from chip makers, about 30 percent from the supplier community, and the rest from SEMATECH, the Semiconductor Research Corporation, other consortia, universities, government, and other institutions.
Features of the Current Roadmap
He turned to what the roadmap that was then being updated would have to say about continued scaling. From discussions at a workshop in July 2001, he described in two different ways the projected minimum feature size. The first was
what could be printed lithographically, the second was one particular feature—the gate length of the transistor. Because the gate length is important to performance, he said, techniques have been developed to make it even smaller than features that can be printed lithographically, so that it is essentially a “sub-lithographic feature.” Sub-lithographic techniques take advantage of the fact that transistors are not usually patterned at the lithographic minimum pitch. He said that it was probably best to view overall IC progress in terms of circuit density, the diversity of function being integrated, and Moore’s Law.3 The most important density consideration, he said, is how close together metal wires can be placed on circuits. Another, which is more relevant to speed, is how short the gates of transistors can be.
One of the objectives of the roadmap is to indicate potential solutions. He said that there is no lack of ideas on how to stretch optical lithography technology farther through many evolutionary improvements. The next big step in lithography, most people agreed, would be extreme ultraviolet (EUV) technology, which was just entering the research demonstration phase. A commercial system, he said, might be developed by 2008 or earlier that would bring feature sizes below 50 nm and guide the technology all the way to the end of the roadmap: 2016, according to the 2001 edition. If these systems can be made cost effective, lithography is not likely to be the limiting factor in IC scaling.
He again raised the question of what comes after CMOS, affirming that there will be other kinds of technologies. The roadmap working groups, which are attempting to anticipate how these technologies will affect speed, cost, and other parameters, assume that silicon CMOS will remain in use for many years, both hybridized with and complemented by other techniques.
He concluded by saying that the consensus now on purely physical limits is that the industry can expect at least 10 more years of CMOS scaling and perhaps 15 years, which represents the current horizon of the roadmap. Beyond that horizon loom a number of technologies, most of them in the very early stages of research, which have the potential to complement CMOS and to allow it to accomplish even more in future integrated circuits.
Mr. McFadden thanked Dr. Doering and suggested that the symposium had just heard “a very optimistic view of the challenges we face in trying to deal with continuing productivity gains in the semiconductor industry.”
George M. Scalise
Semiconductor Industry Association
George Scalise commented that he had participated in the first roadmap meeting in 1991, along with Gordon Moore and several others. Part of what motivated them to meet, he said, was the need to rethink the function of SEMATECH, which lacked a roadmap that could provide guidance for the industry. They made a preliminary outline for such a roadmapping process, and this outline became the starting point for the workshop in Dallas.
The group also realized that the way the new, international version of the roadmapping groups was structured would be critical, and decided that a structure on the model of the United Nations, with a new leader and new forms of governance each year, would not work efficiently. Instead, they agreed that the most effective system would be for the SIA to continue to provide the leadership. There was resistance to this idea at first, but Mr. Scalise and the others prevailed, and in his opinion the SIA leadership has contributed to the smooth working of the first few international iterations.
Collaboration Among Firms
Mr. McFadden then asked Mr. Scalise to address the challenges to productivity gains posed by existing business models, and what the industry might expect as it restructures and continues to tighten its operations under the current, constrained economic conditions. Mr. Scalise focused on two aspects of the question. The first was the ability of competing firms in the industry to collaborate, which he described as “one of the things that has really helped to shape the industry over the last many years.” Collaboration began with the formation of the Semiconductor Research Corporation in 1982; it continued with SEMATECH in 1987 and then with the ITRS in 1999. We have reached a state, he said, where worldwide collaboration is becoming even more important, and “we have yet to figure out how to do that well. Building an international consortium is a far tougher thing to do than SEMATECH.” He expressed optimism, however, that it could be done.
The Foundry Phenomenon
He also commented on the emerging new “fabless” business model in the semiconductor industry, where large, efficient, fabrication-only plants produce custom chips for firms that design the chips. He put this in the context of the industry as a whole, in which firms traditionally have to invest about 23 percent of revenues in manufacturing and facilities in order to maintain “equilibrium” with the 17 percent compounded annual rate of growth. The foundries, he said, have been investing approximately 100 percent of revenues, which is “far beyond anything that can be tolerated in terms of a reasonable business environment.”
The consequence of the recent surge in manufacturing capacity, said Mr. Scalise, is “excessive price attrition,” well beyond what the traditional integrated firms can compete with. Beyond that, he said, is a new environment that could well be dominated by the success of the foundry model. The question is whether the foundries will become not only the manufacturers but also begin to do design work. If so, what would that mean for the integrated device manufacturers (IDMs)? Will the IDMs move toward the foundry model and stop investing in captive fabs? If they do, who will drive the equipment industry, and who will invest in it?
The old structure, led by IDMs, had been well defined for many years, especially in the United States, which had held a firm grip on the leading edge of every aspect of the business. The U.S.-based suppliers control about 50 percent of the worldwide market. The foundries are now approaching the ability to handle roughly 20 percent of total fabrication, and they are concentrated geographically in Taiwan, Korea, Malaysia, and Singapore, so that the industry is moving toward a new balance.
The Impact of China
He noted that this new balance will certainly include China, which is rapidly surging toward the forefront of the semiconductor universe. This follows its entry into the World Trade Organization (WTO), which the SIA has strongly supported. Now the Taiwanese foundries are moving into China with their technology, aiding the rapid growth taking place there. Current projections are that mainland China will within this decade become number two in the world both as a market and as a manufacturer of semiconductors. Its resulting influence means that it will have a major impact on how the industry functions, something the United States cannot ignore.
“Rest assured that we will not be the same industry either structurally or geographically in a few years, the way things are unfolding today,” said Mr. Scalise. He concluded by saying that the country may have to make another concerted effort to maintain its leadership, as it did when the industry lost momentum in the mid-1980s. At that point, many people predicted that the “U.S. industry is finished.” The industry once again faces the challenge of responding quickly to a new challenge to its leadership. “If we don’t address that,” concluded Mr. Scalise, “a panel sitting here in 2010 will have a very different discussion than we’re having today.”
Charles W. Wessner
National Research Council
An Unanswered Challenge
Dr. Wessner thanked Mr. Scalise for articulating the extent of the challenge—and previous success—yet noted some “asymmetry” among the points made by the first three speakers:
The industry is very important to the American economy.
The industry faces steep technical challenges but is probably up to them.
The industry may be “outflanked” by the sudden recent competition from offshore.
In light of those points, he posed three questions:
Does U.S. industry have the resources in place to meet the technical challenges and regain leadership?
In light of the overbuild of manufacturing capacity during the current downturn in the business cycle, it is harder for companies to maintain profits. Does that cede advantage to companies focused more on market share than on quarterly profit?
Does the U.S. industry have the mechanisms through the MARCO program, the SRC, or SEMATECH to address these technical challenges cooperatively?
Finally, he asked whether a generalized, undefined call for more basic research is an adequate response, given the challenges faced by the industry.
Mr. McFadden noted, in regard to the last point, that the industry at its inception in the 1960s used the results of fundamental research done in the 1940s and 1950s to develop its technologies. “Yet we see the federal government reducing the amount of funding in the basic sciences,” he said, “at the same time the industry is pressed to maintain its level of capital spending and investment. We need the underlying basic research that’s going to allow the U.S. industry to meet these challenges.”4
Mr. Scalise added that the concern about basic research is so great that the board of the SIA has made increasing federal funding for university research in mathematics and science its number-one priority for the next year.
Coordinating Fab Investments?
Dr. Pakes asked why the industry had been so effective at coordinating research among the members of SEMATECH and of the SIA, and yet had not been able to coordinate investments in fabs or foundries. Mr. Scalise answered that the purpose of collaboration so far had been to provide the technical capability for innovators to compete vigorously with process technology as well as manufacturing equipment. Manufacturing, to this point, had been one of the competitive tools. In some cases, one company had led the way and others had benefited, as when IBM blazed a trail in moving from four- to six-inch wafers. The following generation of wafers had been driven largely by Intel. But the industry hasn’t yet reached the point of collaborating on manufacturing capability. He noted that this question has to be addressed very soon with regard to the foundries.
Dealing with ‘Showstoppers’ and ‘Choke Points’
Dr. Flamm suggested that the international roadmap is often described as a descriptive or predictive process, but that it is really more than either. It involves a complex technology with different pieces from different suppliers, and it has to be coordinated. “What you really have is people identifying technical challenges they call ‘showstoppers,’” he said, “and trying to mobilize people to address those ‘choke points.’” He said that the situation was the first example he knew of a competitive industry, rather than organizing as a cartel or monopoly, collaborating on the technology-investment aspect of the business only. He added that the practice was explicitly legal because of the limited antitrust exemption that had been granted to joint research ventures in 1984. Dr. Pakes seconded that there was no illegality in the activity, and Mr. McFadden said that the activity would be governed by the rule of reason and would not be liable to treble damages.
Mr. McFadden continued that it is important to understand that the roadmap is not a solution to technological problems. It describes various options, challenges, and gaps, and it communicates that information to the industry in ways that suppliers, manufacturers, and customers can appreciate and use. The collaboration still depends on individual competitive actions. In its consortium activity, the industry has been unique not only in establishing roadmaps, but also in having collaborative industry support for university research. The industry invests nearly $100 million a year in university research. It also collaborates on programs like SEMATECH that focus on specific technological problems. “This has always been an activist industry,” he said. “The question is, in the kind of economic
conditions we’re facing now, can we maintain this progress when government investment in university research is declining?”
Dr. Jorgenson asked what had caused the roadmap to underestimate the speed at which successive generations of technology are evolving. The roadmap had projected successive generations at about three-year intervals, but in fact new generations have been evolving recently at intervals of closer to two years.
Dr. Doering responded that the adoption of the two-year cycle had been purely competitive. He suggested that the roadmap may even have given some impetus to the faster cycle just by putting “a stake in the ground” that people could try to pass. Aside from that, he said, two years is about the smallest feasible amount of time in which the industry can complete the R&D for a technology generation, and it may not be quite enough time to produce uniform improvements. For example, gate length may come down faster than other parameters. “But from a business standpoint, what’s important is to come out with something that is a new technology generation on a two-year cycle, which is just about as fast as it can be done.”
More on the New Foundry Model
In response to a question, Mr. Scalise said that it had once been possible for a single company to drive a transition in wafer size from one generation to the next, but this is no longer possible. Moving from the 8-inch to the 12-inch wafer, for example, is far too costly for any one company to accomplish alone. The industry is promoting this transition on an international basis, guided largely by the international roadmap. Similarly, unlike 5 years ago, today only a few of the traditional integrated device manufacturers, who integrate design and manufacture, can afford to invest in wafer fabs. The industry is moving toward the foundry business model, and the capacity coming online at present is dominated by the foundries, not the IDMs. The important question, said Mr. Scalise, is whether this development will continue in the same direction over the next several years and the new foundries will come to represent the dominant new business model of the industry.
Dr. Flamm pointed out that the foundry-based companies spend a smaller percentage of their sales on R&D than do the IDMs, and Mr. Scalise agreed that virtually all of the research in the industry is being performed by the companies on the boards of the SIA, the SRC, and SEMATECH.
Dr. Mowery asked whether the foundry phenomenon is partly a reaction to expanded research activities on the part of the equipment manufacturers, which often produce complete modules with their hardware. These firms sell modules to the foundries as well as to the IDMs. Mr. Scalise agreed with this assessment and said that the development flowed largely from the activities of the ITRS and SEMATECH in planning for this transition toward a new and evolving business model that includes the specialized foundries.
A Problem for Equipment Suppliers
Mr. McFadden commented that the roadmap may have recently complicated the challenge facing the equipment suppliers, who anticipated a more rapid movement into new equipment for large wafers than actually occurred. Dr. Mowery agreed with this point. He said that the roadmap is indeed an important contribution, and that it has facilitated specialized coordination, but that the equipment makers had mistakenly based the timing of their move toward 300-mm wafer technology on the roadmap. This caused them to take on the cost of developing a new generation of equipment whose introduction was, in a sense, mistimed.
Dr. Mowery emphasized that the timing problem was not the fault of the roadmap; a roadmap cannot eliminate uncertainty. At the same time, it does bring some risk to the industry, in that a consensus may suppress less orthodox alternatives and impose higher costs on firms that interpret the roadmap's time line literally, as it did in the case of some equipment firms. Mr. Scalise agreed, emphasizing that a "roadmap is just a roadmap—not an architectural blueprint." In this case, he said, the delay in moving from 200-mm to 300-mm wafers unfolded as a consequence of competing elements in the economy, including the industry's own cyclicality.
The Question of Government Support
Dr. Wessner noted that STEP would soon be releasing a report based on a semiconductor conference held late last year.5 Participants included members of semiconductor consortia from Europe, Japan, and Taiwan, all of whom reported substantial government support. He asked whether it did not seem logical for our government to support the industry as well, given the latter’s strategic importance to the nation.
Dr. Doering agreed that U.S. industry had depended heavily on government-supported R&D in the past. If the present falloff in support for fundamental R&D were to continue, it would curtail the industry’s ability to address the limits of CMOS technology as well as potential “post-CMOS” breakthroughs. “Based on physical limits,” he said, “we need a big R&D effort, on many levels, to come up with new ideas and take them to a point—even in academic research—where
they can be picked up by industry. Where that point of transition between academia and industry is located has shifted today toward academia, because we don’t have as many large industrial labs that work at the breadth and depth they used to.”
Because of industry’s reduced ability to do long-term research, he suggested additional government investments in the university research community for programs that can move the technology closer to the commercialization stage than was the practice 10 to 20 years ago. If this is not done, he projected, some of the innovations that will be needed probably won’t appear—“at least not in this country.”