Economic Growth and Semiconductor Productivity
Kenneth Flamm
University of Texas at Austin
Dr. Flamm opened Panel II by asking what the impact of semiconductor price/performance improvement is on user industries. The answer, he said, depends on three general factors:
The composition of semiconductor input varies greatly across user industries.
Price changes vary greatly by chip type.
Differences in semiconductor input price changes across the industry may play a significant role in explaining differences in quality-adjusted price declines across user sectors.
Moore’s ‘Self-fulfilling Prophecy’?
He then reviewed Moore’s Law in its original form. In 1965 Gordon Moore noted that the number of devices per chip was doubling every 12 months. Then, in 1975, he revised this observation slightly to say that the number of devices was doubling every 18 months—a “law” that has remained substantially in place to the present. This observation was never intended as a law, of course; Dr. Flamm suggested it might have been a “self-fulfilling prophecy” that “happened because everyone believed it was going to happen.” Whatever the mechanism, the Moore’s
Law phenomenon included a continuing process of technological innovation that kept pushing back the technological “brick wall”—the moment when technological roadblocks would slow or halt the pace of doubling.
He then described an “economist’s default corollary” to Moore’s Law, which describes processing cost in dollars per device. Because lithographic and other advances have produced twice as many features per “technology node” every three years, and the cost of wafer processing has remained roughly constant, the processing cost in dollars per device has shown a compound annual decline rate (CADR) of 21 percent.
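The arithmetic behind this corollary can be checked directly (a sketch: the 21 percent figure is simply a halving of cost per device compounded over a 3-year node):

```python
# Sketch: if devices per unit of (constant-cost) wafer processing double every
# 3 years, cost per device halves every 3 years.
years_per_doubling = 3.0
annual_cost_ratio = 0.5 ** (1.0 / years_per_doubling)  # cost multiplier per year
cadr = 1.0 - annual_cost_ratio                          # compound annual decline rate
print(f"CADR = {cadr:.1%}")                             # ~21 percent per year
```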
An ‘Ingenuity Corollary’
Then he added an “ingenuity corollary,” consisting of several observations:
Instead of doubling chip size, the industry has used technological ingenuity to increase chip size by only Z (Z < 2) times.
A recent example is DRAM size, which has increased by only Z=1.37.
Another example is 3-D device structures.
The use of ingenuity has several implications:
For DRAMs recently, CADR has equaled minus-30 percent.
For DRAMs in the 1970s and 1980s, the wafer-processing cost also fell, so that CADR equaled approximately minus-37 percent.
The Japan/VLSI project also had an impact on competition.
Another example is ASICs (application-specific integrated circuits), which represent rapid, leading-edge technology adoption.
This has a transitory impact on CADR.
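The minus-30 percent DRAM figure can be reconstructed from these observations (a sketch; the assumption that devices per chip quadruple with each 3-year node is mine, inferred from the 18-month-doubling form of Moore’s Law, and is not stated explicitly above):

```python
# Hedged sketch: reconstructing the DRAM CADR of about minus 30 percent.
# Assumption: devices per chip quadruple each 3-year technology node, while
# ingenuity limits chip-area growth to Z = 1.37 and wafer-processing cost
# stays constant.  Cost per device then tracks chip area per device.
Z = 1.37
devices_per_node = 4.0                  # devices per chip, per 3-year node
area_per_node = Z                       # chip-area growth per node
cost_ratio_per_node = area_per_node / devices_per_node   # area per device shrinks
annual_ratio = cost_ratio_per_node ** (1.0 / 3.0)
print(f"CADR = {1.0 - annual_ratio:.1%}")   # close to the minus-30 percent cited
```

With wafer-processing cost also falling (as it did in the 1970s and 1980s), the decline rate steepens further, consistent with the minus-37 percent figure above.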
He added that the differences in semiconductor price movements are huge (see Figure 1). The prices declining fastest are those of microprocessors, DRAMs, other MOS logic, and other MOS memory. Prices have declined little for analog devices, bipolar devices, and light-emitting diodes.
The implications for input prices in different user industries are also great, he said. Input prices are much higher for automobiles and all of industry than for computers, “all end-user sectors,” and communications.
Tinkering with Moore’s Law
Then he looked at the consequences of “tinkering with Moore’s Law” so as to change the time required for doubling the number of devices on a chip. In fact, the roadmap committee did just that in the late 1990s, changing the doubling time from every 3 years to every 2 years. This was a consequence partly of technological abilities and partly of competitive pressures. This raised the compound annual decline rate for processing prices from minus-21 percent to minus-29 percent, for
DRAMs from minus-30 percent to minus-41 percent, and for the constant chip size to minus-50 percent.
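The same compounding arithmetic, applied to a 2-year node, reproduces these revised rates (a sketch; the DRAM line reuses the quadrupling-per-node and Z = 1.37 assumptions from the ingenuity discussion):

```python
# Sketch: decline rates implied by moving from a 3-year to a 2-year node cadence.
def cadr(cost_ratio_per_node, years_per_node):
    """Compound annual decline rate implied by a per-node cost ratio."""
    return 1.0 - cost_ratio_per_node ** (1.0 / years_per_node)

Z = 1.37
print(f"processing:      {cadr(0.5, 2):.0%}")        # ~29 percent
print(f"DRAM (Z = 1.37): {cadr(Z / 4.0, 2):.0%}")    # ~41 percent
print(f"constant chip:   {cadr(1.0 / 4.0, 2):.0%}")  # ~50 percent
```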
A Temporary Point of Inflection?
He then discussed the “point of inflection” for price performance, which accelerated sharply between 1995 and 1999, with memory prices falling at a rate of 60 percent a year. This behavior was so anomalous, he suggested, that it would probably prove to be temporary. He examined some of the factors that seemed to have caused the point of inflection, including shorter product lives, intensified competition, and more rapid adoption of leading-edge processors in other products. All of these conditions, he said, were unlikely to persist, and in the future the rate of price declines was likely to return to the historical rate of minus-40 percent. “This rate of minus-60 percent,” he concluded, “does not seem to be sustainable.”
SEMICONDUCTOR PRODUCTIVITY AND COMMUNICATIONS
Dr. Pinto opened by saying that within the semiconductor market communications is growing more rapidly than other subsectors, moving from 17 percent of the $151 billion market in 1995 to 26 percent of a $228 billion market in 2000.
Within communications, such rapid growth has been made possible by optoelectronics, where photonics is outstripping electronics, bringing increased amounts of both data and voice transmission. Within this category, data transmission is growing faster than voice transmission. The demand in the United States for capacity is growing by about 80 percent per year. At this rate, capacity will reach 1 petabit per second by 2006-2007. This would represent a capacity of 5 megabytes per person in the United States, always-on. Business-to-business (B2B) and intranet usage dominate present usage.
The capacity trend in optoelectronics resembles that described by Moore’s Law, which is the doubling of the number of transistors on a chip every 18 months. The total capacity for a single optical fiber has risen a thousand-fold since 1981, for a CAGR of 40 percent. Since 1995 capacity has increased even faster, doubling every 9 months (a 140 percent CAGR). Much of this gain was caused by the advent of wave division multiplexing. “Light-speed,” said Dr. Pinto, “is faster than Moore’s Law.”
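Both growth figures can be checked with the standard CAGR formula (a sketch; the roughly 20-year span from 1981 to the time of the talk is my assumption):

```python
# Sketch: compound annual growth rates implied by the capacity gains above.
years = 20                                     # assumed span, 1981 to ~2001
cagr_long = 1000.0 ** (1.0 / years) - 1.0      # thousand-fold gain since 1981
print(f"since 1981: {cagr_long:.0%} per year") # ~41 percent, quoted as ~40

doubling_months = 9.0                          # post-1995 doubling time
cagr_fast = 2.0 ** (12.0 / doubling_months) - 1.0
print(f"9-month doubling: {cagr_fast:.0%} per year")  # ~152 percent, in the
# neighborhood of the ~140 percent figure quoted
```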
The real driver of increased capacity, he said, is productivity. He listed device-cost learning curves, one of which showed a decline of 47 percent per year in silicon transistor costs over a period of four decades. The cost of optical components decreased by up to 35 percent per year in most key segments. System cost reductions have amounted to 120 percent since 1997 for dense wavelength division multiplexing (DWDM). Scalable DWDM systems are important because they enable service providers to accommodate consumer demand for ever-increasing amounts of bandwidth. DWDM allows the transmission of e-mail, video, multimedia, data, and voice information. He added that the same phenomenon is to be seen in the electronic space, where the fundamental issue is to drive down cost as network capacity increases.
Communications Perspectives on Systems Integration
Communications Across Nine Orders of Magnitude
He then showed a communications perspective on systems integration that demonstrated a “distance” of nine orders of magnitude “from IC interconnect to TAT-4”—from the center of a single chip to a fiber-optic system spanning the Atlantic Ocean. Costs rise with distance as one moves to progressively larger structures: the center of a switch, an integrated circuit, an IC package, a circuit board, a box, and finally a network or long-distance cable. At the IC scale, the cost of interconnects can be reduced by monolithic integration; at the network end, the challenge is to establish higher bandwidths through cheaper “effective” interconnects for DWDM and 3G systems.
He illustrated some examples of communications systems that use integrated circuits, including the cordless phone, GPRS wireless terminal, ADSL (asymmetric digital subscriber line) client, Ethernet network interface card, and multichannel wireless infrastructure.
Shrinking Wireless Handsets
Dr. Pinto then illustrated the evolution of the wireless handset since 1978. During that period, the total volume of a wireless handset has shrunk from 2,000 cubic inches, requiring 130 ICs, to 2 cubic inches for a PCMCIA card-driven device. By 2002 the smallest wireless handset, a “soft radio,” would occupy just 1 cubic inch, with only two ICs. He said a single small card would be able to carry both a wireless local area network connection and a GPS device.
He mentioned also high data rate wireless communications as an example of “where could we go.” One version is known as BLAST, for Bell Labs Layered Space-Time, an architecture for realizing very high data rates over fading wireless channels. This uses a compact antenna array and is the same size as a palm-held portable device. He also discussed module-level optoelectronics integration, which has evolved from a discrete board (1997), to a subsystem in a package (1999), to an OE-IC multichip module (2000). This evolution has been accompanied by gains in size reduction of more than 10 times, in power reduction of more than three times, and in cost reduction through advanced packaging.
In other advanced technologies, he briefly discussed monolithic III-V optoelectronic integration, an electroabsorption modulator, and an integrated optical preamp in a detector that cleans up the signal. He discussed heterogeneous integration of various elements (III-V actives, waveguide passives, and optical discretes) into a single SiOB (silicon optical bench). SiOB technology is a fabrication platform for integrated optical device components under development at Bell Laboratories. Devices made using SiOB technology will find application in optical networks, especially those where wavelength division multiplexing (WDM) is used to increase the capacity of the system. Finally, he demonstrated some of the newer silicon MEMS (microelectromechanical systems) in optical networking that had just been shipped in the previous 12 months.
Rapid Evolution of Optical Networking
He then discussed the evolution of optical networking, which has been rapid. Between 1996 and 2002 it had been changing: from WDM transmission, to WDM transmission with the ability to add or drop signals, to WDM rings with node addressing, to WDM rings with full connectivity, and finally to interconnected rings and mesh topologies. This evolution had taken the technology from pure connectivity with discrete components to content and traffic management featuring intelligent subsystems and high-volume, low-cost manufacturing. The area of most intense interest is “metro” systems that bring optics into the public space, if not all the way to the home.
The “Last-mile” Challenge
He returned to the cost of networks vs. the distance from the center of an integrated circuit, updating the work of Mayo to span 18 orders of magnitude in cost. The actual networks of fiber-optic lines are still expensive because of the cost of burying cable, he said. So while the cost of sending gigabits per second per meter is going down sharply, thanks to the use of MEMS and other productivity-enhancing technologies, the cost of the “last mile” of connection is going up just as sharply, especially at distances of more than 100 km from the chip center.
Exponential Improvements—for How Long?
The performance of optoelectronic components has been growing exponentially—a rate that cannot continue indefinitely, he said. In other words, we cannot count on the following trends for much longer: IC capacity increasing at 60 percent per year, computation increasing at 25 percent per year, cost/function decreasing at 30 percent per year, optical network capacity increasing at 140 percent per year,
and optical network cost decreasing at 120 percent per year. Among the barriers to these trends, he said, might be the laws of physics, manufacturability, economics, design complexity, and lack of applications.
Are there fundamental limits to fiber optics? He said that a transparency window in fiber optics had been broadened by using materials science to eliminate a source of decibel loss called the “red peak” or “water peak.” Further improvements may be possible through materials advances and coding techniques. But the larger issue here is that once the fiber is in the ground, it cannot be replaced whenever a new advance in technology makes replacement desirable.
A View of Future Markets
He offered a view from the year 2001 of the end-user marketplace ahead. One important issue is that capital expenditures by service providers were projected to drop 10 to 30 percent year-over-year from 2000 to 2002, then to grow by 5 to 10 percent per year. To a large extent, service depends on what consumers are willing to spend. He also gave an overview of growth in the network business. Since the beginning of 1998, network capacity has grown by 80 percent per year. Accompanying this growth have been price reductions in the cost of bandwidth of about 80 percent per year.
Some people question whether the applications exist to continue to drive this bandwidth growth. He noted that computing has faced this question for more than 50 years, and the answer has always been “yes.” It is logical to suppose, he concluded, that communications, too, will find a continual stream of new applications driven by better networking.
The Key is Productivity
The key in all semiconductor applications is to deliver better productivity. Applications in the foreseeable future are likely to include high-performance computing, virtual conferencing, faster and more intelligent networks, ubiquitous sensing and computing, games and entertainment, interconnected intelligent appliances, and better human interfaces.
SEMICONDUCTOR PRODUCTIVITY AND COMPUTERS
Randall D. Isaac
International Business Machines
Dr. Isaac addressed the complex question of whether computer productivity will continue beyond the range of CMOS scaling. He began by noting that in terms of how many computations per second $1,000 will buy, progress is achieved by migrating to new technologies. The only technology that has made substantial
progress without migrating, he said, is the semiconductor, which has leapt ahead in performance by nine orders of magnitude since 1960. The questions he posed are how much life is left in the semiconductor and what will replace it.
Economics as the Driver of Moore’s Law
He reviewed the original Moore’s Law proposal, affirming Dr. Moore’s original statement that increased integration and the consequent gains in performance have been driven primarily by economics. If that ceases to be true, he said, those gains will slow or stop.
He then looked more closely at the factors underlying the improvement of integration. Lithography was by far the most important, accounting for 50 percent of the gains in integration, which is why it has dominated the national discussion about Moore’s Law. The other factors were device and circuit innovations, accounting for 25 percent of the gain, and increased chip size (manufacturability), accounting for the remaining 25 percent of the gain. Performance gains were caused by improved transistor performance, interconnect density and delay, packaging and cooling, and circuit level and system level gains. He also said that the evolution of memory density in megabits per chip is one of the fundamental influences on the direction of the industry. Historically the density of megabits per chip had increased four times every 3 years on a consistent basis. That is now changing to a doubling every 3 years.
Trends on the Roadmap
Such trends are the subject of the ITRS lithography roadmap, which attempts to describe the minimum feature size in nanometers for each successive generation of chips. Since 1994 the curve toward smaller size has become steeper, partly because the transition to deep ultraviolet lithography was easier than first thought. It was also accompanied by significant innovations, such as the chemically amplified photoresist, optical proximity correction, off-axis illumination, and additional elements. Implementing all of them at the same time turned out to be much easier than many in industry expected.
Doubts about EUV
The combination of deep UV and its set of accompanying improvements resulted in a surge of improvement in performance. That surge, however, is probably not sustainable. It is already proving to be more difficult to move to 193 nm, and 157 nm is even more of a challenge. He said that signs are emerging already that extreme ultraviolet (EUV) lithography, the next expected technology, may not be as pervasive as its predecessor, partly because the depth of focus may be
too small to efficiently pattern levels such as contact holes. At between $40 million and $50 million per tool, the economic challenges of the technology are daunting. It will not be known until the latter part of the decade whether lithographic improvement will continue to spurt, will return to its historical level, or will settle at some other level.
He examined the average price of data storage since 1980 in dollars per megabyte and showed that it has decreased even faster than the price of DRAMs. He said that much of the increased use of storage has been drawn by the decrease in prices.
A Tailing off of Scaling
In terms of performance, he noted, scaling would not be able to move technology much further. He said that he would slightly revise Dr. Doering’s earlier comment in that he did not think that CMOS device scaling would continue for 10 to 15 years. He predicted that progress in logic performance would come less from scaling than from moving from one kind of device to another, such as the double-gate devices described by Dr. Doering. For a particular transistor, he said, the advantages of scaling would probably tail off, but productivity would continue by jumping from one type of device to another.
He listed a collection of items his company has been working on, including copper technologies, silicon-on-insulator, silicon-germanium, strained silicon, and low-k dielectric. The objective is not to stick too long with a particular scaling paradigm that is flattening out, he said, but to move from one transistor to another to reset the scaling direction.
Using Twin Processors that Check Each Other
From a system point of view, technical enhancements, in terms of both area reduction and performance enhancement, had brought his company a year previously to a point where it was able to implement a 64-bit system S/390 microprocessor. This series uses a 175-sq-mm chip with 47 million transistors and seven layers of copper interconnect. It has been possible to run a 1-gigahertz processor on a 20-way system with enough productivity to put two processors on a single chip. The two processors carry out the same function and check each other. If the two do not agree, they retry the instruction and check the result again. This capability is a powerful testament to the levels of cost and effectiveness that makers of processors can achieve today.
Modern system-level performance is also efficient because it depends on a large range of factors. Surprisingly, there is a long list of incremental improvements that are far more important than the traditional and better-known CMOS scaling. For a 60 to 90 percent compound annual growth rate in system-performance improvements, both the technology and every one of these improvements are needed. The small improvements account for 80 percent of CAGR, while traditional CMOS scaling accounts for 20 percent of CAGR. And the list of improvements has been growing steadily.
The importance of this concept for productivity is linked to the fact that if one of the improvements slows down, there are more than enough others to take up the slack, especially as regards software. In mathematics, the number of ways to combine components grows much faster than exponentially in the number of components. So with hundreds of millions of components on a chip, the number of possible combinations is effectively unlimited.
Power Density: the Fly in the Ointment
The Issue of Power Density
He then turned to a “fly in the ointment” which had become an industry-wide concern over the last 12 months: the issue of power at the system level. First he looked at a variety of functions and compared the expectations based on Moore’s Law with what had actually happened over the past decade. The expectation for lithography had been an improvement in resolution by a factor of two every six years. In fact, it has moved faster than that. And the impact? Primarily reduction in chip area, he said, which in turn was driven by economics. The cost of a chip is roughly proportional to the chip’s area. Because the processing cost per unit area had increased slowly, there was tremendous economic pressure to reduce chip size.
The number of transistors was supposed to follow Moore’s Law closely, doubling every 18 months, but the doubling time for microprocessors has been closer to 1.9 years. For performance, which might be supposed to follow Moore’s Law as well, doubling had occurred approximately every 18 months. Frequency had not been increasing as fast, doubling about every 2 years, with signs of an upturn in the last 2 years.
Signs of Trouble
The power curve, however, was what brought signs of trouble. Power density should have remained roughly flat, or constant. The only reason for any increase in power density was the 25 percent increase in innovation, which increased the density of the circuit without changing the power. In fact, power density was actually found to rise rather steeply. “This is an important issue,” he said. “The fact that power density is going up much faster than classical scaling tells us that as we pack more chips together, we’re getting into a power issue.”
He displayed some specific examples. On April 10, 1989, Intel released the first 486 chip, a 1-micron technology that held 1.2 million transistors on a 165-sq-mm chip. The 486 used about 4 to 5 W of power at a power density of 2.5 W per square centimeter. If these features were changed only by scaling at the same rate as lithography, then 12 years later, in 2001, one would expect a one-quarter-micron technology using 0.25 W on a 10-sq-mm chip, with the same power density: 2.5 W per square centimeter. If these changes were made at the rate dictated by Moore’s Law, one would still reach one-quarter-micron technology, with 300 million transistors on a 660-sq-mm chip. But the chip would use about 66 W of power and 10 W per square centimeter—four times the power density expected from scaling.
What actually happened was much worse. On April 23, 2001, Intel announced the first member of the Pentium 4 family, and it had a power density of 29.5 W per square centimeter, three times larger than predicted by Moore’s Law. How had this happened? The number of transistors was actually much lower than predicted and the power was about the same, but the voltage was much higher, which drove higher performance. If classical scaling had held true, designers could have used many more transistors without causing power problems. The root cause of the problem was that the technology did not follow classical scaling. As a result the power density, due to technology alone, had actually risen not by the predicted factor of three but by a factor of 10.
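The scaling arithmetic in this example can be retraced numerically (a sketch; the 4.5 W figure splits the quoted 4-5 W range):

```python
# Sketch reconstructing the 486 scaling arithmetic described above
# (chip figures are those quoted in the talk).
area_mm2, power_w = 165.0, 4.5               # 1989 Intel 486, ~1.2M transistors
density_486 = power_w / (area_mm2 / 100.0)   # convert mm^2 to cm^2
print(f"486 power density: {density_486:.1f} W/cm^2")  # ~2.7, quoted as ~2.5

# Pure constant-field scaling over 12 years: resolution doubling every 6 years
# gives a 4x linear shrink, 1 micron -> 0.25 micron.
linear_shrink = 4.0
scaled_area = area_mm2 / linear_shrink**2    # ~10 mm^2
scaled_power = power_w / linear_shrink**2    # ~0.28 W; power density unchanged
print(f"scaled 2001 chip: {scaled_area:.0f} mm^2 at {scaled_power:.2f} W")
```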
The Culprit: Operating Temperature
The fundamental reason for this, said Dr. Isaac, is that the one feature not following the scaling pattern is the operating temperature. The ambient temperature should be lowered by a factor of about three to accommodate the decrease in threshold voltage, keeping the “off-current” (the leakage current at zero gate voltage) at negligible levels. “Because the temperature doesn’t go down,” he said, “the subthreshold current kills us. You can’t drop the threshold voltage, so you can’t drop the supply voltages.” Therefore the power, which scales as the square of the supply voltage, stays too high. The technology is violating classical scaling, keeping the power density very high.
The consequences are waste heat and high power consumption. Few technologists or economists have factored in the cost of the power for computing; it
has been considered trivial. Today, however, the energy consumption of server farms is increasing exponentially. Already, a server farm uses more watts per square foot than a semiconductor or automobile manufacturing plant. Energy accounts for some 60 percent of server-farm costs.
Too Much Focus on Performance and Frequency
The underlying factor, said Dr. Isaac, is not that chip designs are wasteful, although they could be much more efficient. Rather, the cause is that the industry has been following a high-performance scaling law rather than a low-power scaling law. As a consequence, power density has been rising since 1995, and increases in power density are now inherent in the technology. As engineers place more components more closely together, the power consumption and heat generation have become systemic problems.
A second factor behind the power problem, he said, is the focus on frequency. Even though frequency doesn’t affect system performance in linear fashion, it has a large impact on power. For example, a 600-megahertz processor uses more than three times as much power as a 300-megahertz processor. When a design is optimized for frequency, as opposed to system performance, power use rises sharply.
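The frequency-power relationship follows from the standard dynamic-power expression for CMOS, P ≈ C·V²·f (a sketch; the 30 percent voltage increase assumed for the faster clock is illustrative, not a figure from the talk):

```python
# Sketch: dynamic CMOS power scales as C * V^2 * f, so raising frequency --
# which in practice also requires a higher supply voltage -- raises power
# much faster than linearly.  The 1.3x voltage step is an assumed value.
def power_ratio(f_ratio, v_ratio):
    """Relative dynamic power for given frequency and voltage ratios."""
    return v_ratio ** 2 * f_ratio

print(power_ratio(2.0, 1.0))   # doubling f at fixed voltage: 2x power
print(power_ratio(2.0, 1.3))   # doubling f with ~30% more voltage: ~3.4x power
```

The quoted “more than three times” for a 600-MHz versus a 300-MHz processor falls in exactly this supralinear regime.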
“Been Here Before”
He pointed out that the industry has “been here before.” In the early 1990s, the industry hit an early brick wall in terms of power. This technological block was avoided primarily by moving from bipolar to CMOS technology. CMOS technology uses some 100 times fewer parts, 15 times less weight, and 30 times less power than bipolar technology. Bipolar technology is much faster: The industry had to move from a faster building block to a slower building block that was more power efficient. This presented an option for the current situation, he said: to find other building blocks that are more power efficient. This option at first did not seem promising, because there is no known circuit technology ready to replace CMOS, nor is any on the horizon.
A Solution: Parallel Systems
Instead, Dr. Isaac suggested that the industry will move toward massively parallel systems, which use slower but more efficient processors to build a system
that is, on balance, better. This is done without replacing the circuit. For example, one could replace a Unix processor using 50 to 60 watts of power with one that is much slower but 30 times more power efficient to produce a higher-performance system at lower power. The leading technology here is the supercomputer, which has taken exactly this route. The supercomputer roadmap shows a progression over the years, as follows:
use slower processors with much greater power efficiency;
scale the technology to desired performance with parallel systems;
design workload scaling efficiency to sustain power efficiency; and
keep physical distances small to use less communication power.
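The tradeoff in the list above can be made concrete with hypothetical numbers (only the ~50-60 W figure and the 30-times efficiency ratio come from the talk; the 4x per-processor slowdown and the perfect-scaling assumption are mine):

```python
# Illustrative sketch with hypothetical numbers: replacing one fast processor
# with many slower, more power-efficient ones under the same power budget.
fast_perf, fast_watts = 1.0, 55.0            # one fast Unix-class processor
fast_eff = fast_perf / fast_watts            # performance per watt
slow_perf = fast_perf / 4.0                  # assume each processor is 4x slower
slow_watts = slow_perf / (30.0 * fast_eff)   # ...but 30x the perf per watt
n = round(fast_watts / slow_watts)           # processors fitting the same budget
ideal_perf = n * slow_perf                   # before workload-scaling losses
print(f"{n} processors, {ideal_perf:.0f}x the throughput at the same power")
```

In practice the third item on the list—workload scaling efficiency—determines how much of this ideal speedup is actually realized.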
One of the productivity challenges in the future, said Dr. Isaac, is to design for the total picture, not just for computations per second. Now that the power issue is no longer negligible, the industry will need to find new solutions. The most promising approach, he concluded, is to make use of solutions with multiple components each of which is so power efficient that the overall system is more productive.
PRODUCTIVITY AND GROWTH: ALTERNATIVE SCENARIOS
Dale W. Jorgenson
Dr. Jorgenson said that his objective in his talk would be to link the two “conversations” of the conference: first, the economics of the industry and its relationship to performance, and second, the issue of price. Economists tend to focus on price, he said, and instead of looking at performance he would examine prices in information technology and show how they were related to trends in the semiconductor industry. He would also discuss the role of IT in the recent U.S. economic boom and its potential for moving that economy in the future. Finally, he would try to relate the issue of prices to the technological issues that had been discussed.
An Amazing Regularity
He began with a review of Moore’s Law and said that the progression from the Intel 4004 processor to the Pentium 4 “has a regularity to it that is amazing to economists as well as technologists.” What is interesting from an economic point of view, he said, is the closely related but distinct subject of price behavior. He
showed a graphical representation of prices, which required complex calculations to produce, indicating that the price of computing had declined steadily from 1959 to 2001 at a rate of about 16 percent a year. A phenomenon visible in that diagram was a sharp acceleration in the rate of decline of computer prices in the mid-1990s, when the rate of decrease doubled to about 32 percent a year.
The diagram also showed the price declines of memory chips and microprocessors. This information had become available only recently. Prior to 1998, semiconductor prices were not available from the official national accounts. The price of storage devices had declined by about 40 percent a year, and the price of microprocessors by about 50 percent a year. He pointed out the same point of inflection for both products in 1995, although the change was more pronounced for microprocessors than for storage devices.
Shift to the Two-year Cycle
He then displayed the lithography part of the semiconductor roadmap for 2000, which showed a similar change. That is, until 1995 successive generations of these products moved on a three-year cycle, and in 1994 the roadmap committee extrapolated that trend for the next decade or more.
Technology continually outruns the roadmap, however, and beginning in 1995 the map changed to a two-year cycle. The roadmap construction team doubted that this would be permanent, so they planned for a two-year cycle for another two years and then called for a reversion to a three-year cycle. “Reality intruded,” however, and the two-year cycle continued. This was then estimated to continue for one more two-year cycle, followed by a region of uncertainty.
Economic Consequences of the Two-year Cycle
The shift to a two-year cycle had important economic consequences. Dr. Jorgenson showed a semi-logarithmic chart that indicated the point of inflection in computer prices more clearly. Unfortunately, the price data for software are incomplete. Only the prepackaged portion—including preinstalled or “shrink-wrapped” software—was part of the national accounts. The rest, including custom software, was not priced in the same way. Even the data for the prepackaged software have only been in the national accounts as investments since 1999.
The price data for communications equipment are also incomplete, including only central office switching equipment and omitting routers, transmission gear, and fiber optics. Nonetheless, to an economist this was a dramatic story, since economists expect price indexes to rise, not fall.
IT and GDP Growth
The Data on Software
He moved on to the implications of this price information for the recent growth resurgence and future economic growth. The share of information technology in GDP is now a little under 7 percent. Software, as measured in the national accounts, has emerged as the most important form of information technology. Computing has had a very important role and up until the end of the 1980s was about equal in importance to software.
A Disproportionate Share of Growth from IT
An important part of the story is how a sector that is only 7 percent of the GDP can play such a large role in the nation’s growth. The contribution of IT to growth from 1995 to 1999 was about 1.3 percent; by comparison, the annual growth of the U.S. economy from 1995 to 1999 was about 4 percent. A third of that was due to IT, so that 7 percent of the economy accounts for about a third of its economic growth. The contribution of IT to growth is equal to its share of the total, about 7 percent, multiplied by the growth rate of its components. The growth rate of the components reflects the fact that prices were declining dramatically and the dollar values were going up. Therefore the quantities were increasing even more rapidly than the prices were falling.
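The growth-accounting arithmetic works out as follows (a sketch; the ~19 percent component growth rate is an assumed value chosen to be consistent with the figures in the text):

```python
# Sketch of the growth-accounting arithmetic: IT's contribution to GDP growth
# equals its GDP share times the growth rate of its components.
it_share = 0.07
it_component_growth = 0.19          # ~19 percent/yr growth of IT output (assumed)
contribution = it_share * it_component_growth
gdp_growth = 0.04
print(f"IT contribution: {contribution:.1%} of {gdp_growth:.0%} growth "
      f"= {contribution / gdp_growth:.0%} of the total")
```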
Dr. Jorgenson said that IT has been an important part of growth for some time, accounting for about half a percentage point of growth during the early 1990s. The tremendous surge at the end of the 1990s, however, pushed IT into the “major leagues,” making it about a quarter of the total. The rate of economic growth increased by about 1.75 percentage points between the middle and late 1990s, and about half a percentage point of that increase was accounted for by IT, making IT’s contribution a very important part of the growth.
The Role of IT in Productivity
Within overall economic growth, IT plays a key role in productivity. Dr. Jorgenson discussed the economy in terms of both IT and non-IT inputs and outputs. Productivity growth was relatively rapid in the postwar period, from 1948 to 1973, and IT did not play a very important role in it. After 1973 productivity grew slowly, by about 0.25 percent per year, but by then the predominant share of that growth belonged to IT. From 1990 to 1995 productivity growth in the non-IT part of the economy was small and negative. Then IT productivity jumped substantially toward the middle of the 1990s. Productivity in general, however, never returned to the levels of what economists think of as the postwar Golden Age.
Morals of the IT Story
From this he drew “a number of morals.” One is that IT has been a factor in the economy for a long time. Even when the IT industry was only 2 percent of the economy, it accounted for about 80 percent of productivity growth. During the early 1990s, a time when productivity growth was not very substantial, it accounted for basically all productivity growth. And during the resurgence of the late 1990s, the acceleration in IT productivity growth that appears in the price trends accounts for essentially all of the productivity growth.
For the big picture, therefore, economic growth during the latter 1990s was comparable to the growth of the Golden Age: slightly better than 4 percent a year. By then, IT had become extremely important for both input and productivity. Productivity had always been extremely important, and IT as an input had also been important for decades. But the surge in productivity toward the end of the period was clearly related to the price trend.
The prices of IT are driven primarily by the prices of semiconductors, fiber optics, and other components. The surge in the contribution of IT to productivity growth is associated with maintaining the accelerated rate of price decline. The authors of the roadmap estimate that this acceleration should continue through 2003; if that is the case, the resurgence should continue. Beyond 2003 they hedged their bets, and there is a possibility of a return to a three-year cycle. Opinion in the industry is divided: those who side with the conservative faction behind the roadmap exercise, and assume a reversion to a three-year cycle beginning around 2003, can expect the rate of productivity acceleration to return to previous levels.
The most recent surge in economic growth was not sustainable, because it included an extraordinary gain in the labor force as unemployment declined from 7.3 percent to 4 percent. After taking that into account, however, the growth rate is likely to remain relatively rapid, on the order of 3.5 percent a year. If there is a return to the three-year cycle, the growth rate is likely to be closer to 2.9 percent. Splitting the difference—assuming a two-year cycle through 2003 and then a somewhat longer cycle—yields a growth figure for the next decade of about 3.3 percent, which is the estimate of the Congressional Budget Office.
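The scenario figures above can be laid out side by side. This is only an illustrative sketch of the numbers cited in the talk; the simple averaging rule is my assumption, since the text does not spell out how the scenarios were blended.

```python
# Decade-ahead growth scenarios cited in the talk, in percent per year.
# The figures come from the text; the averaging rule is an assumption.
scenarios = {
    "two-year chip cycle continues": 3.5,
    "reversion to a three-year cycle": 2.9,
}

# A naive midpoint of the two scenarios:
midpoint = sum(scenarios.values()) / len(scenarios)
print(f"Midpoint of scenarios: {midpoint:.1f}% per year")

# Jorgenson's actual "split the difference" -- a two-year cycle through
# 2003, then a somewhat longer cycle -- lands a bit higher, at about
# 3.3 percent, matching the Congressional Budget Office estimate.
```

The gap between the naive 3.2 percent midpoint and the cited 3.3 percent figure suggests the blend gives somewhat more weight to the early, two-year-cycle years.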
Prices have ‘Momentous’ Importance
“The moral of this story,” said Dr. Jorgenson, “is that the prices of IT, and the prices of semiconductors in particular, are of ‘momentous’ importance to our economy.” This importance matters not only to people concerned with the fate of the industry but also to those responsible for framing the nation’s future economic policy.
To estimate the potential growth of our national product, it is necessary to understand both the technology and the economics of this industry. The technology had been discussed in detail at the symposium, he said; for the economics, however, we have to understand how prices and product quality in the industry are determined.
The roadmap describes the opportunities and a possible scenario. Without telling us what will actually happen, it provides the factual basis for an economic analysis of product cycles in this industry. Depending on which scenario unfolds, the nation will see a different growth path for its economy.
In Conclusion, a Call for Better Data
He called attention in closing to the serious gaps in data that prevent a full accounting of semiconductor-related prices. He reiterated that software had been part of our national accounts for only two years. Telecom equipment was included in the national accounts, as it had been for many years, but the prices did not include fiber optics, which is fundamental to the future of the industry. He concluded with a call for more and better data to support a better understanding of the nation’s productivity and growth.