
Panel I
Performance Measurement and Current Trends

MEASURES OF PERFORMANCE USED IN MEASURING COMPUTERS

Jack E. Triplett

The Brookings Institution


The aim of this initial presentation, Dr. Triplett said, was to build a bridge between technologists and economists—professions that “don’t talk very much to each other” but had been brought together for the symposium because both are interested in performance measures for computers and their components. The opening of his presentation, directed in particular to technologists, would describe in more detail than had Dr. Jorgenson what economists do with computer performance measures and what kinds of computer performance measures they really want.

Starting with what he called a “really simple, stylized example,” Dr. Triplett posited the existence in the economy of a single computer, UNIVAC. Referring to figures obtained from a book by fellow speaker Kenneth Flamm, he cited UNIVAC’s price at $400,000 in 1952 and said that three units were made that year, so that “current-price output” for computers in 1952 was $1.2 million. For the sake of example he put forward the assumption that in 1955 a fictitious “New Computer” came along and supplanted UNIVAC as the only computer in the economy, that 10 were produced, and that they sold for $600,000 apiece.


TABLE 1 Example 1: UNIVAC and the “New Computer”

                                            UNIVAC (1953)    “New Computer” (1955)
Number produced                             3                10
Price, each                                 $400K            $600K
Current-price output                        $1.2 million     $6.0 million
Performance index                           1.0              1.5
Computer inflation (with estimated
  performance premium = 1 + 0.7(0.5))       1.00             1.11
Computer “constant price” output index      1.00             4.60

In the 1955 economy, therefore, the revenue to the producers of computers—or the total amount of computer output, or the total amount of investment in computers, all of these being the same thing—would have been $6.0 million (see Table 1). Having observed such an increase in unit price and growth in output, he said, “The first thing economists will ask is: ‘How much inflation is there in the computer market?’”

Inflation cannot be determined from the above data alone, however: a performance index is also needed so that the increase in price can be adjusted for the change in performance. Dr. Triplett proposed setting the performance index at 1.0 for UNIVAC and assigning New Computer 1.5 performance units; had the two computers been produced at the same time, he explained, it would have been clear that part of the difference in price between them would have represented a performance premium. “Similarly,” he said, “we don’t want to show ‘$1.2 million to $6 million’ as the change in output if, in fact, the computer in the second period is in some sense more computer than the one in the first period.” To make these adjustments, it is necessary to put a value on the increase in performance from UNIVAC to New Computer.

Better for this purpose than having just UNIVAC and New Computer in production at the same time would be having a number of computers available simultaneously. In that case, Dr. Triplett said, a value could be mathematically ascertained and used to adjust not only the price of New Computer compared to UNIVAC in order to get a measure of inflation, but also the change in the output measure in order to get the change in real output. This could be done with a Hedonic Function:2

P_i = a_0 + a_1 M_i + e_i        (1)

2 Hedonic price indices are a way to correct deflators for quality changes in the goods they are measured for, using product characteristics such as memory capacity or processor speed (in the case of computers), or speed and space (in the case of automobiles). See Zvi Griliches, ed., Price Indexes and Quality Change, Cambridge, MA: Harvard University Press, 1971; and Zvi Griliches and M. Ohta, “Automobile Prices and Quality: Did the Gasoline Price Increase Change Consumer Tastes in the U.S.?” Journal of Business and Economic Statistics 4(2):187–198, April 1986. Price changes are “corrected” for changes in product characteristics using a so-called hedonic function that is estimated econometrically. Hedonic price indices are now available for the U.S. computer industry (which is the leading ICT producer in the world), as well as for some other countries. See A. W. Wyckoff, “The Impact of Computer Prices on International Comparisons of Labor Productivity,” Economics of Innovation and New Technology 3(3-4):277–294, 1995.


TABLE 2 Example 2: UNIVAC and the “New Computer”

                                            UNIVAC (1953)    “New Computer” (1955)
Number produced                             3                10
Price, each                                 $400K            $600K
Current-price output                        $1.2 million     $6.0 million
Performance index                           1.0              1.8
Computer inflation (with estimated
  performance premium = 1 + 0.7(0.8))       1.00             0.96
Computer “constant price” output index      1.00             5.17

Equation (1) is a regression in which P is a vector of prices of computers, models are indexed by the letter i, M is the associated performance measure for each computer, and e_i is the regression error term. The value of the coefficient a_1 derived from the regression would be used to make the adjustments.

In the example of UNIVAC and New Computer, if a_1 is 0.7 and the increase in performance is 0.5, computer-price inflation comes out at 11 percent—not the 50 percent that would be obtained by simply comparing the machines’ prices. Thus, taking out of the inflation measure a performance premium estimated from a cross-section of computers leaves the true computer-price change. Similarly, the standard way that the Bureau of Economic Analysis (BEA) would calculate computer output would be to deflate the change in current-price output, from $1.2 million to $6.0 million, by the price index (that is: ($6.0 / $1.2) / 1.11 = 4.60, which has the interpretation that output increased 4.6 times). In the example, the increase in computer output is a little smaller than the ratio of $6.0 million to $1.2 million because computer inflation is taking place, which holds the constant-price output measure below the current-price output measure.
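A minimal Python sketch of this arithmetic may help make the steps concrete. The function below is illustrative only, not taken from Dr. Triplett’s paper; its computed output index comes to roughly 4.5 rather than the 4.60 shown in Table 1, a small gap presumably due to rounding in the original example.

```python
def quality_adjusted_indexes(p0, p1, q0, q1, m0, m1, a1):
    """Illustrative hedonic adjustment for a two-period, one-model-per-period example.

    p: unit price, q: units produced, m: performance index, a1: hedonic coefficient.
    """
    premium = 1 + a1 * (m1 - m0)           # estimated performance premium
    inflation = (p1 / p0) / premium        # quality-adjusted price index (first period = 1.0)
    value_ratio = (p1 * q1) / (p0 * q0)    # current-price output ratio
    real_output = value_ratio / inflation  # "constant price" output index
    return premium, inflation, real_output

# Example 1 (Table 1): UNIVAC versus the "New Computer"
premium, inflation, real_output = quality_adjusted_indexes(
    p0=400_000, p1=600_000, q0=3, q1=10, m0=1.0, m1=1.5, a1=0.7)
print(round(premium, 2), round(inflation, 2), round(real_output, 2))
# -> 1.35 1.11 4.5
```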

Dr. Triplett then demonstrated what happens when the example is altered by raising New Computer’s performance units to 1.8 from 1.5: The same regression yields a value of less than 1.0, indicating negative inflation in the computer price (see Table 2). In “this more realistic case,” he said, the computer price falls “because, with a bigger increment of performance, I take out a larger amount when I take out the performance premium.”

   



TABLE 3 Private Fixed Investment in Computers and Peripheral Equipment

                                      1995      1996      1997      1998      1999      2000      2001      2002
Billions of current dollars           64.6      70.9      79.6      84.2      90.4      93.3      74.2      74.4
Billions of chained 1996 dollars      49.2      70.9     102.9     147.7     207.4     246.4     239.9     284.1
Chained price index                 131.29    100.00     77.38     56.99     43.60     37.87     30.91     26.27
Quantity index                       69.40    100.00    145.22    208.39    292.64    347.77    338.61    400.92

SOURCE: Bureau of Economic Analysis, NIPA Tables 5.4, 5.5, and 7.6.

Again, using the same index to correct the output measures yields a change in constant-price output—which is an indicator of the rate of growth of output—that is larger than the change in current-price output. (In national accounts, he noted, constant-price output is called “real output” by economists. BEA publishes it in tables under the title “Chained-Type Quantity Index.”) Dr. Triplett stressed that the performance measure is of “extraordinary importance,” saying the example illustrates that the determination of two critical factors—the level of price inflation and whether real investment growth is outstripping nominal investment growth—depends on the measure of computer performance.
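Run with the higher performance index of this second example, the same illustrative sketch reproduces the direction of the Table 2 figures (again with a small rounding gap relative to the printed output index of 5.17):

```python
# Example 2 (Table 2): a performance index of 1.8 instead of 1.5
premium, inflation, real_output = quality_adjusted_indexes(
    p0=400_000, p1=600_000, q0=3, q1=10, m0=1.0, m1=1.8, a1=0.7)
print(round(premium, 2), round(inflation, 2), round(real_output, 2))
# -> 1.56 0.96 5.2  (price deflation, and faster growth of real output)
```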

Displaying BEA figures for private fixed investment in computers and peripheral equipment for 1995 through 2002 (see Table 3), Dr. Triplett pointed out that actual investment went up until 2000 but had come down since then as a result of a slump in purchases of high-tech equipment. BEA’s “chained price index” shows, however, that computer prices as measured in the national accounts fell consistently, and quite rapidly, over the entire period. The change in actual investment, or current-price shipments, divided by the change in the price index yields the deflated output measure for computers—“billions of chained 1996 dollars,” in BEA parlance—which he described as growing “much, much, much, much faster” than expenditures on computers.

Thus, although current expenditures on computer equipment grew at 7.6 percent per year over the period 1995–2000, the price index fell at a 22 percent annual rate over that interval, raising the deflated output measure “that’s telling you what’s going on in terms of real performance” by 38 percent per year, he said (see Table 4). Even during the post-2000 slump, as actual spending dropped at roughly 10 percent per year, the price index kept moving downward—with the result that the deflated investment in computers increased. To illustrate just how long this pattern of decrease in the price index has been under way, Dr. Triplett showed a graph of the computer-price index going back to 1953, the date of the commercial computer’s introduction in the United States (see Figure 4).
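As a cross-check, the average annual growth rates reported in Table 4 can be recovered from the Table 3 series with ordinary compound-growth arithmetic; the short sketch below is illustrative (the variable names are not BEA’s), with values transcribed from Table 3.

```python
# Reproducing the average annual growth rates in Table 4 from the Table 3 series.
current  = {1995: 64.6,   2000: 93.3,   2002: 74.4}    # billions of current dollars
price    = {1995: 131.29, 2000: 37.87,  2002: 26.27}   # chained price index (1996 = 100)
quantity = {1995: 69.4,   2000: 347.77, 2002: 400.92}  # quantity index (1996 = 100)

def cagr(series, start, end):
    """Compound annual growth rate between two years, in percent."""
    return ((series[end] / series[start]) ** (1 / (end - start)) - 1) * 100

for name, series in [("current dollars", current),
                     ("price index", price),
                     ("quantity index", quantity)]:
    print(name, round(cagr(series, 1995, 2000), 1), round(cagr(series, 2000, 2002), 1))
# -> current dollars 7.6 -10.7 / price index -22.0 -16.7 / quantity index 38.0 7.4
```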


TABLE 4 Fixed Private Investment in Computer and Peripheral Equipment

                                 Average annual growth rates (percent)
                                 1995–2000       2000–2002
Billions of current dollars            7.6           −10.7
Chained price index                  −22.0           −16.7
Quantity index                        38.0             7.4

The price index, because it is adjusted for performance, represents the price of computer power, he said, and so what the chart demonstrates is that the price of computer power today is 0.001 of 1 percent of what it was when the computer was introduced. Noting that a dotted line representing the price index for PCs has been falling more rapidly in recent years than the index for mainframes, he stated: “That’s why we’ve got so many PCs.” He then reminded the audience that all the foregoing numbers “depend crucially on having a measure of computer performance.”

Next, Dr. Triplett took up the question of which computer performance measures economists have actually used.

FIGURE 4 Price indexes for mainframes and PCs.


Early research, focusing on the mainframe, borrowed measures used by technologists studying the rate of technical progress in computers. Initially, measures of computer performance tended to be simple, “some sort of single instruction speed,” a popular example of which was multiplication time. When that began to seem too elementary, what is now called “clock speed” came into vogue among technologists. This measure was picked up, in turn, by economists. During a third stage, attention turned to the various tasks a computer can perform, and researchers asked themselves: “Why don’t we go out and actually take a sample of jobs, try to measure the speed with which jobs are done, and use that as a performance measure?” That course turned out to be far more complicated than it might seem at first blush because not only do computers do many different things, they have many different users. While opining that “trying to measure the performance of a computer by looking at jobs” is better than looking solely at clock speed, Dr. Triplett cautioned that it “inevitably involves an aggregation over the instructions in the jobs, because computers have different kinds of jobs, and then an aggregation over the users of the computers, because users have different mixes of instructions.” The early history of research into computer performance culminated with an innovative project carried out in the 1980s by IBM and BEA; that study’s measure was MIPS (millions of instructions per second), one of a number of weighted-instruction mixes current at the time. Bidding farewell to the mainframe period, Dr. Triplett referred the audience to a table summarizing the variables then used in hedonic functions for computers.3

He returned to the regression he had previously discussed along with a second, far more complex hedonic function:

P_i = a_0 + a_1 M_i + e_i        (1)

P_i = a_0 + a_1 M_{1i} + a_2 M_{2i} + … + a_k M_{ki} + e_i        (2)

He explained that while the simpler regression, having a single variable, might apply where there is a single characteristic of performance, computer performance is multidimensional and therefore demands a hedonic function with more variables. Economists who study computers, wanting to take many different measures of performance into account, would therefore estimate an equation that looks more like equation (2). In it, each characteristic of computer performance is represented by M_1 through M_k, giving coefficients a_1 through a_k covering the set of computer performance characteristics. The coefficients in this second, “more realistic” hedonic function are those the Bureau of Labor Statistics (BLS), which now produces the price indexes that go into the BEA national accounts, uses to adjust for changes in computer performance.
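As a rough illustration of how a multi-characteristic hedonic function of this kind is estimated, the sketch below fits equation (2) by ordinary least squares to a handful of hypothetical machines. The characteristics, values, and prices are invented for illustration; this is not the BLS specification.

```python
import numpy as np

# Hypothetical cross-section of machines.
# Characteristic columns: clock speed (GHz), memory (GB), hard-disk capacity (GB), sound-card dummy (0/1).
M = np.array([
    [1.0, 0.256,  40, 0],
    [1.5, 0.512,  60, 1],
    [2.0, 0.512,  80, 1],
    [2.4, 1.024, 120, 1],
    [2.8, 1.024, 160, 1],
    [3.0, 2.048, 200, 1],
])
prices = np.array([700.0, 850.0, 1000.0, 1250.0, 1400.0, 1700.0])

# Fit equation (2): P_i = a_0 + a_1 M_1i + ... + a_k M_ki + e_i
X = np.column_stack([np.ones(len(M)), M])        # prepend the constant term a_0
coeffs, *_ = np.linalg.lstsq(X, prices, rcond=None)
print("a_0:", round(coeffs[0], 1))
print("characteristic coefficients a_1..a_k:", np.round(coeffs[1:], 1))
# Each coefficient estimates the price premium per unit of its characteristic,
# holding the others constant; a dummy's coefficient is the premium for the
# feature simply being present.
```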

3 See Annex A, “Comparison of Variables in Hedonic Functions for Computers,” in Dr. Triplett’s paper, “Performance Measures for Computers,” in this volume.


Dr. Triplett then referred the audience to a table summarizing the variables used in computer hedonic functions found mainly in economists’ recent studies of personal computers.4 The data in the extreme left-hand column of the table come, however, not from a study of PCs but rather from the IBM study of mainframes used by BEA in 1985 to derive the first computer-price indexes for its national accounts. Of the five products covered in that study—the mainframe computer, disk drive, tape drive, printer, and display—the three that are relevant to PCs are found in the table: the processor, hard-disk drive, and monitor. Thus, he observed, “Economists who have done work on PCs have consciously or unconsciously taken over the variables used in the initial IBM study and applied them to PCs—which makes some sense.” He said the list of variables from the recent studies, among which he singled out Chwelos (2002) and the BLS study for praise, comprises “the most exhaustive specifications that have been used so far” in modeling computer performance.5 Nonetheless, there are “problems” with the variables, he cautioned, remarking that “nobody seems to pay much attention to the speed of the hard disk, even though that was an important performance variable in the IBM study.” He then referred to a second table enumerating not only additional hardware features but also dummy variables.6 Because few PCs today lack a sound card, for instance, a sound-card dummy might be entered into the regression as evidence that the researcher had asked whether it was present or not. “The best economists have done so far with many of the features that differentiate a modern PC from the assemblage of equipment that was in the computer center in 1985 is just to ask ‘Was it there?’” he said, “not, ‘What is its performance?’”

In conclusion, Dr. Triplett conceded one would be justified in observing that this aggregate of performance measures “doesn’t seem real exciting as a model of what a computer does” and in asking: “Why haven’t we gone further?” The answer, he said, is “data. Where do we get the data to improve the model of the computer?” And he posed a second question that is somewhat subsidiary to the question of data: “How do we figure out what performance measure we’re really looking for today?” One of the things economists would like to learn from technologists so that they can increase the sophistication of their model of computer operation—which is, in essence, what a hedonic function represents—is how to look at the performance of some components of modern computers. Once they understand better what they should be looking for, the second thing economists hope to get from technologists is the knowledge of where to find the data to improve their measurements.

4 See Annex B, “Variables in Computer Hedonic Functions, Selected Studies [Hardware Components Only],” in Dr. Triplett’s paper, “Performance Measures for Computers,” in this volume.

5 See Michael Holdway, “Quality-Adjusting Computer Prices in the Producer Price Index: An Overview,” Bureau of Labor Statistics, October 16, 2001.

6 See Annex B, “Computer Hedonic Functions, Other Hardware Features,” in Dr. Triplett’s paper, “Performance Measures for Computers,” in this volume.


Looking forward to the rest of the symposium, Dr. Triplett stated: “What I’m hoping to hear is: What are the measures of performance in these components? What are the measures of performance used in the industry? And where can data be obtained to put some of these performance measures into better models of the computer by economists?”

OVERVIEW OF THE IBM GLOBAL PRODUCT PLAN

David F. McQueeney

International Business Machines


Dr. McQueeney defined the challenge before the symposium as figuring out how to measure value all the way across information technology’s “quite complex food chain of value-added.” Information technology insiders and economists who study the industry have both focused on the transistor, which he put near the bottom of that food chain, for a variety of reasons: It was an obvious place to start; it was easy to deal with; and it demonstrated constant, rapid progress that was clearly beneficial. He warned, however, that if greater attention is not paid to the rest of the food chain from now on, there will be repercussions for the industry’s production of business value: “We have to be very careful about how we think about the different levels of this food chain, which goes all the way from quantum mechanics and material science at the very bottom to, at the top, a business process that gives you—if you are an IT company—a net economic value-add for your customer’s business.”7

Laying out the path for his talk, Dr. McQueeney signaled that he would react with a “yes, but” to the trend toward “faster, cheaper, better” in technological progress—an exploration of which, he commented, must lead off “any good technology outlook.” He would next raise the question of what happens if the IT industry gets “too carried away with deploying technology for technology’s sake” and suggest that, by failing to exercise caution, it has in some cases created problems in deploying and provisioning technology, as well as in providing IT services to its customers. Finally, he would address the creation of business value itself by hardware, software, and applications. While acknowledging this as the perennial “Holy Grail for customers,” he noted that customers increasingly want to push the responsibility for creation of business value onto their suppliers rather than remaining content to take that responsibility upon themselves.

To illustrate how “faster, cheaper, better” can collide with what he called the “good-enough phenomenon,” Dr. McQueeney offered several “thumbnail sketches,” the first concerning optical-data transmission bandwidth.

7 Of course, there are off-market applications other than those related to business that have value, e.g., military, medical, and scientific activities.


Several hundred colors of light can now be sent over optical fiber, with the result that there are “individual fibers with enough bandwidth to connect every person in North America to every person in Eastern and Western Europe and to allow all to have a phone conversation at the same time.” Moreover, during the Internet boom—when such bandwidth was expected to go into commercial use more quickly than it has done in reality—tremendous capacity in optical fiber was installed between various cities and within metropolitan areas of the United States. Dr. McQueeney said he therefore accepts as reasonable “at the raw fiber level” predictions about bandwidths becoming free such as those made by the technology pundit George Gilder. The problem, however, is that “the intelligence needed to light up those fiber-optic networks and make them actually do something useful—the servers, the routers, the switches—is in fact quite expensive, and we’re still struggling with a good investment model that will let us build out that control infrastructure to use the fiber capacity that we have.” That the quantity of raw glass deployed outstrips the industry’s understanding of how to use it in a responsible business fashion constitutes a “good-enough phenomenon” with respect to installed fiber-optic capacity.

Displays and home PCs have also crossed easily understood “good-enough” thresholds. Monitors built within the year previous to the symposium and on sale at the time, he said, feature a resolution in pixels per inch at about the limit of what the human eye can detect in a windowing environment like that of a PC at an 18-inch viewing screen. While there is still progress to be made in such things as medical imaging and preserving art treasures, which call for the world’s best digital imagers and displays, displays used for everyday desktop-PC applications have become so good that “further technological improvements aimed at more pixels per inch would not be detectable to the end user.” As for overall home-PC performance, under the constraints of current architecture, turning up the clock speed does not have a big impact on throughput because the rest of the system is not scaling with it. Similarly, disk capacity has become so large that many casual users never fill the hard drive in the two or three years they keep a computer. “Guys in our research lab can fill up a disk that comes with a standard PC in a busy afternoon,” Dr. McQueeney remarked, “and I’m sure that those of you in the room that do economic analysis could do the same thing.” For many home users, however, current PC performance is good enough.

So, although research and development do not stop, at a certain point their benefits begin to show up in lower prices and in a lower cost per unit of performance rather than in raw performance measures. “The raw capabilities of the technology have in some cases gotten to a point where either the economics of how you sell them and how you ascribe value to them is changing,” he explained, “or you are forced to look elsewhere in the system performance stack to get real improvements.”

Dr. McQueeney then displayed a chart comparing the growth in transistor switching speed projected in 1995 with growth achieved (see Figure 5). This chart, he noted, parallels one shown earlier by Dr. Jorgenson (Semiconductor Roadmap Acceleration) that was based on feature size rather than device performance but also clearly showed actual growth far exceeding industry expectations.


FIGURE 5 Growth in transistor switching speed.

SOURCE: ASCI Roadmap www.llnl.gov/asci, IBM. Brain ops/sec: Kurzweil 1999, The Age of Spiritual Machines, Moravec 1998, www.transhumanist.com/volume1/moravec.htm.

Observing that researchers have consistently accomplished more than they themselves had predicted, Dr. McQueeney said there is debate inside IBM over whether this discrepancy reflects “sandbagging” by technologists who may fear being fired for missing their own projections. But he also acknowledged the intrusion of a factor he called “technical, or psychological, or both”: Anticipating a future encounter with an engineering problem for which no solution exists can incline technologists to look ahead warily. “‘I’m not sure that I won’t know how to solve it, but I don’t know the solution today, so we’re going to back off a little bit and say we’re not sure if we can keep [advancing] on a straight line,’” is how he characterized their thinking.

He then talked about what is known in the industry as the “brick wall”: the point at which semiconductor device developers will “run out of the ability to do clever engineering and run into physics problems.” Although the expectation in 2000, as reflected on the chart, was that a brick wall would obstruct progress as of 2003, Dr. McQueeney said that improvement of the semiconductor products available commercially could be anticipated to continue at a rate conforming to Moore’s Law for at least 10 more years.


But if no brick wall would be standing in the way of shipments for another decade, the barrier it represents was already a real concern in the laboratory, for semiconductor research had reached the point at which the building block for the transistor was the individual atom. To illustrate, he displayed a year-old image of one of the smallest field-effect transistors made to date, on which three layers were visible: at the top, a poly-silicon electrode used to turn the transistor on and off; next, an intermediate layer of silicon dioxide insulator whose thickness, no more than 10 atoms across, is critical to its performance; and at the bottom, the silicon crystalline lattice of the wafer, its individual atoms showing up distinctly (see Figure 6). “What happens,” he asked the audience, when the force behind Moore’s Law—the ability to make the transistor ever smaller—drives progress to the point that engineers “start tripping over atoms?”

As this era arrives and the era of transistor miniaturization’s going hand in hand with improvements in lithography ends, new alternatives for realizing different kinds of nanostructures are making their way to the fore. “Instead of building transistors in the bulk out of atoms and depending on their properties as bulk materials,” Dr. McQueeney explained, “we have to start thinking about using atoms themselves to try to do some of the computations or to build devices on an atomic scale.” Already, information technology and the life sciences have begun to converge: “The only manufacturing technology we know of in the entire scientific field that can manufacture devices in high volume where the atoms are assembled in precise locations on the atomic scale,” he noted, “is the replication-of-DNA process that’s used in biology.”

FIGURE 6 Image of smallest field-effect transistors made to date.


Even while admitting that there is “nothing like that in the world of the technology that we use today,” he suggested that the audience take comfort in achievements like the one recorded in the image displayed before it.

But the transistor, even if its performance has grown at an impressive compound annual rate of between 16 and 20 percent, is merely one element of the IT picture. “Is there a Moore’s Law at the system level?” was the way Dr. McQueeney framed the question before the symposium. To begin answering it, he cited 50 percent as the number that’s “kicked around” for the compound annual growth rate (CAGR) of single microprocessors, listing device design, chip design, packaging, the microprocessor core, and the core of the compiler as fertile ground for innovation. Moreover, since computing advancement less typically occurs in single microprocessors than in parallel arrays of them—whether small arrays in the case of PCs or large arrays in the case of Department of Energy supercomputers—there is room for innovation in shared memory system performance, development tools, middleware, and applications as well. This brings the CAGR to 80 percent.

Dr. McQueeney then displayed a graph representing the compound annual growth rate in the performance of supercomputers as measured in teraflops, or trillions of floating-point operations per second (see Figure 7). A series of five machines ordered for nuclear-weapons modeling under the U.S. Department of Energy’s Accelerated Strategic Computing Initiative (ASCI) traces a CAGR of 84 percent. Several other machines, which he considers “not fully general purpose” because their hardware is designed for a unique problem, describe a somewhat steeper curve: Deep Blue, the chess computer that beat Garry Kasparov in 1997; a Japanese molecular dynamics machine, the Riken MDM; and Blue Gene, another special-purpose machine IBM expects to build by 2005. Rounding out the chart was a comparison of animal brain speed with computer speed, which showed the 11-teraflop ASCI machine built for Lawrence Livermore National Laboratory to be on the performance level of a mouse and the fastest desktop computer on that of a lizard. The human, meanwhile, was positioned a factor of 10 higher than “the fastest machine that we can envision ever possibly being able to build using today’s technology projected forward,” Dr. McQueeney said. Using these comparisons to characterize technological advance, he posited that every 20 years supercomputers grow faster by the ratio of human intelligence to lizard intelligence, or a factor of about 10^4. “The kind of intelligence that we can apply to business problems,” he summed up, “is growing at an alarming rate.”
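The arithmetic connecting a compound annual growth rate to the cumulative gain it implies is simple and worth keeping at hand when comparing such figures; the lines below are illustrative calculations, not data from the talk.

```python
# Cumulative factor implied by a compound annual growth rate (CAGR) over n years,
# and the CAGR implied by a target cumulative factor.
def factor(cagr, years):
    return (1 + cagr) ** years

def implied_cagr(target_factor, years):
    return target_factor ** (1 / years) - 1

print(round(factor(0.50, 10)))          # ~58x per decade at a 50 percent CAGR
print(round(factor(0.84, 10)))          # ~445x per decade at an 84 percent CAGR
print(round(implied_cagr(1e4, 20), 3))  # ~0.585: a 10,000x gain in 20 years is roughly a 58.5 percent CAGR
```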

Even as such performance growth has made it possible to “do all kinds of interesting things in optimizing business processes in real time,” it has carried along with it what Dr. McQueeney labeled “a complexity problem.” He recalled “a big company that processes credit card transactions” that, driven by a need to eliminate downtime entirely, had pieced together a computer system so complicated that only three of the company’s employees worldwide understood it well enough to manage it when it showed signs of breaking down.


FIGURE 7 Compound annual growth rate in the performance of supercomputers.

The firm came to IBM because the last time its system’s performance had begun to degrade, two of these three, working on it together, were unable to pinpoint a cause. Although the in-house experts kept the system from going down by “making small changes to its configuration parameters until it started working correctly again,” they could not explain afterwards how they had solved the problem. The senior VP in charge of the operation then decided to bring in a firm that could “assemble a lot more expertise over the problem” because, as he put it, “we’ve built a system so complex that we’re having trouble assuring its reliable operation and maintenance.”

A second major problem that has arisen with performance growth is what Dr. McQueeney called “efficiency of deployment.” Looking at use of computing capacity averaged over a 24-hour day, he offered the following “typical” figures: for a PC, well under 5 percent and perhaps as low as 1 percent; for a UNIX workstation acting as an applications server, between 50 percent at the bottom and 80 percent at the very top; and for a mainframe, somewhere in the 90 percent range.8


The reigning pattern is thus one of good utilization of central resources, poor utilization of remote resources, and “something intermediate” in between. A factor in these numbers is the tendency to overprovision that is inherent in the model on which delivery of information across computer networks is based. For example, to be sure it had Web hosting capacity sufficient to handle the increased interest it expected in reaction to a Super Bowl advertising spot, a company might have to provision the front end of its site to accommodate a spike reaching 50 or even 100 times average demand.

In 1998 IBM began to investigate developing a more efficient delivery model for the middle to low end of IT resources based on aggregating different customers’ demand for the front end of Web service, simple computation, and other services that can be handled by generic applications. Checking data its customers furnished against Bell Labs’ late-nineteenth-century analysis of the statistical fluctuation of demand on telephone switches, the company found the patterns observed in the two instances to follow essentially the same mathematics. By 2001 IBM had arrived at a vision of a new model in which many of the functions that had previously been part of an application written by an end customer would become capabilities that the public infrastructure or a set of middleware provided. This model would allow IBM to “build more value” into applications and code them in a more organized way, while at the same time freeing customers’ applications writers from focusing on the internal details of connections between machines so that they could concentrate more on the value of the business logic to the enterprise. In January 2002 IBM launched a service business supplying computing capability on demand, much in the way public utilities supply power, water, and telecommunications.
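As a stylized illustration of why pooling many customers’ bursty demand improves utilization, the short simulation below compares the peak-to-average ratio of one demand stream with that of 100 pooled streams. The demand model and parameters are invented for illustration; they are not the telephone-switch or IBM analysis described above.

```python
import numpy as np

rng = np.random.default_rng(0)

def one_customer(hours=24 * 365):
    """Hourly demand for one customer: a low base load plus occasional large spikes."""
    base = rng.poisson(10, size=hours)
    spikes = rng.binomial(1, 0.01, size=hours) * rng.poisson(500, size=hours)
    return base + spikes

single = one_customer()
pooled = sum(one_customer() for _ in range(100))  # 100 customers sharing one infrastructure

# Capacity must be provisioned for the peak, while revenue tracks the average.
print("single customer peak/average:", round(single.max() / single.mean(), 1))
print("pooled (100 customers) peak/average:", round(pooled.max() / pooled.mean(), 1))
# Independent spikes rarely coincide, so the pooled system needs far less
# overprovisioning per customer than 100 separately provisioned systems.
```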

Meanwhile, IBM had been on the trail of what turned out to be the new model’s “missing piece”—Web Services—through an investigation of how scientists and engineers deal with complexity. “What they typically do,” Dr. McQueeney said, “is try to step back and understand how to encapsulate the complexity of lower levels and only expose interfaces above that.” IBM also surveyed the history of the PC platform from the days when it comprised only hardware, DOS, a file system, and an application through the advent of the windowing system for presentation and on to the addition of middleware to handle databases and online transactions.9 The applications area of the mainframe, he noted, grew in parallel, with middleware put in to handle the layer between the applications and the operating system. “It was all about hiding the complexity at layer N underneath layer N–1 and just exposing a simpler interface,” he explained.

8 This does not, of course, imply an anomaly. Use of personal computers 5 percent of the time may be comparable to other common household appliances.

9 Middleware is software that connects two otherwise separate applications, or separate products that serve as the glue between two applications. It is, therefore, distinct from import and export features that may be built into one of the applications. See the CREN glossary at www.cren.net/crenca/glossary/cpglossary.html.


With the development of Web Services (according to Dr. McQueeney, “a way to find, describe, and execute business functions on the Web”), new questions arose:

  • “Doesn’t Web Services provide yet another layer near the top of this stack that lets us further abstract the applications and again have another simplification of the coding of business processes?

  • “Is there an analogy [to] the traditional layering of the computer from hardware to operating system to middleware to the applications interface, which we could call a virtual computer?

  • “Can we in fact treat the Web as a virtual computer? Does it have computing power? Does it have storage? Does it have I/O? Does it have applications with Web Services?”

The existence of Web Services, he noted, “completes the protocol stack for the Internet and for the Web and lets us think of that, at least from an engineering-design point of view, as if it were a virtual computing platform.”

This allowed IBM to rationalize “Grid Computing,” which Dr. McQueeney described as “all about tying together back-end, high-performance computing resources in a way that then aggregates that image up to a scientific and technical user—and, in the future, a business user.” And it made room for another innovation, called “autonomic computing” by analogy to autonomic biological systems, which devotes 20 percent of computing power to self-configuration, self-diagnostics, and self-maintenance. The thinking behind this, as summarized by Dr. McQueeney, was: “Let’s not route all the raw performance [accrued thanks to Moore’s Law] to the end users, because we have already overwhelmed them; let’s turn some of that back inside to make the system easier to manage.”

In closing, Dr. McQueeney turned to a change in the business environment for information technology that IBM had observed in the previous year or two: Buyers had begun demanding that the supplier assume a larger share of the responsibility for the return on investment from purchases of IT goods and services. Looking back to the early days of computing, he said customers took it upon themselves to integrate components they bought, to build applications for them, and to add the business value. Then, with increasing technological integration and the emergence of the layered structure he had outlined, fewer orders came for equipment alone: “‘I don’t want [even] really good components,’” he recalled customers saying. “‘I want to buy integrated, end-to-end applications’—meaning, ‘I want you to integrate all the IT so that it works end to end, but we’ll still be responsible for the bridging of that to business value.’”

Lately, he said, customers have taken yet another step, so that their refrain might be characterized thus: “‘I really don’t want to get involved with the connection between even the end-to-end IT and the business process. I want to design the business process—and I’m still responsible for my business—but I want more of your risk and your value, Mr. IT Company, to be connected to whether that end-to-end integrated system produces the net result. I don’t care if it can do a transaction, I care if it improves my market share. I don’t care if it does a fancy optimization of my supply chain; I care if my inventory turnovers go from five to six. And I’d like to measure you that way.’” IBM’s “On-Demand Computing” developed out of the company’s efforts not only to respond to this shift in the needs and desires of its customers but also to enable its customers to adapt to shifts in their own customers’ requirements without rebuilding their entire systems.

Dr. McQueeney listed and commented on some key technical elements of this offering:

  • it provides customers, who IBM believes will insist on open systems, choices in components;

  • it delivers higher levels of integration, relieving customers of handling that integration themselves;

  • it is virtualized, so that a customer can blur boundaries between companies—for instance, running a business process out to a key supplier and back;

  • it is autonomic, because system complexity has driven up total cost of ownership, as so much expertise is needed to handle management and maintenance.

As an example of how it works, he ran through a schematic representation of the management of a company’s 401k plan. Web Services would be invoked when, for instance, an employee wants to check on the status of his or her investment elections or to make changes in the mix, precipitating a work-flow process that goes across company boundaries. There is a second tier of connections, to suppliers of investment vehicles like stocks and mutual funds, at the level of an external administrator, where services are aggregated. “Those relationships are specified by a quality-of-service metric stated in business terms, not IT terms,” Dr. McQueeney stressed. “It’s not ‘so many processors and so many gigabytes;’ it’s ‘how many tenths of a second of response time is acceptable on a transaction like this?’”

What has been accomplished is, in fact, the virtualization of interactions at a business-process level: Pieces of business process have been picked and chosen, then swapped in and out so that a cross-enterprise business process has been assembled “on the fly” using Web Services. “And if we want to then change that business process,” Dr. McQueeney stated, “we can respond very rapidly, in real time on an on-demand basis, rather than having to rebuild the system.” Additional advantages are the concealment of complexity from the user—who is mainly interested in the financial performance and security of the plan, not in how the transactions are executed by a supplier—and strong security that not only protects the system against hackers but protects individuals’ privacy by making sure that participants get access only to the data that they require to do their job.

In summation, Dr. McQueeney declared: “Yes, we have a Moore’s Law continuing at the device level. Yes, we have a stronger Moore’s Law at the system level. And, yes, that produces a lot of tremendous benefits and will continue to do so.” He emphasized, however, that how efficiently computer power is delivered, which delivery model or models are used to deliver it, and how it is exploited to create business value merit the industry’s careful consideration. As to the choice of metrics, he advocated looking at “how many return-on-investment cases from individual businesses justify big IT investments?” and “how well did they pay off?” He suggested that academia or government might have an interesting role to play here as a trusted third party that would aggregate and analyze the pertinent business data, which tends to be more sensitive than technology data and which companies would be more reluctant to share with one another, then report back in such a way that no company’s competitive position would be compromised.

DISCUSSION

Moderator:

Steven Landefeld

Bureau of Economic Analysis


Dr. Landefeld began the discussion by mentioning that he was struck, upon looking at one of Dr. Triplett’s tables of features used in computer hedonic functions, that it dealt largely with individual devices and their characteristics to the exclusion of “big picture” qualities such as integration, reliability, and downtime. He raised the question of how researchers might move beyond the characteristics in existing models to measure systemic performance characteristics. The simple addition of component parts may understate or overstate worth and is an inadequate method of valuation.

William Raduchel, a member of the STEP Board, noted that hardware and software on their own, according to “the old rule of thumb,” seldom make up even 5 percent of the cost of building a large system such as the one Dr. McQueeney referred to in the example of the credit-card processing company. For this reason, he pointed out, indexes used in current metrics account for a “tiny, tiny portion of the total cost” at the level of the actual business process. Moreover, the remainder of the cost goes into the books as general and administrative [G&A] expense, so that “95 percent of the cost of a system that might totally change the business process for a company is [considered] G&A overhead and doesn’t show up as investment.” Dr. Raduchel asked Dr. McQueeney whether IBM keeps records of project cost breakdown that would make possible a study of the true economics of a collection of large systems-integration projects.


Dr. McQueeney, while expressing reluctance to enter into a discussion of accounting methods, acknowledged that raw data are available that could be assessed as Dr. Raduchel was suggesting by a “suitably insightful person.” But he stated that, whatever technical categories the outlays might be placed into, those making investment decisions “do look at the big picture.” In addition, he said, such data reflect “the competitive capabilities—in fact, usually the core competitive capabilities”—of the customer, and their sensitivity would prevent IBM, and most likely discourage the customer as well, from sharing them. It was for this reason that he had proposed that the academic or government community might have a role to play in such research.

Dr. Raduchel, returning to his earlier point that hardware and software go into national accounts as 5 percent of the value of the large systems in which they are incorporated, asked whether it is then accurate to consider information technology to be only 5 percent of the overall economy in line with an earlier statement by Dr. Jorgenson. “If you took the full value of all the projects—that is, multiplied it by 20—then [IT is] no longer a small sliver [of the economy],” he said, adding that measuring their impact in aggregate could be an interesting factor in forecasting productivity. Noting that such investment had “dropped dramatically” over the past two years, once the investment crest powered by Y2K had passed, he expressed the concern that the commonly accepted five-year productivity prognosis might be overly optimistic.

Dr. McQueeney said he has observed that customers’ interest in improving their business processes has not waned but that they have been taking a different approach designed to avoid net cash outflow. “They will frequently come to us and say, ‘I desperately need this business-transformation project, and what I’d like to do is to package that along with an efficiency-of-the-infrastructure project that will generate some operational savings that I can then reinvest,’” he explained.

Dr. Raduchel, returning to the subject of complexity, noted that America Online, where he spent three years as Chief Technology Officer, employed over 20,000 servers located in “four rooms about 10 miles apart” and linked by over 100,000 physical connections. He commented: “Nobody truly knows what every one of those connections does, I assure you.”

Dr. McQueeney responded by relating a recent experience he had had with a large agency in Washington whose desire to use “really cheap unilevel hardware” had led it to add another appliance server for each new function—“to the point where [its] mid-range system infrastructure has 4,700 servers.” In his opinion, consolidation through the use of common images would allow the same functions to be handled by between 20 and 50 servers. “Think of the savings in managing the complexity of that,” he suggested.

Dr. Raduchel in turn brought up the case of Google, which builds all its own hardware from Taiwanese components. That such a course “would turn out to be the optimal solution for somebody” indicated, he said, that “the world is changing in ways that no one would have predicted.”


Mark Bregman of Veritas Software Corporation expressed the concern that the metrics used by the industry may be on the way to irrelevance because of changes in customer behavior. Just as the amount of copper wire sold to make generator coils is probably obsolete as a measure of the electrical power industry, he argued, “looking at microprocessors and disk drives as a way of measuring IT value really does cause an increasing amount of the cost to show up as G&A or overhead. When someone goes to an IBM and pays one fee for the whole service utility, they really are capturing all that in an investment in IT business value; when they buy chips and boards and assemble them into boxes and you only measure the cost of the chips and boards, all that other investment looks like overhead.” Dr. Bregman singled out the appropriate placement of an aggregation point, one that itself moves very rapidly, as one of the main challenges in looking at the information technology industry over a period of 30–50 years. “It’s not just a matter of looking at the whole stack,” he stated.

Dr. Landefeld noted that statistical agencies like BEA struggle with this very problem. “We are trying to measure the value of investments in in-house software,” he explained, “but we can’t value it in terms of the value of the output—the cost of the inputs is the best we can do.”

Dr. Bregman pointed out that changing such definitions makes it hard to compare over time—which, he acknowledged, is the statistical agencies’ “whole game.”

Victor McCrary of the National Institute of Standards and Technology asked Dr. McQueeney what metrics the industry uses to evaluate its scientists. Noting that researchers who work at the pre-development stage have traditionally been judged by their publications, presentations, and patents, he said his IT lab is seeking other ways to evaluate both short- and long-term research. He commented that notions of “economic value added” and “business value” have “worked back into the R&D community.”

Dr. McQueeney corroborated the importance of this issue, stating that IBM has worked “incredibly hard” on it for three decades, during which the marketplace has increasingly provided “inputs to the core research portfolio and core research deliverables.” But while IBM’s effort to align research with development and product is not new, over the previous five years “the influence of the customer has been reaching further and further back into our product plans, into our deep development, and, ultimately, into our research,” he said. Within the previous six months the company had made known that key researchers would take part in consulting engagements in critical areas of research, particularly those concerning optimization and mathematics, “partly to deliver value out to the customers from our research lab, partly to bring a worldview of the bleeding edge of the customer environment back in to influence the portfolio.” IBM scientists, he said, now “understand that they’re expected to have world-class credentials and world-class standing within their technical communities but that the end goal is impact on the business and, therefore, on our customers.”

