Deconstructing the Computer: Report of a Symposium (2005)

Chapter: II RESEARCH PAPER: Performance Measures for Computers--Jack E. Triplett

Suggested Citation:"II RESEARCH PAPER: Performance Measures for Computers--Jack E. Triplett." National Research Council. 2005. Deconstructing the Computer: Report of a Symposium. Washington, DC: The National Academies Press. doi: 10.17226/11457.

II
RESEARCH PAPER



Performance Measures for Computers

Jack E. Triplett

The Brookings Institution

I. INTRODUCTION

The purpose of the “Deconstructing the Computer” workshop is to gain a better understanding of computer performance, especially the contributions of computer components to overall computer performance. Two groups of professionals are interested in measuring the performance of computers, peripherals, and components. This paper provides a bridge between their interests.

Section II explains, primarily to computer professionals, why economists want to measure computer performance and what economists do with performance measures. Subsequent sections provide background on economists’ work on measuring computers and components. As this workshop is part of the STEP Board’s “New Economy” project, one of its objectives is obtaining better performance measures for economic uses.

A second audience consists of economists. It is clearly true, as Nordhaus (2002) remarked, that computer performance measures used by economists in recent years have, if anything, gone backward compared with the measures they used 15 or so years ago. We need to ask “why?” We also need to ask: “How much does it matter?”

I review in section III the performance measures used by economists in the earlier computer literature, which covers primarily the mainframe years. Sections IV and V review performance measures used by economists and by statistical agencies in more recent years, where studies have turned predominantly to personal computers (PCs).

II. WHAT ECONOMISTS DO WITH COMPUTER PERFORMANCE MEASURES

I begin by addressing technologists. Why do economists want to measure computer performance? And what do they do with performance measures? Technologists need to understand how economists use computer performance measures in order to converse with economists on this topic. The questions do not imply that the performance measures wanted by economists are the only performance measures that matter, but fortunately it turns out that what economists want is not that different from what technologists have developed. Indeed, historically, technologists and economists have proceeded in similar directions in measuring the performance of computers. But that gets ahead of the story.

Suppose, to create a simple illustrative example, one computer exists. Call it UNIVAC. Suppose three UNIVAC computers are made in 1952, and they cost $400K each.1 We are supposing that UNIVAC was the only computer produced in the economy, so total U.S. output of computers in 1952 was $1.2 million.

Now suppose a new computer is developed in 1955 (I call it “new computer”), and that it has higher performance than UNIVAC. Suppose “new computer” sells for $600,000 in 1955, and suppose further that it is the only computer available in 1955, the UNIVAC having disappeared. Ten computers of this new type are produced in 1955, so the economy’s output of computers is $6 million in 1955, a fivefold expansion since 1952 in what economists call “current-price” output. This is clear enough, but other aspects of the 1952–1955 comparison are less clear (see Table 1).

First, is there inflation in the computer market? “New computer” costs 50 percent more than UNIVAC, but the new computer also has higher performance. Part of its higher price is just a performance premium. Economists do not want to show an increase in computer performance as inflation. They want to measure computer inflation so that it is adjusted for changes in the performance of computers; in other words, computer inflation should be measured net of the performance premium. The example suggests that computer inflation was less than the 50 percent increase in selling price. How much less? To determine that, economists need a computer performance measure (or more precisely, the performance premium).

What about computer output? “New computer” has higher performance than UNIVAC, so each “new computer” is equivalent to more than one UNIVAC.

1. These numbers correspond to UNIVAC production in 1952. See Flamm (1988), Table 3-1 for the price and page 51 for the quantity.


TABLE 1 UNIVAC and “New Computer,” Hypothetical Price and Output Calculations

                                           UNIVAC         “New Computer”    “New Computer”
                                           (1952)         (1955, Case One)  (1955, Case Two)
Number produced                            3              10                10
Price, each                                $400 thousand  $600 thousand     $600 thousand
Current price output                       $1.2 million   $6.0 million      $6.0 million
Performance index (M)                      1.0            1.5               1.8
Computer inflation (with estimated
  performance premium = 1 + 0.7(ΔM))       1.00           1.11              0.96
Computer “constant price” output index     1.00           4.60              5.17

Computer output must have expanded by a factor greater than the threefold increase in units produced. How much greater? To answer that, economists also need a measure of computer performance.

Economists also want to calculate the productivity of making computers, just as they calculate productivity in other industries. One common form of productivity is labor productivity, defined as output per worker hour. Again, if “new computer” has higher performance than UNIVAC, economists want to calculate output per labor hour in producing computers with a “quality adjustment” that incorporates the improved performance of the new computer. For estimating productivity change, “new computer’s” higher performance must be factored into the output measure. Similar statements apply to other economic measures, particularly to computer investment and capital stock.

Thus, for measuring inflation, output growth, productivity growth, the volume of investment and capital stock, and for other economic measurements, economists need a measure of computer performance. It is well known that a bottom-end desktop computer today greatly outperforms anything available at the dawn of the commercial computer age, which was 50 years ago. Counting the number of computers produced will never tell us much about trends in computer output. The great expansion of computer output in the last 50 years is an expansion not only in numbers of computers but also in what might be thought of as “output per computer produced,” that is, performance per machine.

We need now to discuss the properties that economists want in their measures of computer performance. To carry this forward, suppose now that we all agree on a measure of computer performance. It should not be too surprising that, as I discuss in the following section, achieving measures of computer performance is not at all a straightforward task. But set that aside, for the moment.


Suppose we have an agreed-on measure of computer performance that covers the UNIVAC and the 1955 “new computer.” Suppose that we standardize our performance measure so that the UNIVAC has 1.0 performance units, and the new computer has 1.5 performance units.2

Unfortunately, even when computer performance is a scalar measure, we cannot simply divide the value of UNIVAC or “new computer” production by the performance measure in order to compare 1952 and 1955 computer output. Economists need the value of the performance indicator. There are several reasons. An old computer relationship called Grosch’s Law indicates that the cost of a computer center does not rise linearly with its computing power. Similar arguments can be made on the demand side: The incremental value of improved performance to the user does not necessarily rise proportionately with an increase in performance. Thus if “new computer” has 1.5 times the performance of UNIVAC, we need some way to value this 1.5 performance improvement ratio. We must know the performance premium, a value measure, not just the increment in performance.

The valuation problem is truly daunting. Likely, UNIVAC and the replacement computer do not appear in the market at the same time. If they do, “new computer” should sell for more, and it is natural to take the ratio of the two machines’ prices as measuring the value of their relative performance. All kinds of problems exist with that, which I do not mean to minimize. For example, the high-end user might be willing to pay more than the actual price premium for “new computer” to get a high-end machine, but the low-end user might not be willing to pay the price difference; if so, the price differential only reflects the value of the performance difference to the user who is on the margin between buying the one or the other. But these are essentially aggregation problems (over users), which I set aside because they arise throughout economic statistics of this kind.

A more promising situation exists empirically if there are a large number of computer models, and we have data on their prices and their performance. One can then run a regression, such as:

Pi = a0 + a1 Mi + ei        (1)

In equation (1), P is a vector of prices of computers, where models are indexed by the letter i, M is the associated performance measure for each computer, and ei is the regression error term. Using equation (1), we estimate a1 and use a1 to put a value on the performance difference among machines: If UNIVAC has M = 1.0 and “new computer” has M = 1.5, then the quantity [a1(0.5)] gives a “quality adjustment” that can be used to value the difference between the two machines.
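To make the estimation concrete, here is a minimal sketch of fitting equation (1) by ordinary least squares; the performance indexes and prices below are invented for illustration, not data from the paper.

```python
import numpy as np

# Hypothetical cross-section of computer models: performance index M and
# price (in $ thousands). Invented values, roughly P = 120 + 280 * M.
M = np.array([1.0, 1.2, 1.5, 1.8, 2.0, 2.5])
P = np.array([400., 455., 540., 625., 680., 820.])

# Design matrix for the regression P_i = a0 + a1 * M_i + e_i
X = np.column_stack([np.ones_like(M), M])
(a0, a1), *_ = np.linalg.lstsq(X, P, rcond=None)

# a1 values a unit of performance; a1 * (1.5 - 1.0) is the "quality
# adjustment" for a machine with M = 1.5 relative to one with M = 1.0.
quality_adjustment = a1 * 0.5
print(round(a1, 1), round(quality_adjustment, 1))
```

With real data the regression would, of course, include many models and a richer specification, but the estimated a1 plays exactly the valuation role described in the text.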

2. And we suppose, contrary to what is true, that computer performance can be represented as a simple scalar.


Suppose that we estimate a1 to be 0.7. Computer inflation between 1952 and 1955 (“quality adjusted” for “new computer’s” performance premium) is then: $600K / {$400K (1 + 0.7(0.5))} = 1.11, or 11 percent inflation. This number is clearly less than the 1.50 (equal to 50 percent inflation) that the unadjusted data would show. If “new computer” has M = 1.8, then computer prices are falling: $600K / {$400K (1 + 0.7(0.8))} = 0.96, or a 4 percent price decline.
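The quality-adjustment arithmetic can be checked in a few lines; the prices, performance indexes, and the coefficient a1 = 0.7 are the hypothetical values from the example above.

```python
# Hypothetical values from the text: UNIVAC at $400K with M = 1.0,
# "new computer" at $600K, estimated hedonic coefficient a1 = 0.7.
a1 = 0.7

def adjusted_price_relative(p_new, p_old, m_new, m_old, a1):
    # Divide the raw price ratio by the performance premium 1 + a1 * (dM).
    return (p_new / p_old) / (1 + a1 * (m_new - m_old))

case_one = adjusted_price_relative(600, 400, 1.5, 1.0, a1)  # M rises to 1.5
case_two = adjusted_price_relative(600, 400, 1.8, 1.0, a1)  # M rises to 1.8
print(round(case_one, 2), round(case_two, 2))  # 1.11 (inflation), 0.96 (decline)
```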

Turning to computer output, the usual method for measuring output changes (in the national accounts, for example) is to “deflate” expenditures on a product by its price index (information on the U.S. national accounts is in Bureau of Economic Analysis, 2001). To form a deflated measure of computer output, we start from the change in “current price” output, which in our example was ($6.0 mil − $1.2 mil) / $1.2 mil, equal to a 400 percent increase. Deflating that by the price index of 1.11 gives for the “constant price” output change a 360 percent increase between 1952 and 1955. Deflated output grows less than current price output because in this example (M = 1.5) computer prices, performance adjusted, were rising.

When “new computer” has a larger performance differential over UNIVAC (in the second example, M = 1.8), the price index declines, to 0.96, or a 4 percent decline. Using this declining price index as a deflator results in a “constant price” output change that is larger than the “current price” change (400 × (1/.96) = 417). In national accounts, this “constant price” output measure is sometimes (rather inappropriately) known as “real output.”3

This simple example illustrates several principles that govern estimation of computer output and investment in the U.S. national accounts. The most important one is that it shows how strongly measures of computer performance influence economic measurement of computer price change, and through the deflation procedure, how strongly computer performance affects measures of computer output, investment and productivity.

Equation (1) is a relation that is known in economics as a “hedonic function,” although equation (1) is a very simple hedonic function. The “quality adjustment” outlined in the preceding paragraph is, in essence, the method applied by the Bureau of Labor Statistics (BLS) in estimating price indexes for computers, where quality adjustments for enhanced computer performance are derived from a hedonic function. This example is far too simple, however.

In general, computer performance is not a scalar; it is multidimensional (the implications of this are explored in the subsequent section). Thus, computer hedonic functions look, generally, like equation (2):

ln Pi = a0 + Σk ak Xik + ei        (2)

3. BEA also uses the somewhat cryptic term “chained dollars” to represent the same thing, and reports percentage changes in the form of index numbers, under the title “chained-type quantity index.”


Each of the k variables in equation (2) is a “characteristic” of computer performance. The current BLS hedonic function for personal computers has more than a dozen characteristics. Each of the coefficients, ak in equation (2), is interpreted as the value of the corresponding computer performance characteristic. Because I have written equation (2) in a logarithmic form (the hedonic function often turns out to be logarithmic, but not always), these coefficients are not prices denominated in the usual dollars or euros, but dollar and euro prices can be extracted from the coefficients, if desired.
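As a brief illustration of extracting a dollar value from a logarithmic coefficient: in a log hedonic function, a coefficient ak is a semi-elasticity, so the implied dollar value of one extra unit of characteristic k at price P is P(e^ak − 1). The coefficient and reference price below are invented for illustration, not BLS estimates.

```python
import math

# Hypothetical log-hedonic coefficient on one characteristic, and the
# price of a reference machine (dollars). Invented values.
a_k = 0.05
price = 1200.0

# Implied dollar value of one additional unit of the characteristic,
# evaluated at the reference price: P * (exp(a_k) - 1).
dollar_value = price * (math.exp(a_k) - 1.0)
print(round(dollar_value, 2))  # ≈ 61.53
```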

Hedonic price indexes have become the standard economic tool for measuring price change in computers. In principle, they measure the price of computing power.

Getting from the price indexes to the output and investment numbers is relatively straightforward and follows the example already presented. Table 2 shows current dollar changes for computer investment in the national accounts, the computer deflator, and the resulting deflated investment numbers from the national accounts, for the years 1995–2002. In 1995 computer and peripheral “current price” investment in the United States equaled $64.6 billion. In 2000, the value of computer equipment investment equaled $93.3 billion. Thus, in current prices computer equipment investment increased by 44 percent.

The computer equipment price index declined by 71 percent over the same 1995–2000 interval (from 131 to 38, using 1996 as the base). The change in current price shipments divided by the change in the price index gives the “deflated” (also called “constant price” or “real”) value of the change in computer equipment investment over that interval: As Table 2 shows, this increased four-fold (the quantity index goes from 69 in 1995 to 348 in 2000). The source of the great increase in computer investment in the national accounts numbers is not only the increase in spending on computer equipment, but also the decline in performance-corrected prices for this equipment.
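The deflation procedure behind Table 2 can be verified directly from its current-dollar and price-index rows; the small discrepancies from the published quantity index reflect BEA's chaining.

```python
# Rows of Table 2 (Bureau of Economic Analysis): current-dollar investment
# in computers and peripherals, and the chained price index (1996 = 100).
years   = [1995, 1996, 1997, 1998, 1999, 2000, 2001, 2002]
current = [64.6, 70.9, 79.6, 84.2, 90.4, 93.3, 74.2, 74.4]      # $ billions
prices  = [131.29, 100, 77.38, 56.99, 43.6, 37.87, 30.91, 26.27]

# Deflate current-dollar spending by the price index, then normalize
# to 1996 = 100 to approximate the published quantity index.
real = [c / p for c, p in zip(current, prices)]
base = real[years.index(1996)]
quantity_index = [100 * r / base for r in real]
print([round(q, 1) for q in quantity_index])
```

The computed index tracks the published row (69.4 in 1995, about 348 in 2000) to within the rounding and chaining differences one would expect.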

This same point is dramatically illustrated by the post-2000 experience. Actual spending on computer and peripheral equipment fell by 20 percent. But the national accounts quantity index increased by 15 percent over the same 2000–2002 interval, because the price index fell by 30 percent (see Table 2).

TABLE 2 Private Fixed Investment in Computers and Peripheral Equipment

                                     1995    1996   1997    1998    1999    2000    2001    2002
Billions of current dollars          64.6    70.9   79.6    84.2    90.4    93.3    74.2    74.4
Billions of chained 1996 dollars(a)  49.2    70.9   102.9   147.7   207.4   246.4   239.9   284.1
Chained price index                  131.29  100    77.38   56.99   43.6    37.87   30.91   26.27
Quantity index                       69.4    100    145.22  208.39  292.64  347.77  338.61  400.92

(a) “Chained dollars” is the Bureau of Economic Analysis name for a quantity index of computer equipment output (see text).

SOURCE: Bureau of Economic Analysis, NIPA Tables 5.4, 5.5, and 7.6.

Tables 3 and 4 show that these trends have been going on for a long time. The price of computer equipment (computers plus peripherals) has declined 17.5 percent per year over the whole period for which national accounts investment data are available. Moreover, over the whole of the historical period, prices of computers themselves have declined faster than prices of peripherals (this is evident from Tables 3-5). The price of computing power today approaches 1/1,000 of 1 percent of what it was at the introduction of the commercial computer 50 years ago (Table 4).4 Additionally, the prices of ancillary devices have also fallen, though their performance improvements are often overshadowed by the spectacular progress in computer hardware: PC World (March, 2003, page 91) reports that the cost of storage media (disks) has fallen from $16.25 per MB of data stored in 1981 to $0.0008 (8 percent of a penny) in 2003, an annual rate of decline of 36 percent, comparable to the rate for PC computers over the same interval.
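The compound annual rates quoted here follow directly from the endpoint values; a quick check, using the storage-price figures above and the Table 3 endpoints:

```python
import math

def annual_rate(start, end, years):
    # Compound average annual growth rate between two price or index levels.
    return (end / start) ** (1.0 / years) - 1.0

# Storage media: $16.25/MB in 1981 to $0.0008/MB in 2003 (PC World figures).
storage = annual_rate(16.25, 0.0008, 2003 - 1981)

# Computer and peripheral equipment price index: 101372.4 in 1959
# to 26.3 in 2002 (endpoints of Table 3).
computers = annual_rate(101372.4, 26.3, 2002 - 1959)

print(round(100 * storage, 1), round(100 * computers, 1))  # -36.3 -17.5
```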

Computer price indexes fall because the performance of computers is increasing very rapidly, while their actual selling prices are stable or falling. Accordingly, it is no surprise that computer output in the economy rises not so much because increasing numbers of computers are produced (though this is true) but because the capability of the computers that are produced has increased so much. The great increase in computer investment over the last 50 years as measured in the national accounts is in large part an estimate of the value of increased performance of computers over this interval.

Nordhaus (2002) takes the price of computing back another 50 years, using a different approach. Though the rates of decline in the last half century are greater than in the half century before that, Nordhaus’ results indicate that high demand for improvements in computational power has existed over a long time, as well as indicating the extraordinary fruits of innovative ability set to satisfy that demand.

Price indexes for computers transfer directly into economists’ measures of the output of computers, of “real” (an economist’s somewhat misleading jargon) computer investment and capital stock, and from these the rate of productivity improvement. As examples of the latter, Jorgenson, Ho, and Stiroh (2002) estimate that the contribution of ICT (information and communication technology) investment was responsible for a large proportion of the acceleration in U.S. labor productivity in years following 1995. Triplett and Bosworth (2002) reached comparable findings for the importance of ICT investment to the substantial gains

4. The mainframe index in Table 4 gives a beginning/end value of 3.91 × 10^-5. But over the period for which mainframe and PC price indexes are available (1982 forward), PC prices have fallen at 21 percent per year in the government indexes, while mainframes have trailed, at 18 percent per year (Table 4). Moreover, studies suggest that the government PC price index records too little decline, certainly over the first part of this period—for example, Berndt and Rappaport (2001, Table 1) indicate that PC prices fell over 30 percent per year between 1983 and 1999. Hence, taking all this together, the round number 1/100,000 in the text.


TABLE 3 Private Fixed Investment in Computers and Peripheral Equipment

Year   Price Index (1996 = 100)   Quantity Index (1996 = 100)
1959   101372.4                   0.000
1960   79593.2                    0.00044
1961   58800.8                    0.00077
1962   41710.3                    0.00143
1963   27395.1                    0.00396
1964   22916.4                    0.00616
1965   18936.0                    0.00979
1966   13272.8                    0.02354
1967   10784.1                    0.03091
1968   9202.6                     0.03696
1969   8332.3                     0.05038
1970   7484.4                     0.0605
1971   5698.7                     0.07458
1972   4592.4                     0.11
1973   4354.0                     0.12
1974   3554.9                     0.15
1975   3288.5                     0.15
1976   2746.5                     0.23
1977   2390.1                     0.34
1978   1616.8                     0.66
1979   1339.7                     1.07
1980   1045.6                     1.69
1981   918.9                      2.63
1982   822.3                      3.24
1983   685.6                      4.92
1984   554.6                      8.04
1985   471.5                      10.10
1986   406.2                      11.61
1987   346.0                      14.59
1988   321.4                      16.67
1989   300.1                      20.27
1990   272.3                      20.03
1991   244.6                      21.75
1992   209.2                      29.40
1993   178.4                      37.31
1994   157.3                      46.00
1995   131.3                      69.40
1996   100.0                      100.00
1997   77.4                       145.22
1998   57.0                       208.39
1999   43.6                       292.64
2000   37.9                       347.77
2001   30.9                       338.61
2002   26.3                       400.92

SOURCE: Bureau of Economic Analysis, NIPA Table 7.6, and unpublished BEA data in possession of the author (with more precise quantity index for earlier years). In 1959, the quantity index (1972=1) equals 0 to three decimal places in the unpublished data.


TABLE 4 Price Indexes for Domestic Mainframes and PCs (1996 = 100)

Year   Mainframes   Personal Computers
1953   791125.1
1954   682645.1
1955   605330.6
1956   516628.7
1957   456095.6
1958   412943.3
1959   354208.3
1960   260717.8
1961   197440.1
1962   143811.5
1963   109881.7
1964   83549.6
1965   60060.6
1966   22761.6
1967   16110.6
1968   14560.0
1969   14513.8
1970   13967.4
1971   10847.4
1972   8871.7
1973   9453.9
1974   8041.6
1975   7771.7
1976   7106.5
1977   5582.2
1978   2812.2
1979   2306.9
1980   1591.9
1981   1311.4
1982   1106.3       1549.9
1983   1006.8       1086.5
1984   727.1        937.0
1985   537.1        877.3
1986   486.5        646.4
1987   419.1        582.9
1988   397.1        533.3
1989   346.5        496.1
1990   307.5        415.5
1991   297.6        350.4
1992   277.2        267.9
1993   234.2        207.3
1994   182.1        182.7
1995   144.0        145.3
1996   100.0        100.0
1997   68.6         67.1
1998   49.2         40.3
1999   38.6         25.7
2000   30.9         20.7

SOURCE: Triplett (1989) and unpublished Bureau of Economic Analysis data.


TABLE 5 Average Annual Growth Rate, Price Indexes for Domestic Mainframes, PCs, and Computers and Peripheral Equipment

             Mainframes   Personal Computers
1982–1987    −17.6        −17.8
1987–1995    −12.5        −15.9
1995–2000    −26.5        −32.3
1982–2000    −18.0        −21.3
1953–2000    −19.4

             Computers and Peripheral Equipment
1959–1969    −22.1
1969–1987    −16.2
1987–1995    −11.4
1995–2002    −20.5
1959–2002    −17.5

in labor productivity experienced in services industries in recent years. Services are the industries that purchase a predominant portion of U.S. investment in ICT equipment. The research results in these two (and other similar) papers could not have been generated without economic measurements that incorporate performance measures for computers. At the moment, economic statistics that incorporate computer performance measures are lacking in most other OECD countries (Wyckoff, 1995; Colecchia and Schreyer, 2002), which greatly inhibits the ability to analyze recent productivity trends and the contribution of ICT investment in countries outside North America.

In Sections III–V, I consider the variables that have been used as characteristics of computer performance in computer hedonic functions and the measures of computer performance that one would like to have for economic measurements.

III. COMPUTER PERFORMANCE MEASURES IN EARLY STUDIES—MOSTLY MAINFRAMES

It is intriguing that what an economist calls a hedonic function has appeared, in an essentially equivalent form, in the computer science literature. The earliest research on computer performance measurement grew out of, or was influenced by, research issues in the computer systems literature. Some performance measures were devised as a practical aid to equipment selection. Alternatively, computer technologists wanted to estimate the rate of technical change in computers. It makes no sense to do that without considering both performance and the price.


So where the economist naturally thinks of the price of computers, adjusted for performance, the computer technologist thinks of performance per dollar spent on computers. For example, Knight (1966, 1970, 1985), who is actually an economist but was writing for computer publications, was interested in estimating the rate of technical progress for computers and not a computer price index. He estimated an equation that was similar to equation (1).

Economists were interested in a somewhat different but closely related problem: measuring performance-corrected price indexes for computers, using for the most part hedonic methods. Among economists, the early hedonic researchers on computers more or less followed the lead of technologists in choosing their performance measures. The following section is partly based on Triplett (1989).

Performance Characteristics of Computer Processors

From the earliest studies, the performance specification of computer processors consisted primarily of the speed with which the computer carries out instructions and its memory size (main memory storage capacity).5

It has always been difficult to obtain a publicly available measure of speed that is both sufficient and at the same time comparable across processors. This remains the problem today.

A computer executes a variety of instructions. The execution rate of each instruction is properly a computer characteristic. Computer “speed” is accordingly a vector, not a scalar.

Applications require instructions in different proportions or amounts (e.g., graphics and office productivity programs). Moreover, different users, even if they employ the same applications, employ them with different frequencies—I use both of these applications, but my usage differs greatly from that of a graphics designer. Accordingly, numerous measures of “speed” exist, in principle, because speed is a vector and there are many ways of valuing the speed vector.

Nevertheless, some scalar summary of the speed vector is needed. Three major approaches have been employed by economists in hedonic studies. In considering these, it is well to bear in mind the twin aggregations of the speed vector—one aggregates over instructions (for application speeds); another aggregates application speeds over users who use them in different proportions.

Single-Instruction Speed Measures

In this approach, the speed of one instruction is chosen (in early studies, it was invariably addition time or multiplication time), which then serves as a proxy for the rest. Single-instruction speed measures were prominent in early computer studies (see Annex Table A). Even in the early days, they were not adequate. In analyses of instruction mix frequencies cited in Sharpe (1969, pp. 301–302) and Serlin (1986), additions accounted for only between 13 and 25 percent of total instructions, and multiplications for around 5–6 percent. “Logic,” “other,” or “miscellaneous” instructions, not easily measured at the time, were the largest category. A single-instruction speed measure will not adequately characterize a cross-section of computers or represent the change in computer performance over time.

5. Phister (1979), Sharpe (1969), and Flamm (1987) contain good statements of the rationale for the specification, and Fisher, McGowan, and Greenwood (1983, pp. 140–141) emphasize its limitations.

Intermediate-Stage Proxy Measures

In this approach, the investigator looks for a machine specification that is correlated with the vector of performance characteristics. In early studies, memory cycle speed was a popular proxy speed measure for computers. Memory cycle speed is measured in memory cycles per second; its inverse, memory cycle time, is the time (in microseconds) required to read a word from main memory and replace it. Memory cycle time is correlated with the speed of other processor operations and therefore acts as a proxy for those other determinants of speed. Closely related measures also appear in the regressions of Chow (1967—memory access time), Michaels (1979), and Fisher, McGowan, and Greenwood (1983—transfer rate).

Another intermediate-stage proxy measure is machine cycle time, also known as “clock speed.” The execution time of the logical portion of any instruction equals machine cycle time multiplied by the number of machine cycles required for that instruction.

Even in the mainframe days, it was well established that the relation between clock speed and instruction execution speed will shift with the instruction mix. Across machines, moreover, the relation varies with machine design. Thus, machine cycle time or clock speed contains the potential for substantial proxy error, both from machine to machine in the cross-section and over time. Economists who used clock speed either (a) did not understand its shortcomings, or (b) understood them, but used clock speed because it was widely available for a large sample of computers.

Benchmarks

Single-variable proxies for a multivariate vector of instruction speeds will always present the problem that the particular proxy chosen may represent very poorly the speed at which a computer performs actual jobs. It is thus natural to measure computer speed by presenting the same job or mix of jobs to various computers and measuring the time actually taken to perform them. Such an exercise is called a “benchmark” or a “benchmark test.” Computer users have often performed benchmarks for machine selection, and benchmark results for standardized or stylized data-processing problems have been published for many years: Phister (1979, p. 100) presents examples for mainframe computers that include filing and sorting problems, matrix inversion problems, and so forth.

An advantage of a benchmark measure is that it measures directly the speed, or cost, of jobs or applications, rather than of the instructions that are required for the job. However, representativeness requires selecting a group (possibly a large group) of alternative computer tasks and running each task on each machine in the sample. If problems are realistic, performing the tests may be expensive. A second problem arises when results of multiple benchmark tests are highly, but not perfectly, correlated (as they generally will be): The researcher must either select one benchmark as a proxy for all the rest or find some way to aggregate them—a problem exactly parallel to the use of single-instruction speed measures.

Weighted Instruction Mix Measures

A weighted instruction mix is formed by examining records of computer centers, or by analyzing either “test packages” or a sample of widely used programs. An internal “instruction trace” provides counts of the frequency of each machine instruction encountered in the programs. Execution speeds for each instruction can be timed (or obtained from published machine specifications). Weighting the speeds of the various instructions by the relative frequencies recorded in the instruction trace yields the weighted instruction mix.
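The construction can be sketched in a few lines of code. The trace, the instruction set, and the per-instruction times below are all invented for illustration; real instruction traces ran to millions of instructions.

```python
from collections import Counter

# Sketch of a weighted instruction mix. The "trace" is the sequence of
# machine instructions executed by a set of representative programs;
# every value here is hypothetical.
trace = ["add", "add", "load", "mul", "add", "load", "branch", "add"]

# Execution time per instruction, in microseconds (timed, or taken from
# published machine specifications).
instr_time_us = {"add": 0.1, "load": 0.3, "mul": 0.5, "branch": 0.2}

counts = Counter(trace)
total = sum(counts.values())
freqs = {instr: n / total for instr, n in counts.items()}

# Frequency-weighted average instruction time, and the implied MIPS rating
# (instructions per microsecond = millions of instructions per second).
avg_time_us = sum(freqs[i] * instr_time_us[i] for i in freqs)
mips = 1 / avg_time_us
print(f"weighted average time: {avg_time_us} us; speed: {mips:.2f} MIPS")
```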

The best-known weighted-instruction mix is MIPS (millions of instructions per second), which was used in a hedonic function for computers by Dulberger (1989). Lias (1980), Serlin (1986, p. 114), and Bell (1986)—and other computer manufacturers—emphasized that MIPS was designed to measure speed for IBM architectures, and might not provide a comparable measure across different machine architectures. Lias (1980, p. 105) put the measurement error that arises from applying MIPS to machines of different architectures at 10–30 percent. For this reason, Dulberger restricted her dataset to IBM and “plug-compatible” computers. Nevertheless, MIPS was fairly widely used across the industry in the 1980s and into the 1990s. Indeed, one often saw MIPS speed measures quoted for personal computers in the early 1990s (e.g., Rosch, 1994, Table 3.1).

Published documentation of the instruction mix used in MIPS is sketchy. Lias (1980) and others indicate that it was based on “IBM Job Mix 5,” but the instructions in Job Mix 5 were not fully documented. One presumes the mix was updated from an earlier set of instructions known as the “Gibson mix,” which Serlin (1986) dates around 1960. Serlin presents an example of the use of the Gibson mix to estimate processor speed. The only published documentation for the Gibson mix is an article in Japanese (Ishida, 1972).

Perhaps because published documentation of Job Mix 5 did not exist, some confusion has arisen about the nature of MIPS. To obtain a MIPS estimate with a smaller amount of work, approximating formulas were developed. For example, Bloch and Galage (1978) present an approximating formula that involves machine cycles and memory accesses per instruction combined with clock speed and memory access time. It is no doubt true that shortcuts were taken, and published empirical work may be affected by inaccuracies in the computer speed measures it employs.

Kenneth Knight (1966, 1970, 1985) published a weighted instruction mix speed measure designed for scientific purposes. His set of instructions included fixed-point addition, floating-point addition, multiplication, division, and finally, logic operations. Instruction frequencies for the scientific speed measure were derived from traces at a scientific computer center, with some arbitrary adjustments for aspects of the architecture of certain machines (for details, see Knight, 1985, Table 3, p. 117). For commercial uses, Knight collected the mix of instructions that were executed in a sample of commercial programs.

Knight’s computing power formula combined the characteristics of speed and memory size into an index of processor “computing power.” Some of the parameter values that were assumed in combining memory size with speed are arbitrary. Knight’s updated indexes, which extend through 1979, retain the original 1963 weights.

Of the studies in Annex Table A that used an instruction mix measure of speed, only Cartwright, Donahoe, and Parker (1985) report a variable in the hedonic function other than speed and memory size. This evidence suggests that where other hardware attributes have been employed as variables in hedonic functions for mainframe computers, they were correcting in some sense for an inadequate measure of processor speed.

Synthetic Benchmarks

MIPS is sometimes termed a “synthetic benchmark.” Others existed, even in mainframe days. The “Whetstone” reflected primarily scientific and engineering problems; the “Dhrystone” was based on systems-programming work, rather than numerical calculations. The “Linpack” measured solution speeds for systems of linear equations. Other special benchmarks existed for, e.g., banking transactions (Serlin, 1986, gives some representative results).

Since finding a satisfactory processor speed measure is the biggest challenge to measuring price and technological change in computer processors, one would have thought that economic researchers would have explored the usefulness of synthetic benchmarks. That did not happen until very recently (see section V).

Various studies investigated performance measures for peripheral equipment. Most of them were also measures of speed and capacity. The performance variables in the IBM price indexes are displayed in Cole et al. (1986). Flamm (1987) contains alternative indexes for peripheral equipment. The available price indexes for mainframe-era peripheral equipment are reviewed in Triplett (1989, Tables 4.11 and 4.12).

Summary

At the close of the era in which economic research on computer performance focused on mainframe computers, two generalizations characterized the state of that research.

First, in their choice of speed measures, economists had turned away from simple clock speed and memory access times toward more representative speed measures, primarily synthetic benchmarks. Though there was some sense that future research might incorporate true benchmark measures, in fact that never happened. Indeed, as explained in the next section, when research on computers picked up again in the 1990s, the advance represented by synthetic benchmark measures was almost entirely abandoned, and economists turned back to simple clock speed as their primary measure of computer performance.

Second, in the mainframe research era, little or no attention was paid to system performance. Economists primarily modeled the performance of separate “boxes” of computer equipment, without paying very much attention to the integration of the equipment. The computer literature of the day was full of discussions of queuing theory and the implications of this for optimization of system performance (see, for example, Bard and Sauer, 1981). The economists’ decisions were justified, in part, by their objectives: The boxes were separate pieces of output, typically produced and sold separately, and economists wanted to measure the output of boxes, adjusted for their performance. The computer center manager worried about the optimization problem; economists interested in output measurement and analysis did not have to be concerned about the computer center managers’ problem.

In my 1989 survey (Triplett, 1989) I speculated that measures of system performance would show more rapid improvements over time than did the measures of separate boxes. It is true, of course, that technology has reduced the speed and cost of what was done yesterday. But computer users have gained most from technological changes that enable them to do things that were not possible with the previous technology, not merely from doing what they did before cheaper or faster. In my 1989 survey, I used as an example computer modeling of a problem in aerodynamics that was even then 150 years old: solving the set of Navier-Stokes equations that model the flow of a liquid or gas over a solid surface. In the intervening years, computer aerodynamics simulations have largely replaced wind tunnel tests, flight tests, and so forth. The computer permits new calculations. Its value is not just in doing the old ones faster.6

Extensions of computations (one might better say “manipulations of data”) into new elements of the computational space are a major part of the contribution of the computer that is not captured at all in existing measures of computer processor (and peripheral) speed and performance. Using even the best benchmarks to measure time series comparisons of computer equipment performance must inevitably measure the cost or speed of doing the jobs that were done yesterday in today’s technology.

6  

This is a common error in critiques of computer measurements. “I just type letters; the faster computer does not increase my typing speed in proportion, so computer performance measures overstate benefits to users.” Perhaps for some users this is true, but not for uses that take full advantage of computer capabilities.

During the mainframe research era, the personal computer was more or less ignored, even though it was well established even in the late 1980s. As the next section shows, research on PCs did not extend the research on computer speed, and it largely ignored as well most of the problems of modeling system performance.

IV. PERFORMANCE MEASURES IN STUDIES OF PCs

In a sense, the engineering architecture of the PC is more closely aligned with the mainframe computer than is either its performance measurement or its economics. As explained in the following, economists who have modeled the PC’s performance have followed, consciously or not, the performance measures used for separate computer equipment “boxes” in mainframe-era research.

A joint project between the IBM Corporation and the U.S. Bureau of Economic Analysis (BEA) developed hedonic computer equipment price indexes for the U.S. national accounts (Cole et al., 1986; Cartwright, 1986). These were the first hedonic computer price indexes introduced into any country’s statistics.

The IBM-BEA price indexes covered four products: mainframe computers, disk drives, printers, and displays (terminals). The performance variables in the IBM studies have provided the basis for most subsequent investigations on computer equipment, including price indexes for personal computers (which were actually not included in the IBM-BEA work). The IBM hedonic functions for computer equipment continue, therefore, to provide guidance for empirical investigations of computer equipment today.

The IBM-BEA hedonic indexes were price indexes for computer equipment “boxes”; they controlled for quality change that arose as manufacturers increasingly put more performance into each of the separate boxes. No direct attention was paid to how the boxes—or properly, the characteristics of the separate boxes—were combined into an operating computer center, because an operating computer center was not purchased as a transaction. The buyer assembled a computer center; it was not produced and sold as a unit by the manufacturer.

The PC is, in effect, a pre-assembled computer center. The PC contains separate components that link nearly one-to-one to the individual “boxes” that were the subjects of Cole et al. (1986). For example, the PC’s central processing unit (CPU), its hard drive, and a display (keyboard/monitor) correspond to separate mainframe-era components. Most of this equipment can be purchased separately; indeed, these items may initially be manufactured by different manufacturers. It is not technologically linked together into a PC in the sense that components cannot be investigated separately. However, from the final purchaser’s perspective, the PC transaction typically combines several of them (for example, a monitor and a keyboard are almost always included in the price). Only the printer remains as a separate piece of equipment that is typically purchased in a separate transaction. The transaction, more than the engineering, determines the unit that economists must analyze.

Thus, in modeling the PC one must ask a question that was never confronted in the IBM-BEA studies: Are we interested in the performance of the PC (that is, in the computer system)? Or that of its components? Of course, we are interested, ultimately, in both, for several reasons. But it will be important to keep the distinction between system performance and component performance in mind.

Annex Table B lists the performance variables used in PC hedonic functions. Three points can be made about existing PC hedonic functions.

First, compared with the IBM studies of separate computer equipment boxes, the PC studies omit some performance variables that were included in mainframe-era research (e.g., hard drive speed). Second, their processor speed measure (almost exclusively clock speed, measured in megahertz, MHz) is a step backward from the weighted instruction mix measure that was based, in principle, on the speed of performing jobs. Third, the PC hedonic studies measure the performance of components in the system—or simply the presence or absence of components, such as the video card—rather than the performance of the system, even though the PC transaction that researchers sought to model is the sale of a computer system, not the separate sale of computer components. These three points are developed in the following discussion.

Component Performance Measures

The variables used in three relevant IBM computer equipment hedonic functions are displayed in the first column of Annex Table B. The other columns of Table B summarize the variables used in a number of recent studies on personal computers, in comparison with the variables employed in the original IBM studies.

The studies in Annex Table B may not make up a complete review of the research, but they have been conducted in a number of different countries and show the degree of international comparability in hedonic research on personal computers. A number of them (mostly those at the right-hand side of the table) show either operational or experimental hedonic functions that have been estimated by statistical agencies in various countries, generally for the purpose of publishing performance-corrected computer price indexes similar to those of the U.S.

Most of these studies have succeeded, to a perhaps surprising extent, in combining into one hedonic function many of the variables in three of the original IBM studies (Cole et al., 1986). None of the PC studies tries to combine printers (subject of a separate IBM study) in the PC hedonic function, because the acquisition of a printer still typically remains a separate transaction, even though the printer, too, is sometimes bundled with the rest of the PC. Barzyk (1999) does not include monitors, presumably because they were not bundled into the Canadian dataset used for his research. Dalen (1989) also excludes the monitor from his hedonic regression. With these exceptions, all the PC studies can be viewed as combining into a single hedonic function three of the separate pieces of equipment studied in the IBM work.

All studies measure processor performance with speed and memory size, as did Dulberger (1989), though nearly all of them measure speed with megahertz (a topic to which I return below). Dulberger introduced the idea of specifying the semiconductor type used in the processor (technology dummies). Most PC studies follow this innovation (BLS, Chwelos, Bourot, and in modified form, Moch and Finland CPI).
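As a concrete illustration of the form these functions take, the sketch below estimates a Dulberger-style hedonic function by ordinary least squares: log price regressed on log speed, log memory, and a technology dummy. The six observations and all variable names are invented; the point is the functional form, not the estimates.

```python
import numpy as np

# Hedonic-function sketch in the spirit of Dulberger (1989):
#   ln(price) = b0 + b1*ln(speed) + b2*ln(memory) + b3*tech_dummy
# All six observations are invented for illustration.
speed  = np.array([10.0, 20.0, 20.0, 40.0, 40.0, 80.0])    # e.g., MIPS
memory = np.array([16.0, 16.0, 32.0, 32.0, 64.0, 64.0])    # e.g., MB
tech   = np.array([0.0, 0.0, 0.0, 1.0, 1.0, 1.0])          # newer chip = 1
price  = np.array([50.0, 80.0, 100.0, 120.0, 160.0, 250.0])

X = np.column_stack([np.ones_like(speed), np.log(speed), np.log(memory), tech])
coef, *_ = np.linalg.lstsq(X, np.log(price), rcond=None)

# b1 and b2 are elasticities of price with respect to speed and memory,
# holding the other characteristics (including chip technology) constant.
b0, b1, b2, b3 = coef
print(f"speed elasticity ~ {b1:.2f}, memory elasticity ~ {b2:.2f}")
```

A time dummy added to such a regression, estimated on pooled data for adjacent periods, yields the quality-adjusted price change that hedonic price indexes report.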

With respect to the hard disk, all studies use a standard measure of capacity. The difference between megabytes (MB) and gigabytes (GB) is merely a scaling, adopted for convenience because hard disk capacity has grown so large. Of the PC studies, only Dalen uses a hard drive speed variable, although the type dummy variables used by Barzyk (1999) and Bourot (1997) control to an extent for HD speed.

For monitors (displays) all the studies investigate measures of the quantity of information that can be shown on the screen and the resolution of the picture. Because some software producers have taken increasing amounts of the screen for control “bars” and so forth that are not readily hidden by the user, screen size may be an imperfect measure, but it clearly influences the price of the monitor. Other monitor characteristics include flat screen and the thickness of the monitor, which reflect users’ desires that the machine occupy a smaller amount of desk space; those characteristics are omitted from existing studies.

In addition to the basic hardware items—processor, hard drive, and monitor/keyboard—a modern PC comes bundled with a number of other hardware features. Many of these are a consequence of the fact that the computer’s function is increasingly not “computations” but the manipulation of digitized data, including sound and pictures. Sound cards, video cards, network cards, and so forth may be regarded as other pieces of hardware that are attached to the basic PC components, as are input/output devices such as CD-RW.

Most of the PC studies have included dummy variables for the presence or absence of at least some of these auxiliary functions or, alternatively, for more advanced versions of the functions in cases where some version of the feature has become nearly universal. Annex Table B-2 displays these other hardware features.

Little consensus has emerged about which of these auxiliary hardware features should be added to the PC hedonic function. Additional research will be required to determine the reasons for the differences between the variables included in, for example, the BLS study and the others tabulated in Table B: Do international hedonic functions for PCs differ because markets differ in the U.S. and other countries, or because of data availability differences, or because of different decisions made by the researchers? And even more importantly: How much difference in the computer price indexes computed by BLS and others results from differences in the variables in the hedonic function?

Performance Variables for PCs: The Dell Data

A recent Dell catalog illustrates the complexity of the bundle of PC computer characteristics and the inadequacy of the representation of the computer in most existing PC hedonic functions. The catalog shows how Dell markets computers to buyers. Most hedonic functions for personal computers do not contain nearly so many variables.

Dell advertises megahertz (now rescaled gigahertz, or GHz). But it also advertises the speed of the bus and the size of the cache.7 Memory size is there, but the specification page also talks about the speed of the memory, 266 MHz or 333 MHz SDRAM or RDRAM, which is a faster form of memory. The size of the hard drive is there, but so also is its speed. Specifications for the monitor and the DVD drive, not just their presence or absence, are included. Different cards are distinguished, the graphics card and the sound card, for example. How is that performance to be measured? Economists have left that out. Similarly, speaker performance must now be modeled in the PC bundle of characteristics, and audio specifications matter, a topic on which there is a minimal amount of hedonic research, none of it applied to the computer bundle.

Then there is the software included in the Dell choices. This is a huge problem. A tiny amount of economic research exists on the performance of software, even though software in the national accounts is a larger component in the United States than are purchases of hardware (in the aggregate economy, though not necessarily for PCs).8 Software actually included in a Dell machine is more extensive than what is mentioned in the catalog. Little or none of this software is modeled in PC hedonic functions.

There is also, of course, the warranty and the Internet access. Two years ago, Dell offered one-year “free” Internet access from MSN.com. Now it offers only six months “free,” but it gives a choice between AOL, MSN, and EarthLink. Is that an improvement or not?

Judged by the Dell catalog, existing research by economists on PCs is not very adequate in the way it models computer performance. The relevant question is: How much difference does it make? Do the omitted characteristics (such as hard drive speed, performance of cards, and quantity of software included in the sale) bias hedonic price indexes?9 Does the “clock speed” measure of processor speed adequately measure improvements in processor performance?

7  

Some PC hedonic functions include information on the cache—see Table B-2.

8  

For software price/performance research by economists, see Harhoff and Moch (1997), Gandal (1994), Prud’homme and Yu (2002), and Levine (2002). Discussions of software measurement in the U.S. national accounts are contained in Parker and Grimm (2000) and Moylan (2001).

Even aside from the adequacy of the variables in hedonic functions for PCs, there is another point: Do these variables measure the performance of the PC? Or do they measure the performance of inputs to PC performance?

V. BENCHMARK MEASURES OF PC PERFORMANCE

Ohta and Griliches (1976) introduced the distinction between what they called “physical characteristics,” or engineering characteristics, and “performance characteristics.” In their language, processor megahertz and hard drive speed and size are “physical characteristics.” The same distinction has also been discussed in the hedonic literature under the name “proxy variables”: Physical characteristics have some relation to (are proxies for) the performance that buyers want from a PC, but they do not measure the performance that buyers really want. Variables in hedonic functions should represent what buyers buy (and sellers sell), not technical measures that have some relation or other to the true characteristics that are important for buyers’ and sellers’ behaviors.

9  

Some economists will no doubt observe that the omitted characteristics may be correlated with the included ones. This requires what is essentially a digression.

Computer performance characteristics may, or may not, be highly correlated among themselves. This is a somewhat more complicated matter than has sometimes been supposed in the hedonic literature. As a factual point, correlations among the explanatory variables in the BLS PC hedonic function are in fact rather low (simple R’s are almost entirely under 0.3 and some under 0.1, in a hedonic function with some 15 variables). Unless omitted characteristics have higher correlations than included characteristics have among themselves, we can presume that in the BLS model the influence of omitted characteristics on the estimated price premium for improved machines will simply be lost. Some clue to the importance of omitted variables is provided by examination of R2: The BLS equation gets values on the order of 0.97, whereas the lowest value in the studies displayed in Annex Table B is only about 0.5.

Inter-correlations among the characteristics of computer performance are higher in other datasets, for reasons that we do not need to explore here. One might presume, therefore, that omitted characteristics in these datasets will also have higher correlations with included characteristics. Most importantly, most of the other hedonic functions displayed in Annex Table B have far fewer characteristics than are used by BLS. This itself suggests that omitted variable problems (whether correlated or not) are far more serious in some computer hedonic studies than in others. Differences in R2 among studies may reflect properties of the data themselves, and not just the number of characteristics in the equation.

Nevertheless, even if omitted variables are correlated with an included variable, failing to consider them will bias measures of computer progress, and will bias the price index, unless the omitted variable improves at the same rate as the included variable with which it is correlated. This is not the place to explore what are essentially econometric problems in estimating hedonic functions. My own conclusion, from considerations that are developed elsewhere in the hedonic literature, is that omitted variable bias in hedonic price indexes and in measures of computer performance can be serious, and that omitted variables predominantly result in missing some of the improvement in computer performance, or what is the same thing, missing some of the decline in computer prices. For a similar conclusion on different grounds, see Nordhaus (2002). On the other hand, see the discussion of the work of Chwelos (2003) in section V.

Benchmark measures have the advantage that they measure machine performance, rather than measuring some proxy for machine performance, or some input that may influence machine performance. Researchers who have tried to incorporate benchmark data into PC hedonic functions are Chwelos (2003) and Barzyk (1999). Chwelos (2003) contains a good discussion of the relation between technical variables such as megahertz and benchmark performance measures.

From browsing e-sites, one would think that quite a number of benchmarks exist for PCs. The impression is illusory: It is a bit like the days when Sears sold Whirlpool appliances under its own name—a good many internet sites repackage benchmark tests from two companies: Veritest’s Winstone and Bapco’s SYSmark 2002 (revised from the 2001 version). Both of these perform separate benchmarks for performance on office productivity applications and graphics applications. For example, one task included in SYSmark’s office productivity application is the time to execute a “replace all” command in Word, and there are a large number of tasks for which times are recorded. The overall score is the average time taken for the tasks included in the benchmark. For SYSmark, the final score is a geometric mean of scores on the two types of applications.

These benchmarks appear far better suited to economists’ needs for performance measures than is megahertz, or clock speed. The benchmark is still subject to the problems listed above: Two or three application benchmarks may not be representative of the range of applications that are important to users. For example, SYSmark 2002 contains only two applications, one for “office productivity” and another for “internet content.” How one aggregates across tasks (in this context, the weights to be applied to the individual tasks within an application) is an issue. Another issue is the weights assigned to applications across users: SYSmark effectively assumes the two applications have equal weight. Winstone remains agnostic on the matter, leaving aggregation over the applications included in the benchmark to the user of the benchmark.
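SYSmark’s aggregation step is simple enough to state exactly. The sketch below, with invented scores, computes the geometric mean of two application scores; the geometric mean weights the applications equally in ratio terms, so doubling either score raises the overall rating by the same factor.

```python
import math

# Sketch of SYSmark-style aggregation across application benchmarks.
# The two scores are hypothetical.
def overall_score(app_scores):
    """Geometric mean of per-application benchmark scores."""
    return math.prod(app_scores) ** (1.0 / len(app_scores))

office_productivity = 140.0   # invented score
internet_content = 200.0      # invented score
print(round(overall_score([office_productivity, internet_content]), 1))
```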

More importantly, data on benchmarks may not always be available to economists and are not necessarily consistent over time.10 The Butler Group (2001) commented that “attempts to market processors using something other than their clock speed have found limited success…. Consumers are used to dealing with the seemingly easy to compare clock speed, even though this may not be the greatest performance indicator it has been the only one available.” Economists, too, have had to measure performance with the only data available, which has been megahertz.

10  

Chwelos (2003) used overlaps to estimate comparable points to create time series of benchmarks.

One also needs to distinguish a benchmark for the speed of the microprocessor chip, which is the focus of a considerable amount of recent interest in benchmark measures,11 from a benchmark for the system as a whole.

In the end, the key question is: How much does an inferior or proxy measure of speed matter? Chwelos (2003) compared usual proxy measures of speed (megahertz and so forth) with benchmark tests from PC magazine. He found that the relation between megahertz and performance differs across microprocessor generations.12 However, the price indexes he estimated differed trivially: An index using benchmarks declined 39.6 percent per year, and one using technical specifications declined 39.3 percent annually, where the indexes used otherwise comparable computational forms (see his Table XII). One reason for this result is that Chwelos’ technical specification was unusually rich: It included measures for cache memory, dummy variables for chip generation, and so forth.13 The same result might not apply to the simpler hedonic models employed in most of the other studies tabulated in Annex Table B.
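How consequential is a 0.3-percentage-point difference in annual rates of decline? A quick compounding check (the rates are Chwelos’s; the five-year horizon is my own choice for illustration) shows the two indexes stay close even after several years:

```python
# Compound Chwelos's two annual rates of decline over a five-year horizon.
def index_level(years, annual_decline):
    """Price index level (start = 1.0) after compounding a constant decline."""
    return (1.0 - annual_decline) ** years

benchmark_index = index_level(5, 0.396)   # benchmark-based: -39.6% per year
spec_index = index_level(5, 0.393)        # spec-based: -39.3% per year
print(round(benchmark_index, 4), round(spec_index, 4))
# The cumulative index levels differ by only about 2.5 percent of their value.
```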

Yet the results are provocative. Simple measures of processor speed may not be that inadequate, empirically, though they seem inadequate, a priori.

VI. INPUT COMPONENTS, THEIR PERFORMANCE MEASURES AND THEIR CONTRIBUTIONS TO SYSTEM PERFORMANCE

As discussed in sections IV and V, it is not entirely clear whether the performance variables in PC hedonic functions measure the performance of the PC, the performance of inputs to the PC system, or some of both (or neither). However one addresses these questions, a modern PC incorporates many hardware and software components. The performance of many of these components is not measured at all in PC hedonic functions.

Putting these matters aside as unresolved for the present, this final section addresses another important topic: To what extent can the price/performance of the computer be explained by technical changes in the computer’s components? This shifts attention from measuring the computer’s performance to explaining it. The problem itself and its economic importance are well stated in Jorgenson (2001): Understanding recent economic growth in the U.S. and forecasting future growth requires a better understanding of the sources of technical progress in ICT equipment.

11 For example, AMD released a report by PricewaterhouseCoopers listing a variety of benchmark tests on AMD microprocessor products (www.amd.com/us-en/.htm).

12 “… A MHz of clock speed from a 286 processor produces less performance than a MHz from a 386, a 386 less than a 486, and so on.” Chwelos (2003, p. 14).

13 Chwelos (2003) contains an excellent discussion of the architecture of the PC and its implications for measuring price/performance.

Two strands of economic research have approached this problem. One can start from the semiconductor, clearly a major input into the computer itself and into many of its components. This approach is represented by Flamm (1997). Flamm calculates the impact of the semiconductor on the economy, which perforce involves both the impact of the semiconductor on computers and the consequent impact of computers on the economy (as well as the impact of semiconductors on other, non-computer, sectors).

Another approach is to model the price/performance of the computer as an outcome of the price/performance of its inputs. Triplett (1996) decomposes multifactor productivity in computers into three parts: productivity in computer manufacturing, productivity in semiconductor manufacturing (which provides a major input to computers), and semiconductor manufacturing equipment, treated as providing technological inputs (with computers) to the manufacture of semiconductors. This approach builds on the technical change model in Triplett (1985): The production of a technological output that uses technological inputs is treated as the production of a set of output characteristics with a set of intermediate input characteristics. This implies a transformation (production) function of the form:
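Following Triplett (1985), the transformation function can be written in the general form below (a reconstruction from the definitions that follow, with t(·) denoting the transformation function):

```latex
t(M_1, \ldots, M_h;\; V_{11}, \ldots, V_{jk};\; Z) = 0
```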

where Mi is a characteristic of computer performance (and there are h characteristics of computer performance), and Vjk is the kth characteristic of input j (which might be the semiconductor, or the network card, and so forth), and Z is a group of other inputs, which may or may not be homogeneous inputs, but whose characteristics are ignored for simplicity.14 This model is a very general way of modeling what most economists would describe in a much more structured way: Quality-adjusted output is a function of quality-adjusted inputs.15 Either the structured or the unstructured way of approaching the problem requires information on the price/performance of the output of the computer production process and the price/performance of the inputs that are incorporated into the computer.

In Triplett (1996) the specification of component inputs to the production of computers was very crude: Semiconductor price indexes that embodied price/

14 “Labor quality,” for example, is composed of elements of human capital, with characteristics such as education, training, experience and so forth. This is formally identical to the treatment of technological inputs through hedonic functions.

15 The advantage of the unstructured way of approaching the problem is that it avoids having to specify whether a characteristic of the video card (say) enters into the performance of the computer in some manner that is independent of the characteristics of some other component, say processor speed or hard disk capacity. The unstructured way is more appropriate in the sense that it can more readily incorporate the engineering knowledge on the relations among characteristics of different components. In practice, however, the information requirements are daunting, and most economists therefore pursue the structured approach, even though it represents implicit and unrealistic specifications of engineering relations.

performance of semiconductors had only recently become available, following the pioneering research of Flamm (1993) and Dulberger (1993). Because only an aggregate price index for semiconductors was available, semiconductors were treated as an aggregate good, even though different types of semiconductors go into computers and they do not always have identical rates of performance improvement.

More recently, Aizcorbe, Flamm, and Khurshid (2002) have produced detailed price indexes for 12 classes of semiconductors and also produced consumption weights among these 12 classes for end uses, including computer production. This is a great step forward in data that can be used for modeling the contributions of component inputs to the performance of computers.
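To illustrate how end-use weights of this kind can be combined with class-level price indexes, the sketch below forms a computer-specific semiconductor input price index as a weighted geometric mean. All class names and numbers are invented for illustration; they are not Aizcorbe, Flamm, and Khurshid’s figures:

```python
import math

# Hypothetical one-period price relatives for semiconductor classes
# and their (invented) shares in computer-production end use.
price_relatives = {"MPU": 0.70, "DRAM": 0.80, "logic": 0.90, "analog": 0.98}
computer_weights = {"MPU": 0.45, "DRAM": 0.30, "logic": 0.20, "analog": 0.05}

# Weighted geometric mean, a standard aggregation form for price indexes.
log_index = sum(w * math.log(price_relatives[c])
                for c, w in computer_weights.items())
index = math.exp(log_index)
print(f"semiconductor input price index for computers: {index:.3f}")
```

With these invented numbers the aggregate relative is about 0.78, a one-period decline of roughly 22 percent, dominated by the heavily weighted MPU and DRAM classes.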

Missing, however, are comparable performance indicators for other component inputs to computer performance. The performance of hard drives does not appear to be driven primarily by advances in electronics, though the miniaturization technology evident in hard drives may be similar to, or driven by, the miniaturization that underlies advances in semiconductor performance. Little information on the performance of graphics and networking cards, CD/DVD read/write drives, monitors, or other components has so far been brought to bear on economic modeling of computer price/performance. Even storage media themselves appear to have undergone price/performance changes that rival those of hardware.

VII. CONCLUSIONS

Much is still unknown about the contributions of component technologies to the increase in computer performance. Earlier, I noted that the cost of computing power is now around 1/1,000 of 1 percent of what it cost 50 years ago. That estimate, breathtaking as it is, actually does not incorporate all of the aspects of computer performance that one might consider. Even so, an exciting research agenda is to account for the determinants of the great decline in computer price/performance over the last 50 years: Part of it must be technological innovations in computer components, and to quantify those contributions, we need better performance measures of components. The Deconstructing the Computer workshop is a step along the route of modeling those determinants.
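The arithmetic behind that figure: a cost of 1/1,000 of 1 percent of the original level is a relative price of 10^-5, which over 50 years compounds out to an average annual decline of roughly 20 percent:

```python
# Computing now costs about 1/1,000 of 1 percent of its cost 50 years ago.
relative_price = 1e-5   # price today relative to 50 years ago
years = 50

# Constant annual rate consistent with the cumulative decline.
annual_factor = relative_price ** (1 / years)
annual_decline = 1 - annual_factor
print(f"implied average annual price decline: {annual_decline:.1%}")  # 20.6%
```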

REFERENCES

Aizcorbe, Ana, Kenneth Flamm, and Anjum Khurshid. 2002. “The Role of Semiconductor Inputs in IT Hardware Price Decline: Computers vs. Communications.” Federal Reserve Board Finance and Economics Series Discussion Paper 2002-37. August.

Archibald, Robert B., and William S. Reece. 1979. “Partial Subindexes of Input Prices: The Case of Computer Services.” Southern Economic Journal 46(October):528–540.

Bapco. 2002. “SYSmark ® 2002: An Overview of SYSmark 2002 Business Applications Performance Corporation.” Available at http://www.bapco.com/SYSmark2002Methodology.pdf. Accessed February 19, 2003.

Bard, Yonathan, and Charles H. Sauer. 1981. “IBM Contributions to Computer Performance Modeling.” IBM Journal of Research and Development 25:562–570.

Barzyk, Fred. 1999. “Updating the Hedonic Equations for the Price of Computers.” Working Paper of Statistics Canada, Prices Division. November 2.

Bell, C. Gordon. 1986. “RISC: Back to the Future?” Datamation 32(June): 96–108.

Berndt, Ernst R., and Zvi Griliches. 1993. “Price Indexes for Microcomputers: An Exploratory Study.” In Murray F. Foss, Marilyn Manser, and Allan H. Young, eds. Price Measurements and Their Uses. Studies in Income and Wealth 57:63–93. Chicago: University of Chicago Press for the National Bureau of Economic Research.

Berndt, Ernst R., Zvi Griliches, and Neal Rappaport. 1995. “Econometric Estimates of Price Indexes for Personal Computers in the 1990s.” Journal of Econometrics 68:243–268.

Berndt, Ernst R., and Neal J. Rappaport. 2001. “Price and Quality of Desktop and Mobile Personal Computers: A Quarter-Century Historical Overview.” American Economic Review 91(2):268–273.

Berndt, Ernst R., and Neal J. Rappaport. 2002. “Hedonics for Personal Computers: A Reexamination of Selected Econometric Issues.” Unpublished paper.

Bloch, Erich, and Dom Galage. 1978. “Component Progress: Its Effect on High-Speed Computer Architecture and Machine Organization.” Computer 11(April):64–75.

Bourot, Laurent. 1997. “Indice de Prix des Micro-ordinateurs et des Imprimantes: Bilan d’une rénovation.” Working Paper of the Institut National De La Statistique Et Des Etudes Economiques (INSEE). Paris, France, March 12.

Bureau of Economic Analysis. 2001. “A Guide to the NIPAs.” In National Income and Product Accounts of the United States, 1929–97. Washington, D.C.: Government Printing Office. Also available at http://www.bea.doc.gov/bea/an/nipaguid.pdf.

Butler Group. 2001. “Is Clock Speed the Best Gauge for Processor Performance?” Server World Magazine September. Available at http://www.serverworldmagazine.com/opinionw/2001/09/06_clockspeed.shtml. Accessed February 7, 2003.


Cale, E.G., L.L. Gremillion, and J.L. McKenney. 1979. “Price/Performance Patterns of U.S. Computer Systems.” Communications of the Association for Computing Machinery (ACM) 22 (April):225–233.

Cartwright, David W. 1986. “Improved Deflation of Purchases of Computers.” Survey of Current Business 66(3):7–9.

Cartwright, David W., Gerald F. Donahoe, and Robert P. Parker. 1985. “Improved Deflation of Computers in the Gross National Product of the United States.” Bureau of Economic Analysis Working Paper 4. Washington, D.C.: U.S. Department of Commerce.

Chow, Gregory C. 1967. “Technological Change and the Demand for Computers.” American Economic Review 57(December):1117–1130.

Chwelos, Paul. 2003. “Approaches to Performance Measurement in Hedonic Analysis: Price Indexes for Laptop Computers in the 1990s.” Economics of Innovation and New Technology 12(3):199–224.

Cole, Rosanne, Y.C. Chen, Joan A. Barquin-Stolleman, Ellen Dulberger, Nurhan Helvacian, and James H. Hodge. 1986. “Quality-Adjusted Price Indexes for Computer Processors and Selected Peripheral Equipment.” Survey of Current Business 66(1):41–50.

Colecchia, Alessandra, and Paul Schreyer. 2002. “ICT Investment and Economic Growth in the 1990s: Is the United States a Unique Case? A Comparative Study of Nine OECD Countries.” Review of Economic Dynamics 5(2):408–442.


Dalén, Jorgen. 1989. “Using Hedonic Regression for Computer Equipment in the Producer Price Index.” R&D Report, Statistics Sweden, Research-Methods-Development, Vol. 25.

Dulberger, Ellen R. 1989. “The Application of a Hedonic Model to a Quality Adjusted Price Index for Computer Processors.” In Dale W. Jorgenson and Ralph Landau, eds. Technology and Capital Formation, pp. 37–75. Cambridge: MIT Press.

Dulberger, Ellen. 1993. “Sources of Price Decline in Computer Processors: Selected Electronic Components.” In Murray Foss, Marilyn Manser, and Allan Young, eds. Price Measurements and Their Uses. Chicago: University of Chicago Press for the National Bureau of Economic Research.


Ein-Dor, Phillip. 1985. “Grosh’s Law Re-visited: CPU Power and the Cost of Computation.” Communications of the Association for Computing Machinery (ACM) 28(February):142–151.

Evans, Richard. 2002. “INSEE’s Adoption of Market Intelligence Data for Its Hedonic Computer Manufacturing Price Index.” Presented at the Symposium on Hedonics at Statistics Netherlands, October 25.


Fisher, Franklin M., John J. McGowan, and Joen E. Greenwood. 1983. Folded, Spindled, and Multiplied: Economic Analysis and U.S. v. IBM. Cambridge, MA: MIT Press.

Flamm, Kenneth. 1987. Targeting the Computer. Washington, D.C.: The Brookings Institution.

Flamm, Kenneth. 1988. Creating the Computer: Government, Industry, and High Technology. Washington, D.C.: The Brookings Institution.

Flamm, Kenneth. 1993. “Measurement of DRAM Prices: Technology and Market Structure.” In Murray Foss, Marilyn Manser, and Allan Young, eds. Price Measurements and Their Uses. Chicago: University of Chicago Press for the National Bureau of Economic Research.

Flamm, Kenneth. 1997. More for Less: The Economic Impact of Semiconductors. San Jose, CA: Semiconductor Industry Association.


Gandal, Neil. 1994. “Hedonic Price Indexes for Spreadsheets and an Empirical Test for Network Externalities.” RAND Journal of Economics 25.

Gordon, Robert J. 1989. “The Postwar Evolution of Computer Prices.” In Dale W. Jorgenson and Ralph Landau, eds. Technology and Capital Formation, pp. 37–75. Cambridge: MIT Press.


Harhoff, Dietmar, and Dietmar Moch. 1997. “Price Indexes for PC Database Software and the Value of Code Compatibility.” Research Policy 24(4-5):509–520.

Holdway, Michael. 2001. “Quality-Adjusting Computer Prices in the Producer Price Index: An Overview.” Bureau of Labor Statistics, October 16.


Ishida, Haruhisa. 1972. “On the Origin of the Gibson Mix.” Journal of the Information Processing Society of Japan 13(May):333–334 (in Japanese).


Jorgenson, Dale W. 2001. “Information Technology and the U.S. Economy.” American Economic Review, 91(1):1–32.

Jorgenson, Dale W., Mun S. Ho, and Kevin J. Stiroh. 2002. “Information Technology, Education, and the Sources of Economic Growth Across U.S. Industries.” Presented at the Brookings Workshop “Services Industry Productivity: New Estimates and New Problems,” March 14. Available at http://www.brook.edu/dybdocroot/es/research/projects/productivity/workshops/20020517.htm.


Kelejian, Harry H., and Robert V. Nicoletti. c. 1971. “The Rental Price of Computers: An Attribute Approach.” Unpublished paper, New York University (no date).

Knight, Kenneth E. 1966. “Changes in Computer Performance: A Historical View.” Datamation (September):40–54.

Knight, Kenneth E. 1970. “Application of Technological Forecasting to the Computer Industry.” In James R. Bright and Milton E.F. Schieman, eds. A Guide to Practical Technological Forecasting. Englewood Cliffs, NJ: Prentice-Hall.

Knight, Kenneth E. 1985. “A Functional and Structural Measure of Technology.” Technological Forecasting and Social Change 27(May):107–127.

Koskimäki, Timo, and Yrjö Vartia. 2001. “Beyond Matched Pairs and Griliches-Type Hedonic Methods for Controlling Quality Changes in CPI Sub-indices.” Presented at Sixth Meeting of the International Working Group on Price Indices, sponsored by the Australian Bureau of Statistics, April.


Levine, Jordan. 2002. “U.S. Producer Price Index for Pre-Packaged Software.” Presented at the 17th Voorburg Group Meeting, Nantes, France, September.

Levy, David, and Steve Welzer. 1985. “An Unintended Consequence of Antitrust Policy: The Effect of the IBM Suit on Pricing Policy.” Unpublished paper, Rutgers University Department of Economics, December.

Lias, Edward. 1980. “Tracking the Elusive KOPS.” Datamation (November):99–118.

Lim, Poh Ping, and Richard McKenzie. 2002. “Hedonic Price Analysis for Personal Computers in Australia: An Alternative Approach to Quality Adjustments in the Australian Price Indexes.”


Michaels, Robert. 1979. “Hedonic Prices and the Structure of the Digital Computer Industry.” The Journal of Industrial Economics 27(March):263–275.

Moch, Dietmar. 2001. “Price Indices for Information and Communication Technology Industries: An Application to the German PC Market.” Center for European Economic Research (ZEW) Discussion Paper No. 01-20, Mannheim, Germany, August.

Moylan, Carol. 2001. “Estimation of Software in the U.S. National Income and Product Accounts: New Developments.” OECD Paper. September. Available at http://webnet1.oecd.org/doc/M00017000/M00017821.doc.


Nelson, R. A., T. L. Tanguay, and C. C. Patterson. 1994. “A Quality-adjusted Price Index for Personal Computers.” Journal of Business and Economic Statistics 12(1):23–31.

Nordhaus, William D. 2002. “The Progress of Computing.” Yale University, March 4.


Ohta, Makoto, and Zvi Griliches. 1976. “Automobile Prices Revisited: Extensions of the Hedonic Hypothesis.” In Nestor E. Terleckyj, ed. Household Production and Consumption. Conference on Research in Income and Wealth, Studies in Income and Wealth 40:325–90. New York: National Bureau of Economic Research.

Okamoto, Masato, and Tomohiko Sato. 2001. “Comparison of Hedonic Method and Matched Models Method Using Scanner Data: The Case of PCs, TVs and Digital Cameras.” Presented at the Sixth Meeting of the International Working Group on Price Indices, sponsored by the Australian Bureau of Statistics, April.


Pakes, Ariel. 2001. “A Reconsideration of Hedonic Price Indices with an Application to PCs.” Harvard University, November.

Parker, Robert P., and Bruce Grimm. 2000. “Recognition of Business and Government Expenditures for Software as Investment: Methodology and Quantitative Impacts, 1959–98.” Paper presented to BEA’s Advisory Committee, May 5. http://www.bea.doc.gov/bea/papers/software.pdf.

PC World Magazine. 2003. “20 Years of Hardware.” March.

Patrick, James M. 1969. “Computer Cost/Effectiveness.” Unpublished paper summarized in Sharpe (1969, p. 352).

Phister, Montgomery. 1979. Data Processing Technology and Economics, Second Edition. Bedford, MA: Santa Monica Publishing Company and Digital Press.

Prud’homme, Marc, and Kam Yu. 2002. “A Price Index for Computer Software Using Scanner Data.” Unpublished working paper, Prices Division, Statistics Canada.


Rao, H. Raghaw, and Brian D. Lynch. 1993. “Hedonic Price Analysis of Workstation Attributes.” Communications of the Association for Computing Machinery (ACM) 36(12):94–103.

Ratchford, Brian T., and Gary T. Ford. 1976. “A Study of Prices and Market Shares in the Computer Mainframe Industry.” The Journal of Business 49:194–218.

Ratchford, Brian T., and Gary T. Ford. 1979. “Reply.” The Journal of Business 52:125–134.

Rosch, Winn L. 1994. The Winn L. Rosch Hardware Bible. Indianapolis: Sams Publishing.


Serlin, Omri. 1986. “MIPS, Dhrystones, and Other Tables.” Datamation 32(June 1):112–118.

Sharpe, William F. 1969. The Economics of the Computer. New York and London: Columbia University Press.

Statistics Finland. 2000. “Measuring the Price Development of Personal Computers in the Consumer Price Index.” Paper for the Meeting of the International Hedonic Price Indexes Project. Paris, France, September 27.

Stoneman, Paul. 1976. Technological Diffusion and the Computer Revolution: The U.K. Experience. Cambridge: Cambridge University Press.

Stoneman, Paul. 1978. “Merger and Technological Progressiveness: The Case of the British Computer Industry.” Applied Economics 10:125–140. Reprinted as chapter 9 in Keith Cowling, Paul Stoneman, John Cubbin, John Cable, Graham Hall, Simon Domberger, and Patricia Dutton, Mergers and Economic Performance. Cambridge: Cambridge University Press (1980).


Triplett, Jack E. 1985. “Measuring Technological Change with Characteristics-Space Techniques.” Technological Forecasting and Social Change 27:283–307.

Triplett, Jack E. 1989. “Price and Technological Change in a Capital Good: A Survey of Research on Computers.” In Dale W. Jorgenson and Ralph Landau, eds. Technology and Capital Formation. Cambridge, MA: MIT Press.

Triplett, Jack E. 1996. “High-Tech Industry Productivity and Hedonic Price Indices.” In OECD Proceedings: Industry Productivity, International Comparison and Measurement Issues. Paris: Organisation for Economic Co-operation and Development.

Triplett, Jack E., and Barry Bosworth. 2002. “Baumol’s Disease Has Been Cured: IT and Multifactor Productivity in U.S. Service Industries.” Presented at the Brookings Workshop “Services Industry Productivity: New Estimates and New Problems,” March 14. Available at http://www.brook.edu/dybdocroot/es/research/projects/productivity/workshops/20020517.htm.


van Mulligen, Peter Hein. 2002. “Alternative Price Indices for Computers in the Netherlands Using Scanner Data.” Prepared for the 27th General Conference of the International Association for Research in Income and Wealth, Djurhamn, Sweden.

VeriTest. 2003. “Business Winstone ™ 2002 Basics.” Available at http://www.veritest.com/benchmarks/bwinstone/wshome.asp. Accessed February 19, 2003.


Wallace, William E. 1985. “Industrial Policies and the Computer Industry.” The Futures Group Working Paper #007. Glastonbury, CT: The Futures Group.

Wyckoff, Andrew W. 1995. “The Impact of Computer Prices on International Comparisons of Labour Productivity.” Economics of Innovation and New Technology 3:277–293.

ANNEX A Comparison of Variables in Hedonic Functions for Computers

Author

Data Sources

Dependent Variable

Explanatory Variables

Knight (1966, 1970, 1985)

Price: “published” rental prices.

Independent variables: own, plus “published” specifications

Monthly rental for “most typical” configuration

 

  1. Computing “power” (operations per second)

  2. s = monthly seconds per dollar of monthly rental

Definitions:

M = memory size (in words)

L = word length (in bits)

W = “word factor” (dummy variable for memory types)

k = scaling constant

t1 = time to perform 1 million operations (in microseconds)

t2 = I/O or other idle time for 1 million operations (in microseconds)

a = .05 for scientific, .33 for commercial

NB: t1 and t2 were calculated (from computer specifications and computer center operations data) as weighted average of five categories of computations.

Chow (1967)

Special government survey; Computers and Automation; and IBM

Average monthly rental for specific configurations of computers newly introduced in year t

 

  1. Multiplication time (in microseconds)

  2. Memory size (words × word length)

  3. Memory access time (“average time required to retrieve information from the memory”)

NB: Also tried addition time, rejected for multicollinearity with multiplication time (“a slightly inferior variable”). Notes other omitted hardware characteristics, which he assumes correlated with included characteristics.

Schneidewind (Sharpe, 1969)

Not specified

Monthly rental

 

  1. Memory size (thousands of characters)

  2. Memory cycles per second (words)

Skattum (Sharpe, 1969)

Not specified

Monthly rental

Same as Schneidewind

Early, Barro, and Margolis (Sharpe, 1969)

Not specified

Monthly rental

  1. Memory size (in bits)

  2. Memory cycles per second (presumably in bits)

  3. Several others (including additions per second), which were not significant

Patrick (Sharpe, 1969)

Computer Characteristics Quarterly; Computers and Automation

Monthly rental for “typical” configuration, second-generation computers

  1. Space occupied (in square feet)

  2. Additions per second (in thousands)

  3. Minimum memory (in bits)

  4. Maximum memory (in bits)

  5. IBM dummy

  6. Number of months since first installation

  7. Number of machines installed since introduction

Jacob (Sharpe, 1969)

As Patrick

As Patrick, third-generation computers

  1. Additions per second (in thousands)

  2. Minimum memory (in thousands of bits)

  3. Maximum memory (in thousands of bits)

  4. Memory cycles per second (thousands of bits)

  5. Number of operations codes

  6. IBM dummy

  7. Number of months since first installation

  8. Number of machines installed since introduction

Kelejian and Nicoletti (1971)

Computers and Automation; Computer Characteristics Quarterly

Minimum monthly rental

  1. Add time (in microseconds)

  2. Storage cycle time (in microseconds)

  3. Minimum memory size (thousands of bits)

Stoneman (1976)

British Commercial Computer Digest; Computers and Automation; other

Published average price, all installations, all years machine is sold

  1. Cycle time in microseconds

  2. Maximum storage in thousands of bits

  3. Floor area in square feet

  4. Year and “generation” dummies

NB: Final set of variables selected from a much larger original set by comparing adjusted R2 for groupings of the original variables. Author comments that, owing to multicollinearity, floor area proxies for speed.

Ratchford and Ford (1976, 1979)

Auerbach Corp. (two sources), cross sections for 1964, 1967, and 1971

Average monthly rental, computer systems (CPU plus peripherals)

  1. Memory size (maximum words in storage available with particular CPU)

  2. Add time (in microseconds)

  3. Dummies for age of machine and manufacturer

NB: 36 variables tested with factor analysis; however, regression based on four variables mentioned by Chow, with two retained.

Stoneman (1978)

British Commercial Computer Digest; Computers and Automation; other

Prices of newly introduced machines

  1. Cycle time in microseconds

  2. Maximum storage in thousands of bits

  3. Dummies for year of introduction

Archibald and Reece (1979)

Computer Price Guide; characteristics from various published sources

Asking price for used IBM machines of specified configuration

  1. Add time (in microseconds)

  2. Memory size (bits) in configuration

  3. Cycle (read) time (in microseconds)

  4. Access time (in milliseconds)

  5. Number of time share features

  6. Number of CPU “intensiveness” features

  7. Printer speed (hundreds of lines per minute)

  8. Card reader speed (hundreds of cards per minute)

  9. Several other characteristics of peripherals

NB: Got “incorrect” signs for major variables, which often happens with multicollinearity and many variables.

Michaels (1979)

Auerbach Corp. (same as Ratchford and Ford: 264 “configurations” of CPU and peripherals, as of July 1971)

“Basic” monthly rental for specified configuration

  1. Add time (in microseconds)

  2. Index, memory core size (thousands of bytes), and transfer speed within core (in kilobytes per second)

  3. Index, card reader speed, and card punch speed

  4. Index, number of tape drives, and maximum read-write speed

  5. Storage capacity (millions of bytes) in configuration

  6. Dummies for manufacturer and introduction year (gives price index, relative to earliest machines)

NB: Justification for forming indexes based on technical assumptions—e.g., the number of tape drives substitutes for speed in achieving the same results.

Cale, Gremillion, and McKenney (1979)

Datapro

Price at introduction for a “balanced” system (processor plus peripherals)

  1. Memory size in bytes

  2. Size (in megabytes) of online direct access storage

NB: Addition time and other unspecified speed measures insignificant, partly owing to multicollinearity

Fisher, McGowan and Greenwood (1983)

Government lease price lists

Lease prices to federal government

  1. Memory size in thousands of bits

  2. Addition time (including access time)

  3. Transfer rate (bytes per second)

Wallace (1985)

GML Corp.; International Data Corp; Phister (1979)

List prices of all machines

  1. Linear combination of MIPS and KOPS

  2. Memory size included or minimum memory size (units not given)

  3. Dummy variables for computer size class

  4. Dummy variables for manufacturers

Cartwright, Donahoe, and Parker (1985)

Auerbach Corp.; Datapro Corp.; and Computerworld

List prices, all machines available

  1. Speed (memory cycle time, machine cycle time, or MIPS, depending on period)

  2. Memory size (in megabytes)

  3. Maximum number of channels

Levy and Welzer (1985)

Computerworld

Published (list) prices, all machines from major producers

  1. MIPS

  2. Average memory size

  3. Dummy variables for manufacturer, and for newly introduced

Ein-Dor (1985)

Computerworld; other sources

List price, selection of 106 machines

  1. MIPS (a number of other performance measures were related to MIPS and to “average computational cost”)

Flamm (1987)

Phister (1979)

List price, all machines in source

  1. KOPS × 10−3

  2. Memory size in megabytes

Gordon (1989) 1954–1979 regressions

Phister (1979)

Prices of newly introduced machines

  1. Memory cycle time (in microseconds)

  2. Memory size (in megabytes)

  3. IBM dummy

1977–1984 regressions

Computerworld

Prices of all machines

  1. Machine cycle time (in nanoseconds)

  2. Memory size (in megabytes)

  3. Minimum number of channels

  4. Maximum number of channels

5. Cache buffer size (units not given)

Dulberger (this volume)

Datamation; Computerworld; IBM

List price, IBM and “plug-compatible” machines

  1. MIPS

  2. Memory size (in megabytes)—maximum and minimum

  3. “Technology class” dummy variables

NB: Each machine entered twice in the data set, once with maximum memory size available, once with minimum memory size, with the appropriate price for each.

ANNEX B1 Variables in Computer Hedonic Functions, Hardware Components Only

Cole et al. 1986
  Processor speed: MIPS
  Memory: MB (min and max)
  Cache: no
  Technology variables: Chip dummies
  Hard drive capacity: MB
  Hard drive speed: Sum of 3
  Hard drive, other: no
  Screen size: Number of characters
  Resolution: Dpi
  Color: Number
  Display, other: Number of function keys
  Other hardware features (if yes, see Annex B2): no
  Software features (if yes, see Annex B2): no

Berndt and Griliches 1993
  Processor speed: MHz
  Memory: KB
  Cache: no
  Technology variables: 16- or 32-bit processor chip dummies
  Hard drive capacity: MB
  Hard drive speed: no
  Hard drive, other: no
  Screen size: no
  Resolution: no
  Color: Dummy
  Display, other: no
  Other hardware features (if yes, see Annex B2): 7
  Software features (if yes, see Annex B2): no

Berndt et al. 1995 (desktops)
  Processor speed: MHz
  Memory: KB (installed and maximum)
  Cache: no
  Technology variables: 8-, 16- or 32-bit processor chip dummies
  Hard drive capacity: MB
  Hard drive speed: no
  Hard drive, other: Dummy for no HD
  Screen size: no
  Resolution: no
  Color: no
  Display, other: no
  Other hardware features (if yes, see Annex B2): 6
  Software features (if yes, see Annex B2): no

Berndt and Rappaport 2002a
  Processor speed: MHz
  Memory: MB
  Cache: no
  Technology variables: Processor type; processor type*MHz
  Hard drive capacity: MB
  Hard drive speed: no
  Hard drive, other: no
  Screen size: no
  Resolution: no
  Color: no
  Display, other: no
  Other hardware features (if yes, see Annex B2): 2
  Software features (if yes, see Annex B2): no

Chwelos 2003 (laptops)
  Processor speed: MHz * CPU or benchmark scores
  Memory: MB
  Cache: no
  Technology variables: Intel dummy
  Hard drive capacity: MB
  Hard drive speed: no
  Hard drive, other: no
  Screen size: Size
  Resolution: Pixels in maximum resolution
  Color: Dummy
  Display, other: Active or passive matrix LCD dummies
  Other hardware features (if yes, see Annex B2): 8
  Software features (if yes, see Annex B2): no


ANNEX B1 (continued)

Columns: Nelson et al. 1994 | Pakes 2001 | Moch 2001 (Germany) | Rao and Lynch 1993 (workstations) | Holdway 2001 (U.S.) | Bourot 1997 (INSEE)

Processor (CPU)
  Speed: MHz | MHz, MHz^2 | Test score | MIPS | MHz | MHz
  Memory: MB | MB | MB | KB | MB | MB3
  Cache: no | no | KB | no | no | KB
  Technology variables: Processor type | Maximum memory; Apple*speed | Architecture dummy | no | Celeron dummy | Chip dummies

Disk (hard) drive
  Capacity: MB | GB | MB | MB | MB | MB
  Speed: no | no | no | no | no | no
  Other: no | no | no | no | no | Type dummies

Displays (terminals, monitors, and keyboards)
  Screen size: no | no | Size | no | Size dummies | Size
  Resolution: no | no | no | no | Trinitron dummy | dpi
  Color: Dummy | no | Dummy | no | no | no
  Other: Monochrome monitor dummy | no | (blank) | Monochrome monitor dummy | no | no

Other hardware features (if yes, see Annex B2): 5 | 7 | 6 | 3 | 9 | 7
Software features (if yes, see Annex B2): 2 | no | yes | no | 3 | yes


ANNEX B1 (continued)

Columns: Evans 2002 | Barzyk 1999 (StatCan) | Dalen 1989 (Sweden) | Koshimaki and Vartia 2001 | INSEE01, INSEE02

Processor (CPU)
  Speed: MHz | b | Test score | MHz | MHz
  Memory: MB | MB | MB | MB | MB
  Cache: no | max | KB | no | no
  Technology: Memory type; maximum memory | no | no | no | no

Disk (hard) drive
  Capacity: GB | b | MB | MB | no
  Speed: no | no | no | Access time | no
  Other: no | no | Type dummies | no | no

Displays (terminals, monitors, and keyboards)
  Screen size: no | no | no | no | no
  Resolution: no | no | no | no | no
  Color: no | no | no | no | no
  Other: no | no | no | no | no

Other hardware features (if yes, see Annex B2): 7 | 4 | 6 | no | no
Software features (if yes, see Annex B2): no | no | no | no | no

a. Includes the same variables as Berndt and Rappaport (2001) plus microprocessor-type dummy variables and interactions between microprocessor type and clock speed.

b. Replaced by external volume measure.


ANNEX B1 (continued)

Columns: Statistics Finland 2000 | Okamoto and Sato 2001 | Lim and McKenzie 2002 | van Mulligen 2002

Processor (CPU)
  Speed: MHz | MHz | CPU score | MHz
  Memory: MB | MB | MB | MB
  Cache: no | no | KB | no
  Technology variables: Type dummy | Processor type | no | Processor type

Disk (hard) drive
  Capacity: GB | MB | MB | GB
  Speed: no | no | no | no
  Other: no | no | no | no

Displays (terminals, monitors, and keyboards)
  Screen size: Size | Size | 17″ dummy | no
  Resolution: no | no | no | no
  Color: no | no | no | no
  Other: no | No monitor dummy; LCD dummy | no | Dummy variable for presence

Other hardware features (if yes, see Annex B2): no | 4 | 9 | 3
Software features (if yes, see Annex B2): no | no | no | no


ANNEX B2 Computer Hedonic Functions, Other Hardware and Software Features (for other variables and sources, see Annex B1)

Columns: Berndt and Griliches | Berndt et al. | Berndt and Rappaport | Chwelos | Nelson et al.

  ZIP dummy: no | no | no | no | no
  CDROM dummy: no | no | yes | no | no
  CDROM speed: no | no | no | yes | no
  CDRW dummy: no | no | no | no | no
  DVD dummy: no | no | no | no | no
  Sound card dummy: no | no | no | no | no
  Video (MB): no | no | no | no | no
  Network card: no | no | no | no | no
  Modem dummy: no | no | no | modem speed | no
  Speakers dummy: no | no | no | no | no
  Case type dummy: no | no | no | no | no
  Warranty dummy: no | no | no | no | no
  Seller dummies: yes | yes | major brand | major brand | yes
  SCSI control: no | no | no | no | no
  Operating system: no | no | no | no | yes
  Other software: no | no | no | no | other software utilities
  Other: number of floppy drives; slots available for expansion board; mobile dummy; discounted by vendor; age; extra hardware; two or more floppy drives dummy; size; weight; density; age; battery type; battery life index; density; discount price; weight; number of floppy drives; extended industry standard architecture bus; number of slots; number of ports


ANNEX B2 (continued)

Columns: Pakes | Moch | Rao and Lynch | Holdway | Bourot

  ZIP dummy: no | no | no | yes | no
  CDROM dummy: no | yes | no | no | yes
  CDROM speed: no | no | no | no | yes
  CDRW dummy: yes | no | no | no | no
  DVD dummy: yes | no | no | yes | no
  Sound card dummy: yes | no | no | no | yes
  Video (MB): yes | yes | no | yes | yes
  Network card: yes | no | no | yes | no
  Modem dummy: yes | no | no | yes | yes
  Speakers dummy: no | no | no | yes | no
  Case type dummy: no | yes | no | no | yes
  Warranty dummy: no | no | no | yes | no
  Seller dummies: Apple | no | yes | yes | no
  SCSI control: no | no | no | yes | no
  Operating system: no | yes | no | yes | no
  Other software: no | number of bundled applications | no | software office suite; MS Office | no
  Other: second floppy dummy; bus width; number of graphics standards supported; business market; other cards; mouse dummy


ANNEX B2 (continued)

Columns: Evans | Barzyk | INSEE01, INSEE02

  ZIP dummy: no | no | no
  CDROM: no | no | yes
  CDROM speed: yes | no | no
  CDRW: yes | no | no
  DVD dummy: no | no | no
  Sound card dummy: yes | no | no
  Video (MB): no | yes | no
  Network card: yes | no | yes
  Modem dummy: yes | no | yes
  Speakers dummy: no | no | no
  Case-type dummy: yes | no | yes
  Warranty dummy: no | yes | no
  Seller dummies: no | yes | yes
  SCSI control: no | no | yes
  Operating system: no | no | no
  Other software: no | no | no
  Other: number of slots | network location | (blank)


ANNEX B2 (continued)

Columns: Okamoto and Sato | Lim and McKenzie | van Mulligen

  ZIP dummy: no | no | no
  CDROM dummy: no | no | no
  CDROM speed: no | no | no
  CDRW dummy: no | yes | no
  DVD dummy: no | no | no
  Sound card dummy: no | no | no
  Video (MB): no | yes | no
  Network card: no | yes | no
  Modem dummy: yes | no | no
  Speakers dummy: no | yes | no
  Case type dummy: no | yes | no
  Warranty dummy: no | yes | no
  Seller dummies: Apple | yes | yes
  SCSI control: no | yes | no
  Operating system: no | no | no
  Other software: no | no | no
  Other: TV tuner; expandability; USB port; vintage; workstation dummies
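The hedonic functions catalogued in Annexes B1 and B2 are what economists use to build quality-adjusted computer price indexes. In the adjacent-period "time dummy" method, prices from two periods are pooled, the characteristics are held constant in the regression, and the coefficient on the period dummy gives the quality-adjusted log price change. A minimal sketch on invented data (the characteristics, coefficients, and the assumed 26 percent price decline are illustrative, not drawn from the studies above):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 150

def machines(period):
    # Later-period machines are faster and larger, and cheaper
    # per unit of quality (assumed quality-adjusted decline of 0.30 in logs).
    mhz = rng.uniform(50, 100, n) * (1.5 if period else 1.0)
    mb = rng.uniform(4, 32, n) * (2.0 if period else 1.0)
    ln_p = (3.0 + 0.8 * np.log(mhz) + 0.2 * np.log(mb)
            - 0.30 * period + rng.normal(0, 0.05, n))
    return mhz, mb, ln_p

mhz0, mb0, lnp0 = machines(0)
mhz1, mb1, lnp1 = machines(1)

# Pooled regression of log price on characteristics plus a time dummy
X = np.column_stack([
    np.ones(2 * n),
    np.log(np.concatenate([mhz0, mhz1])),
    np.log(np.concatenate([mb0, mb1])),
    np.concatenate([np.zeros(n), np.ones(n)]),  # period-1 dummy
])
y = np.concatenate([lnp0, lnp1])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

# Quality-adjusted price relative between the two periods
index = np.exp(beta[3])
print(index)
```

Because the characteristics are controlled for, the index reflects the price change for constant quality rather than the (much smaller) change in observed list prices.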
