FUTURE COMPUTING ENVIRONMENTS FOR MICROSIMULATION MODELING 192

parts of a task in parallel, much as the above scenario described multiple processors in the same system unit performing independent threads into which a task had been decomposed.42 Computer architectures other than SISD have been rare in part because of the difficulty of expressing algorithms in a form capable of effectively exploiting the other architectures. However, this situation is changing. As Gordon Bell (1989:1100)43 notes:

    The good news is that a vast array of new, highly parallel machines are becoming available and that autotasking compilers are evolving to take advantage of this in a transparent fashion to a limited degree. The bad news is that not all applications can be converted automatically. Users are not being trained to use such machines in an explicit fashion. No concerted effort is in place covering the spectrum from training to research. This will require a redirection of resources.

It seems evident that microanalytic simulation modeling activities are likely to be able to exploit, perhaps quite substantially, at least some of these new architectures. The extent to which their productivity will increase is difficult to determine, because such an architectural shift implies a structural change in the way in which tasks are performed. The present authors believe that the beneficial effect of such new architectures will be quite substantial, even though we cannot predict with certainty how they will be realized.

42 Decomposition of programs into front and back ends appears to be an effective way of exploiting the comparative advantages of dissimilar but linked computer systems. For example, specific front-end processors for Mathematica (Wolfram Research, Champaign, Ill.) have been built for a variety of computer systems, while a common, computationally intensive back end runs under UNIX on any system, local or remote. This allows the front-end process to specialize in providing an effective user interface, such as the current implementation on the Macintosh. Similarly, the back end of the program, which does the intensive calculations, could run on a remote Cray, if desired, for efficiency. Yet the two systems could be linked through a network in such a way that a Macintosh user would be aware of only one logical system performing the work. Similarly, Apple Computer, Inc.'s, development of CL/I (originally from Network Innovations, Cupertino, Calif.), an intersystem protocol bridging, via a network connection, a front-end database user interface and a back-end SQL database engine, is meant to encourage the decoupling of these dissimilar requirements and their implementation in dissimilar environments having relative advantages for each process.

43 Bell discusses at length different machine architectures and their potential for radically improving computational speeds and productivity.

Economic Studies of Industry Performance

Two studies of the cost of computer components are of interest. One, performed
by Kenneth Flamm,44 using methodology developed by David Kuck45 and data collected by Montgomery Phister,46 analyzes data for the period 1957–1978. The second, by Jack Triplett and colleagues at the Bureau of Economic Analysis (BEA) of the U.S. Department of Commerce, arose from the recognition that computers and information technology are becoming an important component of the gross national product and that classical methods of computing price indices for this class of goods are inadequate because of the very high rate of quality and productivity improvement over time.

Phister's data, which are used in Flamm's study, cover what is fast becoming a historical period in computing. All of the data are 12 years old, which is approximately two computer generations ago. Both sets of data were gathered for all sizes of computers available at the time; Phister used characteristics of mainframe and minicomputer systems, and Triplett also included desktop systems.

44 See Flamm (1987). Flamm's Chapter 2 and Appendix A discuss problems in the measurement of computer performance. Appendix A discusses Kuck's methodology and recapitulates Phister's data; it also contains Flamm's year-to-year results.

45 See Kuck (1978:155–169). Kuck uses the notion of information processing bandwidth to approximate a single measure of output or productivity for computer system components and entire systems.

46 See Phister (1979), as well as Flamm (1987: Appendix A) for a restatement of Phister's summary data.

47 Refer to the discussion in Flamm (1987:21–24).

Flamm notes three ways of measuring differences in performance between computer systems:47

• Benchmarks: Executing the same program or set of programs on different computing systems provides the best immediate comparison of systems and is often done by system purchasers to help determine which of several systems to purchase for a specific job mix. Within a broader scope of inquiry, however, the method has severe drawbacks. First, job mixes are often specific to needs; any job mix represents a specific manner in which resources are combined to produce outputs. Second, the method is impossible to use over time; most computers from the 1960s are no longer available for benchmarking. Finally, the market basket of outputs that computer purchasers want changes over time; last decade's computers are relatively ineffective in meeting today's distribution of computer-related demand, and vice versa.

• Characteristic weighting: One can select indices of performance on specific machines and extend that measure over a broader class of machines. This measure may be easier to generate, but it suffers from changes in the underlying job mix. Furthermore, a weighted index of component productivity suffers from changes in the weights, including the creation of new types of components.

• Prices: An approach based on prices rests on the hypothesis that, over time, prices are related to the underlying technical characteristics of the
computers. To the extent that component prices and technical characteristics can be isolated, the resulting data can be analyzed statistically to determine the rate of increase in the productivity or capacity of the components. Such an approach is labeled hedonic analysis; it is the basis of Flamm's analysis and is used extensively by Triplett in his study.

TABLE 7 Declining Cost of Computer Hardware, 1957–1978

                             Percent Decline in Real Quality-Corrected Cost
System Component             1 Year    6 Years    10 Years
CPU                            32        90.1       97.9
Primary memory                 37        93.7       99.0
Disk storage                   28        86.1       96.3
Tape storage                   28        86.1       96.3
Printers                       16        64.9       82.5
Card readers and punches       12        53.6       72.1
Complete computer system       28        86.1       96.3

SOURCE: Flamm (1987: Table 2-1).

Using Phister's and Kuck's measures of performance and price, Flamm regressed the logarithm of these price indices over time. His results, given in Table 7, are extrapolated for periods of 6 years (1990–1995) and 10 years (one decade).

The work of Triplett and his colleagues at BEA was performed to support the creation of a new price index for computers. The new index was incorporated into the revised estimates of the national income and product accounts released in December 1985.48 Such indices are important in estimating product accounts because they help deal with the problem of rapid quality change by producing quality-adjusted indices. In deriving the initial indices on which the BEA indices were primarily based, Cole and her colleagues at IBM (Cole et al., 1986) restricted the input data to mainframes of the IBM 360/370/308X/43X1/3090 class, including those manufactured by plug-compatible suppliers. Deficiencies in the original IBM study include the absence of personal computers and small disk drives and the use of list prices rather than transaction prices.
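The 6-year and 10-year columns of Table 7 follow from simple compounding of the 1-year rates. A minimal sketch (the function name is illustrative, not from the source) confirms the arithmetic:

```python
# An annual decline of r percent leaves (1 - r/100)^n of the original
# quality-corrected cost after n years; the cumulative percent decline
# is therefore 100 * (1 - (1 - r/100)^n).

def cumulative_decline(annual_pct, years):
    """Percent decline after `years` at a constant annual percent decline."""
    remaining = (1 - annual_pct / 100) ** years
    return round(100 * (1 - remaining), 1)

# Annual rates from Table 7 (Flamm, 1987: Table 2-1).
components = {
    "CPU": 32,
    "Primary memory": 37,
    "Disk storage": 28,
    "Printers": 16,
    "Card readers and punches": 12,
}

for name, rate in components.items():
    print(name, cumulative_decline(rate, 6), cumulative_decline(rate, 10))
# Reproduces Table 7's 6- and 10-year columns, e.g. 90.1 and 97.9 for the
# CPU; the card-reader 10-year figure computes to 72.2 versus the table's
# 72.1, a rounding difference.
```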
An update of the price index using data through fiscal year 1988 (Cartwright and Smith, 1988) remedies one of these deficiencies by introducing a separate price index for personal computers.49

48 See Triplett (1986) and Cole et al. (1986) for background material explaining how the index is constructed and for discussions of the application and interpretation of hedonic methods with respect to computing systems.

49 The PC quality-adjusted price index is based on IBM PCs and three additional manufacturers, covering about 50 percent of microcomputer shipments in 1987. See Cartwright and Smith (1988) for more detail.

However, the index for disk drives is
limited to large and intermediate drives, which accounted for less than 20 percent of total disk drives in 1984. Other deficiencies in the index are acknowledged, and work to address them continues.

TABLE 8 Implicit Price Deflator and Fixed-Weight Price Indices for Computers and Peripheral Equipment, 1982–1988 (index numbers, 1982 = 100)

Year and Quarter    Implicit Price Deflator    Fixed-Weight Price Index
1982: I                    105.9                     105.0
      II                   103.0                     101.2
      III                   98.5                      98.3
      IV                    91.9                      95.5
1983: I                     84.0                      93.5
      II                    78.3                      87.0
      III                   73.6                      83.0
      IV                    72.1                      72.1
1984: I                     68.5                      72.1
      II                    65.6                      70.9
      III                   62.9                      69.4
      IV                    60.6                      67.6
1985: I                     56.2                      61.9
      II                    52.3                      57.1
      III                   49.5                      53.6
      IV                    46.8                      50.2
1986: I                     45.5                      48.5
      II                    45.4                      48.1
      III                   45.2                      47.7
      IV                    44.7                      47.3
1987: I                     42.9                      45.7
      II                    40.0                      42.8
      III                   37.7                      40.5
      IV                    35.8                      38.6
1988: I                     34.7                      37.4
      II                    34.2                      36.8
      III                   33.7                      36.2

SOURCE: Cartwright and Smith (1988: Table 2).

Table 8 displays the quarterly observations that currently make up the U.S. Department of Commerce series for the implicit price deflator for computers and peripheral equipment. Figure 1 depicts the logarithm of the implicit price deflator data, with the regression line over time.
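The regression referred to here fits a log-linear trend to the deflator series. Its coefficient values are not reproduced legibly in this text, but its form can be sketched (the symbols below are illustrative), with $P_t$ the implicit price deflator at time $t$ measured in years:

```latex
\ln P_t = \alpha + \beta t + \varepsilon_t
```

Under this specification the annual rate of decline $r$ satisfies $r = 1 - e^{\beta}$, so the 16.4 percent annual rate discussed in the text corresponds to a slope of roughly $\beta = \ln(1 - 0.164) \approx -0.179$ per year.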
FIGURE 1 Implicit computer price deflator. SOURCE: Triplett (1986).

Both coefficients have t-ratios over 37, and the regression equation explains 98.2 percent of the variance. The annual rate of change of the logarithm translates into an annual rate of decrease of the price deflator index of 16.4 percent.

The 16.4 percent rate of decrease of Triplett's index is significantly smaller than the 28 percent per year that Flamm derives from Phister's data. The result also seems counterintuitive in light of the present authors' work in the computer industry. Several factors could account for the discrepancy:

• Triplett's index is historically rooted in work performed by IBM using its models and list prices; the original IBM study did not include microcomputers. Excluding them would tend to bias the rate of productivity improvement downward, since much of the improvement has resulted from highly elastic demand and a mass market in microcomputer technology.

• The index includes more traditional computers and may not include emerging workstation technology. Improvements in quality and productivity are more likely to be associated with small firms on the technical frontier than with larger, more established firms, whose sales levels make them more likely candidates for inclusion in a price index.

• Most components included in the index are more likely to affect more traditional forms of computing than innovative applications using the newest systems. For example, the present authors suspect that any productivity increase resulting from the introduction and evolution of parallel or network architectures is unlikely to be included in the index until such architectures are more broadly accepted.
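As a cross-check on the 16.4 percent figure, one can fit a log-linear trend to the quarterly deflator series of Table 8. The sketch below uses a hand-rolled ordinary least-squares fit, not Triplett's exact procedure or sample, but it recovers essentially the same rate:

```python
import math

# Quarterly implicit price deflator from Table 8 (1982:I through 1988:III).
deflator = [
    105.9, 103.0, 98.5, 91.9,   # 1982
    84.0, 78.3, 73.6, 72.1,     # 1983
    68.5, 65.6, 62.9, 60.6,     # 1984
    56.2, 52.3, 49.5, 46.8,     # 1985
    45.5, 45.4, 45.2, 44.7,     # 1986
    42.9, 40.0, 37.7, 35.8,     # 1987
    34.7, 34.2, 33.7,           # 1988 (through Q3)
]

# Time in years since 1982:I, one observation per quarter.
t = [i / 4 for i in range(len(deflator))]
y = [math.log(v) for v in deflator]

# Ordinary least-squares slope and intercept for y = a + b * t.
n = len(t)
tbar = sum(t) / n
ybar = sum(y) / n
b = sum((ti - tbar) * (yi - ybar) for ti, yi in zip(t, y)) / \
    sum((ti - tbar) ** 2 for ti in t)
a = ybar - b * tbar

annual_decline = 1 - math.exp(b)   # fraction of the index lost per year
print(f"slope = {b:.3f} per year, annual decline = {annual_decline:.1%}")
# The slope comes out near -0.18 per year, i.e. an annual decline of
# roughly 16 percent, consistent with the 16.4 percent reported in the text.
```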