Information Technology and the Productivity Paradox
At first glance, it would seem impossible that anyone could argue that information technology (IT) has been ineffective in the U.S. economy. Over the past 25 years, microelectronics has revolutionized many services and products, the way goods are produced, and the life-styles of consumers. Advances in medicine, from computerized axial tomography (CAT) scanners to ordinary laboratory equipment, are totally dependent on microelectronics. The round-the-clock availability of automatic teller machines (ATMs) and the capability to send facsimiles of documents thousands of miles in seconds also attest to the impact of microelectronics. And more somberly, the intact but empty facades of government buildings in Baghdad are reminders of the power of microelectronic "smart bombs" to destroy their targets with surgical precision.
Despite these and numerous other examples of the power of IT, a growing body of scholarly research indicates that the information revolution has failed to deliver in one important respect. That is, for all its accomplishments over the past quarter century, IT has not improved the productivity of the U.S. economy or U.S. firms.
As discussed in Chapter 1, the term productivity can take on several meanings. In this chapter it refers to the ratio of output (e.g., goods produced or total sales) to inputs (labor, capital, raw materials) for a firm or for an entire economic sector. This ratio is sometimes called throughput productivity, and it is measured in physical or monetary terms. The expectation for microelectronics was that it would enable
factories and offices to produce more productively—that the ratio of output per unit of input would increase.
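To make the ratio concrete, the following minimal sketch (all figures invented for illustration) computes throughput productivity in monetary terms for a hypothetical firm before and after an IT investment:

```python
def throughput_productivity(output_value, labor, capital, materials):
    """Throughput productivity: the ratio of output to total inputs,
    both measured here in monetary terms."""
    return output_value / (labor + capital + materials)

# Hypothetical firm: $12M of goods produced against $10M of inputs.
before = throughput_productivity(12_000_000, 6_000_000, 3_000_000, 1_000_000)

# The hoped-for effect of IT: more output per unit of input,
# even after the extra capital spent on computers is counted.
after = throughput_productivity(13_500_000, 6_000_000, 3_500_000, 1_000_000)
print(before, after)
```

The paradox, in these terms, is that for the economy as a whole the second ratio has not visibly exceeded the first.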
Several scholars who have attempted to measure the benefits of computer technology in the U.S. economy in a systematic fashion were unable to find overall productivity improvements due to IT. Some used government data on the productivity of the economy as a whole. Others examined specific industrial sectors, such as services. Still others collected data on representative samples of firms within one industry and found little or no payoff, even in industries that have invested very heavily in IT, such as the banking and insurance industries. A few researchers did find a positive contribution of IT, but sometimes of such small magnitude that it underlines rather than contradicts the concerns of other researchers regarding productivity. It is the combination of such evidence (detailed below) that leads to the belief that there is a productivity paradox regarding IT.
In this chapter I review the emerging literature on the IT productivity paradox and discuss the major studies. I also identify a series of mechanisms that explain how the potential productivity payoffs of IT are attenuated or negated. Some of the mechanisms have been well documented; others are more speculative—hypotheses with partial evidence. Taken together they begin to chart the causes of the productivity paradox. But before undertaking those parts of the chapter, I frame the discussion by explaining why the productivity paradox is so profoundly puzzling to scholars and why it should also be taken very seriously by the public at large and, especially, the computing community.
The computer revolution would appear to have been extremely successful. Initial improvements in electronics unleashed a wave of innovation, and computers rapidly diffused across an enormous range of industries. Today, computers are indispensable parts of all manner of enterprises, from multinational corporations to mom-and-pop groceries. Further, there have been dramatic improvements in the productivity of the basic technology. Microprocessors continue to provide improvements in the processing power per dollar for central processing units on the order of 20 percent a year.
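Compounded over the quarter century in question, a 20 percent annual improvement in processing power per dollar is enormous, as a quick back-of-the-envelope calculation shows:

```python
# Compound effect of a 20 percent annual improvement in
# processing power per dollar, sustained over 25 years.
rate, years = 0.20, 25
cumulative = (1 + rate) ** years
print(round(cumulative))  # roughly a 95-fold improvement
```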
Almost everyone expected the next step to be a marked improvement in productivity in the broad range of industries that had adopted computers. The need for such a productivity breakthrough was acute: Since the late 1960s the productivity of U.S. factories, service industries, and offices had been virtually stagnant, while that of the nation's international economic competitors had been rising. Firms in the United
States were losing market share, in part because of their higher cost structure (National Academy of Engineering, 1988).
The promises made for IT were lavish and typically centered on productivity payoffs. Vendors of the technology, from office automation to computer-aided design to computer-aided software engineering, assured buyers that the technology would increase productivity by requiring fewer workers to perform a given amount of work or by allowing expensive skilled labor to be replaced by cheaper semiskilled labor.
American industry believed the promises. The levels of investment in IT have been staggering. In 1990 alone, U.S. businesses invested about $61 billion in hardware, about $18 billion in purchased software, and about $75 billion in data processing and computer services (U.S. Department of Commerce, 1991). (These amounts exclude investment in telecommunications per se, beyond computers.) Within a U.S. corporation today, IT often accounts for a quarter or more of the firm's capital stock, the total value of its equipment and plant (Roach, 1988b, 1991).
For two decades IT has consumed an ever-increasing proportion of the investment dollar in U.S. industry. Overall industrial investment, however, has been roughly constant over the same period, which implies that investment in other types of machinery and equipment, as well as investment in employee training and other "soft" investments, has been lessened or deferred in favor of IT. This pattern differs from that of the nation's major international competitors. While they too have put large sums into IT, their investment in computing (especially in white-collar automation) falls far behind that of the United States (Picot, 1989).
In one sense then, U.S. industrialists have taken a huge gamble on IT, in terms of the success of their individual firms and, most especially, the nation's competitive standing. It is in the context of international competitiveness that the apparent lack of productivity gains is so shocking. It begins to look as if the gamble is failing. Thus, those who believe in the productivity paradox do not argue that computers are a bad thing. Nor do they disregard the important improvements in goods, services, and the quality of life that have resulted from IT. Rather, they are profoundly disquieted by the fact that IT does not appear to have fulfilled its most important promise, that of increasing economic productivity and thereby improving the competitiveness of U.S. industry.
The challenge is to understand the basis of the productivity paradox, to unravel the reasons why IT investments as a whole have not paid off. Has the investment gone into the wrong applications? Are some applications productive while others are not? Are there positive
productivity contributions of IT that are being offset or frittered away by psychological, sociological, or organizational dynamics within firms? To what extent do design and technological factors contribute to the paradox? Only when the nation gains an understanding of the dynamics of IT and productivity inside economic organizations and answers these questions can it expect to reverse the productivity paradox and realize the productivity potential of IT.
The studies that suggest a paucity or lack of productivity payoff from IT are of three types, each of which involves a different level of aggregation, a different unit of analysis. The first type analyzes productivity levels and IT investments in an entire economic sector, such as services, for a period of years. The expectation is that increases in IT investment over time will be reflected in improvements in sectoral profitability or productivity over time (albeit with lags).
A second type compares productivity and IT investment across several industries. The expectation is that those industries with greater penetration of IT will show greater productivity increases over time. If no relationship between IT intensity and productivity change is found, there is a prima facie case that IT is ineffective in terms of increasing productivity.
A third type focuses on representative samples of firms within one industry and looks at whether those firms with higher levels of IT investment have higher productivity or profitability (net of other factors) than similar firms with less IT. By specifically controlling for differences in size, capitalization, and other plausible determinants of productivity, this kind of study most effectively isolates the contribution of IT investment to increases in productivity.
A fourth type exists, studies of single firms, but is not discussed here. Individual case studies can be very useful in identifying mechanisms underlying productivity, and they are used for that purpose in Chapters 9–11. But one cannot determine from a collection of individual case studies whether productivity is improving in the economy as a whole. For that, one needs representative samples of firms or sector-level data. (For a synopsis of case studies of IT and productivity in individual firms, see Crowston and Treacy, 1986.)
Each of the first three approaches above has strengths and weaknesses, but in combination they are most powerful because the analytic strengths of one approach tend to offset the weaknesses of the others.
For example, confronted with sectoral evidence that increased expenditures on IT over time have coincided with stagnant productivity, Bowen (1986) suggested that without the currently high level of IT investment, the productivity trend might have been even more dismal. This is a perfectly tenable rejoinder to sectoral studies, but it fails to explain why in interindustry studies, industries with higher levels of IT investment tend to have lower levels of productivity improvement than industries with far less IT investment, or why in several studies of firms within one industry, IT-intensive firms perform no better than low-IT firms. Thus, findings on IT and productivity that hold across all three levels of analysis should be more convincing than findings limited to one type of study design, and theoretical objections to findings from one level of analysis should be viewed with caution unless they also negate findings from other levels of analysis.
Roach (1983, 1984, 1986, 1988a–c, 1991; Gay and Roach, 1986) has conducted a series of studies of the relationship between IT investment and productivity within the service sector. Conditions in the early 1980s did not seem auspicious for a dramatic leap in productivity in this sector. The rate of growth of the nation's capital stock had slowed from the 1960s to the 1980s, which did not augur well for investment-driven productivity growth (Roach, 1983). Nevertheless, in the early and mid-1980s, Roach expected that as the information sector became more capital intensive, its productivity would surge (Roach, 1984). That did not happen, however. Investment in white-collar work did indeed catch up: By 1983 the amount of "high-tech capital" per information worker achieved parity with the amount of "basic industrial capital" per production worker in manufacturing (Roach, 1986:13). But despite this infusion of capital, white-collar productivity in the service sector grew at a miserly rate of 0.7 percent a year between 1982 and 1987.
Roach is aware of the many possible causes of the nation's productivity slowdown, but he has become increasingly critical of investments in computers and other IT. He has documented the very large investments in IT in service industries over the past two decades and the extent to which those industries have become highly IT dependent. For example, he reported that 38 percent of the entire capital stock of insurance carriers is invested in IT, 26 percent for banks, and 53 percent for the communications industry (Roach, 1988b). Yet productivity has been falling in the finance and insurance industries since 1973, and
the greatest drop has occurred since 1979. The communications industry has experienced modest productivity growth, but that growth has been slowing over time, despite continuing IT investment. Even with the infusion of 84 percent of the nation's multibillion-dollar IT investment, "the level of white-collar productivity in 1987 was actually no higher than it was in the mid 1960s" (Roach, 1988c:1).
Roach (1988b) has suggested that executives in charge of IT investments have been "rolling the dice" (i.e., spending large sums on projects whose productivity and profitability outcomes are uncertain while tolerating internal measurement systems that are incapable of telling them whether their investments are really paying off). He points out that the investment in IT has occurred in a period when total investment has been stagnating. In this zero-sum situation, precious investment capital has been committed to a low-payoff technology.
In contrast, the goods-producing sector in the United States has experienced a significant increase in productivity, despite its relatively low investment in IT. The implication is that IT investment in the service sector has been excessive: In Roach's (1988a:6) words, "We have over-MIP'd ourselves" (a reference to MIPS, the measure of a computer's capability to process millions of instructions per second). Such sentiments produced a flurry of comment in the business press (Business Week, 1988; Roach, 1988a), but that apparently did not affect IT investment. In 1988, IT absorbed 42 percent of total corporate outlays on capital equipment, and the proportion is still climbing.
A dramatically different sectoral approach to assessing the value produced by IT investments is to be found in Bresnahan's (1986) study of the financial services sector. Bresnahan used a welfare economics framework that has been applied to several other technological innovations (Mansfield, 1977). Within this framework economists conceptualize advances in one sector as providing spillovers in the form of reduced costs or extra value to downstream users of the product of that innovation. For example, advances in computer design and manufacturing techniques spill over from computer manufacturers into benefits for the immediate user of less expensive computers (the financial services sector) and the customers of that sector.
What is striking about Bresnahan's approach is that he did not measure changes in output or productivity in the downstream sector (here, financial services) in order to assess the value produced by computers in that sector. Instead he inferred "the value of the technology from the adopters' willingness to pay." More specifically, "the value spilled over [is] inferred from the demand curve of the downstream sector for the output of the advancing sector [computer manufacturing]" (Bresnahan, 1986:742). Thus, by analyzing the relationship between
the quality-adjusted price of computers in 1958 and 1982 and the demand for them (expenditures) by financial services in those two years, Bresnahan obtained a derived demand curve. The area under the curve is then conceptualized as a welfare index—the value of the spillover. Using this technique, Bresnahan concluded that between 1958 and 1982, the value of mainframe computers to the financial services industry and its customers was at least 1.5 to 2 times the expenditures on those computers. There is "a very large social gain to computerization" (p. 742).
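The mechanics of the welfare index can be sketched numerically. The demand curve and prices below are invented, not Bresnahan's; the point is only to show how the area under a derived demand curve, compared with actual expenditure, yields a spillover estimate:

```python
def demand(price):
    # Hypothetical constant-elasticity demand curve for computing
    # (all parameters invented for illustration).
    return 100.0 * price ** -1.5

def area_under_demand(p_low, p_high, steps=100_000):
    # Trapezoidal integration of the demand curve between two prices.
    h = (p_high - p_low) / steps
    total = 0.5 * (demand(p_low) + demand(p_high))
    for i in range(1, steps):
        total += demand(p_low + i * h)
    return total * h

p_1982, p_1958 = 1.0, 20.0          # quality-adjusted prices, invented
welfare = area_under_demand(p_1982, p_1958)
spending = demand(p_1982) * p_1982  # expenditure at the later, lower price
print(welfare / spending)           # inferred value exceeds expenditure
```

In this toy version the welfare index comes out at roughly 1.5 times expenditure, the same order of magnitude as Bresnahan's estimate, but that agreement is an artifact of the invented curve, not a replication.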
Bresnahan drew on models that are widely accepted by economists of innovation but highly problematic for other scholars. Treating productivity and related benefits as a direct function of the demand curve for computers enabled him to bypass the thorny problem of empirically determining the magnitude of productivity changes. Moreover, the possibility that a sector could make large (and increasing) investments in a technology without obtaining benefits is ruled out by the theoretical assumptions under which Bresnahan and his colleagues work.
Bresnahan's most important assumption is that the volume of computer purchases at a given price (the demand curve) is a function of the actual value produced by computers for the buyer (rather than a function of the buyer's hopes or expectations of produced value). To the extent that purchases of a new and complex technology are like a "jump in the dark," in which productivity or profitability benefits are hoped for but not known in advance, the welfare approach is suspect. Thus, it is prudent to treat Bresnahan's findings as estimates of what benefits would obtain from computers under stringent, but questionable, theoretical premises, rather than as measures of the actual historical payoff from computers.
A striking contrast to Bresnahan's research is to be found in Franke's (1989) analyses of computerization in the financial services sector (insurance and banking) based on government time series data on industry inputs and outputs from 1958 to 1983. Capital intensity has grown very steeply in this sector since the early 1960s, largely because of the introduction of computer technologies. Disaggregating trends over time in capital productivity versus labor productivity, Franke found that while labor productivity has risen modestly, the productivity of capital has dropped precipitously since the mid-1950s. Through regression analysis, he linked changes in capital productivity to specific technological innovations, for example, magnetic ink character recognition (MICR), second-generation mainframes, and ATMs. In general, these innovations were associated with drops in capital productivity: They lowered the return on investment (ROI), rather than improving it, to the point that capital productivity in 1983 was only 22 percent of its 1957 peak.
Franke's models provide some reasons for optimism about the future, however. Microcomputers and fourth-generation computers appear to be improving productivity somewhat, although ATMs are reducing it. Thus, Franke interprets the productivity paradox as an essentially transitional phenomenon, albeit one that has resulted in three decades of declining capital productivity in financial services. He expects productivity improvements in future decades.
Osterman (1986) examined productivity using government data on employment and capital stock in 40 two-digit Standard Industrial Classification (SIC) industrial groups and survey data on the computer stock of each industry (a two-digit industrial group aggregates a number of different products). His focus was the effects of computers on managerial and clerical employment between 1972 and 1978, net of changes in output and wages. He observed a positive and statistically significant effect of computers on clerical productivity: For each 10 percent increase in computer stock, clerical employment decreased by 1.8 percent between 1972 and 1978 (net of changes in output). He also found a similar, but smaller, effect for managerial productivity.
Osterman's findings indicate that computers do have measurable productivity effects, but one must be cautious in reading them as a direct refutation of Roach's findings. Osterman's analyses included manufacturing and service industries. In order to address Roach's findings directly, one would have to know whether the productivity effect was created primarily by manufacturing or whether computers also displaced labor in service industries. It is also hard to gauge the size of the productivity effect from Osterman's measures. He described the displacement of clerks as "substantial." But whether a 1.8 percent reduction in clerical labor per 10 percent increase in computer stock is substantial depends on how much investment in IT is necessary to produce that 1.8 percent shift. Unfortunately, the measures as reported do not permit a practical assessment of the size of the effect.
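The interpretive problem can be made concrete with invented baseline figures. Osterman's coefficient behaves like an elasticity, and head-count savings alone say nothing about the cost of achieving them:

```python
# Reading Osterman's estimate as an elasticity-style coefficient:
# a 10 percent rise in computer stock goes with a 1.8 percent fall
# in clerical employment. The baseline figures below are invented.
clerks = 1_000            # hypothetical clerical workforce
drop_per_10pct = 0.018    # 1.8 percent employment reduction
displaced = clerks * drop_per_10pct
annual_wage = 20_000      # invented clerical wage, dollars
savings = displaced * annual_wage
print(displaced, savings)  # roughly 18 clerks and $360,000 a year
```

Whether that saving is substantial depends entirely on what the 10 percent increment in computer stock itself cost, and that is the figure the reported measures do not supply.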
Berndt and Morrison (1991) used a combination of government data sets to examine the effects of IT investment, defined broadly (computers, communications equipment, photocopiers, and the like), on profitability and productivity for a sample of 20 two-digit SIC manufacturing industries from 1976 to 1986. In most of the industries, IT's share of investment increased dramatically during the period.
Berndt and Morrison carried out a variety of econometric analyses—within-industry, across-industry, and pooled models. Their major finding on the profitability of IT was that there was "no significant
relationship," although they found a "modest but significant" positive effect in one pooled analysis. In terms of labor productivity, they found a consistent pattern indicating that IT "has not been labor saving, but is instead correlated with increases in labor intensivity and decreases in average labor productivity" (p. 28). They found a similarly negative effect when studying the impact of IT on multifactor productivity: IT investment had degraded rather than enhanced productivity during the period.
Strassmann (1985:159–162) presented data collected by the Strategic Planning Institute in a pilot study of 40 large firms. Although published details of the study are very sketchy, he reported that there was no correlation between IT costs and his measure of productivity.
In a subsequent analysis, Strassmann (1990) elaborated on his earlier study. First, he examined data sets that linked financial performance (long-term shareholder return) to an index of computer intensity for two industries, food and banking. In neither industry did he find any relationship between amount of IT investment and financial performance. He then plotted computer intensity against financial performance for some 100 manufacturing and service-sector firms, using survey data published by the magazine Computerworld. In neither the service nor manufacturing firms was financial performance correlated with computer use. Survey data from Information Week produced similarly unfruitful results.
Strassmann did not interpret all these null findings as indicating that computers did not have an impact. Rather, he decided that better measures of firm performance and computer use were needed. He developed a methodology for calculating several value-added measures of performance, which he demonstrated were good predictors of more traditional firm-level performance measures but which were, he argued, superior. Using the PIMS (profit impact of market strategies) approach (Buzzell and Gale, 1987), he surveyed some 292 predominantly manufacturing businesses to obtain value-added measures of performance and detailed information on IT. With these custom-designed measures, he found the following: (1) There was no relationship between IT expenditures and his productivity measure: "Over-achievers deliver their results with a level of [IT] spending equivalent to below-average performers" (Strassmann, 1990:138). (2) In most firms, IT expenditures on management information systems (MIS) dwarfed IT expenditures on operations, on the order of 18 to 1. (Applications in operations include point-of-sales, order-entry, and decision support systems.)
(3) Superior firms, in terms of productivity, spent less than average-performance firms did on IT (p. 139). (4) Some superior performers tended to spend proportionally more of their IT investment on operations than on MIS.
In sum, even with a methodology and data collection tailored to the purpose, Strassmann found no correlation between IT expenditures and superior productivity. He found limited evidence that low performance was related to where firms deployed their IT: Stinting operations on IT and spending a lot on MIS appeared to undercut productivity. This idea of misallocation of IT investment recurs in research reviewed below.
Loveman (1988) examined the productivity effects of IT investments on 60 U.S. and European manufacturers from 1978 to 1984. The data refer to business units (predominantly large manufacturing divisions of Fortune 500-sized firms). The data set includes quite detailed information on IT and non-IT investments and stock, as well as information on output, market share, wages, and so on. He defined productivity as the increase in output from an incremental increase in IT, net of other changes (in wages, non-IT investment, organizational structure, and so on).
Loveman used a range of econometric models, but he found that "the data speak unequivocally: In this sample, there is no evidence of strong productivity gains from IT investments" (p. 1). In most of the models, the productivity gain from IT investment was zero. Despite efforts to find IT effects for subsamples (e.g., for high-IT investors) and careful assessment of model biases and their magnitudes, Loveman could not find a statistically significant or a substantively significant effect of IT investment on productivity for the manufacturers.
Weill (1988) studied 33 strategic business units in the valve-manufacturing industry. He examined the impact of IT investment from 1982 to 1987 on return on assets (ROA) and other performance variables in 1987. He found no significant relationship between total IT investment and any performance measure, despite testing for various lags or time periods. This parallels Loveman's results. Weill, however, took his analysis an additional step by dividing IT investment into three qualitatively different types: (1) strategic IT, intended to increase sales or market share (e.g., an inventory system allowing sales staff to give accurate delivery time estimates); (2) transactional IT, such as accounts payable and order entry; and (3) informational IT, including electronic mail (email), accounting, and other infrastructural purposes. His analyses then revealed that transactional IT investment was related to better performance, in terms of both improved ROA and lower nonproduction labor adjusted for sales. In contrast, strategic IT investment was not
associated generally with performance (and in the short term appeared to lower performance on two measures). Informational IT was not related in any way to any performance measure. Thus, Weill's findings suggest that the 22 percent of IT investment directed into transactional activity had some impact on performance, but the remaining 78 percent of IT investment did not. Unfortunately, he did not report the size of the transactional IT effect, only the fact that it was statistically significant (i.e., positive and nonzero).
Turner (1983) studied a representative sample of 58 mutual savings banks of diverse size. Although he documented different patterns of computerization among banks (often a function of size), he observed that "unexpectedly, no relationship is found between organizational performance and the relative proportion of resources allocated to data processing" (p. 1).
Cron and Sobol (1983) examined 138 medical supply warehousing firms and linked the extent of computer use (determined primarily by number of software uses) and several performance measures. Analysis of variance did not reveal a significant relationship between computer use and performance measures. In fact, extensively computerized firms exhibited a bimodal distribution in performance: They performed either very well or very badly. Cron and Sobol noted that the two groups (high versus low performance) differed on dimensions such as size and growth rate, but they did not attempt a multivariate analysis controlling for such variables. They concluded, despite the bimodal findings, that "extensive and appropriate use of computer capabilities is most likely to be associated with top quartile performance" (p. 178).
Bender (1986) looked at the financial impact of information processing on a sample of 40 firms in the insurance industry. In a cross-sectional analysis, he found that IT was related to performance, defined as a firm's ratio of expenses to premium income. However, the relationship was curvilinear: Those firms with very little IT expenditure and those with a lot were worse performers than those in between. Investment in applications software was not related to performance, but investment in hardware was positively related. Bender presented a series of bivariate relationships between a performance measure and one aspect of computerization. He did not assess the combined effects of the various IT aspects (e.g., through regression) on performance, nor did he control for size, market share, type of insurance, or other possible sources of spurious correlation.1
Harris and Katz (1988) analyzed the same insurance industry data set of 40 firms. They found that high-performance firms were spending considerably more on IT than less successful firms. Although suggestive that IT was helping performance, their analysis, by their own account, was not a causal analysis. They did not control for other likely predictors of performance, such as size.
In looking at these studies overall, what is striking is the fact that despite very large investments in IT, productivity payoffs are elusive. Several of the empirical studies reviewed did not find any productivity or other performance payoff from IT investments. Others provide evidence for some payoff, but either used research designs that did not control for important sources of spurious correlation or did not document the size of the productivity payoff. No study documents substantial IT effects on productivity. It is this lack of a clearly observable and substantial IT payoff, given the very large investments in IT, that raises the question of a productivity paradox.
EXPLANATIONS AND MECHANISMS
Methodologic and Data Problems
It is possible that the negative findings on productivity are artifacts, that is, they stem from inaccurate data or methodologic problems, rather than from a shortfall in IT effectiveness. For example, the analyses by Roach and Osterman reviewed above are based on government data series on output. Measurements of output and productivity, however, are fraught with difficulties, which are compounded in the service sector by problems in counting nontangible outputs (Bailey and Gordon, 1988; Kendrick, 1988; Mark, 1988).
Mishel (1988) analyzed one key government statistical series on U.S. output and productivity. He argues that an erroneous downward adjustment made to the series in 1973 resulted in a widely held misperception of substantial growth in U.S. manufacturing output since 1973: Forty percent of the reported growth in manufacturing output between 1973 and 1985, according to Mishel, was due to this 1973 adjustment. Equally disturbing is his comparison of two major government series on productivity growth rates at the level of two-digit SIC industries, which shows extraordinary divergences between the two series: "The two series are only within 25 percent of one another (plus or minus) in seven of twenty-one manufacturing industries" (Mishel, 1988:103).
Denison (1989) was no less critical of these government data. He reported that a major distortion results from the accounting method used by government statisticians to deal with the remarkable improvements in speed and power of computers in recent decades. Statisticians have treated these improvements as indicating spectacular increases in the productivity of production in the computer-manufacturing sector. This, along with an overweighting of computers in total output, means that the productivity increases reported in recent years for U.S. manufacturing as a whole are in large part a statistical artifact of productivity increases attributed to computer manufacturers. The government series therefore greatly overstate increases in output and productivity.
The scholars who have questioned the accuracy of the government data are well versed in the details of government accounting systems. A systematic assessment of the implications of their criticisms for the findings of Roach and Osterman would require rerunning analyses with alternative government series and comparing the results, a very time-consuming task. Until such checks have been performed, the studies of Denison and Mishel and work by the Office of Technology Assessment (1988) leave one unsure of the accuracy of all industry-level and sectoral analyses of recent U.S. productivity trends. But this does not invalidate the basic idea of a productivity paradox; if anything, it strengthens it. For if government data series have overstated productivity gains, the payoff of computers may have been even lower than indicated by those statistics.
Firm-level studies such as those of Strassmann, Loveman, and Weill cited above are not dependent on government data, but they are vulnerable to other methodologic objections. Findings that parameter estimates are not significantly different from zero must be assessed in light of the statistical power of the sample. Small samples (e.g., 40 cases) can produce zero or statistically nonsignificant estimates for IT, not because IT has no effect, but because the sample is too small to detect it. Unfortunately, most companies guard their investment and performance data from survey researchers, and thus few firm-level data sets are available.
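The statistical power concern can be illustrated with a simple simulation. All of the numbers below (effect size, noise level, sample sizes) are hypothetical, chosen only for illustration: a modest but real IT effect is routinely missed in a sample of 40 firms yet reliably detected in a sample of 400.

```python
import random
import statistics

def simulate_power(n_firms, true_effect, noise_sd, trials=2000, seed=1):
    """Fraction of simulated samples in which a true effect of the given
    size is detected at roughly the 5% level (|t| > 2)."""
    rng = random.Random(seed)
    detected = 0
    for _ in range(trials):
        # Each firm's measured productivity gain = true effect + noise.
        gains = [true_effect + rng.gauss(0, noise_sd) for _ in range(n_firms)]
        mean = statistics.fmean(gains)
        std_err = statistics.stdev(gains) / n_firms ** 0.5
        if abs(mean / std_err) > 2:
            detected += 1
    return detected / trials

# A real but modest effect (0.5, against noise of 2.0) is usually missed
# with 40 firms, but almost always found with 400.
small_sample_power = simulate_power(40, true_effect=0.5, noise_sd=2.0)
large_sample_power = simulate_power(400, true_effect=0.5, noise_sd=2.0)
```

With these (hypothetical) parameters the small sample detects the effect only about a third of the time, so a "no significant effect" finding from 40 firms is weak evidence that IT has no payoff.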
Another methodologic point, raised by Barua et al. (1989), Cron and Sobol (1983), and Strassmann (1985), is that IT has quite different effects on productivity in high-performance firms compared with low-performance ones. They suggest that the introduction of IT into poorly run firms does not increase productivity, whereas the introduction of IT into well-run firms pays off. The implication is that there is a bimodal distribution of productivity outcomes: Firms cluster at two extremes, either doing well or doing poorly. The fact that current research practice assesses the impact of IT on representative samples of firms, including good and poor performers, means that any positive IT impact in good firms is balanced by IT's negative effect in poorly run firms. The overall (and misleading) impression is, therefore, that IT has no effect.
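This masking effect is easy to demonstrate with simulated data; the firm counts and effect sizes below are hypothetical. When gains in well-run firms and losses in poorly run firms are pooled, the average effect of IT comes out close to zero.

```python
import random
import statistics

rng = random.Random(7)

# Hypothetical bimodal payoffs: well-run firms gain about 3 percentage
# points of productivity from IT; poorly run firms lose about 3.
well_run = [rng.gauss(3.0, 1.0) for _ in range(50)]
poorly_run = [rng.gauss(-3.0, 1.0) for _ in range(50)]

# A representative sample contains both kinds of firms.
pooled_effect = statistics.fmean(well_run + poorly_run)
# pooled_effect is near zero, suggesting (misleadingly) that IT has no
# effect, even though every firm experienced a substantial one.
```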
Social scientists are unlikely to abandon the use of statistically representative samples of firms in favor of using only high-performance companies because the loss in terms of generalizability would be too great. The theoretical point is to assess the payoff of IT to the economy as a whole, an economy that includes both well- and poorly managed firms. However, scholars can test for this effect by searching for subsets of firms within their representative samples whose experience with IT is markedly better than the norm. This approach was taken by Loveman (discussed above), who was, however, unable to find any bimodal performance effect. But the issue is amenable to additional empirical inquiry.
These methodologic and data difficulties provide some grounds for skepticism about the existence of a productivity shortfall from IT investments, although they do not appear to warrant dismissing the paradox as a statistical mirage. The uncertainty will only be resolved as more studies accumulate. For the present, it is fruitful to give tentative credence to the productivity paradox, based on the above studies, and to ask what mechanisms might explain the lack of payoff from IT.
The Shift to Slower Channels of Communication
Speaking, gesturing, writing, drawing, and demonstrating by doing are all ways of communicating information; they use different sensory channels and distinct kinds of cognitive information processing. Each of the channels differs in regard to the speed with which information is transmitted, the accuracy of transmission, and the difficulty of interpretation.
As a first approximation, productivity, when applied to communication, can be measured as the speed of production of messages, for example, words per minute. When engineering estimates are made of the productivity gains from, for example, word processors, the typical contrast is within one channel, in this case the written word. If word processors are faster (in words produced per minute) than typewriters, one assumes (as a first approximation) that they will improve personal productivity (Card et al., 1982).
The introduction of a new IT, however, not only changes activity within the same basic channel (e.g., from writing or typing to word processing), but can also shift communication from one channel to another. For example, a manager decides to compose a memorandum using email rather than dictate it. Different channels have quite different speeds of transmission—speech is potentially five times faster than writing or typing (Gould and Boies, 1983:274). Thus, a new technology may simultaneously improve productivity by speeding communication within a channel and degrade productivity by shifting communication from faster to slower channels. (This is the implication that can be drawn from a series of experimental studies by Gould (1980, 1981, 1982) and Gould and Alfaro (1984), although it is not so stated by the authors.)
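The trade-off can be made concrete with back-of-the-envelope numbers. Apart from the five-to-one speech-to-typing ratio cited above, every figure here is an assumption for illustration: typing at 20 words per minute, a hypothetical 30 percent within-channel gain from word processing, and a shift in the share of words communicated by speech from one half to one fifth.

```python
SPEECH_WPM = 100      # speaking: five times typing speed (Gould and Boies)
TYPEWRITER_WPM = 20   # typing on a typewriter (assumed)
WORDPROC_WPM = 26     # word processor: hypothetical 30% within-channel gain

def overall_wpm(share_spoken, typed_wpm):
    """Overall words per minute (total words / total time) when a given
    share of words is spoken and the remainder is typed."""
    time_per_word = share_spoken / SPEECH_WPM + (1 - share_spoken) / typed_wpm
    return 1.0 / time_per_word

before = overall_wpm(0.5, TYPEWRITER_WPM)  # half spoken, half typed: ~33 wpm
after = overall_wpm(0.2, WORDPROC_WPM)     # email shifts words out of speech: ~30.5 wpm
# The faster typed channel is more than offset by the shift away from speech.
```

Under these assumed numbers, overall communication speed falls even though the typed channel itself became 30 percent faster.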
The Formalization of Human Communication
Another level of complexity must be considered, for the actual comparison here is not between saying certain words into a microphone and typing the identical words using email. The same semantic content will be phrased differently in different channels—a face-to-face communication may be less wordy than an email message.
To explain this phenomenon, sociolinguists use the concept of indexicality, which refers to a property of language having to do with the degree of knowledge that one expects of one's partner in a communication (Garfinkel, 1967). In a highly indexical conversation, two conversationalists assume a lot of shared background knowledge of each other, and they can speak in a terse way because of this shared knowledge. Less has to be said. In contrast, less-indexical conversation is more elaborate: Everything is explained, because less shared background knowledge is assumed to exist.
The degree of indexicality can differ markedly across communication channels or modalities. Face-to-face communication is often, but not always, highly indexical. Consequently, a shift away from speaking to another channel can change the speed of communication, not only because of the physical limitations of the media involved (speed of tongue in speech versus fingers in typing), but also because of the different degree of indexicality used for each channel. For example, Gould and Boies (1978, 1983:291) compared speaking a message into a voice-mail system, where one expects the receiver to listen to one's voice, with dictating a message, which one expects will be typed and then read by the receiver. In both cases the purpose of the message is identical, and both use the same channel (speech). But the subjects who spoke and expected their message to be heard communicated considerably faster (more indexically) than those who dictated a message to be typed and read.
Until very recently, IT investment focused on the written (typed) medium. Even as IT was (arguably) improving speed and productivity within that medium, it may have been slowing the overall speed of communication by drawing messages that might previously have been conveyed face to face, or telephoned, into the less indexical and therefore slower channel of writing. This is one potential explanation, at the individual level, of why IT has not improved productivity.
Systematic quantitative data are lacking on how much IT has shifted communication between channels, but there is no dearth of ethnographic examples. In one office observed, employees sent email messages to colleagues sitting a matter of yards away rather than speaking to them (Attewell, 1992b). Office etiquette had evolved to the point that it was considered intrusive to interrupt a colleague with a nonurgent spoken message. In a more elaborate example, Markus (1984) described managers sending messages by email and later conversing by telephone while looking at the same email documents on their terminal screens.
The process of shifting the communication mix toward a slower and wordier (i.e., less indexical) written/typed medium is referred to here as the formalization of communication because of IT. It is occasionally a coercive process: If everyone else uses email, a person feels obligated to follow suit. More typically, formalization occurs because people value the added clarity (lessened ambiguity) of written communication (as in the Markus example), or because senders place a value on not interrupting a colleague and thus use an asynchronous medium rather than speech (the first example). In either case, formalization represents a trade-off between maximizing speed of communication and some other value.
An analogous kind of formalization of communication occurs when IT is applied to shop-floor manufacturing. Large numbers of communications that were once conveyed informally by voice or signals are now being drawn into complex computer systems used for job scheduling, parts ordering, and so on. One striking example of this is provided by comparing the Japanese use of just-in-time manufacturing with an IT-intensive American counterpart (Warner, 1987). The Japanese typically use noncomputerized signaling (i.e., colored balls) to indicate that more materials are needed or that a job is complete. This requires little recordkeeping or elaboration of the messages. An IT-intensive counterpart found in many U.S. firms is manufacturing resource planning (MRP) software, which "decides" when and where parts are to be produced and moved based on a myriad of data inputs, from keyboarded reports of inventory to scans of the bar codes on parts and subassemblies.
The MRP approach is more powerful than signaling with balls (although several commentators have argued that it is overly complex and
error prone; see Anderson et al., 1982; Warner, 1987), but it is a more formalized and demanding method of communication than its noncomputerized alternative. Aggregated across thousands of organizational communications, this formalization of communication, facilitated or driven by IT, may cut into potential productivity improvements and counterbalance the positive contributions of IT. However, there is as yet no quantitative evidence with which to assess the magnitude of this effect.
The Quality versus Quantity Trade-off
A trade-off between the quantity and the quality of output also affects the productivity gains realized from IT. For many white-collar jobs, the introduction of IT seems to alter preexisting balances between the quality and quantity of task performance by tempting individuals to improve quality. Often the change in quality is primarily a matter of improvement in the aesthetic aspects of output rather than in its substance. But whether "real" or superficial, improvements in quality are achieved either by a slowing of work speed (a decrease in output per hour) or, more commonly, by using any time freed by IT to enhance the quality of documents.
In workplace ethnography, one observes employees devoting time and concern to formatting attractively and illustrating the most mundane of communications. Much time is spent reediting text and using spelling checkers to remove every last typographical and spelling error, even if those errors might have a minimal impact on comprehension. And a degree of attention is given to type fonts and print quality that would have been unheard of a few years ago. Among programmers, one sees untidy but workable code being reworked to obtain a cleaner, more elegant, or otherwise more satisfying product. And among managers, one observes reworking of spreadsheet models, presentational graphics, and the like.
The shift toward quality is an expression of pride in one's work and as such is a positive gain for individual employees. It also reflects the fact that the appearance of documents is important in many bureaucratic settings because it affects the authority of the message. Whether this shift improves efficiency is doubtful, as the following studies demonstrate.
In a controlled study, Card et al. (1982, 1984) found that writers composing on a word processor made nearly five times as many modifications and corrections as those writing by hand. Some of the differences between word processing and handwriting were attributable to correcting errors and some to changing margins, type fonts, and so on. But the largest differences stemmed from refining the text. Independent composition experts evaluated the latter refinements and judged that fewer than half improved intelligibility. Overall, the quality of documents created by word processor was no better than that of equivalent documents produced by hand.
Pentland (1989) studied more than a thousand Internal Revenue Service (IRS) auditors who used laptop computers. He obtained measures of productivity (e.g., average time per case) and measures of quality of work, and he was able to compare subjective measures (agents' assessments of their work) with objective measures obtained from case files.2 He also regressed outcome measures on measures of the use of various software applications and on demographic and experience variables.
Looking first at the self-report data, Pentland found that productivity was unrelated to the use of various software applications but that subjective sense of quality was significantly related to the use of almost all applications. In other words, from their own reports, agents' use of computers was not enabling them to work faster, but it did enable them to do better-quality work.
Pentland found striking discrepancies between the self-report findings and analyses of objective measures of the same agents' work. None of the computerized features was associated with increased productivity measured objectively, and several were associated with lower productivity. The implication is that agents' efforts at improving quality through computers undermined their productivity.
Nor was there a "real" effect of computer use on objective quality of work. Pentland found a widespread belief among the IRS staff that use of word processing was more authoritative and would lead taxpayers to accept an unfavorable audit result. But this belief proved unfounded when tested with objective data. Agents used more word processing in big cases and in contested cases, in order to bolster their sense of professionalism and credibility, but it had no effect on the outcome.
The studies of document preparation and of the IRS indicate the ways in which computing becomes important in user impressions of quality, credibility, and self-image. Users sacrifice quantity for quality. The research also suggests that users' impressions of enhanced quality may not be borne out in terms of objectively determined measures of product quality. The quality versus quantity trade-off is thus a mechanism whereby potential gains from IT become lost.
Operator Skill and Complexity
A popular explanation for a lack of productivity payoff is that employees and organizations have not yet learned the requisite skills for using IT software and hardware efficiently. The implication is that once a few more years have passed and the computer revolution matures, the IT-using work force will have improved its skills and raised productivity.
Although appealing, this explanation neglects some important aspects of the information revolution that today turn skill development and retention into a chronic (rather than transient) problem for organizations and individuals (Attewell, 1992b). First, the very dynamism of the information revolution, the creation of a stream of new or improved products, creates a serious problem of skill obsolescence. The working knowledge that employees have painstakingly accumulated can be rendered useless if the company changes hardware or software. It is not unusual to find organizations that, in the prior 5 to 10 years, have had clerical support workers first doing text editing on, for example, Wang machines, then shifted to personal computers (PCs) with Wordstar software, then restandardized again with WordPerfect. Each change of software rendered a substantial body of prior working knowledge useless—not just a knowledge of keyboard commands, but also of strategies for getting various kinds of documents produced and for shifting data or text from one piece of software to another. Even within one brand of software, operators have to deal with software updates, the inconvenience of going from documents typed using one version of the software to those typed in another, and so on.
Skill obsolescence is not a problem solely of word processing: Software applications from accounts receivable to inventory control to financial and statistical modeling have been undergoing periodic replacement. In a study of 187 firms in the New York area, the average age of current applications was only three to four years (Attewell, 1992b). There seems little end in sight: Today's workers are having to assimilate local area network (LAN) and Windows versions of their favorite software and to master new telephone systems, email, and so on. Thus, new learning demands are repeatedly thrust onto employees whose major responsibility is to do work, not to learn about IT.
The environment of constant IT change is fueled more by the competitive dynamics of software vendors, and the behavior of in-house office IT buyers, than it is by hard-headed productivity considerations. Software manufacturers want to sell new software to established customers, and a product upgrade is an easy way to achieve that. They also dread having product reviewers rate their product as "behind the times."
Within IT-using firms, Salzman (1989) documented that the people who make decisions about adopting software are rarely those who actually use the software. Because purchasing managers are often unaware of how hard or easy the software is to use, they tend to focus on features, the numbers of things a piece of software can do. This leads to a situation in which competing software houses look for more and more features with which to dazzle potential buyers. As a result, software programs become ever larger and more complex.
From the perspective of the IT operator, skill development can take on the nature of the myth of Sisyphus: No sooner has one pushed the boulder (of learning) to the top of the mountain, than it rolls back to the bottom—all at the direction of senior management who insist on a software change. Operators who attempt to avoid skill obsolescence by sticking with software they are already skilled in using tend to be stigmatized or overruled by managers, who invoke the need for companywide standardization as an antidote to those who would cling to old software and skills. Or managers claim that cherished productive software must be abandoned because hardware manufacturers will no longer support it on their new generation of machines.
Software developers are not unaware of the burden that new and ever more complex programs place on end users. They have made great efforts to improve interfaces for greater ease of use and to enhance documentation for trouble-shooting (e.g., help screens and pop-up advice). They have also automated various human tasks, such as spelling correction. But these attempts to lighten the learning and work burdens for users often displace rather than eradicate the productivity problem. They have resulted in much larger, more complex software programs, which require faster computers, more disk space, faster access times, and so on. They also require more sophisticated setup and maintenance work. Thus, there is the irony that a business letter that could once have been written rapidly and effectively on a personal computer with 64 kilobytes (K) of random access memory (RAM) and one or two floppy disk drives is now written on a 386 machine with several megabytes of RAM and a plethora of related memory-management, disk-caching, LAN, and other software.
It seems plausible, then, that much of the potential productivity gains from IT have been absorbed by the process of change itself and its impact on skill and performance.3 Users find themselves with obsolete skills and new programs or procedures to learn. Technical support personnel face ever-higher degrees of software and hardware complexity, which create new layers of productivity-wasting problems, from "interrupt conflicts" to "memory crowding." Strassmann's (1985) injunction that computerization pays off only if accompanied by a drastic simplification of work processes and procedures is violated by the incessant movement toward greater complexity.
There is no reason to assume that the skill-obsolescence and learning burdens of IT are only start-up or transient phenomena. They have already lasted two decades, and there are no indications that the speed of change of software or hardware is abating. All one sees in computerized workplaces is more and more change. The cost of that change must be balanced against the promise of productivity gains. But that is unlikely to occur when those who prescribe the changes are not those whose work is primarily affected by them.
Computers Generate More Work
Even if IT makes employees individually more productive, that does not necessarily translate into improved productivity for groups of individuals or the organization as a whole because information technologies are embedded in a web of political and social processes within firms (Bikson et al., 1987; King and Kraemer, 1985; Kling and Iacono, 1984, 1988; Markus, 1984). Those interpersonal, group, and organizational dynamics come into play and can absorb or redirect individual efforts and alter the goals toward which new technologies are directed.
One possibility is that employees are using IT tools to increase their output, but that their extra output is largely unproductive because it does not result in more goods and services being sold by the firm. An example would be applying the bulk of IT investment to extra paperwork and administration without realizing any ultimate payoff in terms of greater or more efficient production. Clearly, this is not what was envisioned by IT designers or expected by scholars of office automation and transaction processing. On the contrary, Leontief and Duchin (1986) and other experts believed that IT applied to white-collar work would greatly increase productivity and shrink administrative overhead. By entering engineering estimates of productivity improvements from IT into input-output analyses of the economy as a whole, they predicted 11 million fewer jobs by 1990 and 20 million fewer by 2000 as a result of automation.
The predicted displacements of clerical and administrative workers have not come to pass, however (see below). And one reason that administrative staffs have not greatly shrunk is that IT appears to be associated with a rapid increase in paperwork and its electronic equivalent. The study of New York area firms mentioned above (Attewell, 1992b) assessed the changes in employment and work load resulting from the introduction of specific IT applications. Some examples of an IT application, defined as a computerized work task or combination of tasks associated with particular employees, are (1) a computerized system for processing accounts payable and receivable; (2) a system for entering orders, querying inventory, and generating shipping slips; and (3) a system for analyzing loan risks.
The employment changes associated with the IT applications were far less dramatic than would have been anticipated by Leontief and others. Only 19 percent of the applications studied led to shrinkages in employment on those tasks, 20 percent led to increases, and 61 percent showed no change. Some of the job shrinkages were quite dramatic. However, the overall effect is what is analytically important, that is, the sum of losses and gains across all applications in all firms. In total, the job losses were equivalent to 1.7 percent of the total employment of the sample firms. Job gains were equivalent to a 3 percent increase. The overall effect was an expansion in employment of 1.3 percent.
The reason why the overall employment changes were small is that in the very applications in which productivity improvements were most marked, there was an equally striking increase in work load after the application was implemented. Thus, in a sample of 489 applications for which there were complete employment and productivity data, managers reported that mean output per worker rose by 78 percent compared with the immediately prior technology—a substantial productivity effect. In those same applications, however, the volume of work also jumped by 76 percent, effectively absorbing almost all of the potential productivity gain.4 (Kraut et al., 1989, found a similar increase in the volume of work in their study of computerization.)
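The near-perfect offset in these figures can be verified directly: the staffing a task requires scales with the volume of work divided by output per worker. The arithmetic below simply restates the reported percentages.

```python
# Figures reported for the 489 applications (Attewell, 1992b).
output_per_worker = 1.78  # mean output per worker rose 78%
work_volume = 1.76        # volume of work rose 76%

# Employment needed on a task is proportional to volume / productivity.
implied_staffing = work_volume / output_per_worker
# implied_staffing is about 0.99: a 78% productivity gain paired with a
# 76% jump in work load leaves required staffing essentially unchanged,
# which is why little net job displacement was observed.
```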
There are several distinct explanations for the marked expansion of paperwork or information output that follows computerization. Economists note that as the unit cost of a good falls, the demand for that good increases. For example, as word processors make editing more convenient, the number of drafts a document goes through increases. Similarly, as computer-aided design makes certain aspects of drafting and design work easier, the number of drawings produced before a design is settled upon also increases (Salzman, 1989).
An expansion in the information output of computers requires an increase in the amount of processing being done and hence an increased need for computers and for processing power. Thus, while the unit cost of information processing is falling, the resulting demand for processing may grow even faster, such that the total volume and cost of processing in the organization can reach new highs (Bailey and Chakrabarti, 1988:97). (In economic terms, the price elasticity of demand for IT is greater than one.) This effect is exacerbated by the fact that computing is heavily subsidized in most firms. End users rarely pay the full cost of mainframe time, software support, LAN maintenance, and so on. This spurs the demand for IT.
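A hypothetical numerical example shows how elastic demand can raise total spending even as unit cost falls; the figures are illustrative only.

```python
# Unit cost of information processing falls by half...
old_unit_cost, new_unit_cost = 1.00, 0.50

# ...but demand responds elastically: the volume of processing triples
# (a far larger percentage change than the price drop, i.e., elasticity > 1).
old_volume, new_volume = 1_000, 3_000

old_total_cost = old_unit_cost * old_volume  # 1000.0
new_total_cost = new_unit_cost * new_volume  # 1500.0
# Total spending on processing RISES even though each unit is cheaper.
```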
Although the economist's language of cost is appealing, and the falling unit cost of processing certainly explains much of the IT expansion, focusing on cost can obscure another important aspect of the phenomenon. Within an office, what most employees experience is not cost (to the organization) but effort (for the individual). It is the fact that costs are relatively invisible, while personal effort is quite tangible, that gives computer technology some of its counterproductive sting. It takes little effort to make several extra copies, but it does cost. It takes minimal effort on the part of the originator to send copies of an email message to several colleagues, but that places a substantial burden on the recipients to read that message (Bowen, 1986). It may take less time for an executive to compose and edit a memorandum on his or her PC than to assign the work to a secretary. But in cost terms, given their relative salaries, that may be a less efficient approach.
In sum, IT has been designed to lower the effort burden for an individual user, and it often succeeds in doing so. In each case the benefit is tangible, but the cost becomes invisible because IT links people's work in subtle rather than obvious ways (through data bases and email instead of face-to-face contact). This can make it harder to tell whether actions that help one person's job performance rebound unfavorably on someone else's productivity down the line. The cost also becomes invisible because IT is increasingly shared and IT expenses are removed from the immediate view of the user. The traditional secretary could have a fairly immediate sense of consumption of typing paper, the cost of repairing a typewriter, and so on. The costs and other consequences of using a departmental laser printer for drafts of documents, or of leaving large amounts of old messages or data on fast-access disk storage devices, are far less obvious.
Economists would view these phenomena as examples of "principal/agent problems" within organizations, that is, the gap between what is rational for the individual employee and what is rational for the firm. Agency problems are chronic features of organizations, but IT can exacerbate them. Information technology makes the production process ever more capital intensive. It widens the gap between the interests of employees, who use IT to improve the productivity of their labor (while largely ignoring capital and other costs), and the interests of the organization, which tries to optimize its total factor productivity—capital as well as labor.
Burgeoning Administrative Overhead
Much of the productivity payoff from automating lower-level jobs and relatively routine tasks has subsequently been expended in hiring new, higher-paid employees. The most obvious reason for the extra employment is that new technical skills are required to support computer systems. Employment in computer specialties (systems analysis, programming, and so on) had grown to over a million persons by 1989. In addition to those formally assigned to computer-related jobs, researchers have noted the existence and importance of informal computer experts (computer "mavens," "gurus," "power users"), many of whom fill staff positions but whose work as computer experts may equal their formal staff responsibilities (Bikson et al., 1990). The amount of employment represented by informal computer experts "hidden" in operating departments is not known.
The issue of IT and administrative overhead goes well beyond the expansion in the number of technical experts, however. Government data series document that the administrative component of private sector firms has been growing for several decades. Figure 2-1 shows that administrative overhead, far from being curtailed by the introduction of office automation and subsequent information technologies, has increased steadily across a broad range of industries.
Although there is a widespread perception that the growth in administration implies employment of more clerks and secretaries, analyses of government statistics indicate otherwise. Based on data from Klein (1984) and the Bureau of Labor Statistics (1989), for example, the number of managers employed in the United States increased from 7.3 million in 1972 to 14.2 million in 1988. Managerial employment growth, not clerical growth, is driving current administrative expansion (see Figure 2-2).
It seems likely that some significant part of the recent growth in managerial employment within firms reflects the growth in information systems and the complexity of managing them. Certainly, some case-study data suggest a direct link. Figure 2-3 shows data on employment shifts for a leading insurance firm that introduced email and extensive office automation in an attempt to control administrative overhead. Many clerical jobs were lost, but numerous new managerial jobs were created.
Bailey and Chakrabarti (1988:86–101) have offered a rather different argument regarding increased employment and the productivity paradox. They hypothesize that the efficiency gains of IT may have been spent on increased employment in marketing, sales, and service staff necessitated by the intensification of domestic and international competition. They developed a microeconomic model to simulate the loss of productivity gains through such employment. However, empirical data on the growth of sales and related occupations in various (nonretail) industries suggest that their explanation is wide of the mark. The growth in sales employment has been much smaller in absolute numbers than that of managers. It is not a major component in the growth in administrative overhead detailed above. Figure 2-4 presents data for three sectors (manufacturing; finance, insurance, and real estate; and services) that illustrate these trends.5
In Figure 2-4, the logic for studying composition within industries, rather than across the economy, is that it avoids effects due to the relative expansion of one economic sector versus another. The period 1982–1988 was used because a change in occupational classifications made data collected before 1982 not strictly comparable with data collected after 1982; 1988 was the most recent year for which data were available.
Information and the Managerial Taste for Control
To understand why the infusion of IT appears to have resulted in the expansion of managerial ranks requires consideration of the role of managerial information systems and the dynamics of control and power in the modern enterprise. Although many employees believe their bosses are immensely powerful, the experience of top management is often the reverse: Executives often find it very difficult to change the organization's course, and their policy initiatives can become bogged down, ignored, or even reversed lower down the organizational pyramid. This leads, according to organizational sociologists, to an incessant quest for tighter control by top management. Executives alternate among instituting new rule systems, productivity measurement, and direct surveillance by supervisors in unceasing attempts to gain control over their subordinates and thus over firm-level performance (Blau, 1955; Gouldner, 1954; Merton, 1940). This quest is rarely successful: Attempts to control backfire, are blunted, or result in dysfunctional adaptations by employees. Nevertheless, this does not stop each generation of managers from pursuing greater control over subordinates (Beniger, 1986).
Several recent developments have tied this taste for control to an
enthusiasm for quantitative information. Intellectual advances in operations research, microeconomics, and managerial science, combined with the dissemination of these disciplines in an occupation increasingly populated by masters of business administration (MBAs), have convinced many managers that rigorous decision making is possible if only one can obtain numerical data on aspects of a firm's performance and apply quantitative analytic methods. This has encouraged a culture that seeks to "manage by numbers." Executives who emphasize management by numbers (or by "facts"), who demonstrate intellectual prowess in memorizing or penetrating dense financial and performance data presented by subordinates, are lionized in business publications (e.g., Pascale and Athos, 1982:92–97).
Thus, after being initially applied to routine transaction processing in the 1960s, IT in the 1970s and 1980s was harnessed to the task of providing management with quantitative performance data. Management information systems, decision support systems, and executive support systems reflect a massive investment in elaborate reporting and control systems for management. The direct cost of such systems—hardware, software, and systems staff—is very large. Weill (1988) estimated that "informational IT" constituted 56 percent of total investment in IT in the manufacturing firms he studied. Strassmann (1990:120) estimated that 64 percent of total IT costs was spent for managerial as opposed to operational purposes. To this must be added the indirect costs: the managerial hours spent studying MIS data, the consequences of decisions made using those data, and the new staff and managerial positions created for persons who enter or analyze such data.
There is a widespread feeling among managers that MIS data indeed help them manage their jurisdictions better. Many report being better informed and feeling more in control (Attewell, 1992b). But there is no proof that these feelings correspond to actual improvements in managerial decision making, and there is even less evidence that the marginal improvement in managerial decision making made possible by computerized information systems justifies the very large cost of those systems.
A series of studies of decision support tools, especially the "what-if" scenarios on spreadsheets, illustrates these effects (Davis and Kottemann, 1992; Kottemann and Remus, 1987, 1991; Kottemann et al., in press). In experimental settings, managers and MBA students were asked to make a production-scheduling decision. Some used a Lotus spreadsheet's what-if capacity, which enabled them to simulate alternative decisions; others worked unaided. The researchers found that decision makers using what-if approaches made decisions no better than those of the unaided subjects. This mirrors a larger
research literature indicating that what-if modeling is sometimes worse than (Kottemann and Remus, 1987, 1991), sometimes no different from (Fripp, 1985; Goslar et al., 1986), and sometimes better than (Benbasat and Dexter, 1982; Sharda et al., 1988) unaided decision making. However, their most striking finding, which was replicated in several studies, was the degree to which managers and MBA students believed that they made better decisions using what-if spreadsheet models, despite the fact that their performance was no better (and in some experiments worse) when they used such methods. The researchers called this overestimation of the technique's effectiveness an "illusion of control" and "cognitive conceit." The effect was widespread and apparently resistant to disconfirmation from experience: Subjects continued to overvalue the what-if technique even when told of its practical limitations.
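The technique these experiments tested can be sketched in a few lines. The following Python fragment is a hypothetical illustration of what-if analysis for a production-scheduling decision, not a reconstruction of the experimental materials; every parameter name and figure is invented. A candidate decision is varied and the modeled outcome recomputed, just as a spreadsheet recalculates when an input cell changes.

```python
# A minimal illustration of "what-if" analysis for a production-scheduling
# decision: vary one input (units produced) and recompute the outcome.
# All parameters are hypothetical, chosen only to make the mechanics concrete.

HOLDING_COST = 2.0    # cost per unsold unit
STOCKOUT_COST = 5.0   # cost per unit of unmet demand
DEMAND_SCENARIOS = [80, 100, 120]  # equally likely demand levels

def expected_cost(quantity: int) -> float:
    """Average mismatch cost over the demand scenarios."""
    total = 0.0
    for demand in DEMAND_SCENARIOS:
        if quantity >= demand:
            total += HOLDING_COST * (quantity - demand)
        else:
            total += STOCKOUT_COST * (demand - quantity)
    return total / len(DEMAND_SCENARIOS)

# The "what-if" loop: try alternative decisions and compare modeled outcomes.
candidates = range(80, 125, 5)
best = min(candidates, key=expected_cost)
for q in candidates:
    print(f"produce {q:3d} -> expected cost {expected_cost(q):6.2f}")
print("lowest expected cost at quantity", best)
```

The ease of such recalculation is precisely what subjects found persuasive; nothing in the mechanics guarantees that the cost model or demand scenarios are right, which is one way to read the illusion-of-control finding.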
Computerized MIS and tools such as what-if spreadsheet scenarios have become a routine feature of large corporations, and there is little likelihood that any firm would forgo them. Managerial culture has become habituated to managing by numbers, even though it may be counterproductive (Attewell, 1987; Levy, 1989). The management-by-numbers culture places new work burdens on managers: In several firms I studied, almost any request for new machinery or investment had to be accompanied by a rationale entailing spreadsheet models of cost and payoff based on MIS data, often illustrated by pie charts and other graphics. Today's managers spend many hours (often at home) preparing such proposals. An earlier generation might have made the request in a simple memorandum.
Since the "information culture" within firms is driven by managerial desires, it is rarely opposed. But it has enormous costs, in terms of hardware, software, and labor time spent in collecting and analyzing data, which are rarely balanced against its benefits. Some critics have suggested that management by numbers actively contributes to productivity decline. Perhaps the most eminent is W. Edwards Deming (1986:76), the father of statistical quality control, who noted the following:
To manage one must lead. To lead one must understand the work that he and his people are responsible for. . . . It is easier for an incoming manager to shortcircuit his need for learning and his responsibilities, and instead focus on the far end, to manage the outcome—get reports on quality, on failures, proportion defective, inventory, sales, people. Focus on outcome is not an effective way to improve a process or an activity . . . management by numerical goal is an attempt to manage without knowledge of what to do, and in fact is usually management by fear. . . . Anyone may now understand the fallacy of "management by numbers."
Competition versus Productivity
In the 1960s, information technologies were primarily conceived of as methods for lowering the unit cost of processing various kinds of highly routinized paperwork (e.g., transaction-processing systems). In the 1980s, computer systems were characterized as "strategic information systems," competitive weapons to wrest market share from rival firms. While the two uses of information systems are not mutually exclusive, they have quite different implications for productivity.
A firm that uses IT as a strategic weapon seeks to increase its market share, and thereby its profits, at the expense of its competitors. This will typically mean that the successful firm expands to accommodate the increased market share. But the firm need not necessarily improve its productivity (output per unit of input) to increase its profits: An increase in output/sales at the old level of productivity will still generate increased profits. Thus, profitability is divorced from productivity. Nor will the productivity or profitability of the industry as a whole be improved through this strategic use of IT: The firm is redistributing market share, not creating more wealth with less input. In such a situation there is a disjuncture between what benefits an individual firm and what benefits an industry or economy. Increased market share clearly benefits individual firms, but the economy at large benefits only if productivity or quality is also increased (see Bailey and Chakrabarti, 1988).
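The arithmetic behind this disjuncture can be made concrete with a small sketch. The figures below are invented, chosen only to show that profit can rise with scale while the output/input ratio stays flat:

```python
# Hypothetical firm before and after winning market share through strategic IT.
# Productivity is the ratio of output to inputs; profit is output minus inputs
# (both expressed in monetary terms for simplicity). All numbers are invented.

def productivity(output: float, inputs: float) -> float:
    return output / inputs

before_output, before_inputs = 100.0, 80.0
# The firm expands 25 percent to serve its larger market share, scaling
# inputs proportionally -- no efficiency gain, just more of the same.
after_output, after_inputs = 125.0, 100.0

print("productivity before:", productivity(before_output, before_inputs))  # 1.25
print("productivity after: ", productivity(after_output, after_inputs))    # 1.25
print("profit before:", before_output - before_inputs)  # 20.0
print("profit after: ", after_output - after_inputs)    # 25.0
```

Profit grows with scale even though output per unit of input is unchanged; if the gain is merely redistributed market share, aggregate industry productivity does not move at all.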
If this hypothetical situation were common, one would expect to find large investments in strategic IT yielding increased market share (but not increased productivity) for some successful firms, with negligible impact on industry-wide productivity or profitability. This industry-level outcome is consistent with Roach's findings (see above), and one can find illustrative evidence (not proof) in some of the most lauded firm-level examples of strategic information systems.
American Hospital Supply (AHS) Corporation has been portrayed as an outstanding example of successful IT use. It is widely used as a case study in business school curriculums (e.g., Harvard Business School, 1986). By installing order-entry terminals in the purchasing departments of its hospital customers, and later providing inventory management software to them, AHS made it easier for its customers to order medical supplies and speeded its response to orders. Based on this innovative use of IT, which required large investments in hardware, software, and systems personnel, AHS was able to develop an enviable degree of customer loyalty, and its sales and market share zoomed.
Table 2-1 presents the performance ratios of AHS during a decade of investment in IT and rapid growth in market share. Sales and profits boomed. But several indices of productivity—gross profit as a percentage of sales, operating expenses as a percentage of sales, operating earnings as a percentage of sales—showed no improvement at all, or had decreased, by the end of the period. This did not hurt the firm: It was growing and generating more profits, even though it was no more efficient than before. This becomes a cause of concern, however, when translated into an industry-wide or economy-wide phenomenon. For if IT investment is focused on the strategic goal of increasing market share and is shunted away from productivity-enhancing areas, costs may increase and productivity will stagnate. In the long run, this could leave those industries in which strategic IT investment dominates highly vulnerable to competition from firms that maintain a cost-lowering strategy.
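The ratios named here are simple quotients of income-statement lines. The sketch below uses invented figures patterned loosely on the growth-without-margin-improvement pattern just described (they are not AHS's actual accounts) to show how sales and profits can triple while every ratio stays flat:

```python
# Hypothetical income-statement figures for two years (all numbers invented).
years = {
    1974: {"sales": 100.0, "gross_profit": 30.0, "op_expenses": 20.0, "op_earnings": 10.0},
    1984: {"sales": 300.0, "gross_profit": 90.0, "op_expenses": 60.0, "op_earnings": 30.0},
}

RATIO_KEYS = ("gross_profit", "op_expenses", "op_earnings")

def ratios(figures: dict) -> dict:
    """Each performance index expressed as a fraction of sales."""
    return {k: figures[k] / figures["sales"] for k in RATIO_KEYS}

for year, figures in years.items():
    print(year, ratios(figures))
# Sales and operating earnings triple in absolute terms, yet the ratios
# (0.30, 0.20, 0.10) are identical in both years: the firm is bigger,
# not more efficient.
```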
IT and the Service Approach
Although American Hospital Supply illustrates the effects of strategic investment in IT, it is also an example of the use of IT to gain customers through improved service. In recent years a powerful current among American managerial theorists has extolled the importance of customer service for the overall success of a business (e.g., In Search of Excellence). IT is often used to actualize this philosophy—computerized inventory systems enable salespeople to give accurate assurances about availability of products, order-entry systems are used to speed delivery times, and so on.
American companies have allocated substantial proportions of IT investment to service activities in the hope of winning customer approval and market share. If IT investments in service succeeded in attracting market share or allowed prices to be raised to reflect the improved service component, there would be a payoff at least to those who first adopted the technology. And, as discussed in the prior section, firms like AHS did just that. But one can also identify forces that make it rather difficult to earn profits from IT-assisted service provision.
TABLE 2-1 Performance Ratios of American Hospital Supply Corporation, 1974–1984

To capture profits, firms need (1) a period of time during which investments in a new service give them a temporary monopoly, thereby differentiating them from the competition, and (2) a willingness on the part of customers to pay a premium for the service-enhanced product. Such conditions have occurred for certain IT services, for example, airline reservation systems. But other IT pioneers have found themselves with a very short period in which to capture market share and capitalize on IT investment. The introduction of ATMs by banks proved enormously popular with customers. However, it took relatively little time for other banks to follow suit. Moreover, although no consumer bank can hope to survive today without ATMs, the machines have not generated large new profits for banks. On the contrary, the highly competitive environment in banking has made it difficult to charge customers for ATM service.
Nor have ATMs enabled banks to cut costs by employing fewer tellers (Haynes, 1990). Available evidence suggests that customers use them for transactions they would not have made before. For example, they take out smaller sums of money at more frequent intervals.
There is nothing new about the idea that technological innovation gives the first-comer a short-term advantage that is soon lost as the industry as a whole adopts the technology. Karl Marx, for example, noted the phenomenon in his comments on nineteenth-century textile manufacturing in Britain. What is new is the rapidity with which IT-based service innovations can be copied by competitors, the short window for recouping one's investment in the innovation, and the apparent reluctance of customers to pay for service improvements compared with their willingness to pay for better tangible goods. Taken together, these developments place an unusual burden on IT investors. More and more industries (like the banks with ATMs) have to make large IT investments to "stay in the game," whether or not an improvement in firm-level profitability or productivity results.
Consumers, and thus society at large, clearly benefit from the below-cost provision of IT services. The phenomenon looks less benign, however, when viewed from the perspective of corporations. Having to invest in IT in order to stay in the game and suffering poor returns on IT investment as a result detracts from capital accumulation. This would not be serious, except for the fact that it occurs during an era of intense competition and productivity stagnation, when investment should be productively deployed.
Information technology has led organizations to place greater demands on their suppliers and customers for information. Such demands can often only be met by further investments in IT. For example, in the early 1970s, insurance companies that processed medical insurance claims began to install costly mainframe-based interactive claims payment systems. There were several reasons why the companies chose to shift from manual or batch processing of claims to interactive computerized processing, but two are relevant here. At the time, the installation of computers in hospitals and doctors' offices for generating bills
had resulted in a dramatic increase in the number of duplicate bills being generated by such computers and presented by clients for payment. This placed extraordinary burdens on manual claims processors, who had to avoid paying for the same medical service twice. Claims payment had to be computerized to deal with this double-billing assault from others' computers.
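The double-billing problem described here is, at bottom, a duplicate-detection task, which is one reason it lent itself to computerization. The following sketch is a hypothetical illustration (the field names and claim records are invented, not drawn from any actual claims system):

```python
# Detect duplicate medical claims: two claims with the same patient, provider,
# procedure, and service date are treated as one bill presented twice.
# All records below are invented for illustration.

def split_duplicates(claims):
    """Return (payable, duplicates), keeping the first claim seen per key."""
    seen = set()
    payable, duplicates = [], []
    for claim in claims:
        key = (claim["patient"], claim["provider"],
               claim["procedure"], claim["date"])
        if key in seen:
            duplicates.append(claim)
        else:
            seen.add(key)
            payable.append(claim)
    return payable, duplicates

claims = [
    {"patient": "P1", "provider": "H1", "procedure": "X-ray", "date": "1973-04-02", "amount": 40.0},
    {"patient": "P1", "provider": "H1", "procedure": "X-ray", "date": "1973-04-02", "amount": 40.0},
    {"patient": "P2", "provider": "H1", "procedure": "Lab",   "date": "1973-04-03", "amount": 15.0},
]
payable, duplicates = split_duplicates(claims)
print(len(payable), "payable;", len(duplicates), "duplicate")  # 2 payable; 1 duplicate
```

A clerk doing this by hand must remember or look up every prior claim; a computer does the same set-membership check in constant time per claim, which is why the double-billing assault from others' computers was answered with computers.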
Simultaneously, firms that paid for group health insurance for their employees began asking the claims payment companies for ever more detailed breakdowns of how each dollar was expended—on what medical procedures, for which covered person, and so on. Detailed reports had not been feasible, and had therefore not been provided, when recordkeeping was entirely manual. But with the advent of on-line claims processing, those insurance companies that had not developed computerized systems capable of analyzing payments found themselves losing clients to highly computerized competitors.
In these examples one can see the truth in Ellul's (1954) macabre vision of technology, in which technologies create needs that only more of the technology can fulfill. The availability of computerized data has enabled government to demand more and more detailed information from hospitals, military contractors, banks, and so on. It has stimulated bank customers to expect 24-hour information on their accounts, and users of overnight delivery services to expect rapid tracing of their packages. Whatever the long-term implications of such phenomena for profitability and economic growth, in the immediate term computers are placing greater burdens of information work upon organizations. In highly competitive environments, or when faced with legally mandated demands, firms may have no way of recouping the cost of this investment. Their provision of information therefore reduces, rather than increases, their efficiency.
The relationship between investment in IT and productivity is paradoxical. Research suggests that the strong productivity gains that were expected from IT have not manifested themselves—in the economy as a whole, in particular industries, or for representative samples of firms. The empirical evidence on the question is mixed, and this review has considered issues of data and methodology that might "explain away" the paradox. While more research on this question is clearly needed, the preponderance of evidence suggests that the shortfall of productivity payoff from IT should be treated as credible and that the next step—looking for forces that are undermining or attenuating potential gains from IT—should be taken.
Pointing to a productivity paradox does not mean that IT investments have been ineffectual. In this chapter the focus has been specifically on productivity, not on other important goals or areas of impact, such as increasing market share, improving service, or improving quality. Market share is critical for the competitiveness of individual firms, and quality and service are important to consumers and for economic competitiveness.
Nevertheless, one should not shrug off the importance of a productivity shortfall because of market share, quality, service, or other potential benefits of IT. To reiterate an earlier point, increases in productivity are central to keeping unit costs down and, thus, to enabling firms to compete successfully in the international arena. Increased productivity is also a major source of salary increases for the industrial labor force. If firms can produce more per person, they can afford to pay higher wages. Anemic progress in productivity has been a prime cause of two decades of stagnant wages for a large proportion of the working U.S. population. Conversely, generating higher productivity is the key to higher living standards in the future. If IT is to achieve its promise, then, it must enhance productivity as well as quality and service.
Going beyond the evidence suggesting a productivity paradox, this chapter sought to identify several mechanisms that undercut or attenuate the potential productivity payoffs from IT in organizations. Some of the mechanisms identified are firmly grounded in research, others are more tentative. All would benefit from additional empirical scrutiny.
The general pattern that emerged from the discussion of mechanisms is that IT creates a series of trade-offs at various levels of an organization. The potential benefits of the technology may be channeled into alternative directions—either doing the original work more efficiently (productivity enhancing) or doing a different kind of activity or the same activity more often. Such trade-offs were identified at different levels, from the individual to the organizational. At the individual level, various researchers have found that employees may channel the technology's potential into improvements of quality and appearance, rather than quantity of work. Initial evidence suggests that employees often favor the former, thereby attenuating potential productivity gains. At the group level, IT can result in an expansion of the work to be done or its complexity, rather than accomplishing the original amount of work with fewer inputs. A great deal of IT resources are also invested in managerial information systems and management by numbers, rather than in automating direct operations. According to data from Weill (1988) and Strassmann (1985, 1990), this trade-off seems to be associated with lower performance. Finally, at the organizational level, IT is sometimes channeled toward strategic, competitive, or service activi-
ties that, while laudable in their own right, may be achieved at the expense of potential productivity gains.
The next step is to document, through additional research, the magnitude and full implications of these trade-offs and to study the articulations of levels: how individual-level, group-level, and firm-level processes intertwine and affect one another such that productivity improvements at one level do not simply translate into productivity improvements at higher levels. Several of the chapters that follow focus on assessing productivity dynamics across levels of an organization.
References
Anderson, J.C., R.G. Schroeder, and S.E. Tupy. 1982. Material requirements planning systems: The state of the art. Production and Inventory Management Fourth Quarter:51–66.
Attewell, P. 1987. Big brother and the sweatshop: Computer surveillance in the automated office. Sociological Theory 5(Spring):87–99.
1992a. Technology diffusion and organizational learning. Organization Science 2(4):1–19.
1992b. Skill and occupational changes in U.S. manufacturing. Ch. 3 in P. Adler, ed., Technology and the Future of Work. London: Oxford University Press.
Bailey, M., and A. Chakrabarti. 1988. Innovation and the Productivity Crisis. Washington, D.C.: The Brookings Institution.
Bailey, M., and R. Gordon. 1988. Measurement issues, the economic slowdown and the explosion of computing power. Brookings Papers on Economic Activity 2:347–430.
Barua, A., C. Kriebel, and T. Mukhopadhyay. 1989. A New Approach to Measuring the Business Value of Information Technologies. Unpublished manuscript, Graduate School of Industrial Administration, Carnegie Mellon University, Pittsburgh.
Benbasat, I., and A.S. Dexter. 1982. Individual differences in the use of decision support aids. Journal of Accounting Research 20(1):1–11.
Bender, D. 1986. Financial impact of information processing. Journal of Management Information Systems 3(2):232–238.
Beniger, J.R. 1986. The Control Revolution. Cambridge, Mass.: Harvard University Press.
Berndt, E., and C. Morrison. 1991. High Tech Capital, Economic Performance, and Labor Composition in U.S. Manufacturing Industries: An Exploratory Analysis. Unpublished manuscript, National Bureau of Economic Research, Cambridge, Mass.
Bikson, T.K., B. Gutek, and D.A. Mankin. 1987. Implementing Computerized Procedures in Office Settings. Santa Monica, Calif.: RAND.
Bikson, T.K., C. Stasz, and J.D. Eveland. 1990. Plus Ça Change, Plus Ça Change: A Long-Term Look at One Technological Innovation. Santa Monica, Calif.: RAND.
Blau, P. 1955. The Dynamics of Bureaucracy. Chicago: University of Chicago Press.
Bowen, W. 1986. The puny payoff from office automation. Fortune (May 26):20–24.
Bresnahan, T. 1986. Measuring spillovers from technical advance: Mainframe computers in financial services. American Economic Review 76(4):742–755.
Bureau of Labor Statistics. 1989. Handbook of Labor Statistics. Bulletin 2340. Washington, D.C.: U.S. Department of Labor.
Business Week. 1988. The productivity paradox. Special report. Business Week, June 6:100–102.
Buzzell, R.D., and B.T. Gale. 1987. The PIMS Principles. New York: Free Press.
Card, S.K., J.M. Robert, and L.N. Keenan. 1982. On-line Composition of Text. Unpublished manuscript, Xerox Palo Alto Research Center, Stanford, Calif.
1984. On-line composition of text. Pp. 231–236 in Proceedings of Interact '84. First IFIP Conference on Human-Computer Interaction. New York: North Holland Publishers.
Cron, W., and M. Sobol. 1983. The relationship between computerization and performance. Information and Management 6:171–181.
Crowston, K., and M. Treacy. 1986. Assessing the impact of information technology on enterprise level performance. Pp. 299–312 in Proceedings of the Seventh International Conference on Information Systems. San Diego, Calif.: International Conference on Information Systems.
Davis, F.D., and J. Kottemann. 1992. User Misperceptions of Decision Support System Effectiveness: Two Production Planning Experiments. Unpublished manuscript, School of Business Administration, University of Michigan, Ann Arbor.
Deming, W.E. 1986. Out of the Crisis. Center for Advanced Engineering Study. Cambridge, Mass.: MIT Press.
Denison, E.F. 1989. Estimates of Productivity Change by Industry: An Evaluation and an Alternative. Washington, D.C.: The Brookings Institution.
Ellul, J. 1954. La Technique ou l'Enjeu du Siècle. Paris: Librairie Armand Colin. English translation published as The Technological Society. New York: Vintage.
Fisher, L. 1990. Paper, once written off, keeps a place in the office. New York Times (July 7):1.
Franke, R. 1989. Technology revolution and productivity decline: The case of U.S. banks. Pp. 281–290 in T. Forester, ed., Computers in the Human Context. Cambridge, Mass.: MIT Press.
Fripp, J. 1985. How effective are models? International Journal of Management Science 31(1):19–28.
Garfinkel, H. 1967. Studies in Ethnomethodology. Englewood Cliffs, N.J.: Prentice-Hall.
Gay, R., and S.S. Roach. 1986. The productivity puzzle: Peril and hopes. Economic Perspectives (April 10):1–15.
Goslar, M., G. Green, and T. Hughes. 1986. Decision support tools: An empirical assessment for decision making. Decision Sciences 17 (1):79–91.
Gould, J. 1980. Productivity of white-collar workers. Pp. 14–33 in P. Mitchell, S.J. Nassau, and S. Struk, eds., Improving Individual and Organizational Productivity: How Can Human Factors Help? Washington, D.C.: Human Factors Society.
Gould, J. 1981. Composing letters with computer-based text editors. Human Factors 23(5):593–606.
1982. Writing and speaking letters and messages. International Journal of Man-Machine Studies 16:147–171.
Gould, J., and L. Alfaro. 1984. Revising documents with text editors, handwriting-recognition systems, and speech recognition systems. Human Factors 26(4):391–406.
Gould, J., and S. Boies. 1978. Writing, dictating, and speaking letters. Science 201:1145–1147.
1983. Human factors challenges in creating a principal support office system—The speech filing system approach. ACM Transactions in Office Information Systems 1(4):273–298.
Gouldner, A. 1954. Patterns of Industrial Bureaucracy. New York: Free Press.
Harris, S.E., and J.L. Katz. 1988. Profitability and information technology capital intensivity in the insurance industry. Pp. 124–130 in Proceedings of the Twenty-First Annual International Conference on System Sciences, Vol. IV. Hollywood, Calif.: Western Periodicals Co.
Harvard Business School. 1986. American Hospital Supply Corporation. (A) The ASAP System. Harvard Business School Case Services. Cambridge, Mass.: Harvard Business School.
Haynes, R.M. 1990. The ATM at twenty: A productivity paradox. National Productivity Review 9(3):273–280.
Kendrick, J.W. 1988. Productivity in services. Pp. 99–117 in B. Guile and J.B. Quinn, eds., Technology in Services: Policies for Growth, Trade, and Employment. Washington, D.C.: National Academy Press.
King, J.L., and K. Kraemer. 1985. The Dynamics of Computing. New York: Columbia University Press.
Klein, D. 1984. Occupational employment statistics for 1972–1982. Employment and Earnings (January 1984).
Kling, R., and S. Iacono. 1984. The control of information systems: Developments after implementation. Communications of the ACM 27:1218–1226.
1988. The mobilization of support for computerization: The role of computerization movements. Social Problems 35:226–243.
Kottemann, J., and W. Remus. 1987. Evidence and principles of functional and dysfunctional decision support systems. International Journal of Management Science 15(2):135–144.
Kottemann, J., and W. Remus. 1991. The effects of decision support systems on performance. Pp. 203–214 in H.G. Sol and J. Vecsenyi, eds., Environments for Supporting Decision Processes. New York: North-Holland.
Kottemann, J., F.D. Davis, and W. Remus. In press. Computer assisted decision making: Performance, beliefs and the illusion of control. In Organizational Behavior and Human Decision Processes. New York: Academic Press.
Kraut, R., S. Dumais, and S. Koch. 1989. Computerization, productivity, and the quality of work-life. Communications of the ACM 32:220–238.
Leontief, W., and F. Duchin. 1986. The Future Impact of Automation on Workers. New York: Oxford University Press.
Levy, S. 1989. A spreadsheet way of knowledge. Pp. 318–326 in T. Forester, ed., Computers in the Human Context. Cambridge, Mass.: MIT Press.
Loveman, G. 1988. An Assessment of the Productivity Impact of Information Technologies. Working paper 90S-88-054, Sloan School of Management, Massachusetts Institute of Technology, Cambridge.
Mansfield, E. 1977. The Production and Application of New Industrial Technology. New York: W.W. Norton.
Mark, J. 1988. Measuring productivity in services. Pp. 139–159 in B. Guile and J. Quinn, eds., Technology in Services: Policies for Growth, Trade, and Employment. Washington, D.C.: National Academy Press.
Markus, M.L. 1984. Systems in Organizations. Marshfield, Mass.: Pitman.
Merton, R.K. 1940. Bureaucratic structure and personality. Social Forces 18:89–96.
Mishel, L. 1988. Manufacturing Numbers: How Inaccurate Statistics Conceal U.S. Industrial Decline. Washington, D.C.: Economic Policy Institute.
National Academy of Engineering. 1988. The Technological Dimensions of International Competitiveness. Washington, D.C.: National Academy of Engineering.
Office of Technology Assessment. 1988. Technology and the American Economic Transition. Washington, D.C.: U.S. Government Printing Office.
Osterman, P. 1986. The impact of computers on clerks and managers. Industrial and Labor Relations Review 39:175–186.
Pascale, R.T., and A.G. Athos. 1982. The Art of Japanese Management. New York: Warner Books.
Pentland, B. 1989. Use and productivity in personal computing. Pp. 211–222 in Proceedings of the Tenth International Conference on Information Systems. Boston: International Conference on Information Systems.
Picot, A. 1989. Assessment of the Current Developments and Trends in Office Automation and Information Services: Selected Comparative Empirical Data from Germany. Paper presented at workshop on Information Systems: A Strategic Challenge for Corporations, Boston, Mass. (Available from the author c/o Institut für Organisation, Ludwig-Maximilians-Universität, Ludwigstraße 22, 8000 München 2, Germany.)
Roach, S.S. 1983. The new capital spending cycle. Economic Perspectives (July 13):1–13.
1984. Productivity, investment, and the information economy. Economic Perspectives (March 14):1–14.
Roach, S.S. 1986. Macrorealities of the information economy. Pp. 93–104 in R. Landau and N. Rosenberg, eds., The Positive Sum Strategy. Washington, D.C.: National Academy Press.
Roach, S.S. 1988a. Stop the dice rolling on technology spending. Computerworld Extra (June 20):6.
Roach, S.S. 1988b. Technology and the services sector: America's hidden challenge. Pp. 118–138 in B. Guile and J. Quinn, eds., Technology in Services: Policies for Growth, Trade, and Employment. Washington, D.C.: National Academy Press.
Roach, S.S. 1988c. White-Collar Productivity: A Glimmer of Hope, A Special Economic Study. New York: Morgan Stanley & Co.
Roach, S.S. 1991. Services under siege: The restructuring imperative. Harvard Business Review (September-October):82–91.
Salzman, H. 1989. Computer aided design: Limitations in automating design and drafting. IEEE Transactions on Engineering Management 36(4):252–261.
Sharda, R., S. Barr, and J.C. McDonnell. 1988. Decision support system effectiveness: A review and empirical test. Management Science 34(2):139–159.
Strassmann, P. 1985. Information Payoff. New York: Free Press.
Strassmann, P. 1990. The Business Value of Computers. New Canaan, Conn.: The Information Economics Press.
Turner, J. 1983. Organizational Performance, Size, and the Use of Data Processing Resources. Working Paper CRIS #58, Center for Research of Information Systems, Stern Graduate School of Business Administration, New York University.
U.S. Department of Commerce. 1991. U.S. Industrial Outlook 1991. Washington, D.C.: U.S. Department of Commerce.
Warner, T. 1987. Information technology as a competitive burden. Sloan Management Review 29(1).
Weill, P. 1988. The Relationship Between Investment in Information Technology and Firm Performance in the Manufacturing Sector. Unpublished Ph.D. dissertation, Stern School of Business Administration, New York University.