Infrastructure: The Utility of Past as Prologue?
Corporation for National Research Initiatives, Reston, Virginia
NOTE: The opinions and views expressed herein are those of the author and do not necessarily reflect those of the Corporation for National Research Initiatives (CNRI). © 1997 by the Corporation for National Research Initiatives. Reprinted by permission.
In 1890, advocates of direct current (DC) electric power systems employed alternating current (AC) to electrocute first a dog and then a condemned criminal at the Auburn (New York) state prison in a flamboyant attempt to demonstrate that AC was unsafe. The incident is perhaps the best-known episode in the so-called "War of the Systems," which came to an end with the invention of the rotary converter in 1892, which enabled existing DC distribution systems to be integrated into the more efficient AC systems, and completion of the Niagara Power Project in 1895, which showed that large generating plants and associated transmission lines capable of meeting regional needs could, indeed, be built.1
Electric power is one of four infrastructure history studies sponsored by the Corporation for National Research Initiatives (CNRI). The others address railroads, telephones and telegraphs, and banking; a fifth, radio, is in progress. These studies collectively examine attributes of infrastructure through literature reviews in American history, economics, political science, and sociology. Initially, three questions were posed:
• When and how did take-off occur?
• What were the public and private roles?
• And how did an infrastructure (characterized by access, "shareability," and economic advantage) emerge?
These questions worked well for the first three studies: rail, telegraphy/telephony, and electricity. But the unspoken assumption behind these questions is technology: the application of engineering and science to accomplish a purpose. In the course of the fourth study, banking, which turned out to be about information, we began to look at the problem of infrastructure somewhat differently, examining properties of ubiquity, interdependence, and reciprocity, independent of a given technology or set of technologies. This focused attention on the
organizational and management structures, which had formed important elements of all of the preceding studies but had not occupied center stage.
Finally, all four of the infrastructures were subject to regulation during the New Deal. Indeed, much of the current deregulation is designed to dismantle the world that the New Deal put in place. From a policy perspective, then, the studies not only delineate more clearly what the relative and changing public and private roles were but also explain how the New Deal approaches to regulatory policy came to be, at least with respect to these four industries.
The Perils of Drawing Historical Analogies
The remainder of this paper discusses themes and observations common to all four of the subject infrastructures. But a word or two is necessary on the perils of drawing historical analogies. All four of these infrastructures obtained shape and form during a period of extraordinary growth. Between 1790 and 1850, the western boundary of the United States moved from the Mississippi River to the Pacific Ocean; population in the same period grew by an average of about 30 percent per decade. After 1860, population growth fell off to a mere 20 percent or so per decade until 1910.2 Urbanization increased dramatically after 1870. In 1890, the U.S. Census Bureau announced that the frontier was closed, and three years later, historian Frederick Jackson Turner proposed his frontier thesis, which was at least partially a eulogy for this period in American history. By 1920, more than half of the nation's population lived in cities. This meant that through the second half of the nineteenth century and into the twentieth, there was a growing concentration of demand for networked technologies such as water, power, and communications as well as for inter- and intra-regional transportation and financial services. Moreover, the late nineteenth and early twentieth centuries saw prices fall, so that construction of the physical infrastructure of electricity, for example (the generating plants, transmission lines, power stations, and substations), took place in an environment of declining real costs, which could be passed on to consumers as lower rates while the companies still turned a profit.
The flip side was wages. Real wages increased in the 1920s, the period in which recognizably modern suburbia proliferated, creating an environment of new construction and consumer demand that made extension of power and phone lines attractive, easy, and relatively cheap. Indeed, the residential market for electricity, with its demand in the evening hours, now became more attractive as a means of continuing to balance peak load. The distribution system was largely in place, and the marginal cost of the "last mile" (that is, connections to individual residences) was relatively low compared with the total construction cost of the system, including the generating plant and long-distance transmission lines. Economies of scale based on improvements to generating and transmission technologies were increasing, and the cost as well as the price of electricity fell.3
This stands in marked contrast to current debates over strategies for funding construction of the "last mile" for the digital communications infrastructure.
The second cluster of differences concerns public/private roles. At the birth of the republic, most people thought of the government as local (parish or county) and perhaps as the state. The federal government was a dim presence, known to most of the people in the form of the postal system. Eligibility requirements, imposed by the states, meant that many men could not vote; universal suffrage for men was not the norm until the 1820s, and women were first granted the vote at the territorial level, in Wyoming in 1869. (Wyoming granted women the vote at the state level in 1890 when it entered the Union.) African Americans, enslaved or free, were denied the vote until passage of the 15th Amendment in 1870, and even then, restrictive eligibility requirements excluded most blacks from the vote, particularly in the Deep South, until the twentieth century.
The vote is the most direct means of broad participation in civil life. Just as this participation was circumscribed on a number of grounds in the nineteenth century, so, too, was the government's perception of its intervention in the life of its citizens. The Civil War (1861 to 1865) represented a massive intervention in daily life, calling up "volunteers" in both North and South; levying direct taxation; and affecting the economy significantly through the sale of bonds, federal regulation of the currency, and procurement of goods and services, thus laying the foundation for a number of private fortunes. But these interventions were the exception rather than the rule. Even the transcontinental land grants to the railroads (which amounted to an area greater than California and Nevada combined) were modest relative to the total cost. Carter Goodrich concluded that combined state and federal financial assistance to the transcontinental railroads amounted to about 13.5 percent of their total construction cost, and that this assistance was substantially less than that provided for canals.4 Sustained intervention by the government in American daily lives, as measured by per capita increase in government revenues and expenditures, appears to have increased consistently after 1890 and to have begun with local, not federal, authority.5
Federal regulation, marked by the organization of the Interstate Commerce Commission (ICC) in 1887 to regulate the railroads, was initially a forum for resolving disputes and was embraced by some figures in the industry as a way of setting uniform national policy in an environment of competing state policies. However, by the New Deal, the regulatory agency was seen as a more active instrument, and the government, rather than acting as a mediator, was seen as having a positive obligation to ensure a minimal standard of security for its citizens. This is obvious in both social and economic programs, e.g., the Social Security Administration and the banking reform that expanded the scope of the Federal Reserve, established the Federal Deposit Insurance Corporation, and regulated the structure of the industry.6
Thus, for most of American history, government was a distant presence. Research labs, like Thomas Edison's in Menlo Park, New Jersey, arose with
corporate support. Edison's was dominated by the telegraph giant, Western Union, itself controlled by William Vanderbilt, who had financial interests in both rail and telegraphy. Not surprisingly, Western Union sponsored research into domains that resonated with its business goals. In 1873, Western Union announced its willingness to reward handsomely any inventor who could achieve multiplexing on its lines, thus increasing capacity without additional investment in the wired plant. This led directly to the simultaneous invention of the telephone by Elisha Gray and Alexander Graham Bell. Edison also came up with a receiver design at Western Union's behest. His lamp and associated DC generating and distribution system represented the most successful in a series of attempts to challenge the gas companies by producing superior interior illumination at a competitive price.7 Thus, the great nineteenth-century infrastructures arose by processes of competition, compromise, and consensus in which the public presence was, at best, a facilitator and at times a mediator.
What Falls Out?
Economic growth, deflation, and different expectations of government are three important differences that shaped the development of the infrastructures studied. But six themes do fall out as common to all four of the studies:
• Period of experimentation,
• First-order substitutions and feedback effects,
• Evolution of new structures,
• Not always the "best" technology,
• Natural monopolies, and
• Physical plant and service.
Each of these observations is discussed in greater detail in the next sections.
All of these examples witnessed a period of experimentation in which there were winners and losers and in which a new technology or technologies per se were necessary but not sufficient for take-off. Railroads are the obvious example. Most of the technologies required for self-propelled steam engines on rails (i.e., locomotives) were developed by the 1830s, but take-off, measured by a leap in miles of rail construction, did not occur until the 1850s. There were, moreover, numerous small railroad companies that were gradually incorporated into larger corporate systems. But this was a surprisingly slow and at times contentious process that required decades.
The standardization of the gauge is a case in point. By 1860, there were seven gauges for 30,626 miles of track. Of these seven, the standard, 4-foot, 8.5-inch
gauge represented the bare majority of mileage (53.3 percent). The second most common gauge was the 5-foot gauge, which was concentrated in the South, a region that was further isolated by insufficient intra-regional rail links, including a critical lack of bridges across major rivers. More generally, the effort by many southern cities to secure an urban hinterland resulted in highly localized lines emanating from the major cities but not connecting them.8
Three considerations drove conformity to the "standard" gauge, a precondition to interconnection: the big eastern railroad firms, eager to tap into the rapidly expanding markets in the West, particularly for western grain, which required transport across many states and many independent rail lines; the outbreak of the Civil War, which underlined the need for efficient east-west transportation and communications from both political and military perspectives; and finally, specification in 1862 of the 4-foot, 8.5-inch gauge for construction of the new transcontinental roads. Between 1870 and 1880, most of the companies outside the South adopted the uniform gauge; 3 percent merely built a third rail. Following a meeting among leading railroad interests in the South on February 2, 1886, the southern lines were brought into conformity with the 4-foot, 8.5-inch gauge.9
Standardization of gauge as well as increasing conformity in signaling, scheduling, and administrative procedures (e.g., cross-licensing access to track; through-ticketing and bills of lading; inventory control and management) enabled freight to flow across tracks and equipment controlled by competing interests. At the same time, mergers and acquisitions meant that many of the smaller companies built to service Portland, Maine, or Baton Rouge, Louisiana, were incorporated into larger entities, resulting in a pattern of many losers and a few big winners. Similar patterns characterized both telephony and electricity.
The Bell interests had enjoyed a 20-year patent monopoly in telephony, but with the expiration of key patents in 1893, the number of telephone companies serving local or regional markets exploded. Much to Bell's corporate dismay, the organization found itself confronted by a potential welter of services, technical standards, and lively competition. Indeed, in 1903, more than half of the nation's 1,051 incorporated towns and cities hosted more than one telephone company. In 1915, at least 40 percent of the telephone exchanges in cities with a population of 5,000 or more competed with another local exchange, and dual service continued to exist in parts of the Midwest and Plains until 1924. By the end of that year, however, AT&T, then under the jurisdiction of the Interstate Commerce Commission (ICC), had bought 223 of the 234 independent telephone companies subject to the agency's jurisdiction.10
Electricity tells a similar story. Until the widespread adoption of AC technology, service areas, limited by the short, one-mile range of DC distribution, tended to be relatively compact. It was fairly easy for a small electric utility to identify a market. Thus, between 1887 and 1892, 28 separate firms offering electric service were formed in Chicago alone, not including users who purchased independent, self-contained plants. In their analysis of the structure of the electric
utility industry in 1897-1898, Hausman and Neufeld conclude that most firms were only marginally profitable. Weaker firms found it difficult to raise capital, which is one reason put forth for the founding of municipally owned electric plants.11
The integration of DC into AC systems meant that economies of scale and scope were technologically possible as well as desirable, since high-voltage AC transmission over distance was more efficient but entailed higher threshold costs. Hausman and Neufeld found that strong power companies offering a higher rate of return tended to be older and larger, to have bigger generators, to rely on hydro rather than steam, to have a strong commitment to AC generation, and to have a better load factor (i.e., the ratio of average to peak demand). These firms had the potential to enjoy substantial cost savings, conditions, the authors observe, "which would be expected to presage a major period of consolidation," and which did, indeed, occur. Power generation and transmission companies evolved notions of holding companies as a way to leverage capital and manage broad distribution. Led by Samuel Insull of Commonwealth Edison, industry executives cultivated state regulatory agencies that mandated standardized service and interconnection. By 1924, 7 holding companies controlled 40 percent of the nation's generating capacity, and 16 holding companies generated three-fourths of the nation's electrical power. Thus, even the publicly owned municipal utilities, which provided service to end-users, were dependent on private power providers and transmission line companies for access to bulk power.12
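The load factor defined parenthetically above (the ratio of average to peak demand) can be made concrete with a short sketch. This is purely illustrative and not drawn from the original studies; the hourly demand figures are invented.

```python
# Illustrative sketch (invented figures, not from the original studies):
# load factor is average demand divided by peak demand. A utility whose
# evening residential load fills the valley left by daytime industrial
# load flattens its demand curve and so improves its load factor.

def load_factor(hourly_demand_kw):
    """Average demand divided by peak demand over one period."""
    return sum(hourly_demand_kw) / len(hourly_demand_kw) / max(hourly_demand_kw)

# Industrial-only demand: heavy by day, plant nearly idle at night.
industrial_only = [100] * 10 + [900] * 8 + [100] * 6  # 24 hourly readings, kW

# Mixed demand: evening residential lighting fills the nighttime valley.
mixed_load = [400] * 10 + [900] * 8 + [700] * 6

print(f"industrial-only: {load_factor(industrial_only):.2f}")
print(f"mixed load:      {load_factor(mixed_load):.2f}")
```

A higher ratio means the fixed plant earns revenue for more hours of the day, which is the economic logic behind the pursuit of residential evening demand to balance peak load described earlier.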
So far, we have discussed the overall shape and form of these industries. In each case, there was an initial period of expansion and proliferation followed by consolidation into a few (or one, in the case of telephony) corporate giants. In some cases, this was pushed by the requirements of the technology, e.g., electricity. But consolidation was not inevitable; telephony, for example, could have existed as a series of interconnecting yet independent companies, without corporate consolidation and centralized management.
In each of these cases, there was a product that let end-users or consumers do something or have something better. The substitution effect is most obvious in electricity. There already existed a market in interior illumination provided by candles, kerosene, and gas. Edison intentionally set out to provide a superior product that was cost-competitive with the equivalent gas service, and the pricing of electricity was established in terms of competition with gas.13
Telephony was also an improvement on existing local communications technologies. In 1873, Western Union enjoyed a monopoly over telegraphic service, which was primarily between cities. About 10 years earlier, the telegraph giant had begun to experiment with combined telegraphic and delivery services as a way to provide local communications connections. Western Union also began to
explore switching technologies that allowed financial information to flow from several banks to a single clearinghouse and then from the clearinghouse back out to the banks. The initial market for telephony was believed to be local, thus filling the gap in service. Telephony was initially constrained by signal attenuation to a range of about 20 to 30 miles in urban environments where cable was laid below grade, although transmissions across distances of 800 miles could be achieved with open-air lines. By 1890, Bell interests were already pursuing interurban transmission in head-to-head competition with the telegraph monopoly.
Rail transport was also conceived of as a substitution, in this case, for transport via canal or overland. Although canals had achieved the first major cost savings, rail held the advantage in perishables and high-value goods, where the desire for speed outweighed higher costs.14 The differential between rail and water has been a matter of some debate. In general, though, competition between rail and water tended to lower all freight charges. Similar inter-product competition also tended to keep electric utility rates relatively low and encouraged utility executives to cooperate with regulatory agencies, thus distancing themselves from the contentious and adversarial positions taken by the gas companies.
Niches, Organization, and Efficiency
Eventually, niches for different services formed and new structures and services evolved. For example, early nineteenth-century turnpike companies, never as profitable as hoped, quickly gave way in the long-distance market to both canals and railroads. On the other hand, expanding numbers of middle- and long-distance routes via either rail or water increased the need for short-distance overland services of 15 miles or less. This increased demand more than offset the loss of long-distance business. Plank roads, constructed on the same principle as wooden sidewalks, were introduced after the mid-1830s, and wagons dominated the short haul, that is, distances of less than 15 to 20 miles, where the rate was cheaper than either rail or canals and time was not a constraint. Some entrepreneurs saw this as an advantage. John Murray Forbes of Boston, who controlled the Michigan Central, avoided construction of branch lines and encouraged local construction of plank roads affording access to the railroad without his having to expend capital to reach markets. Water transport via coastal, lake, or river steamer or by canal barge had the advantage in medium to long hauls, averaging 650 miles, especially where the commodities shipped were high bulk and low value. Innovations during the nineteenth century tended to reduce costs mainly over medium to long distances. Waterways were good albeit not perfect substitutes for rail and generally had the advantage in shipping high-bulk/low-value goods over long distances. Rail possessed the advantage in shipping high-value items over medium and long distances and in shipping high-bulk/low-value commodities over medium distances.
Similar differentiation characterized power. Competition with electric companies spurred gas producers to cut prices and improve the product. "Water" gas, introduced in the 1880s, was considered greatly superior to the earlier coal gas; it was cleaner and provided better light. The manufacturing process required a larger scale of operation, which increased the costs of entry but also resulted in economies of scale. In a newly competitive environment, Consumers Gas of Chicago was able to offer still lower prices, thus forcing the price of gas to fall from $3.00 to $1.75 per 1,000 cubic feet. The new gas technology resulted in similar price competition in Houston and a 40 percent decrease in local rates. With electricity beginning to encroach upon the lighting market in upper- and upper-middle-class households and commercial establishments, gas seemed poised to capture the market of middle- and working-class homes where kerosene light was still the norm.
Discovery of natural gas fields and realization of the thermal applications of gas led to further service and product differentiation. Between 1900 and 1940, higher-income urban households adopted electricity first and tended to prefer electricity for lighting and natural gas for hot water and perhaps cooking, with an oil burner for heat. Middle- and lower-income residents converted to new energies more slowly. They selected electricity for light first, then shifted from a coal to a gas stove, and finally added a gas hot-water heater.15 Thus, consumers chose among multiple energy technologies, and the urban energy landscape as late as the 1920s was characterized by a mix of coal, oil, gas, and electricity.
Applications of electricity in the heavy industries took place after World War I as a result of continued advances in technology as well as soaring prices for both coal and labor. But the implications of electrification were more profound than substitution of one power source for another. Applications of central station-generated electricity in manufacturing and industry had begun in the 1890s among small users who realized the advantages of the small AC electric motor in providing fractionalized power in highly segmented, labor-intensive processes where needs were historically too small to justify a large investment in a steam engine: the apparel industries, chemicals, printing, several equipment manufacturers (electrical, non-electrical, and transportation), and metal fabrication. This cluster of industries remained at the forefront of electrification through 1954. Large-scale enterprises, characterized by substantial sunk costs in existing technology and by power- and heat-intensive processes (lumber, paper, petroleum, stone/clay/glass, and primary metals), consistently lagged in adopting electric power. Given the scale of their facilities and the importance of the heat byproduct (e.g., steam) to their industrial processes, managers of these industries tended to install self-contained electric generating plants when they did decide to go electric after 1919.16
Electricity thus offered small-scale enterprises access to power that formerly they did not have. In both the heavy and light industries and manufacturing plants, electrification revolutionized the organization of work. Prior to the introduction
of electricity, industry relied on centralized, belt-and-shaft systems linked to a single prime mover (either water or steam-powered). The advent of electricity and the electric motor enabled a restructuring of industrial processes to a more efficient, decentralized unit drive system where energy was made available at the point of use. Unit drive systems possess numerous advantages. Elimination of the centralized line shaft system reduced fuel inputs and increased energy efficiency by reducing friction losses implicit in belt-and-shaft distribution. Factory structures no longer needed to support heavy mechanical systems, permitting lighter, single-story factory layouts, which in turn permitted better materials handling and flexible work flows. Finally, components of the process became independent, and having to fix a problem in one did not shut the entire system down. Walter Devine, who has conducted the seminal work on electrification and organization of industrial processes, argues that reduced energy requirements resulting from efficient application of electricity in unit-drive systems resulted in higher productivity of capital and labor. And economist Harry Oshima finds that in textiles, six labor-intensive mechanized processes in the era of steam were reengineered to 25 processes without a concurrent increase in labor inputs.17 Thus, electrification enabled efficiencies in industrial and manufacturing processes.
The efficiency cum labor substitution argument set forth by Oshima is part of a larger literature that addresses the relationship between technology and growth in the American economy in the late nineteenth and the twentieth centuries. Two themes in this literature resonate with contemporary concerns: one is the relationship between technology and labor, and the second is the so-called "productivity paradox." The productivity paradox is the fact that although electricity was adopted as early as 1889, measurable gains in aggregate national productivity began to appear only in the 1920s, after large industrial plants (metals, petroleum, transportation) shifted to electric power. Why the lag?
For one thing, early adopters were small, labor-intensive manufacturing plants where good light and access to fractionalized power were important. But their impact on the total industrial sector was small relative to the heavy industries, which did not electrify until after 1919. According to economist Arthur Woolf, this transition occurred in the context of rapidly falling prices for electricity, escalating prices for coal, and increased costs of labor. Woolf concludes that firms took advantage of cheaper energy costs to offset higher labor costs by restructuring their operations.18
His analysis has been criticized as overly reductionist and too reliant on the costs of electricity and labor as the principal determinants without taking into account the engineering flexibility and efficiencies that electric power enabled or the process of incremental adoption that began with smaller enterprises.19 Finally,
the expectation that a new technology will be quickly recognized and adopted underestimates the significance of the process of experimentation, which characterized all of the infrastructure technologies we have studied, as well as the implied costs of converting an installed base and an investment in the status quo to something new. Indeed, the existing investment in DC systems and technologies is one reason for the emotional intensity that characterized the "War of the Systems" in the 1890s.
The pace of adoption of electricity (or any technology) is one dimension of understanding technological diffusion. A second is the "goodness" of that technology. Here, a certain tautology tends to enter the discussion, which goes as follows: technology x enters common use and is considered the "best" solution because it has become the successful solution, and success is equated with value.20 There is substantial evidence to the contrary. The adoption of the standard gauge, for example, was the consequence of tradition, the founder effect, and the advantages of a network. The standard 110 volts in power distribution is a function of the economic analysis that one of Edison's researchers undertook to determine the costs of a system that would be price competitive with gas, where the principal cost was the price of copper. Finally, William Paul Barnett's dissertation on the diffusion of telephony challenges this tautology and elucidates the interaction between technology diffusion and demand.
The explosion in demand and proliferation of companies after 1894 had created a manufacturing bottleneck. Western Electric, a wholly owned Bell subsidiary, could not keep pace with the demand for equipment. As a result, lesser but still satisfactory devices were developed and used by the independent companies. During the first decade of the twentieth century, there were basically two competing technologies: the common battery system, which presumed a common power source and a physically centralized system; and the more primitive "magneto" instrument, which relied on local, individualized battery power contained in each device.21
Barnett is interested in the relationship between technology and competition. He found that independent telephone companies offering combinations of local and long-distance services both cooperated and competed with each other. Successful companies enjoyed cooperative relationships within a technological standard, even though that standard was not necessarily the most sophisticated technology available. Thus, relatively primitive, single-exchange common-battery companies, offering local service, thrived in complementary relationships with multiexchange companies, providing regional long-distance service, so long as local and regional servers shared a standard technology and interconnection was possible. Technological sophistication only appears to have mattered when two companies competed for the same market niche, as in the competition for regional long-distance service. There, companies with a more advanced technology enjoyed an advantage. Thus, Barnett describes a gradual process of successful local and long-distance market/service differentiation within the context of a
standardized and interoperable (but not necessarily the most sophisticated) technology.
What, then, is the "best" technology? Barnett's research suggests that it is the technology that satisfies end-users, which may or may not be the most sophisticated available. These results were confirmed by Kenneth Lipartito's research on telephony in the South. Lipartito found that Southerners did not necessarily want the sophisticated (and expensive) service offered by AT&T. When telephone service was introduced to Huntsville, Alabama, a local newspaper editor observed that residents wanted connections to nearby towns, not to New York City or Washington, D.C.22
We come now to the vexing theoretical question of natural monopoly. Explanations of natural monopoly begin with a model of the market based on supply and demand, and focus their analyses on factors affecting supply (or production). Thus, one definition of a natural monopoly looks at the production plant and finds that natural monopolies are characterized by high fixed costs of the physical plant. Another definition of natural monopoly argues that a natural monopoly exists if multiple producers will result in excess capacity and waste.
The railroad companies quickly learned that excessive competition could result in overbuilding and waste. Frustrated by the Pennsylvania Railroad's dominance of the Pittsburgh market, for example, Andrew Carnegie financed William Vanderbilt's efforts to build a parallel line into Pittsburgh. Vanderbilt had his own quarrel with the Pennsylvania Railroad: the company had bought an interest in the New York, West Shore, and Hudson, which, in 1881, proposed to build a line into New York City on the west side of the Hudson River, thus duplicating the exclusive rail route into Manhattan owned by Vanderbilt's New York Central. By this time, however, J.P. Morgan had developed a substantial interest in the rail companies as an underwriter as well as a financier. Concerned by the overbuilding and ruinous rate wars between the Pennsylvania Railroad and the New York Central, he assembled the principals of both firms on his yacht, the Corsair, and a deal was struck.23
In the case of networked systems, the issue of competition is complicated by the question of network externalities: the value of the system increases as more members join it. Thus, with respect to telephony, the natural monopoly thesis argues that the inherent value of a single, integrated network to consumers, together with the wastefulness of multiple systems, meant that large systems tended to devour small ones, and that efficiencies increase when the system encompasses the maximum number of users. The theory, frequently articulated by AT&T representatives in the twentieth century, seems to confirm the traditional view of telephony as a natural monopoly, which provided cost-effective, high-quality service to its subscribers.24
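The network-externality claim can be stated in back-of-the-envelope terms. What follows is a conventional formalization, not one used by the studies cited here:

```latex
% Potential pairwise connections in a network of n subscribers:
\[ \binom{n}{2} = \frac{n(n-1)}{2} \]
% Two separate networks of sizes n_1 and n_2 support fewer connections
% than a single merged network of n_1 + n_2 subscribers, since
\[ \frac{n_1(n_1-1)}{2} + \frac{n_2(n_2-1)}{2} \;<\; \frac{(n_1 + n_2)(n_1 + n_2 - 1)}{2}, \]
% the difference being exactly the n_1 n_2 cross-connections that only
% the merged network provides. This is the arithmetic behind the claim
% that large systems tend to devour small ones.
```

Note that this arithmetic assumes every subscriber values reaching every other subscriber equally; the Southern users in Lipartito's account, who wanted connections to nearby towns rather than to distant cities, are precisely a case in which that assumption fails.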
There are, however, a few problems. For one, interconnected and networked systems do not necessarily achieve economies of scale, a lesson railroaders learned quite painfully and the telephone companies relearned when they tried to design switchboards that could accommodate large numbers of users.25 For another, the model presumes that demand is constant and homogeneous, a contention that historical data do not support.
Barnett demonstrated differentiated demand in Iowa and Pennsylvania, and Lipartito also showed that Southern users were uninterested in superior technology and long-distance connections; they prudently bought the cheapest service. This was a rational decision from an economic point of view, but it is not consistent with the theory. Southern consumers clearly did not seek to maximize their network access, and the nature of Southern demand was obviously different from the interests clamoring for broader access. Even within the South, different strategies were required in North Carolina from those that had worked in Georgia.26
Finally, Claude Fischer's analysis of the spread of telephony between 1900 and 1940 finds that early subscribers wanted residential telephone service for a combination of reasons. Some, notably physicians, saw its commercial and professional value. Indeed, one of the entrepreneurs in Antioch, California, was a doctor who, it is said, envisioned substituting telephone service for homing pigeons to maintain communication with his dispersed, rural patients. A second group of subscribers saw its social value; this group contained a disproportionate number of women and rural residents, precisely the group left out of the AT&T business plan, which, based on the example of Western Union, emphasized national service for businesses. But Fischer found that telephone service tended to be adopted first among professional and socially elite households and then to percolate more slowly through the local socioeconomic structure.27
Fischer's underlying interest concerns the diffusion of technology, and he sees in the example of the telephone evidence that consumer behavior and demand affect technological development and expansion of service. Thus, cheaper alternatives such as direct dialing and party lines, initially resisted by corporate AT&T because they might diminish AT&T's high quality of service, were implemented to capture the consumer market in some cases already served by independent competitors. Fischer also found that corporate advertising first focused on business and commercial applications. When the marketing message was adjusted in the 1920s to stress the sociability of telephony, the change in content came in response to patterns of consumer behavior, rather than in anticipation of them.28
These studies all challenge the association of natural monopoly and telephony by demonstrating the importance of consumer demand (rather than production efficiencies) to the process of diffusion of technologies and related services. In Chapter 3 of his dissertation, Mueller constructs a series of purely theoretical models in which he considers outcomes based on either a single service, dual service, or concentration of demand. He argues that dual service
can exist in situations in which demand is concentrated but not uniform. Conversely, interdependent demand and widely distributed communication patterns tend to result in convergence. This finding resonates both with the patterns that Lipartito found in North Carolina and Virginia as well as with the success of the local independent company in Fort Wayne, Indiana, where communications demand was primarily local and regional and could clearly be met by the local independent company.29
Finally, despite AT&T's undisputed technological advantage in long-distance service, Mueller points out that demand for long-distance service was small and concentrated in a limited stratum of business users.30 Theoretically, at least, this type of demand can sustain dual service, with different entities offering service in complementary niches, just as they, in fact, did in Barnett's analysis. Under the latter scenario, AT&T's superior long-distance capability could have enjoyed complementary relationships with regional companies. Since dual service, which would have afforded integrated local/long-distance service through cooperating organizations, is theoretically possible (hence monopoly is neither natural nor inevitable), he looks elsewhere for an explanation, turning his attention to strategy.
The key decision, he argues, was interconnection. Bell initially refused to interconnect with the independent telephone companies, fighting competition with lawsuits over alleged patent infringement and rate wars. Independent companies in Wisconsin and Ohio also initiated lawsuits, attempting to compel Bell to provide them interconnections because the telephone was a common carrier and therefore obligated to provide service impartially. These suits were generally either unsuccessful or withdrawn when independent companies realized that providing access for Bell to their subscribers ceded a valuable asset. By 1897, both AT&T and the independents subscribed to a theory of competition, which posited subscriber access territories (that is, the geographical areas in which subscribers were physically located) as a fixed resource. Thus, telephone service offered by one company necessarily diminished the potential market of a second company. Neither AT&T nor the independent companies recognized that the distinct communication needs of urban businessmen and those of small-town and rural residents might in principle have been served by overlapping telephone service areas.31
Bell's business model was consciously patterned on Western Union: high-end, national service to business clients under a single corporation that controlled both the wired plant and the service delivery. This was quite similar to the railroads, which owned the roadbed and the rolling stock and provided transportation services. In their day, this was a departure from existing practices: early
nineteenth-century turnpike companies constructed roads and charged a toll for access and use. They failed.
Despite the innovation in management represented by the railroads, over time the profitable and competitive enterprises were frequently those in which the service was separated from the physical plant and in which the physical system was segmented. American Express is a case in point. American Express began as a fast freight service: the company promised its clients rapid delivery, used the railroads as a common carrier, and was able to charge a select group of affluent clients what the market would bear. This meant that American Express targeted the lucrative end of the market, where the profit margins were high, and did not bear the cost of maintaining the plant itself. Rather, American Express could take advantage of what competitive pricing did exist from the transport companies while charging end-users for a special service.
A similar differentiation in service occurred in the electrical power industry. The industry evolved into a tripartite organization of large power-generating plants, transmission line companies, and local utilities that provided service to end-users. The most profitable part of this system was in long-distance transmission. Transmission line companies did not have to bear the extremely high cost of constructing power generating plants but controlled access to bulk power by the local utilities. In this case, the profitability was in segmenting the service and controlling the intermediary function. This was, and is, also the competitive part of the industry, suggesting that monopoly control is not necessary for profitability even though excessive competition can result in overbuilding and ruinous competition, as the railroads discovered and the Corsair incident illustrates.
All of the above examples have been drawn from the infrastructures that developed around science and technology. Banking feels like the odd man out. Banking is fundamentally a service industry based on information; it is the sum of a series of informational transactions, based on shared concepts, procedures, and relationships that enabled commodities and funds to flow within and among regions. Banking is, therefore, an infrastructure of and about information: information in the form of discounts, interest, and prices, and information (or misinformation) that allowed consumers (including other bankers and investors) to make decisions about spending and saving.
Eighteenth- and early nineteenth-century merchants considered banking an auxiliary of trade. After about 1840, the volume and complexity of financial transactions, which resulted from expanding population, public works, and economic development, precipitated specialization among the emerging financial services. Finance became separate from commerce, and banking segmented into commercial and investment houses. The range of financial intermediaries, which
included savings and loans, brokerages, and insurance companies, enabled the nation's savings to be invested in countless projects ranging from home mortgages to railroads. Of these, commercial banks were probably the most important,32 and much of the shape and form of banking varied within increasingly restrictive thresholds and boundaries established by state and federal laws, mediated by local demands for credit and opportunities for investment.
Antebellum banks provided a source of credit and a circulating medium of exchange by issuing redeemable bank notes. An antebellum bank made money if it invested wisely, and if its notes stayed in circulation. Most of the bank's investment capital came from its investors. If a bank failed, not only did the investors lose whatever they had put into it, but the various noteholders, who ranged from small merchants and households to other banks, also suffered because the bank notes were now unredeemable and hence worthless. Thus, early banking is largely about managing currency, but in the process, a series of cooperative structures emerged.
The Suffolk system (1819 to 1858) in Massachusetts was one response to the problem of shaky bank notes and represented an attempt to stabilize the system by increasing confidence, or trust, in bank notes. The Suffolk Bank intended to reverse Gresham's Law by driving out bad money with good; to a large extent the bank succeeded, partially by threatening to redeem large numbers of notes issued by irresponsible banks, and partially by compelling other banks to participate in the system and to maintain reserves with the Suffolk Bank as security for their notes.33 Free banking (1837 to 1863) was another strategy; states established minimum entry requirements and stipulated purchase of public securities that were held as security against bank note issues. Finally, discount rates, which emerged in the context of domestic money markets, were a key indicator of the relative risk associated with a given bank's notes, and the market itself discriminated between good money and bad, while facilitating flows of capital within and among regions.34
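The discount mechanism itself is simple arithmetic. As an illustrative example (the figures here are hypothetical, not taken from the period sources):

```latex
% Hypothetical bank-note discount:
%   face value of note:       $10.00
%   quoted discount rate d:   2%
%   value accepted in trade:  $10.00 x (1 - 0.02) = $9.80
\[ \text{accepted value} = \text{face value} \times (1 - d) \]
% A higher quoted d signals a greater perceived risk that the issuing
% bank will fail to redeem its notes in specie (plus the cost of
% presenting a distant note for redemption), so published discount
% sheets functioned as a market price on each bank's trustworthiness.
```

In this sense the discount rate did for antebellum currency what posted prices do in any market: it condensed dispersed information about risk into a single number that strangers could act on.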
Two antebellum tools, reserve requirements and discount rates, remain features of the modern banking infrastructure and have become instruments of regulatory policy. Vestiges of three other antebellum innovations also survive in modern practice: cooperative bank insurance, correspondent relationships, and the clearinghouse. Of these, the clearinghouse may be the most significant as an example of emergent cooperative behavior in a competitive system.
New York City emerged as the nation's financial center in the 1850s, and not surprisingly, many innovations originated there or in Albany. The New York Clearinghouse was organized in 1853. Like Suffolk, it relied on cooperation among interdependent institutions to achieve greater operational efficiency and institutional stability among the members and hence to instill greater public confidence in the banking system. Clearinghouses required reserves, instituted disclosure requirements, and came to act as a lender of last resort by issuing loan certificates, which member banks used during periods of financial stress. Interdependence
among banks was furthered by correspondent relationships among banks and bankers' balances (reserves deposited by one bank in another as security for checks and notes), which were codified by New York state law. The New York state hierarchical model of country and city banks took on national proportions, and by the Panic of 1857, actions by New York City bankers had wide impacts.35 Not surprisingly, the Civil War legislation, which mapped out a pyramid of relationships among banks patterned on New York's law and practice, also set New York City at the apex. Elements of the clearinghouse system were even perpetuated in the Federal Reserve system (1913), including check clearing, hierarchical organization based on the size of the bank and the population it served, and voluntary membership. The ironyand the conundrumis that structural interdependence, which strengthened the position of any one member, also enabled weakness in one part of the system to travel throughout it,36 as evidenced in crises in 1857, 1907, and 1929-1930.
This paper carries a question in its title. If the utility of history is to provide solutions, then the answer is "proceed with caution," because the solutions that were right in 1890 are unlikely to work in 1990, and it would be folly to map rate structures from 1900 to 2000. Indeed, the Bell system's initial decision to pattern its business strategy on Western Union's successful national monopoly fundamentally misunderstood the implications of telephony as a social phenomenon, and the error brought Bell intense competition in many local and regional markets. But if the point is to provide a common framework and baseline of experience, then history has something to offer.
Whether by abstracting service delivery from the physical plant or by segmenting the system, differentiation of services as a means of providing competition and introducing a profit incentive raises a few questions. How will the physical plant be built and who will maintain and upgrade it? And what can be done about cream-skimming, that is, offering services that skim off the lucrative end of the market so that serving the entire market becomes unprofitable? Historically, both the telephone company and the electrical power companies have used cross-subsidization among market segments as a pricing strategy. In the 1920s, residential electric rates subsidized industrial rates, and from the end of World War II until 1974, long-distance telephone rates subsidized local rates.
Federal intervention has been one means of ensuring fairness, either through regulation or by creating incentives through tax-advantaged or below-market loans in the case of rural electrification. Indeed, the Rural Electrification Administration (REA) is considered a success story. In 1930, about 10 percent of the farms in the United States had access to electricity; by 1946, half of the nation's farms were electrified and the program was solvent. Nearly every dollar that had been loaned out had been repaid. The REA did result in the organization of more
than 1,000 decentralized, small-scale cooperatives serving 5 million families. In the postwar period, it also led to competition with local utility companies that effectively increased their range of service. In some instances, the power companies skimmed off the most lucrative customers by building their lines through the most profitable areas, thus depriving local cooperatives of an important segment of their customer base. Moreover, in the late 1940s, efforts by the Truman administration to continue to expand public power were frustrated by Congress' refusal to fund construction of transmission lines. In 1952, transmission line construction was included in the appropriation to Southwest Power. Through a combination of REA funding and federal flood control policies, which resulted in hydroelectric dam construction, national electrification was accomplished in the 1950s.37
Federal intervention clearly resulted in expansion of electricity to underserved rural populations. However, it is less clear whether public intervention has historically best served the interests of the consumer or those of the producer. Consider the example of banking. Key concepts and relationships of the banking infrastructure, which were invented to improve its stability and shore up public confidence, migrated from the private to the public sector, where legislation and regulation broadened the scope of their impact. These include the discount rate, reserves, checks and check clearing, and interdependence through cooperative structural relationships. Many historical and contemporary observers argue that banking remains sound and profitable so long as public confidence in it remains strong; panics occur when the public loses confidence. But when voluntary private cooperative solutions seemed to fail, public reform efforts have stepped in, and public requirements, whether in the form of free banking thresholds or New Deal mandates, have become a means of building public confidence in what is ultimately a private system.
On the other hand, the contents of the New Deal reforms do not appear to have addressed the underlying economic causes of the Great Depression,38 and the separation of investment from commercial banking, the core New Deal reform, in fact met the needs of bankers themselves. During the expansionist 1920s, the distinctions between investment and commercial banking had blurred. Real wages were rising, and large numbers of small accounts became attractive, since they could accumulate into significant pools of capital. In this new market, the commercial banks enjoyed several advantages. They had access to a new source of funds through depositors' accounts; both investment banks and trust companies had traditionally relied on the resources of a select clientele. The strategy was very successful. Between 1927 and 1930, the percentage of bond issues that originated with banks and their affiliates doubled, while the influence of private investment banking shrank accordingly. But when the crash came, more people experienced the collapse directly, and their bankers became their targets. Faced with hostile hearings on Capitol Hill, private investment bankers, who had seen commercial banks encroach upon their securities business in the 1920s, lined up
behind the separation of investment and commercial banking, thus reducing competition in a declining market. Commercial bankers themselves came on board in part to forestall more severe regulation and in part because the new legislation promised to exclude investment banks from demand deposit business, relieving commercial banks from the need to pay interest on demand deposits and securing them a part of the market.39
Proponents of the capture theory of regulation, wherein the industry subject to regulatory authority "captures" or subverts the commission or agency to its own ends, can easily see in the structure of early regulatory agencies how stabilization of the industry met corporate needs by limiting competition. Indeed, Gregg Jarrell argues that utility and power companies that were subject to regulation after 1912 actually saw relatively higher profits, higher rates, and lower output after regulation than before. Thus, he believes that state regulation at the turn of the century was a "proproducer policy."40 William Emmons in his dissertation in business economics comes to somewhat similar conclusions with respect to regulation in the 1930s. Namely, state regulation appears to have had little or no effect upon prices, but prices tended to fall when competition was present. Thus, competition, not regulation, resulted in lower prices to consumers.41
Whether the private sector would have expanded into otherwise unprofitable or marginal markets in electrical power or telephony without federal involvement through mandates, preferential loans, and hydropower construction projects is a separable and as-yet unanswered question. Moreover, as the history of banking illustrates, with liquidity crises and panics occurring with depressing 20-year regularity between 1800 and 1930, the price of competition may be a level of instability that is considered unacceptable.
For the near term, we are likely to live out the curse of interesting times. But based on these historical examples, I would venture a few predictions:
1. There will be several, not one, "killer apps" in the information technologies and they will possess the following characteristic. They will be services that clearly meet users' immediate needs (product substitution) and enable consumers to begin to do things differently, just as small consumers of central power sources took advantage of fractionalized delivery of power to obtain interior illumination and to begin to mechanize their labor-intensive processes. We have already seen this in the introduction of word processing and spreadsheet programs as well as the deployment of intranet and e-mail technologies. Recall, too, that the demand is differentiated, and not all "users" are conventional end-users. For example, there will be a market for intermediate services just as there exists a machine tool industry to support manufacturing. The information technologies will thus ripple through our institutions, becoming ubiquitous: sometimes noticeable, like the lamp on the wall, and sometimes invisible, like the wiring behind it.
2. These "killer apps," like Edison's lamp, will not be ends in themselves but will unlock an underlying technology and science that are sufficiently robust to support other kinds of activities, just as the significance of the lamp was not interior illumination but the system of power generation and delivery that eventually enabled the creation of a power infrastructure. Many of today's information applications themselves will thus migrate into the information infrastructure wherein the defining characteristic is the ability to support more advanced applications and services.
3. There will be winners and losers. We have already seen this exemplified in former household names that are now barely memories. This is not an aberration of the high-technology world but rather characterizes all of the infrastructures we have studied from railroads to banking. How stability and fairness will be achieved, whether through government regulation and/or incentives, through market mechanisms such as pricing strategies, or through a combination of public and private strategies, remains to be seen. History is replete with examples and experiments, some of which succeeded and some of which failed. What they show conclusively is that we humans are inventive and time will tell.
1. Thomas P. Hughes, Networks of Power: Electrification in Western Society, 1880-1930 (Baltimore and London: The Johns Hopkins University Press, 1983), 108; Thomas P. Hughes, "The Electrification of America: The System Builders," Technology and Culture 20 (January 1979): 143.
2. Population data are based on the U.S. census and are tabulated in the appendices to Bernard Bailyn et al., The Great Republic (Lexington, Massachusetts: D.C. Heath and Company, 1977), xviii.
3. David E. Nye, Electrifying America: Social Meanings of a New Technology (Cambridge, Massachusetts: MIT Press, 1990), 260-61.
4. Carter Goodrich, Government Promotion of Canals and Railroads, 1800-1890 (New York: Columbia University Press, 1960), 271. On the size of the grants, see Lloyd J. Mercer, Railroads and Land Grant Policy: A Study in Government Intervention (New York: Academic Press, 1982), 7.
5. See J.B. Legler, R. Sylla, and J.J. Wallis, "United States City Finances and the Growth of Government, 1850-1902," Journal of Economic History 48 (1988): 347-56.
6. The expansion of federal authority into social welfare issues during the New Deal is discussed in Robert L. Rabin, "Federal Regulation in Historical Perspective," Stanford Law Review 38 (1986): 1192, 1243-1261.
7. On corporate support for Edison's research, see Hughes, "The Electrification of America," 130-32; on Western Union's competition and the invention of the telephone, see David A. Hounshell, "Bell and Gray: Contrasts in Style, Politics, and Etiquette," Proceedings of the IEEE 64 (September 1976), 1306; on the significance of Edison's systems approach, see Hughes, Networks of Power, 21; on prior experiments in electric lamp design, see Robert Friedel and Paul Israel, Edison's Electric Light: Biography of an Invention (New Brunswick, N.J.: Rutgers University Press, 1986), 115; on the competition with gas, see ibid., 123, 206-207.
8. George Rogers Taylor and Irene D. Neu, The American Railroad Network, 1861-1890 (Cambridge, Massachusetts: Harvard University Press, 1956), 14, 42-45, 48.
9. Taylor and Neu, The American Railroad Network, 52-58; Goodrich, Government Promotion of American Canals and Railroads, 179-81; Thomas E. Root, Railroad Land Grants from Canals to Transcontinentals (Tulsa, Oklahoma: Natural Resources Law Section Monograph Series, No. 4. Section of Natural Resources Law, American Bar Association and the National Energy Law and Policy Institute, University of Tulsa, 1987), 19-20.
10. William Paul Barnett, "The Organizational Ecology of the Early American Telephone Industry: A Study of the Technological Cases of Competition and Mutualism" (Ph.D. dissertation, University of California, Berkeley, 1988), 12; Milton Mueller, "The Telephone War: Interconnection, Competition and Monopoly in the Making of Universal Telephone Service, 1894-1920" (Ph.D. dissertation, University of Pennsylvania, 1989), 3; Peter Temin and Louis Galambos, The Fall of the Bell System: A Study in Prices and Politics (Cambridge: Cambridge University Press, 1987), 11.
11. William J. Hausman and John L. Neufeld, "The Structure and Profitability of the U.S. Electric Utility Industry at the Turn of the Century," Business History 32 (April 1990): 232-33.
12. Hausman and Neufeld, "The US Electric Utility Industry," 238-39, 241; Hughes, "The System Builders," 157-59. American Power and Light was, for example, instrumental in organizing Utah Power and Light; see John S. McCormick, "The Beginning of Modern Electric Power Service in Utah, 1912-1922," Utah Historical Quarterly 56 (winter 1988): 4-22. On the extent of electrical power controlled by the holding companies, see Jacobson, "Water Works, Electric Utilities, and Cable Television," 83. On the dependence of municipal utilities on private companies for bulk power, see Richard Rudolph and Scott Ridley, Power Struggle: The Hundred Year War Over Electricity (New York: Harper & Row, 1986), 38-41.
13. Friedel and Israel, Edison's Electric Light, 123, 206-207.
14. The cost of shipping one ton of wheat or flour from Buffalo to New York City fell from $100 by road to $10-$12 by canal after the Erie Canal opened in 1825. Moreover, a single canal barge could haul a load ten times the size of that drawn by a four-horse Conestoga wagon on the best toll roads. Robert William Fogel, "Notes on the Social Saving Question," Journal of Economic History 39 (1979): 30, 49-50; Patrick O'Brien, The New Economic History of the Railways (New York: St. Martin's Press, 1977), 83; on the primacy of the transfer from road to water, see Albert Fishlow, American Railroads and the Transformation of the Ante-bellum Economy (Harvard Economic Studies CXXVII. Cambridge, Mass.: Harvard University Press, 1965), 44, 55, 77; John B. Rae, The Road and Car in American Life (Cambridge, Mass., and London: MIT Press, 1971), 20; John F. Stover, Iron Road to the West: American Railroads in the 1850s (New York: Columbia University Press, 1978), 160-64. There has been extensive discussion with respect to calculating the rates via water and rail reflecting the variations by commodity, distance, and destination as well as widespread rebating. For a review of the technical literature on this issue, see David L. Lightner, "Railroads and the American Economy: The Fogel Thesis in Retrospect," Journal of Transport History 4 (1983): 21-26.
15. Harold L. Platt, The Electric City: Energy and the Growth of the Chicago Area, 1880-1930 (Chicago: University of Chicago Press, 1991), 46-47; "Houston's First Battle Over Utility Rates," The Houston Review: History and Culture of the Gulf Coast 9 (1987): 59-68. The principal output of petroleum refineries from 1859 to 1900 was kerosene, which was used as an alternative to gas and electricity for interior illumination. Gas required connections to a central source. Although electricity might be obtained from a self-contained plant, the threshold cost was relatively high, and there remained the problem of exhausting the heat generated by the plant. Kerosene, however, was easily transported, provided acceptable, steady light, and did not require hook-ups to a centralized system. Still, by 1900, petroleum refiners could see that electric lighting would soon displace kerosene and focused their attention on developing stoves and furnaces, which had been available but were not widely used. Oil burning furnaces went on the market in the 1920s but did not begin to displace coal-burning furnaces until the price of
coal soared after World War II; see Ruth Schwartz Cowan, More Work for Mother: The Ironies of Household Technology from the Open Hearth to the Microwave (New York: Basic Books, 1983), 94-95; Mark H. Rose, "Urban Environments and Technological Innovation: Energy Choices in Denver and Kansas City, 1900-1940," Technology and Culture 25 (July 1984), 532-34.
16. Richard B. Du Boff, Electric Power in American Manufacturing, 1889-1958 (New York: Arno Press, 1979), 64, 71-74, 98-100, 134-35.
17. In a group-drive system, one energy source supplies power to several machines; in a unit-drive system, there is one energy source per machine. Warren D. Devine, Jr., "From Shafts to Wires: Historical Perspective on Electrification," Journal of Economic History 43 (June 1983): 347-68, 371; Harry T. Oshima, "The Growth of U.S. Factory Productivity: The Significance of New Technologies in the Early Decades of the Twentieth Century," Journal of Economic History 44 (March 1984): 164. This summary of the advantages of unit-drive systems, which is based on Devine's important essay, is found in Paul A. David, "The Dynamo and the Computer: An Historic Perspective on the Modern Productivity Paradox," American Economic Review 80 (May 1990): 358.
18. Arthur G. Woolf, "Electricity, Productivity, and Labor Saving: American Manufacturing, 1900-1929," Explorations in Economic History 21 (April 1984): 178, 189. The average price of electricity fell by 50 percent between 1910 and 1929; coal prices tripled in the same period, and wages doubled.
19. David E. Nye, Electrifying America: Social Meanings of a New Technology (Cambridge, Massachusetts: The MIT Press, 1990), 186-87. Nye somewhat oversimplifies Woolf's thesis, since Woolf himself acknowledges (based on Devine's research) "the tremendous amount of freedom in plant design" (p. 177) afforded by electric power, although he predicates willingness to use electric power on its falling prices. Moreover, Woolf goes on to conclude that "the process of electrification allowed for substantial management and factory design changes that greatly enhanced productivity"; see Woolf, "Electricity and Productivity," 189.
20. This tautology is particularly acute in the history of the internal combustion engine and the automobile, although it also riddles the history of telephony. For a discussion, see David A. Kirsch, "The Electric Car and the Burden of History: Studies in Automotive Systems Rivalry in America, 1890-1996" (Ph.D. dissertation, Stanford University, 1996), 5, 22-31.
21. Barnett, "Organizational Ecology of the Early American Telephone Industry," 13, 16-17.
22. Kenneth Lipartito, The Bell System and Regional Business: The Telephone in the South, 1877-1920 (Baltimore: The Johns Hopkins University Press, 1989), 93.
23. Stover, Iron Road to the West: American Railroads in the 1850s, 117.
24. Kenneth Lipartito, "System Building at the Margin: The Problem of Public Choice in the Telephone Industry," Journal of Economic History 49 (June 1989): 323; The Bell System and Regional Business: The Telephone in the South, 1877-1920 (Baltimore: The Johns Hopkins University Press, 1989), Chapter 1.
25. Milton Mueller, "The Switchboard Problem: Scale, Signaling, and Organization in Manual Telephone Switching, 1877-1897," Technology and Culture 30 (July 1989): 534-60, passim.
26. Lipartito, "System Building at the Margin," 332.
27. Claude S. Fischer, America Calling: A Social History of the Telephone to 1940 (Berkeley: University of California Press, 1992), 136, Chapters 4, 5, and 7 for detailed discussion of conclusions stated on pp. 261-63.
28. Fischer, America Calling, 47-48, 81-83; see also "'Touch Someone': The Telephone Industry Discovers Sociability," Technology and Culture 29 (January 1988): 32-61.
29. Milton Mueller, "The Telephone War: Interconnection, Competition and Monopoly in the Making of Universal Telephone Service, 1894-1920" (Ph.D. dissertation, University of Pennsylvania, 1989), 132-33; Lipartito, The Bell System and Regional Business, 167.
Mueller bases his argument on interdependent demand theory, particularly as articulated in 1989 by W. Brian Arthur, at that time a professor of economics at Stanford University. Arthur
postulated a theory of "increasing returns," by which he meant that the utility of a given technology increases as more people select that technology. However, Arthur also stresses that the initial selection may be a matter of historical accident, rather than the result of economic efficiencies or technological superiority. Over time, as increasing returns tend to create positive feedback that magnifies otherwise random variation, the process of adopting technology will tend to converge on a standard. Arthur describes the process of technological adoption as "a random walk with absorbing barriers," wherein the absorbing barrier is the point at which a given technology has a large enough market advantage to compel most users to conform to it, a phenomenon he calls "lock-in." The firm that controls the critical technology will, therefore, have obtained a monopoly position. This synopsis of Arthur's thesis is based on Mueller's discussion; see Mueller, "The Telephone War," 44-46. The original paper is W. Brian Arthur, "Competing Technologies, Increasing Returns, and Lock-In by Historical Events," The Economic Journal 99 (March 1989): 116-31. The most obvious current example of the phenomenon Arthur describes is Beta versus VHS. Arthur is not the only architect of the theory of increasing returns, but he has given it a rigorous, econometric expression that nonetheless accommodates the vagaries of historical circumstance.
30. Mueller, "The Telephone War," 47-48.
31. Mueller, "The Telephone War," 183-97.
32. On the importance of commercial banks relative to other financial services entities, see Larry Schweikart, "U.S. Commercial Banking: A Historiographical Survey," Business History Review 65 (1991): 606-607. Schweikart's essay is an excellent review of the literature as of about 1990 as well as an introduction to the fundamental issues in the field.
33. On the Suffolk system, see Donald J. Mullineaux, "Competitive Monies and the Suffolk Bank System: A Contractual Perspective," Southern Economic Journal 53 (1987): 884-98.
34. There is a substantial literature on free banking. In addition to Schweikart's essay, the following offer some perspectives on this period: James A. Kahn, "Another Look at Free Banking in the United States," American Economic Review 75 (1985): 881-85; Hugh Rockoff, "Institutional Requirements for Stable Free Banking," Cato Journal 6 (1986): 617-34; "The Free Banking Era: A Reexamination," Journal of Money, Credit, and Banking 6 (1974): 141-68; Arthur J. Rolnick and Warren E. Weber, "Inherent Instability in Banking: The Free Banking Experience," Cato Journal 5 (1986): 877-90; "New Evidence on the Free Banking Era," American Economic Review 73 (1983): 1080-91; George A. Selgin and Lawrence H. White, "The Evolution of a Free Banking System," Economic Inquiry 25 (1987): 439-57.
35. Charles W. Calomiris and Larry Schweikart, "The Panic of 1857: Origins, Transmission, and Containment," Journal of Economic History 51 (1991): 807-34 provides an overview of the literature as well as a context-dependent explanation of the panic itself based on the structure of the financial markets and contemporary events. See also: Richard H. Timberlake, "The Central Banking Role of Clearinghouse Associations," Journal of Money, Credit, and Banking 16 (1984): 1-15; Gary Gorton, "Clearinghouses and the Origin of Central Banking in the United States," Journal of Economic History 45 (1985): 277-83; Gary Gorton and Donald J. Mullineaux, "The Joint Production of Confidence: Endogenous Regulation and Nineteenth Century Commercial-Bank Clearinghouses," Journal of Money, Credit, and Banking 19 (1987): 457-68.
36. See, for example, Bruce D. Smith, "Bank Panics, Suspensions, and Geography: Some Notes on the 'Contagion of Fear' in Banking," Economic Inquiry 29 (1991): 230-48.
37. On federal flood control policy, see D. Clayton Brown, Electricity for Rural America: The Fight for the REA (Contributions in Economics and Economic History, No. 29; Westport, Connecticut: Greenwood Press, 1980), 109-13; Jeanette Ford, "Electricity for a Region: The Southwest Power Administration," Chronicles of Oklahoma 60 (Winter 1982-1983): 455; Charles Coate, "The New School of Thought: Reclamation and the Fair Deal, 1945-1953," Journal of the West 22 (April 1983): 58-59. On competition between rural cooperatives and
local power companies, see David Mitchell, "The Origins of the Robertson Electric Cooperative," East Texas Historical Journal 25 (2, 1987): 71-79.
39. Explaining the Great Depression, which is notable primarily for its duration, has taken on a contentious life of its own. Current explanations emphasize its complexity and its combination of macroeconomic, internal, and monetary dimensions. What is clear, however, is that it was not caused by commercial bankers invading the precincts of investment bankers. For overviews, see Barry Eichengreen, "The Origins and Nature of the Great Slump Revisited," Economic History Review 45 (1992): 213-39; Schweikart, "U.S. Commercial Banking," 633-35; Eugene Nelson White, "Before the Glass-Steagall Act: An Analysis of the Investment Banking Activities of National Banks," Explorations in Economic History 23 (1986): 52.
40. George J. Benston, The Separation of Commercial and Investment Banking: The Glass-Steagall Act Revisited and Reconsidered (New York: Oxford University Press, 1990), 136-38; White, "Before the Glass-Steagall Act," 34-37; "The Political Economy of Banking Regulation, 1864-1933," Journal of Economic History 42 (1982): 39; Robert Eli Litan, "An Economic Inquiry into the Expansion of Bank Powers" (Ph.D. dissertation, Yale University, 1987), 45-47; Vincent P. Carosso, Investment Banking in America (Harvard Studies in Business History 25; Cambridge, Mass.: Harvard University Press, 1970), 249-51; George David Smith and Richard Sylla, "The Transformation of Financial Capitalism: An Essay on the History of American Capital Markets," Financial Markets, Institutions & Instruments 2 (no. 2, 1993), 27.
40. Gregg A. Jarrell, "The Demand for State Regulation of the Electric Utility Industry," Journal of Law and Economics 21 (October 1978): 292-93.
41. William Monroe Emmons III, "Private and Public Responses to Market Failure in the U.S. Electric Power Industry, 1888-1942" (Ph.D. dissertation, Harvard University, 1989), 172-73.
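The adoption dynamic summarized in note 29 can be illustrated with a small simulation. This is a minimal sketch of the lock-in idea, not Arthur's original formulation; the function name, the agent-payoff rule, and all parameter values (equal base preferences, a linear network term) are illustrative assumptions.

```python
import random

def adoption_walk(n_agents=10_000, preference=1.0, network=0.1, seed=0):
    """Sketch of a "random walk with absorbing barriers" in technology adoption.

    Two technologies, A and B, compete. Agents of two types arrive in random
    order; each type has a small intrinsic preference for one technology, but
    payoffs also rise with the number of prior adopters (increasing returns).
    While neither technology leads by much, adoption drifts randomly; once one
    technology's lead exceeds preference / network adopters (here, 10), the
    network term swamps intrinsic preference and every later agent conforms
    to the leader, i.e., the market "locks in."
    """
    rng = random.Random(seed)
    n_a = n_b = 0
    for _ in range(n_agents):
        prefers_a = rng.random() < 0.5  # agent type is a historical accident
        payoff_a = (preference if prefers_a else 0.0) + network * n_a
        payoff_b = (0.0 if prefers_a else preference) + network * n_b
        if payoff_a > payoff_b:
            n_a += 1
        else:
            n_b += 1
    return n_a, n_b

if __name__ == "__main__":
    n_a, n_b = adoption_walk()
    # Which technology wins depends on the early random arrivals,
    # but one of them ends up with nearly the entire market.
    print(n_a, n_b)
```

Under these assumptions the early arrivals are decisive: re-running with different seeds flips which technology wins, while the near-total dominance of the winner is robust, mirroring Arthur's point that historically accidental beginnings can converge on a monopoly standard.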
Computer and Communication Impacts on the Organization of Enterprise and the Establishment and Maintenance of Civil Society
University of California, Irvine
This paper is the first step toward a comprehensive review of critical issues in the social and economic impacts of computers and communications technologies. It is broad in its coverage, corresponding to the charge we were given.1 The paper considers social and economic impacts at several levels: groups, organizations, trans-organizational systems, and society. It also discusses the policy-relevant implications of the research and issues for future research. It is grounded in empirical research as well as established and emerging theory. However, it proceeds from the assumption that the changes being wrought by computers and communications technology are of such magnitude that fundamental theoretical aspects of social understanding might be challenged. Thus, we are not bound by the isomorphic constraints of existing disciplinary tradition in our analysis. The ultimate purpose of the paper is to challenge the community of scholars engaged in research on the profound socio-economic and socio-technical changes under way.
The audience for the paper includes scholars in the social sciences, broadly defined, including economics, sociology, political science, psychology, communications, and management. It also includes scholars from the science and engineering disciplines who are concerned with the effects of their creations and who wish to learn more about how the processes of technological design, development, implementation, and maintenance can be improved in the general interest of human welfare. Finally, it includes scholars from the humanities and
1The charge for this paper was given by Hal Varian in a message of May 5, 1997: "I suggest that you try to broaden your overview to include a survey of the potentially policy-relevant economic and social science contributions to computers and communications issues. Relevant issues include the role of government, privacy, free speech, intellectual property issues, employment, training/education, commerce, communities, and organizations. I realize that you cannot address all of these issues in depth, but it would be useful to look at as many as possible."
the arts who are interested in the changes, actual and potential, that computers and communications technology imply for human experience and self-reflection.
The paper is preliminary, as any paper of this breadth must be. It is a collage, assembled by the authors on the basis of their own research and that of others. Despite these limitations, the paper supports several strong conclusions with implications for future research. We hope it provides a background for, and a stimulus to, discussion at the workshop.
This paper discusses three areas:
• The organization of enterprise,
• Establishment and maintenance of civil society, and
• Recommendations for research.
The rationale for this organization is dependent on several underlying assumptions that must be understood if the discussion is to make sense. The first assumption is that all social phenomena must ultimately be understood in ways that account for individual action. Although our analysis takes place above the individual level, beginning at the lowest point with characteristics of goal-oriented work groups, we recognize that the foundations of our discussion must be traced to explanations of individual intention and action, no matter how extensively mediated or channeled by higher-level social forces or conditions of the natural world. Moreover, we assume that the closer our analysis is to the individual level, the greater will be the power of individualistic explanations in accounting for what we observe. At the higher end of our analysis, at the broad levels of culture and society, the power of individualistic explanations is expected to weaken considerably.
These assumptions justify our selection of the categories of enterprise and civil society as the two levels of discussion for our purposes. We acknowledge that other conceptual schema can work to organize this discussion. However, we feel that this scheme has the advantages of capturing all the major social science perspectives needed for the task, while remaining parsimonious and indicative of the distinction between "micro" and "macro" approaches to social phenomena common throughout the social sciences.
Enterprise in our use refers to those human activities that are undertaken for the direct production of goods and services, whether under a market-based governance structure or a policy-based regime. Enterprise includes everything from local businesses to multinational firms in the private sector, and everything from special districts for services such as flood control to multinational military forces in the public sector. The defining features of enterprise include the pursuit of production objectives within specific production constraints, an inherent logic of
production (however poorly understood by the participants in the enterprise), and tangible measures of performance in the accomplishment of the objectives. We adopt a perspective that is influenced primarily by microeconomics, social psychology, and organizational politics, each of which has close ties to individualist views of social behavior. The operating assumption is that the people involved in enterprise will, to the extent that they are able, act in ways that conform to economic rationality, are socially tractable at the group level, and are politically stable and salient. The successful pursuit of these goals will produce equilibrium conditions that are not easily disturbed. This implies that existing equilibria, produced largely before the advent of recent computers and communications technologies, will not be easily disturbed. The onus, from this perspective, is on showing that impacts from those technologies are real, significant, and lasting.
Civil society refers to the larger social order that makes economic rationality, social tractability, and political stability possible at the local level of groups, organizations, and production sectors. This position proceeds from the premise that democratic government is a satisfactory, though not necessarily optimal, form of social governance for achieving those ends, while other forms of social governance, such as autocracy, oligarchy, and plutocracy, are not. Our focus is mainly on social institutions that shape the organization of enterprise, and is predicated on the assumption that civil society enables particular organization of enterprise, and not the other way around. Our assessment of computers and communications technologies in the realm of civil society draws on intellectual perspectives from political theory, the sociology of institutions, and cultural anthropology, and to a lesser degree, institutional economics.
We have attempted to anchor our discussion in empirical research findings whenever possible, although in this draft we have not specifically cited the research we have used. However, a hallmark of research on the social impacts of computers and communications is that coverage of this vast field has been sporadic and episodic. In addition, the high probability that use of these technologies is changing fundamental aspects of enterprise and civil society makes it exceedingly difficult to anchor some of our conclusions and recommendations in established empirical evidence. As we suggest in the conclusion to this paper, a major challenge for the research community is to define, design, and implement research into the social impacts of computing and communications that serve as sound guidance for technology design and development, organization of enterprise, and the establishment and maintenance of civil society in the information age.
Computers, in this review, refer to substantially more than the basic machines associated with computing. Computer technology is a "package" that encompasses a complex, interdependent system comprising people (computer specialists, users, managers), hardware (computer mainframes, peripherals, telecommunications
gear), software (operating systems, utilities, and application programs), techniques (management science models, procedures, organizational arrangements), and data. Computing and communications technologies are increasingly intertwined in the everyday functioning of socio-technical systems at all levels of organizations and society. Other information technologies, especially mass communication technologies such as film, audio recordings, radio, television, and print, have implications for organizations and society, but we do not include them in this analysis. We focus on computers and those key information technologies that tend to be closely linked with computers, mainly data communications.
The Organization of Enterprise
We begin with the assumption that the underlying drivers that shape the individual actions leading to the organization of enterprise are individual desires for economic rationality, social tractability, and political stability. These objectives are meaningless unless understood within a context of civil society that embodies abiding social values, and thus must be seen as derivative of, rather than generative of, those values. It might therefore seem that the organization of enterprise is determined by civil society, but this is not the case. Certain forms of social order make particular kinds of enterprise difficult or even impossible, but within the space of what is possible under a given social order, the organization of enterprise can be seen as a matter of choices made by individuals and groups. It is further assumed that the individuals and groups making such choices are at least to some degree blind to the outcomes that particular choices entail; thus, satisfactory organization schemes will be discovered at least in part through trial and error rather than created entirely by design.
This section deals with the organization of enterprise by dividing the subject into key issues that relate to computers and communications technologies. These are the concept of organization, the organization of production and distribution, organizational structure, enterprise boundaries, mediation patterns, organizational politics and process, and work life.
Perhaps the most important potential impact of computers and communications on organizations is a shift in the very concept of "organization" as an economic and social entity. Once considered to be semipermanent and routinized by definition, ideal organizations increasingly have come to be seen as flexible, change-oriented, and able to shift their boundaries, alliances, and partnerships rapidly to meet changing conditions. Computers and communication technologies increasingly permit anytime, anywhere communication, synchronous and asynchronous collaboration, and tight linkages in operational processes within and between organizations (e.g., manufacturers and their suppliers and distributors,
and manufacturers and the direct buying public). The concept of the "adhocracy," a fluid organization in which members come and go as interests change, has emerged as a competitor to the concept of bureaucracy. After many decades in which increasing vertical integration of production and growth served as totems of success, many organizations have divested themselves of every function that was not a core competence and that could possibly be "outsourced" or bought on the market. Small really did become beautiful, at least in principle. Young entrepreneurs who started little companies in their garages built novel ideas into huge companies and fortunes, capturing the imagination of the world. And the mighty, such as AT&T, IBM, and GM, appeared shaken as the world they had built started to collapse around them.
Yet, as recent history has shown, organizations such as AT&T, IBM, and GM have by no means been pushed aside by the changes of the information age. They have adopted and adapted the technologies and harnessed them in ways that have allowed radical "downsizing" of work forces while retaining and in some cases enhancing top management control over firms' performance and profitability. And the start-up companies created in garages have found it necessary to adopt time-honored aspects of organizational hierarchy in order to function effectively. This lesson from recent history reveals an important but frequently overlooked aspect of the information revolution: that its revolutionary character is being channeled through pathways established by powerful social and institutional forces that are not necessarily swept aside by the effects of technology, no matter how powerful those effects are.
Much of the rhetoric about profound change in organizations has been speculative and undisciplined, based more on idealized views of what organizations ought to be than on the practical realities that shape organizational form and function. One can construct scenarios of organizational demassing and decentralization, but one can just as easily construct sound arguments that computer and communication technologies give new life to the traditional bureaucracy. Functions normally carried out by middle managers (information gathering, decision making within directives, communication with lower-level staff, and monitoring and upward reporting of activities carried on below) can be replaced to some extent with technology. The resulting "flattening" of the organization through the elimination of middle managers has been said to bring greater "empowerment" of remaining employees. But technological change can just as easily allow significant increases in organizational centralization, tighter monitoring of employee activity, more effective enforcement of compliance with the desires of top management, and the redesign of tasks in ways that make it difficult for employees to act outside of prescribed patterns.
There are reasons to be confident about profound changes under way in the character and concept of organizations as a result of new computer and communications technologies. At the same time, it is important not to let ideological enthusiasm substitute for careful reasoning and empirical research. The remainder
of this section on the organization of enterprise explores the ways in which impacts might arise, and forms the base for discussion of needed research.
Economic activity has long been divided into the production of goods and services and their distribution to final consumers. Neither production nor distribution makes sense without the other, and the concept of the value chain has emerged to unite them in an end-to-end scheme. Of particular importance is the concept of coordination. Coordination is both necessary and costly, and in the past several decades much attention has been focused on transaction costs (easily observable costs of coordination) to explain how and why production and distribution are organized the way they are. In particular, the focus has been on the choice between organizing by hierarchy, meaning the imposition of a policy regime to ensure coordination of the various components of production and distribution, and organizing through markets, meaning interaction among parties governed by the balancing of supply and demand in the marketplace.
It is misleading to place hierarchy in opposition to market mechanisms, because this implies that the two are in some consistent way substitutable for one another. In practice, they tend to be complementary. At the most rudimentary level, markets can be seen as an organic innovation to facilitate economic exchange between individuals with minimum social overhead and no social direction other than to permit participants to pursue their own welfare. In this characterization, hierarchies evolve mainly for coping with imperfections in markets such as occur in the case of public goods that, for various reasons, are socially desirable but unlikely to be provided by individual investors in a market setting (e.g., national defense). Although the two are not directly substitutable for each other, there is an important sense in which societies choose to organize production and distribution predominantly around either hierarchies or markets.
An extreme example is the contrast between command economies and market economies, exemplified by the two sides of the Cold War. A more useful contrast for this discussion is that between vertical integration of the production and distribution of products and services, conceivably under a single company throughout the value chain, and a disaggregated value chain in which products and services are passed along from one organization to another through sale in markets, being "assembled" as they go, before reaching the final consumer. Where workable, markets are usually considered to be more efficient than hierarchies, and in a condition of open choice among options, hierarchies emerge mainly as a consequence of market failures. The political position that derives from this is that the market should govern as much of production and distribution as possible, while hierarchies serve as a kind of back-up in the event of market failure. Given that we have already segregated civil society from this discussion, it is not necessary to argue this point in detail at the level of the organization of enterprise.
It has been argued that computers and communications technologies can precipitate a shift from hierarchical organization of production and distribution to more market-like forms. The logic of this argument is that an infrastructure of computing and communication technology providing 24-hour access at low cost to almost any kind of price and product information desired by buyers would reduce the informational barriers to efficient market operation, and presumably facilitate a shift from hierarchy to market organization. If this infrastructure also provided the means for effecting real-time transactions such as sales based on such information, whole classes of intermediaries such as sales clerks, stock brokers, travel agents, and so on, whose function is to provide an essential information link between buyers and sellers, might be eliminated. Removal of intermediaries would not only prune the existing hierarchies that now govern buying and selling, but would also reduce the costs in the production and distribution value chain, further encouraging the shift toward markets.
Organization structure refers to the social organization of authority and responsibility assignments within groups with production objectives, predicated on the needs of specialization and the division of labor. A long-standing concern in organizational structure is the centralization versus decentralization of decision authority within organizations. Research indicates that use of computers and communications technologies per se has neither a centralizing nor a decentralizing influence. The prevailing organizational context in which the technologies are used is a much stronger influence on whether organizations centralize or decentralize than is the technology, which can support either type of arrangement. In general, use of these technologies tends to reinforce existing organizational tendencies, and in some cases it can be a powerful tool in facilitating organizational change. An organization that wishes to decentralize can implement information systems that provide lower-level managers with the information necessary for decentralized decision making. Organizations wishing to centralize can use the technologies to facilitate surveillance by top management over lower-level managers. Computers and communications technologies have been used to downsize middle management when there is congruence between the centralization of authority in the organization and the centralization of control over the use of computing resources. In organizations where both are decentralized, middle managers use the technology to enhance their value to senior management and to maintain their relative influence and size.
Computers and communications technologies have enabled the emergence of the trans-organizational enterprise. As producing organizations downsize and
outsource in order to focus on their core competencies and shed organizational weight and overhead, they become less capable of providing full-range products and services for the markets they supply. This is particularly true in the case of highly complicated, limited production products (e.g., large information systems and commercial aircraft) but also extends to more traditional manufacturing sectors. The "manufacturer" in many cases is not deeply involved in the actual fabrication of parts, or even in assembly. These arrangements are quite different from those represented by the large, integrated manufacturers who bought large numbers of supplies and parts from small suppliers. These arrangements often entail interlocking minority ownership arrangements, long-term supply/buy commitments, sharing of product and market information, cooperative design and manufacturing, and risk sharing. They depend on rapid and effective communication and information links among partners, supplied through information technology.
The trans-organizational enterprise can alter alliances and allegiances and create community at various social levels. For example, Detroit was the center of the global automobile industry while the U.S. automobile industry dominated the world. As Japanese competition emerged, the result was not a shift in centers (e.g., Detroit to Yokohama), but rather, industry realignment around a global market for "world cars," with manufacturing and assembly taking place in many countries. This shift has had many ramifications, including imbalances in merchandise trade, use of transborder transfer pricing to manipulate national taxation systems, and the shifting patterns of employment as jobs are sent "offshore." While not directly causative of these changes, computers and communications technologies have made them possible. The technologies also have brought about a blurring of the traditional links between employment and place. Large U.S. firms have set up major information processing centers in India and the Philippines, where skilled programming talent can be found at low prices. These foreign workers "commute" to work as "virtual" guest workers over the satellite and fiber-optic links that tie them to their employers. In a sense, the technologies have eliminated national borders, and in the process have made many national labor policies moot.
Many aspects of enterprise rely on mediation to function correctly within the larger production system. For example, many supply and distribution chains use the mediation of brokers, expediters, agents, and so on. Mediators often survive through successful exploitation of information asymmetries, such as the knowledge necessary to match a cargo carrier's excess capacity to the needs of a onetime shipper. In principle, computers and communications technologies can facilitate disintermediation by linking the parties along the value chain and reducing information asymmetries. For example, these technologies have facilitated
the evolution of enhanced "mail order" retailing, in which goods can be ordered quickly by using telephones or computer networks and then dispatched by suppliers through integrated transport companies that rely extensively on computers and communications technologies to control their operations. Nonphysical goods, such as software, can be shipped electronically, eliminating the entire transport channel. Payments and reconciliation can be done in new ways. The result is disintermediation throughout the distribution channel, with cost reduction, lower end-consumer prices, and higher profit margins.
Another, more subtle example can be seen in heavily computerized "warehouse" department stores that have taken market share from traditional department stores by capitalizing on the cost-saving advantages of advanced supply chain management. In the simplest form of this model, the end retailer and the manufacturer disintermediate the distributors that once sat between them, driving associated costs from the system. In the more extensive model, the retailer disintermediates itself from the traditional job of retailing (buying wholesale and reselling) by never taking possession of the merchandise. In this model, the manufacturer receives point-of-sale (POS) information on sales of its products at a given location directly from the retailer, updates its planning for manufacturing and distribution, sends a restocking order to distribution, dispatches the goods to the retailer location, and stocks the shelves. The manufacturer still owns the merchandise, which the end-consumer then picks up and carries through the point-of-sale terminal, beginning the cycle again. The retailer is no longer a retailer in the traditional sense of the term, but rather an "access provider" through which end customers are delivered to the manufacturer's point of presence, and who charges the manufacturer an "access fee." The traditional distributors and their costs are disintermediated in this model, and transaction costs associated with the retailer's purchase of goods from the manufacturer are eliminated.
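The restocking cycle just described can be sketched in a few lines of Python. This is a minimal illustration, not an implementation from the source: the class names and the replenishment rule (restock to capacity whenever stock falls below half) are assumptions chosen for clarity.

```python
# Sketch of a vendor-managed restocking cycle: the retailer forwards
# point-of-sale events to the manufacturer, which still owns the shelf
# stock, plans replenishment, and restocks the shelf directly.
from dataclasses import dataclass, field


@dataclass
class Shelf:
    """A retail shelf whose stock the manufacturer still owns."""
    capacity: int
    on_hand: int


@dataclass
class Manufacturer:
    """Receives POS data and dispatches restocking shipments."""
    shipments: list = field(default_factory=list)

    def on_pos_event(self, sku: str, shelf: Shelf, qty_sold: int) -> None:
        shelf.on_hand -= qty_sold
        # Illustrative rule: replenish to capacity once stock dips below half.
        if shelf.on_hand < shelf.capacity // 2:
            restock = shelf.capacity - shelf.on_hand
            self.shipments.append((sku, restock))
            shelf.on_hand += restock  # goods arrive and are shelved


shelf = Shelf(capacity=100, on_hand=100)
maker = Manufacturer()
maker.on_pos_event("widget", shelf, qty_sold=60)  # sale triggers a restock of 60
```

The point of the sketch is the direction of the information flow: the sale event goes straight from the point of sale to the manufacturer, with no distributor or retailer purchasing decision in between.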
These changes raise interesting questions about the possible shifts from hierarchies to markets in production and distribution. The examples above appear to reduce the operation of markets and result in vertical integration or at least vertical channel partnerships, wherein suppliers and retailers develop close and perhaps collusive relationships. Markets could reemerge only if exclusive vertical partnerships proved unsustainable. The crux of the arguments favoring markets over hierarchies revolves around the powerful attractor of reduced transaction costs and the possibility of greater innovation in transactions than is possible under the constraints of hierarchy. It has been argued that economic organizations predicated on ongoing auction, negotiation, and coalition-building behaviors, without the overhead and conservatism of hierarchy, could unleash an unprecedented wave of economic growth and innovation.
Still, the question of whether hierarchy can or should be replaced by markets
is open. Many interorganizational information networks have been used to forge stable and long-lasting relationships among selected economic partners, quite the opposite of the impersonal "spot market," and the ecology of these networks can be quite complicated. For example, travel agents theoretically are able to move freely among various computerized airline reservation systems, and airlines can easily disintermediate agents by building direct connections to passengers. In fact, agents tend to lock into one system as their primary reservation aid, and build their business around that system. The companies that own the reservation systems were originally owned by the airlines, and they kept tight hold on travel agents through incentives and constraints, such as forgiving costs of terminal rentals and discouraging agents from using more than one system. This behavior was so common that federal regulation was enacted to make reservation systems "neutral" in order to reduce anticompetitive practices. In this case, the hierarchy of the government stepped in to ensure the vitality of the market, which was threatened by the use of computers and communications technologies.
There are important questions about who benefits from movement toward markets in place of hierarchies. Those who occupy key positions in existing hierarchies are likely to fight loss of power, and old hierarchies will probably give way to new ones as the new market structures become understood and exploitable for the long-term advantage of particular parties. Already the consumer credit network industry that has made possible "profile" advertising has come under severe criticism from privacy advocates and consumer groups for invasion of privacy and appropriation of consumers' "information property." It is not clear that the powerfully seductive vision of the move from hierarchies to networks in economic organization fully accounts for everything that must change.
The fundamental question of organizational politics is who gains and who loses from change. Some have predicted that computers and communications technologies will shift power to technocrats; others have suggested that use of these technologies will strengthen pluralistic features of organizations by providing different interest groups with the tools to respond to their opposition. Most research suggests that such power shifts are rare, and that organizational elites typically use their control over resources to shape the acquisition and application of computers and communications technologies in ways that perpetuate their power. However, there are exceptions. The availability of data in electronic form can empower new participants in decision-making processes, while the spread of networked PCs, e-mail, and other technologies provides opportunities for new actors to gain influence. The unresolved issue is whether, and to what degree, these technologies alter systematically the balance of political power within organizations.
The use of computers and communications technologies enhances the ability to organize, maintain, and retrieve information needed for decision making and allows modeling in which large amounts of information can be mined to provide insights that help decision makers evaluate different scenarios. Applied to group decision making, these technologies seem to enable efficient handling of complicated decision problems that are not easily managed without technological support. More broadly, the use of these technologies in decision making appears to have the effect of enforcing a stronger discipline on the process of deliberation, with more careful attention to underlying assumptions and sensitivities.
Computers and communications technologies allow individuals to communicate with one another in ways complementary to traditional face-to-face, telephonic, and written modes. They enable collaborative work involving distributed communities of actors who seldom, if ever, meet physically. They permit individuals, groups, and organizations ready access to rich arrays of information, often in machine-readable form, that permits data exchange for local or remote processing without costly conversion. These technologies utilize communication infrastructures that are both global and always "up," thus enabling 24-hour activity and asynchronous as well as synchronous interactions among individuals, groups, and organizations.
Computers and communications technologies can change the nature of work by altering the quality of the work environment, the nature of job skills, and the quality of social interaction within the organization. One effect involves the levels of job stress and work pressure experienced by information workers. Some studies have found that automated systems decrease time pressure, while others suggest that the technologies have speeded up work and increased the level of stress and time pressure on workers. Other studies have found that the technologies have had a positive effect on workers' job satisfaction, sense of accomplishment, and interest in their work, as well as a greater sense of control over their work and enhanced status among coworkers and clients. With few exceptions, the technologies have not resulted in "deskilling" of work, but instead have expanded the number of different tasks that are expected of workers and the array of skills needed to perform those tasks. The exceptions appear in certain types of factory floor and clerical work, in which the use of the technologies has resulted in some deskilling if not elimination of such jobs.
Social interaction in organizations has been affected by the use of computers and communications technologies. Peer-to-peer relations across department lines have been enhanced through sharing of information and coordination of activities. Interaction between superiors and subordinates has been made more tense
because of social control issues raised by the use of computerized monitoring systems, but on the other hand, the use of e-mail has lowered barriers to communications across different status levels, resulting in more uninhibited communication between supervisors and subordinates.
The impacts of computing on work life have been basically positive with respect to individuals' job satisfaction, sense of accomplishment, interest in their work, control over their work, and social interaction with peers and superiors. However, computers and communications have also speeded up work, increased job pressure and time pressure, deskilled low-end clerical jobs, and eliminated certain clerical and factory-floor jobs.
There has long been concern over the impacts of computers and communications on employment. The ability of computers and communications to perform routine tasks such as bookkeeping more rapidly than humans led to concern that machines would replace workers. The response to this argument has been that even if these technologies eliminated some jobs, others would be created, particularly for computer professionals, and that growth in output would increase overall employment.
The net effect of computers and communications on employment is still a matter of considerable debate. Employment in particular jobs, such as telephone operators and bank tellers, has undoubtedly decreased with the increased use of computerized switching systems and automatic teller machines. Such clear-cut cases are uncommon, however. The statistical measures used to determine employment conditions are not precise enough to isolate the effects of one factor such as the use of computers and communications. After decades of computerization of all sectors of the economy, the United States has generally achieved full employment during periods of economic expansion, while experiencing cyclical unemployment during periods of recession. The ratio of public-sector to private-sector employment has not changed much either. It is more likely that computers and communications have led to changes in the types of workers needed and in wage rates for different occupations rather than to changes in total employment. For example, research shows that computer users receive higher pay than noncomputer users in the same jobs. It does appear that use of computers and communications has resulted in a shift of jobs from the United States to other countries, particularly to Asia but also to other generally lower-wage locations.
Establishment and Maintenance of Civil Society
Civil society is the social order in which the "rules of the game" are articulated and enforced for individuals, groups, organizations, and sectors. Our primary
focus in this discussion is on the institutions involved in the construction of democratic government, and on the relationships between the people and government. We also address the process by which individuals are elected and appointed to serve in institutionally defined positions of influence and authority in the democratic governance structure.
Computers and communications technologies have taken on a highly visible role as tools of government and as symbols in the ongoing debate about how government ought to function. There has been considerable speculation over whether use of these technologies can and will alter the functioning of democratic government. There are a variety of forms of democratic government. We choose to focus our discussion on the constitutional form of democratic government found in the federated structure of the United States. The United States is the oldest of the large democracies and the most extensive user of computers and communications technologies; effects on its democratic institutions should by now be apparent.
Our discussion covers three areas: effects on the fundamental structure of democratic institutions predicated on separation of powers and the concept of federalism; effects on the relationship between government and the people; and effects on the processes of deliberation and constitutional operation. It also touches on risks inherent in high levels of dependence on technology.
The U.S. form of democratic government is predicated on two key assumptions. The first is the separation of power horizontally across the key functions of government (the legislative, executive, and judicial) in order to ensure that each branch holds the others in check. In principle, differential use of computers and communications technologies by one of the branches could undermine the checks, thereby providing substantive, procedural, functional, or symbolic advantage compared to the other branches. The second assumption is that power should be separated vertically in order to keep as much of the authority of government as close to the citizen as possible. In principle, the construction of national information systems for criminal justice, taxation, welfare, and so on might enhance the power of the central government in comparison to the regional and local governments.
The introduction of computers and communications technologies in the U.S. federal government was accompanied from the start by speculation that power would accrue to the branch with the most technology. Given the preponderance of technology in the executive branch, one would expect it to gain advantage over the legislative and judicial branches. In fact, no such power shift has occurred.
The separation of powers doctrine ensures that each branch has separate functions, that each is constitutionally and politically independent of the other, and that each has inviolate recourse through which to check the others. Computers and communications technologies do not and cannot fundamentally change these constitutional relationships. Three examples serve to illustrate:
• Example 1. Assume that, as a result of its greater computing, information, and analytic capabilities, the executive branch gains power over the smaller, less experienced, and diffuse bureaucracies supporting the legislative branch. The legislative branch can limit and control executive branch computerization by stopping the purchase of new computer systems through legislation, by strangling the procurement process through audits and inquiries, and by raising politically damaging questions of faulty procurement, cost overruns, mismanagement, and other evils resulting from executive computerization. The legislative branch can also request data from executive agencies, which are usually willing to comply in exchange for favorable treatment of their appropriations. Finally, the legislative branch can buy its own computers, develop its own information systems, and operate its own analytic models with its own staff. Through these mechanisms, the legislative branch can readily establish parity with and independence from the executive branch.
• Example 2. Assume that the executive branch tries to influence judicial review or overload the judicial branch with data from its vast stores of computer databases. The judiciary is the least computerized of the three branches of government and so is considered most vulnerable to the information that the executive branch can amass in support of its legal and policy preferences. The judiciary, in response, can use its tremendous power over legal proceedings to hold the executive branch to answer for its actions. The judiciary can grant or deny standing of parties, can determine the materiality of information, and can in effect declare all or part of the executive branch's information to be "non-information" and therefore inadmissible in any of its proceedings. The judiciary, alone among the branches, has the power to decide what information "is" within its own house.
The judiciary can also force the executive branch to provide the information it wants, when it wants it, and in the form it wants it, regardless of whether the information yet exists or what it costs the executive to get it. Finally, where violations of federal law may be involved the judiciary can override executive branch attempts to withhold information under claims of "executive privilege." In summary, the judiciary's powers overwhelm any advantage the executive branch may gain from computers and communications technologies.
• Example 3. Assume that the legislative branch seeks to gain advantage over the executive through the use of computers for oversight. Even if an "ideal" computerized system for legislative oversight were in place, the executive could stall in the provision of information, could provide misinformation and disinformation, and could refuse outright to provide information requested by the
legislative branch. In such a confrontation, only the judiciary would have the power to mediate the disagreement. The most powerful response of the executive branch is the ability of the executive to take his or her viewpoint directly to the citizens, thereby marshaling popular support and potentially nullifying the effects of oversight by the legislature. The use of computers and communications technologies is unlikely to produce power shifts from the executive to the legislative branch in this area either.
The branches are able to check one another in virtually any case where computers and communications technologies play a role, simply because the powers of democratic institutions transcend whatever advantage the technologies can confer.
Another possibility is that acquisition of vast computer databases could give one level of government exceptional power over other levels. The most common speculation has been that the central government gains power at the expense of the regional and local governments. There is no evidence that this has happened, and moreover, it is unlikely that such a shift could happen. For one thing, the central government does not need computers and communications technologies to gain a power advantage because it already has the supremacy of federal law on its side. The states have wide powers of autonomous action (i.e., the residue of powers not conferred by the Constitution upon the federal government) but not independence. Also, intergovernmental relationships seldom involve the federal government "ordering" state and local governments about. Instead, most federal actions affecting states involve the federal government paying for national programs, such as unemployment and social welfare, that are implemented by state or local governments, or holding out carrots and sticks to induce state and local governments to adopt particular policies or programs.
It is conceivable that the careful use of computers could permit the federal government to be more heavy-handed in its superior role by enabling federal agencies to better monitor state compliance with federal expectations. However, the current political trend is in the opposite direction. The dominant trend of federalism is toward devolvement of funding, administration, and oversight responsibility to the state and local level. As in the case of separation of powers across the branches of government, the distribution of power across the levels of the federated government system is itself a central part of democratic governance and the institutions that ensure such governance. Use of computers and communications technologies is highly unlikely to affect this as time goes on.
In the foregoing discussion we address the impacts of computers and communications technologies on democratic institutions. At a more fundamental level, there is concern that these technologies can affect the relationship between
government at all levels and the citizens of the country. A central principle of the U.S. form of democratic government is the desire to protect citizens from government tyranny. At issue is whether the use of computers and communications technologies could give government the power to overwhelm constitutional safeguards against abuse of individuals or groups. Creating a well-balanced distribution of power between individual citizens and the government created by and for those citizens is a central problem in the maintenance of civil society. The issue is not whether individuals are imperiled by a faceless government armed with computers, but rather whether duly elected representatives, working through appropriate constitutional mechanisms, will engender computer-dependent abuse of individual rights.
Most of the concern over this issue is expressed in the debate about computers, databanks, and personal privacy. There has been considerable speculation and discussion of scenarios about the potential problems for privacy due to the computerization of government record-keeping activities, but there has been little empirical evaluation of the privacy-related consequences of the use of computers and communications technologies. The debate has at times been largely ideological, but the underlying capability is clear: with enough data and the right computer systems, authorities can monitor the behavior of large numbers of individuals in a systematic and ongoing fashion. The issue is no longer what authorities can do, but what they choose to do in surveillance of the population.
Privacy is a politically sensitive topic, but as a concept in society and law it is surprisingly underdeveloped. Existing uses of computerized databanks have not yet abridged personal privacy sufficiently to require constitutional action or even substantial Supreme Court action on the matter. Nevertheless, the privacy issue is being played out in the realms of rhetoric, legislation, and executive action. The controversy is likely to persist due to the creation and interconnection of large systems containing personal information and the relatively weak enforcement of existing privacy legislation.
Computers and communications technologies do not appear to be serious agents of change in democratic government, at least as seen thus far. However, there is a chance that these technologies will have a very substantial influence on the political processes that lead to the election of representatives and the mobilization of national political movements. Much has been written about the effects of communications media, particularly the mass media of radio and television, on the processes by which public opinion is formed and guided, and on the political contests that determine who will govern. The addition of advanced forms of public opinion sensing and computerized direct-mail systems has created a package of tools that are transforming the nature of the political process. There is concern that the extensive manipulation of public moods through the use of
technology will decrease the electorate's overall awareness of the issues, and increase the tendency toward the election of individuals on the grounds of media image and single-issue direct-mail advertising. The ultimate concern is the surrender of the role of political opinion making, and thereby the mobilization of political bias, into the hands of technicians who stand between actual political leaders and the electorate. This can result in reduced influence of the electorate over political leaders, and potentially, the means for wholesale distortion of the issues by political leaders armed with skilled "image-making" technocrats.
The impact of computers and communications on political fund raising and campaigning could prove to have significant effects on the political process, not because of any particular weakness of the Constitution itself or as a result of changes in the structure or function of the governmental system, but because changes would be part of larger effects of automation on the mobilization of bias among interest groups in the population. The concept of constitutional democracy depends on an informed electorate, capable of discriminating among candidates based on their overall strengths. Critics contend that extensive use of television in campaigns has decreased the quality of debate and reduced attention to the issues. Highly targeted, single-issue fund raising and campaigning conducted through computer-assisted direct mail or targeted telephone solicitation could contribute to such a trend. The Constitution itself addresses only the major offices and issues of enfranchisement, and not the protocols of party behavior or campaigning. It is possible that computing-based changes in the conduct of political contests will eventually have an effect on the ways the Constitution is interpreted and implemented.
An orthogonal view of technology and its impact on social life implies more subtle and possibly more important concerns for democratic government. This view engages concern over the application of computers and communications technologies to mass surveillance, national information systems, and political campaigning, and in particular to the question of what is really important in the determination of who should govern. This concern is manifest in Aldous Huxley's Brave New World, in which technological advancements were deliberately, and to a large measure democratically, applied toward elimination of need and stabilization of the social order. The new world was the epitome of successful technocracy, to the point that circumstances that gave rise to jealousy were preempted through ubiquitous use of technology. Technology was used not to give expression to malicious and destructive tendencies, but rather to support well-intentioned efforts to eliminate the causes of strife. In the process, the removal of strife eliminated existential choice, and thereby, freedom. Technology maximized efficiency in exchange for unavoidable limitations on individual privacy, choice, and freedom.
This story is useful for considering the ultimate impacts of computers and communications technologies on democratic government. The world depicted by Huxley evolved over a protracted period of time, and each step along the way
posed a choice: to live with the contradictions of the present, or to remove them with technical solutions. To the extent that democratic government is threatened by the application of information technology, the threat does not come from weaknesses in the Constitution or the government it shapes. Rather, the threat comes when the governed fail to protect and defend their rights to personal privacy. Whether the growing use of information technologies in mass social surveillance or in partisan political contests is leading to this end remains to be seen. However, this analysis gives sufficient evidence to warrant renewed concern and to prompt increased monitoring of computing activities conducted by government or used in political processes.
A civil engineer working on the large California Water Project, which brings water from the Sacramento/San Joaquin river delta to Southern California, once remarked, "If we don't build this canal, we won't need it." The creation of vital infrastructure ensures dependence on that infrastructure. As surely as the world is now dependent on its transport, telephone, and other infrastructures, it will be dependent on the emerging information infrastructure. In a sense, this is an inevitable price of technological progressdependency occurs only when the thing depended on is very valuable to the dependent. At issue here is the character of dependence that is likely to evolve, and the institutional responses to that dependency.
Dependency on technology can bring risks. Failures in the technological infrastructure can cause the collapse of economic and social functionality. Regional blackouts of electricity service in the Northeast during the 1970s and 1980s resulted in significant economic losses. Blackouts of national long-distance telephone service, credit data systems, electronic funds transfer systems, and other such vital communications and information processing services would undoubtedly cause widespread economic disruption. Dependency can also result in unanticipated, downstream consequences in the form of negative externalities such as pollution. Reliance on nuclear weapons as a key component of strategy during the Cold War resulted in an at-any-cost development and production program that left large areas of the United States terribly polluted, perhaps so badly that they must eventually be entombed and sacrificed as a cost of the war. Although it is difficult to imagine dependence on information technology producing an equivalent environmental catastrophe, toxic materials used in the manufacture of semiconductors and other hardware components have polluted manufacturing sites throughout the country that must now be cleaned up.
Perhaps most important, high levels of technological dependency create more than the risk of economic difficulty from failure. When technologies are instrumental in the construction and maintenance of institutions, and workable substitutes are not available in the event of failure, institutional collapse is possible. A
useful example of this is the uni-modal transportation infrastructure of the Los Angeles region. The entire region is dependent on a single transportation infrastructure: vehicles on roadways. The failure of any major component of that infrastructure (fuel availability, roadways, traffic controls) for any lengthy period of time would bring the entire region of 12 million people to a halt. The Los Angeles region is at risk not only because the existing infrastructure constitutes a single point of failure capable of threatening the region, but also because commuting long distances to work using that infrastructure is a widespread and accepted cultural norm. The failure of transportation would strike at the heart of a nondiscretionary social institution. The collapse of two bridges on the Santa Monica Freeway during the 1994 Northridge earthquake was minor given the hundreds of miles of freeway in the region, yet the cost to the city's economy was at least $1 million per day during the reconstruction, even after every available alternative transport mode and scheme was implemented.
In summary, technological dependency is not necessarily something to be avoided; in fact, it is probably impossible to avoid altogether. What must be considered is the exposure brought from dependency on technologies with a recognizable probability of failure, no workable substitutes at hand, and high institutional and social costs as a result of failure.
Conclusions and Implications
Computerization Is a Complex Social Phenomenon. The process of automation involves more than the acquisition and implementation of discrete components of technology. Automation is a social phenomenon involving a "package." The adoption and diffusion of information technology are influenced by both demand-pull and supply-push factors. Demand forces dominate the evolution of large, complex, custom applications, while supply forces appear to exert a major influence on the evolution of smaller packaged applications.
The Impacts of Computers Are Seldom as Predicted. Common predictions about the effects of using information technology frequently fail to materialize as expected. The failure of a prediction is not a signal that the outcome is negative. Rather, it is a sign that the impacts are richer and more complex than anticipated. Computerization has not resulted in widespread job displacement of middle managers because it has actually increased their job scope and roles in many cases. And, while management information systems bureaucracies do not fit the ideal type of the service bureaucracy, they frequently produce leading-edge applications of the technology. The important lesson from the research, then, is that failures of expectation and prediction are commonplace in the world of automation. The technology and its applications are best characterized as evolutionary
in impact rather than revolutionary. Indeed, many organizational managers desire stability and work against surprises. Therefore, new information technology is generally introduced slowly so that it can be adapted to meet the organization's needs, and so that the staff can adapt to the technology's introduction.
Technology Is Political. Rational perspectives on change seldom acknowledge the explicitly political character of technology. They emphasize organizational efficiency, concentrate on the positive potential of technology, and assume organization-wide agreement on the purposes of computing use. In contrast, political perspectives see efficiency as a relative concept, embrace the notion that technology can have differential effects on various groups, and reflect the belief that organizational life is rife with social conflict rather than consensus. From a political perspective, organizations are seen as adopting computing for a variety of reasons, including the desire to enhance their status or credibility, or simply in response to the actions of other organizations. Moreover, applications of the technology can cause intra-organizational conflicts. Decisions about technology are inherently political, and the politics behind them may be technocratic, pluralistic, or reinforcing, with different consequences for different groups in each case.
Political perspectives are essential for understanding technology's role in organizations. Technocratic politics helps explain the relationships between the technologists and end-users; pluralistic politics helps explain the relationships among various user interests vying for access to computing resources; and reinforcement politics helps explain the effects of computing on power and authority in organizations. Reinforcement politics has proven to be important in explaining decisions about computerization in organizations, wherein the technology is used primarily to serve the interests of the dominant organizational elites. Reinforcement occurs sometimes through the direct influence of the elites, but more often it occurs through the actions of lower-level managerial and technical staff in anticipation of the interests and preferences of the elites. The political mechanisms used to determine the course of organizational automation will vary, depending on the broader political structure of the organizations themselves, and these mechanisms tend to remain stable over time.
Management Matters in Complex Ways. Prescriptive literature is full of admonitions about the importance of management in effective use of information technology. However, empirical research into the role of management and the efficacy of management policies is lacking. Research of the Irvine School has demonstrated the crucial role of management action in determining the course of automation, even in cases where major environmental changes were present. Moreover, there are distinct patterns of management action that yield different outcomes. Effective management of computers and communications technologies is much more difficult than suggested, however. The effects of specific policies are contingent on the state of computing management as well as on the characteristics
of the organization. Policies recommended in the practitioner literature have proven to be associated with serious problems in the computing environment, and it is unclear whether the policies are not working, whether they have not yet had time to work, or whether they work only under special conditions.
Research Requires the Use of Multiple Perspectives. Review of the research shows that systematic research into social impacts requires understanding and use of multiple disciplines for viewing the interaction of technology, organizations, and society. The work reviewed has used perspectives from the social sciences (political science, economics, sociology, psychology, communications, and management) and from the social analysis of computing in the information and computer sciences. Perhaps more important than the multidisciplinary character of this research, however, is the value of drawing on multiple intellectual perspectives when exploring fundamental causes of social change.
All meaningful explanations of the social aspects of the use of information technology proceed from an ideological base. All scholars have interests and values that influence the theories and explanations they construct. These interests are important not only in prescriptive work; they also figure markedly in the descriptive and explanatory work in the field. By recognizing the fact that explanations are at least in part ideological, and that ideology is an essential and required component of social analytic work, we are able to "triangulate" on a set of facts from several explanatory positions. This approach permits explaining social phenomena more comprehensively and precisely by gathering insight from various points of view, and using contrasting elements from various perspectives to test the intellectual coherence of alternative perspectives. The multiple-perspectives approach leads to increased self-consciousness during observation and explanation, and increased precision, because explicit perspectives can be examined in light of the facts and other perspectives for explaining the facts.
The dominant analytical perspectives in the computer and communications field have traditionally been tied to the supply-push world of technical development, coupled with a rational-economic interpretation of managerial behavior. These explanatory perspectives have considerable power and have yielded useful results. However, they have distinct limits. Technological determinism and narrow managerial rationalism do not explain the variance observed in the patterns and processes of adoption and routinization of information technology in various tasks, and they fall far short of explaining the considerable differences in successful use of the technology across organizations. Indeed, such perspectives are at a loss to explain the fact that "success" in the use of information technology is singularly elusive. As economist Robert Solow has stated so succinctly, the effects of the information revolution have shown up everywhere but in the profit figures.
There certainly are technical and economic-rational elements to be considered
in understanding the use of information technology in organizations. Missing, however, are the more finely grained explanations of volition in shaping the behaviors of those who adopt and use the technology, or who react to the effects of its use. While it is clear that information technology has brought major opportunities for change to organizations, it is the individuals, and the features of the organizations within which they work, that determine whether given technologies are adopted and how they will be absorbed into the complex of factors operating in modern organizations. Organizations are political, social, and managerial constructions that involve interactions among competing and cooperating groups, each of which seeks to pursue some mix of its own and common interests, within the framework of broader organizational and social constructions of what is appropriate and expected. Since the true consequences of using information technology are unforeseeable, the actions of individuals in organizations are always based to some extent on faith, social pressure, perceived political advantage, and other factors, in addition to "cost-benefit" calculi covering applications to given activities.
Research Requires a Critical Perspective. Research indicates that there is often a gulf between expectations and subsequent experience with the use of information technology. It is important, therefore, that research proceed from a critical stance. It should be concerned with challenging existing ideas, examining expectations about technology and organizations, and counteracting unsubstantiated biases in both. It should focus particularly on the important role played by ideology and expectations in the use of information technology. The expectations of managers and others in organizations influence the choices they make in adopting and using technology. Managers who believe in technological solutions are likely to introduce information technology on faith, while discounting other considerations. And experiences with technology shape future expectations about the efficacy of technology in meeting organizational needs. The ongoing relationship between expectations and outcomes is a crucial part of understanding the dynamics of use of information technology in organizations.
In taking a critical stance, it is useful to start from common expectations and accepted explanations, and then attempt to corroborate them with empirical evidence. When the corroboration is incomplete, explanations can be modified, expanded, or displaced in order to develop a more accurate fit of theory with the facts. The combination of the critical stance and the multiple-perspectives approach reveals biases inherent in popular claims and provides leverage to think critically about alternative explanations.
Social Analysis Requires Innovation in Research Design. The Irvine School has produced methodological as well as substantive contributions. Most are innovations in research design that are especially suited to social analysis. The basic research strategy of the group is that the scale of research has to match the
scope of the problem one seeks to address. Large, complex, and multifaceted problems require similar approaches. Given customary constraints (shortage of knowledge, resources, and talented people), one is challenged to focus both energy and effort.
Five recommendations can guide research. The first is to focus on leading adopters of the technology when studying the effectiveness of policies for managing computing. This focus enables determination of what works and what does not in the process of innovating, and can lead to advice that will bring others up to the level of the leading performers. The second, when studying policies, is to sample sites at the extremes of policy application (e.g., high and low centralization, insignificant and extensive user training). This approach maximizes the variance on the policies and provides a better indication of the basic direction in the relationships. The third is to use census surveys to investigate the extent of a technology's diffusion, the extent of its use, and the nature of its organizational impact. In addition to elimination of sampling bias, a census provides a good indication of the distribution of patterns of diffusion throughout a population of organizations. The fourth is to concentrate on long-term study of organizational and social impacts. Such impacts cannot be studied over the short term because changes occur slowly, the effects of the use of technology are indirect more often than direct, and the organization and the technology are interactive. The fifth is to use a mix of methods (quantitative and qualitative secondary data analysis, survey research, longitudinal research, international comparative research) and a mix of measures in the research in an effort to achieve better measurement and to triangulate the results of various studies.
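The second recommendation, sampling sites at the extremes of a policy dimension, can be sketched in a few lines of code. The following is a purely hypothetical illustration (the function name, the site data, and the "centralization" scores are all invented for this example, not drawn from the Irvine School studies): given organizations scored on a policy variable, select the lowest- and highest-scoring sites so the sample maximizes variance on that policy.

```python
# Hypothetical sketch of extreme-groups sampling (recommendation 2):
# choose study sites from the top and bottom of a policy dimension
# (here, an invented 0-100 score for centralization of computing).

def sample_extremes(sites, key, k):
    """Return the k lowest- and k highest-scoring sites on `key`."""
    ranked = sorted(sites, key=lambda s: s[key])
    return ranked[:k], ranked[-k:]

# Illustrative (fabricated) data: five organizations and their scores.
sites = [
    {"org": "A", "centralization": 12},
    {"org": "B", "centralization": 88},
    {"org": "C", "centralization": 47},
    {"org": "D", "centralization": 95},
    {"org": "E", "centralization": 5},
]

low, high = sample_extremes(sites, "centralization", 2)
print([s["org"] for s in low])   # the two most decentralized sites
print([s["org"] for s in high])  # the two most centralized sites
```

The design choice mirrors the text: middling sites are deliberately excluded, because contrasting the extremes gives the clearest indication of the basic direction of a policy's relationship to outcomes, at the cost of saying little about the middle of the distribution.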