
C
Commissioned Papers







Infrastructure: The Utility of Past as Prologue?

Amy Friedlander
Corporation for National Research Initiatives, Reston, Virginia

NOTE: The opinions and views expressed herein are those of the author and do not necessarily reflect those of the Corporation for National Research Initiatives (CNRI). © 1997 by the Corporation for National Research Initiatives. Reprinted by permission.

In 1890, advocates of direct current (DC) electric power systems employed alternating current (AC) to electrocute first a dog and then a condemned criminal at the Auburn (New York) state prison in a flamboyant attempt to demonstrate that AC was unsafe. The incident is perhaps the best-known episode in the so-called "War of the Systems," which came to an end with the invention of the rotary converter in 1892, which enabled existing DC distribution systems to be integrated into the more efficient AC systems, and completion of the Niagara Power Project in 1895, which showed that large generating plants and associated transmission lines capable of meeting regional needs could, indeed, be built.1

Electric power is one of four infrastructure history studies sponsored by the Corporation for National Research Initiatives (CNRI). The others address railroads, telephones and telegraphs, and banking; a fifth, radio, is in progress. These studies collectively examine attributes of infrastructure through literature reviews in American history, economics, political science, and sociology. Initially, three questions were posed:

• When and how did take-off occur?
• What were the public and private roles?
• And, how did an infrastructure—characterized by access, "shareability," and economic advantage—emerge?

These questions worked well for the first three studies: rail, telegraphy/telephony, and electricity. But the unspoken assumption behind these questions is technology—the application of engineering and science to accomplish a purpose. In the course of the fourth study, banking, which turned out to be about information, we began to look at the problem of infrastructure somewhat differently, examining properties of ubiquity, interdependence, and reciprocity, independent of a given technology or set of technologies. This focused attention on the
organizational and management structures, which had formed important elements of all of the preceding studies but had not occupied center stage. Finally, all four of the infrastructures were subject to regulation during the New Deal. Indeed, much of the current deregulation is designed to dismantle the world that the New Deal put in place. From a policy perspective, then, the studies not only delineate more clearly what the relative and changing public and private roles were but also explain how the New Deal approaches to regulatory policy came to be, at least with respect to these four industries.

The Perils of Drawing Historical Analogies

The remainder of this paper discusses themes and observations common to all four of the subject infrastructures. But a word or two is necessary on the perils of drawing historical analogies. All four of these infrastructures obtained shape and form during a period of extraordinary growth. Between 1790 and 1850, the western boundary of the United States moved from the Mississippi River to the Pacific Ocean; population in the same period grew by an average of about 30 percent per decade. After 1860, population growth fell off to a mere 20 percent or so per decade until 1910.2 Urbanization increased dramatically after 1870. In 1890, the U.S. Census Bureau announced that the frontier was closed, and three years later, historian Frederick Jackson Turner proposed his frontier thesis, which was at least partially a eulogy for this period in American history. By 1920, more than half of the nation's population lived in cities. This meant that through the second half of the nineteenth century and into the twentieth, there was a growing concentration of demand for networked technologies such as water, power, and communications as well as for inter- and intra-regional transportation and financial services.

Moreover, the late nineteenth and early twentieth centuries saw prices fall, so that construction of the physical infrastructure of electricity, for example—the generating plants, transmission lines, power stations, and substations—took place in an environment of declining real costs, which could be passed on to consumers as lower rates while the companies still turned a profit. The flip side was wages. Real wages increased in the 1920s, the period in which recognizably modern suburbia proliferated, creating an environment of new construction and consumer demand that made extension of power and phone lines attractive, easy, and relatively cheap. Indeed, the residential market for electricity, with its demand in the evening hours, now became more attractive as a means of continuing to balance peak load. The distribution system was largely in place, and the marginal cost of the "last mile"—that is, connections to individual residences—was relatively low compared with the total construction cost of the system, including the generating plant and long-distance transmission lines. Economies of scale based on improvements to generating and transmission technologies were increasing, and the cost as well as the price of electricity fell.3
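The scale economics just described can be made concrete with a deliberately simplified sketch; the numbers are illustrative only, not figures drawn from the CNRI studies. If the shared plant (generating stations plus long-distance transmission) represents a fixed cost $F$ and each additional residential connection a small incremental cost $c$, the average cost per connected customer is

$$AC(n) = \frac{F}{n} + c,$$

which falls toward $c$ as the number of customers $n$ grows. With, say, $F = \$1{,}000{,}000$ and $c = \$20$, average cost is \$1,020 per customer at $n = 1{,}000$ but only \$120 at $n = 10{,}000$. This is the sense in which the marginal cost of the "last mile" was low once the shared system was in place.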

This stands in marked contrast to current debates over strategies for funding construction of the "last mile" for the digital communications infrastructure.

The second cluster of differences concerns public/private roles. At the birth of the republic, most people thought of the government as local—parish or county—and perhaps as the state. The federal government was a dim presence, known to most of the people in the form of the postal system. Eligibility requirements, imposed by the states, meant that many men could not vote; universal suffrage for men was not the norm until the 1820s, and women were first granted the vote at the territorial level—in Wyoming in 1869. (Wyoming granted women the vote at the state level in 1890 when it entered the Union.) African Americans, enslaved or free, were denied the vote until passage of the 15th Amendment in 1870, and again, restrictive eligibility requirements excluded most blacks from the vote, particularly in the Deep South, until the twentieth century.

The vote is the most direct means of broad participation in civil life. Just as this participation was circumscribed on a number of grounds in the nineteenth century, so, too, was the government's perception of its intervention in the life of its citizens. The Civil War (1861 to 1865) represented a massive intervention in daily life, calling up "volunteers" in both North and South; levying direct taxation; and affecting the economy significantly through the sale of bonds, federal regulation of the currency, and procurement of goods and services, thus laying the foundation for a number of private fortunes. But these were the exception rather than the rule. Even the transcontinental land grants to the railroads—which amounted to an area greater than California and Nevada combined—were modest relative to the total cost. Carter Goodrich concluded that combined state and federal financial assistance to the transcontinental railroads amounted to about 13.5 percent of their total construction cost, and that this assistance was substantially less than that provided for canals.4 Sustained intervention by the government in Americans' daily lives, as measured by per capita increase in government revenues and expenditures, appears to have increased consistently after 1890 and to have begun with local—not federal—authority.5

Federal regulation, marked by the organization of the Interstate Commerce Commission (ICC) in 1887 to regulate the railroads, was initially a forum for resolving disputes and was embraced by some figures in the industry as a way of setting uniform national policy in an environment of competing state policies. However, by the New Deal, the regulatory agency was seen as a more active instrument, and the government, rather than acting as a mediator, was seen as having a positive obligation to ensure a minimal standard of security for its citizens. This is obvious in both social and economic programs, e.g., the Social Security Administration and the banking reform that expanded the scope of the Federal Reserve, established the Federal Deposit Insurance Corporation, and regulated the structure of the industry.6

Thus, for most of American history, government was a distant presence. Research labs, like Thomas Edison's in Menlo Park, New Jersey, arose with
corporate support. His was dominated by the telegraph giant, Western Union, itself controlled by William Vanderbilt, who had financial interests in both rail and telegraphy. Not surprisingly, Western Union sponsored research into domains that resonated with its business goals. In 1873, Western Union announced its willingness to reward handsomely any inventor who could achieve multiplexing on its lines, thus increasing capacity without additional investment in the wired plant. This led directly to the simultaneous invention of the telephone by Elisha Gray and Alexander Graham Bell. Edison also came up with a receiver design at Western Union's behest. His lamp and associated DC generating and distribution system represented the most successful in a series of attempts to challenge the gas companies by producing superior interior illumination at a competitive price.7

Thus, the great nineteenth-century infrastructures arose by processes of competition, compromise, and consensus in which the public presence was, at best, a facilitator and at times a mediator.

What Falls Out?

Economic growth, deflation, and different expectations of government are three important differences that shaped the development of the infrastructures studied. But six themes do fall out as common to all four of the studies:

• Period of experimentation,
• First-order substitutions and feedback effects,
• Evolution of new structures,
• Not always the "best" technology,
• Natural monopolies, and
• Physical plant and service.

Each of these observations is discussed in greater detail in the next sections.

Period of Experimentation: Winners and Losers

All of these examples witnessed a period of experimentation in which there were winners and losers and in which a new technology or technologies per se were necessary but not sufficient for take-off. Railroads are the obvious example. Most of the technologies required for self-propelled steam engines on rails (i.e., locomotives) were developed by the 1830s, but take-off, measured by a leap in miles of rail construction, did not occur until the 1850s. There were, moreover, numerous small railroad companies that were gradually incorporated into larger corporate systems. But this was a surprisingly slow and at times contentious process that required decades. The standardization of the gauge is a case in point. By 1860, there were seven gauges for 30,626 miles of track. Of these seven, the standard, 4-foot, 8.5-inch
gauge represented the bare majority of mileage (53.3 percent). The second most common gauge was the 5-foot gauge, which was concentrated in the South, a region that was further isolated by insufficient intra-regional rail links, including a critical lack of bridges across major rivers. More generally, the effort by many southern cities to secure an urban hinterland resulted in highly localized lines emanating from the major cities but not connecting them.8

Three considerations drove conformity to the "standard" gauge, a precondition to interconnection: the big eastern railroad firms, eager to tap into the rapidly expanding markets in the West, particularly for western grain, which required transport across many states and many independent rail lines; the outbreak of the Civil War, which underlined the need for efficient east-west transportation and communications from both political and military perspectives; and finally, specification in 1862 of the 4-foot, 8.5-inch gauge for construction of the new transcontinental roads. Between 1870 and 1880, most of the companies outside the South adopted the uniform gauge; 3 percent merely built a third rail. Following a meeting among leading railroad interests in the South on February 2, 1886, the southern lines were brought into conformity with the 4-foot, 8.5-inch gauge.9

Standardization of gauge as well as increasing conformity in signaling, scheduling, and administrative procedures (e.g., cross-licensing access to track; through-ticketing and bills of lading; inventory control and management) enabled freight to flow across tracks and equipment controlled by competing interests. At the same time, mergers and acquisitions meant that many of the smaller companies built to service Portland, Maine, or Baton Rouge, Louisiana, were incorporated into larger entities, resulting in a pattern of many losers and a few big winners.

Similar patterns characterized both telephony and electricity. The Bell interests had enjoyed a 20-year patent monopoly in telephony, but with the expiration of key patents in 1893, the number of telephone companies serving local or regional markets exploded. Much to Bell's corporate dismay, the organization found itself confronted by a potential welter of services, technical standards, and lively competition. Indeed, in 1903, more than half of the nation's 1,051 incorporated towns and cities hosted more than one telephone company. In 1915, at least 40 percent of the telephone exchanges in cities with a population of 5,000 or more competed with another local exchange, and dual service continued to exist in parts of the Midwest and Plains until 1924. By the end of that year, however, AT&T, then under the jurisdiction of the Interstate Commerce Commission (ICC), had bought 223 of the 234 independent telephone companies subject to the agency's jurisdiction.10

Electricity tells a similar story. Until the widespread adoption of AC technology, service areas, limited by the short, one-mile range of DC distribution, tended to be relatively compact. It was fairly easy for a small electric utility to identify a market. Thus, between 1887 and 1892, 28 separate firms offering electric service were formed in Chicago alone—not including users who purchased independent, self-contained plants. In their analysis of the structure of the electric
utility industry in 1897-1898, Hausman and Neufeld conclude that most firms were only marginally profitable. Weaker firms found it difficult to raise capital, which is one reason put forth for the founding of municipally owned electric plants.11

The integration of DC into AC systems meant that economies of scale and scope were technologically possible as well as desirable, since high-voltage AC transmission over distance was more efficient but meant higher threshold costs. Hausman and Neufeld found that strong power companies offering a higher rate of return tended to be older and larger, to have bigger generators, to rely on hydro rather than steam, to have a strong commitment to AC generation, and to have a better load factor (i.e., the ratio of average to peak demand). These firms had the potential to enjoy substantial cost savings—conditions, the authors observe, "which would be expected to presage a major period of consolidation," and which did, indeed, occur. Power generation and transmission companies evolved notions of holding companies as a way to leverage capital and manage broad distribution. Led by Samuel Insull of Commonwealth Edison, industry executives cultivated state regulatory agencies that mandated standardized service and interconnection. By 1924, 7 holding companies controlled 40 percent of the nation's generating capacity, and 16 holding companies generated three-fourths of the nation's electrical power. Thus, even the publicly owned municipal utilities, which provided service to end-users, were dependent on private power providers and transmission line companies for access to bulk power.12

First-Order Substitutions

So far, we have discussed the overall shape and form of these industries. In each case, there was an initial period of expansion and proliferation followed by consolidation into a few—or one, in the case of telephony—corporate giants. This was in some cases pushed by the requirements of the technology, e.g., electricity. But it was not inevitable; telephony, for example, could have existed as a series of interconnecting yet independent companies—corporate consolidation and management were not necessary.

In each of these cases, there was a product that let end-users or consumers do something or have something better. The substitution effect is most obvious in electricity. There already existed a market in interior illumination provided by candles, kerosene, and gas. Edison intentionally set out to provide a superior product that was cost-competitive with the equivalent gas service, and the pricing of electricity was established in terms of competition with gas.13

Telephony was also an improvement on existing local communications technologies. In 1873, Western Union enjoyed a monopoly over telegraphic service, which was primarily between cities. About 10 years earlier, the telegraph giant had begun to experiment with combined telegraphic and delivery services as a way to provide local communications connections. Western Union also began to
explore switching technologies that allowed financial information to flow from several banks to a single clearinghouse and then from the clearinghouse back out to the banks. The initial market for telephony was believed to be local, thus filling the gap in service. Telephony was initially constrained by signal attenuation to a range of about 20 to 30 miles in urban environments where cable was laid below grade, although transmissions across distances of 800 miles could be achieved with open-air lines. By 1890, Bell interests were already pursuing interurban transmission in head-to-head competition with the telegraph monopoly.

Rail transport was also conceived of as a substitution, in this case, for transport via canal or overland. Although canals had achieved the first major cost savings, rail held the advantage in perishables and high-value goods, where the desire for speed outweighed higher costs.14 The differential between rail and water has been a matter of some debate. In general, though, competition between rail and water tended to lower all freight charges. Similar inter-product competition also tended to keep electric utility rates relatively low and encouraged utility executives to cooperate with regulatory agencies, thus distancing themselves from the contentious and adversarial positions taken by the gas companies.

Evolution of New Structures: Niches, Organization, and Efficiency

Eventually, niches for different services formed and new structures and services evolved. For example, early nineteenth-century turnpike companies, never as profitable as hoped, quickly gave way in the long-distance market to both canals and railroads. On the other hand, expanding numbers of middle- and long-distance routes via either rail or water increased the need for short-distance overland services of 15 miles or less. This increased demand more than offset the loss of long-distance business. Plank roads, constructed on the same principle as wooden sidewalks, were introduced after the mid-1830s, and wagons dominated the short haul, that is, distances of less than 15 to 20 miles, where the rate was cheaper than either rail or canals and time was not a constraint. This was seen as an advantage by some entrepreneurs. John Murray Forbes of Boston, who controlled the Michigan Central, avoided construction of branch lines and encouraged local construction of plank roads affording access to the railroad without his having to expend capital to reach markets.

Water transport via coastal, lake, or river steamer or by canal barge had the advantage in medium to long hauls, averaging 650 miles, especially where the commodities shipped were high bulk and low value. Innovations during the nineteenth century tended to reduce costs mainly over medium to long distances. Waterways were good albeit not perfect substitutes for rail and generally had the advantage in shipping high-bulk/low-value goods over long distances. Rail possessed the advantage in shipping high-value items over medium and long distances and in shipping high-bulk/low-value commodities over medium distances.

Similar differentiation characterized power. Competition with electric companies spurred gas producers to cut prices and improve the product. "Water" gas, introduced in the 1880s, was considered greatly superior to the earlier coal gas; it was cleaner and provided better light. The manufacturing process required a larger scale of operation, which increased the costs of entry but also resulted in economies of scale. In a newly competitive environment, Consumers Gas of Chicago was able to offer still lower prices, thus forcing the price of gas to fall from $3.00 to $1.75 per 1,000 cubic feet. The new gas technology resulted in similar price competition in Houston and a 40 percent decrease in local rates. With electricity beginning to encroach upon the lighting market in upper- and upper-middle-class households and commercial establishments, gas seemed poised to capture the market of middle- and working-class homes where kerosene light was still the norm.

Discovery of natural gas fields and realization of the thermal applications of gas led to further service and product differentiation. Between 1900 and 1940, higher-income urban households adopted electricity first and tended to prefer electricity for lighting and natural gas for hot water and perhaps cooking, with an oil burner for heat. Middle- and lower-income residents converted to new energies more slowly. They selected electricity for light first, then shifted from a coal to a gas stove, and finally added a gas hot-water heater.15

Thus, consumers chose among multiple energy technologies, and the urban energy landscape as late as the 1920s was characterized by a mix of coal, oil, gas, and electricity. Applications of electricity in the heavy industries took place after World War I as a result of continued advances in technology as well as soaring prices for both coal and labor. But the implications of electrification were more profound than substitution of one power source for another. Applications of central station-generated electricity in manufacturing and industry had begun in the 1890s among small users who realized the advantages of the small AC electric motor in providing fractionalized power in highly segmented, labor-intensive processes where needs were historically too small to justify a large investment in a steam engine: the apparel industries, chemicals, printing, several equipment manufacturers (electrical, non-electrical, and transportation), and metal fabrication. This cluster of industries remained at the forefront of electrification through 1954. Large-scale enterprises, characterized by substantial sunk costs in existing technology and by power- and heat-intensive processes (lumber, paper, petroleum, stone/clay/glass, and primary metals), consistently lagged in adopting electric power. Given the scale of their facilities and the importance of the heat byproduct (e.g., steam) to their industrial processes, managers of these industries tended to install self-contained electric generating plants when they did decide to go electric after 1919.16

Electricity thus offered small-scale enterprises access to power that they formerly did not have. In both the heavy and light industries and manufacturing plants, electrification revolutionized the organization of work. Prior to the introduction
of electricity, industry relied on centralized, belt-and-shaft systems linked to a single prime mover (either water- or steam-powered). The advent of electricity and the electric motor enabled a restructuring of industrial processes to a more efficient, decentralized unit drive system in which energy was made available at the point of use.

Unit drive systems possess numerous advantages. Elimination of the centralized line shaft system reduced fuel inputs and increased energy efficiency by reducing the friction losses implicit in belt-and-shaft distribution. Factory structures no longer needed to support heavy mechanical systems, permitting lighter, single-story factory layouts, which in turn permitted better materials handling and flexible work flows. Finally, components of the process became independent, and having to fix a problem in one did not shut the entire system down. Walter Devine, who has conducted the seminal work on electrification and the organization of industrial processes, argues that the reduced energy requirements resulting from efficient application of electricity in unit drive systems resulted in higher productivity of capital and labor. And economist Harry Oshima finds that in textiles, six labor-intensive mechanized processes in the era of steam were reengineered to 25 processes without a concurrent increase in labor inputs.17 Thus, electrification enabled efficiencies in industrial and manufacturing processes.

Not Always the "Best" Technology

The efficiency cum labor substitution argument set forth by Oshima is part of a larger literature that addresses the relationship between technology and growth in the American economy in the late nineteenth and the twentieth centuries. Two themes in this literature resonate with contemporary concerns: one is the relationship between technology and labor, and the second is the so-called "productivity paradox." The productivity paradox consists in the fact that although electricity was adopted as early as 1889, measurable gains in aggregate national productivity began to appear only in the 1920s, after large industrial plants—metals, petroleum, transportation—shifted to electric power.

Why the lag? For one thing, early adopters were small, labor-intensive manufacturing plants where good light and access to fractionalized power were important. But their impact on the total industrial sector was small relative to the heavy industries, which did not electrify until after 1919. According to economist Arthur Woolf, this transition occurred in the context of rapidly falling prices for electricity, escalating prices for coal, and increased costs of labor. Woolf concludes that firms took advantage of cheaper energy costs to offset higher labor costs by restructuring their operations.18 His analysis has been criticized as overly reductionist and too reliant on the costs of electricity and labor as the principal determinants, without taking into account the engineering flexibility and efficiencies that electric power enabled or the process of incremental adoption that began with smaller enterprises.19 Finally,

focus in this discussion is on the institutions involved in the construction of democratic government and on the relationships between the people and government. We also address the process by which individuals are elected and appointed to serve in institutionally defined positions of influence and authority in the democratic governance structure.

Democratic Governance

Computers and communications technologies have taken on a highly visible role as tools of government and as symbols in the ongoing debate about how government ought to function. There has been considerable speculation over whether use of these technologies can and will alter the functioning of democratic government.

There are a variety of forms of democratic government. We choose to focus our discussion on the constitutional form of democratic government found in the federated structure of the United States. The United States is the oldest and greatest user of computers and communications technologies among large democratic countries; effects on its democratic institutions should by now be apparent. Our discussion covers three areas: effects on the fundamental structure of democratic institutions predicated on separation of powers and the concept of federalism; effects on the relationship between government and the people; and effects on the processes of deliberation and constitutional operation. It also touches on risks inherent in high levels of dependence on technology.

Effects on Democratic Institutions

The U.S. form of democratic government is predicated on two key assumptions. The first is the separation of power horizontally across the key functions of government—the legislative, executive, and judicial—in order to ensure that each branch holds the others in check. In principle, differential use of computers and communications technologies by one of the branches could undermine the checks, thereby providing substantive, procedural, functional, or symbolic advantage compared to the other branches. The second assumption is that power should be separated vertically in order to keep as much of the authority of government as close to the citizen as possible. In principle, the construction of national information systems for criminal justice, taxation, welfare, and so on might enhance the power of the central government in comparison to the regional and local governments.

The introduction of computers and communications technologies in the U.S. federal government was accompanied from the start by speculation that power would accrue to the branch with the most technology. Given the preponderance of technology in the executive branch, one would expect it to gain advantage over the legislative and judicial branches. In fact, no such power shift has occurred.

The separation of powers doctrine ensures that each branch has separate functions, that each is constitutionally and politically independent of the others, and that each has inviolate recourse through which to check the others. Computers and communications technologies do not and cannot fundamentally change these constitutional relationships. Three examples serve to illustrate:

• Example 1. Assume that, as a result of its greater computing, information, and analytic capabilities, the executive branch gains power over the smaller, less experienced, and diffuse bureaucracies supporting the legislative branch. The legislative branch can limit and control executive branch computerization by stopping the purchase of new computer systems through legislation, by strangling the procurement process through audits and inquiries, and by raising politically damaging questions of faulty procurement, cost overruns, mismanagement, and other evils resulting from executive computerization. The legislative branch can also request data from executive agencies, which are usually willing to comply in exchange for favorable treatment of their appropriations. Finally, the legislative branch can buy its own computers, develop its own information systems, and operate its own analytic models with its own staff. Through these mechanisms, the legislative branch can readily establish parity with and independence from the executive branch.

• Example 2. Assume that the executive branch tries to influence judicial review or overload the judicial branch with data from its vast stores of computer databases. The judiciary is the least computerized of the three branches of government and so is considered most vulnerable to the information that the executive branch can amass in support of its legal and policy preferences. The judiciary, in response, can use its tremendous power over legal proceedings to hold the executive branch to answer for its actions. The judiciary can grant or deny standing of parties, can determine the materiality of information, and can in effect declare all or part of the executive branch's information to be "non-information" and therefore inadmissible in any of its proceedings. The judiciary, alone among the branches, has the power to decide what information "is" within its own house. The judiciary can also force the executive branch to provide the information it wants, when it wants it, and in the form it wants it, regardless of whether the information yet exists or what it costs the executive to get it. Finally, where violations of federal law may be involved, the judiciary can override executive branch attempts to withhold information under claims of "executive privilege." In summary, the judiciary's powers overwhelm any advantage the executive branch may gain from computers and communications technologies.

• Example 3. Assume that the legislative branch seeks to gain advantage over the executive through the use of computers for oversight. Even if an "ideal" computerized system for legislative oversight were in place, the executive could stall in the provision of information, could provide misinformation and disinformation, and could refuse outright to provide information requested by the
legislative branch. In such a confrontation, only the judiciary would have the power to mediate the disagreement. The most powerful response of the executive branch is the ability of the chief executive to take his or her viewpoint directly to the citizens, thereby marshaling popular support and potentially nullifying the effects of oversight by the legislature. The use of computers and communications technologies is unlikely to produce power shifts from the executive to the legislative branch in this area either.

The branches are able to check one another in virtually any case where computers and communications technologies play a role, simply because the powers of democratic institutions transcend whatever advantage the technologies can confer.

Another possibility is that acquisition of vast computer databases could give one level of government exceptional power over other levels. The most common speculation has been that the central government gains power at the expense of the regional and local governments. There is no evidence that this has happened, and moreover, it is unlikely that such a shift could happen. For one thing, the central government does not need computers and communications technologies to gain a power advantage because it already has the supremacy of federal law on its side. The states have wide powers of autonomous action (i.e., the residue of powers not conferred by the Constitution upon the federal government) but not independence. Also, intergovernmental relationships seldom involve the federal government "ordering" state and local governments about. Instead, most federal actions affecting states involve the federal government paying for national programs, such as unemployment and social welfare, that are implemented by state or local governments, or holding out carrots and sticks to induce state and local governments to adopt particular policies or programs. It is conceivable that the careful use of computers could permit the federal government to be more heavy-handed in its superior role by enabling federal agencies to better monitor state compliance with federal expectations. However, the current political trend is in the opposite direction. The dominant trend of federalism is toward devolution of funding, administration, and oversight responsibility to the state and local level.

As in the case of separation of powers across the branches of government, the distribution of power across the levels of the federated government system is itself a central part of democratic governance and the institutions that ensure such governance. Use of computers and communications technologies is highly unlikely to affect this as time goes on.

Effects on Relationships Between Government and the People

In the foregoing discussion we address the impacts of computers and communications technologies on democratic institutions. At a more fundamental level, there is concern that these technologies can affect the relationship between
government at all levels and the citizens of the country. A central principle of the U.S. form of democratic government is the desire to protect citizens from government tyranny. At issue is whether the use of computers and communications technologies could give government the power to overwhelm constitutional safeguards against abuse of individuals or groups. Creating a well-balanced distribution of power between individual citizens and the government created by and for those citizens is a central problem in the maintenance of civil society. The issue is not whether individuals are imperiled by a faceless government armed with computers, but rather whether duly elected representatives, working through appropriate constitutional mechanisms, will engender computer-dependent abuse of individual rights.

Most of the concern over this issue is expressed in the debate about computers, databanks, and personal privacy. There has been considerable speculation and discussion of scenarios about the potential problems for privacy due to the computerization of government record-keeping activities, but there has been little empirical evaluation of the privacy-related consequences of the use of computers and communications technologies. The debate has at times been largely ideological. With enough data and the right computer systems, authorities will be able to monitor the behavior of large numbers of individuals in a systematic and ongoing fashion. The issue is no longer what authorities can do, but what they choose to do in surveillance of the population.

Privacy is a politically sensitive topic, but as a concept in society and law it remains surprisingly undeveloped. Existing uses of computerized databanks have not yet abridged personal privacy sufficiently to require constitutional action or even substantial Supreme Court action on the matter. Nevertheless, the privacy issue is being played out in the realms of rhetoric, legislation, and executive action. The controversy is likely to persist due to the creation and interconnection of large systems containing personal information and the relatively weak enforcement of existing privacy legislation.

Effects on the Political Process

Computers and communications technologies do not appear to be serious agents of change in democratic government, at least as seen thus far. However, there is a chance that these technologies will have a very substantial influence on the political processes that lead to the election of representatives and the mobilization of national political movements. Much has been written about the effects of communications media, particularly the mass media of radio and television, on the processes by which public opinion is formed and guided, and on the political contests that determine who will govern. The addition of advanced forms of public opinion sensing and computerized direct-mail systems has created a package of tools that is transforming the nature of the political process. There is concern that the extensive manipulation of public moods through the use of
technology will decrease the electorate's overall awareness of the issues and increase the tendency toward the election of individuals on the grounds of media image and single-issue direct-mail advertising. The ultimate concern is the surrender of the role of political opinion making, and thereby the mobilization of political bias, into the hands of technicians who stand between actual political leaders and the electorate. This can result in reduced influence of the electorate over political leaders and, potentially, the means for wholesale distortion of the issues by political leaders with skilled "image-making" technocrats.

The use of computers and communications in political fund raising and campaigning could have significant effects on the political process, not because of any particular weakness of the Constitution itself or as a result of changes in the structure or function of the governmental system, but because such changes would be part of the larger effects of automation on the mobilization of bias among interest groups in the population. The concept of constitutional democracy depends on an informed electorate, capable of discriminating among candidates based on their overall strengths. Critics contend that extensive use of television in campaigns has decreased the quality of debate and reduced attention to the issues. Highly targeted, single-issue fund raising and campaigning conducted through computer-assisted direct mail or targeted telephone solicitation could contribute to such a trend. The Constitution itself addresses only the major offices and issues of enfranchisement, and not the protocols of party behavior or campaigning. It is possible that computing-based changes in the conduct of political contests will eventually have an effect on the ways the Constitution is interpreted and implemented.

An orthogonal view of technology and its impact on social life implies more subtle and possibly more important concerns for democratic government. This view engages concern over the application of computers and communications technologies to mass surveillance, national information systems, and political campaigning—in particular, over the question of what is really important in the determination of who should govern. This concern is manifest in Aldous Huxley's Brave New World, in which technological advancements were deliberately, and to a large measure democratically, applied toward elimination of need and stabilization of the social order. The new world was the epitome of successful technocracy, to the point that circumstances that gave rise to jealousy were preempted through ubiquitous use of technology. Technology was used not to give expression to malicious and destructive tendencies, but rather to support well-intentioned efforts to eliminate the causes of strife. In the process, the removal of strife eliminated existential choice, and thereby, freedom. Technology maximized efficiency in exchange for unavoidable limitations on individual privacy, choice, and freedom.

This story is useful for considering the ultimate impacts of computers and communications technologies on democratic government. The world depicted by Huxley evolved over a protracted period of time, and each step along the way
posed a choice: to live with the contradictions of the present, or to remove them with technical solutions. To the extent that democratic government is threatened by the application of information technology, the threat does not come from weaknesses in the Constitution or the government it shapes. Rather, the threat comes when the governed fail to protect and defend their rights to personal privacy. Whether the growing use of information technologies in mass social surveillance or in partisan political contests is leading to this end remains to be seen. However, this analysis gives sufficient evidence to warrant renewed concern and to prompt increased monitoring of computing activities conducted by government or used in political processes.

Technology, Dependency, and Risk

A civil engineer working on the large California Water Project, which brings water from the Sacramento/San Joaquin river delta to Southern California, once remarked, "If we don't build this canal, we won't need it." The creation of vital infrastructure ensures dependence on that infrastructure. As surely as the world is now dependent on its transport, telephone, and other infrastructures, it will be dependent on the emerging information infrastructure. In a sense, this is an inevitable price of technological progress—dependency occurs only when the thing depended on is very valuable to the dependent. At issue here is the character of dependence that is likely to evolve, and the institutional responses to that dependency.

Dependency on technology can bring risks. Failures in the technological infrastructure can cause the collapse of economic and social functionality. Regional blackouts of electricity service in the Northeast during the 1970s and 1980s resulted in significant economic losses. Blackouts of national long-distance telephone service, credit data systems, electronic funds transfer systems, and other such vital communications and information processing services would undoubtedly cause widespread economic disruption.

Dependency can also result in unanticipated, downstream consequences in the form of negative externalities such as pollution. Reliance on nuclear weapons as a key component of strategy during the Cold War resulted in an at-any-cost development and production program that left large areas of the United States terribly polluted, perhaps so badly that they must eventually be entombed and sacrificed as a cost of the war. Although it is difficult to imagine dependence on information technology producing an equivalent environmental catastrophe, toxic materials used in the manufacture of semiconductors and other hardware components have polluted manufacturing sites throughout the country that must now be cleaned up.

Perhaps most important, high levels of technological dependency create more than the risk of economic difficulty from failure. When technologies are instrumental in the construction and maintenance of institutions, and workable substitutes are not available in the event of failure, institutional collapse is possible. A
useful example of this is the uni-modal transportation infrastructure of the Los Angeles region. The entire region is dependent on a single transportation infrastructure: vehicles on roadways. The failure of any major component of that infrastructure—fuel availability, roadways, traffic controls—for any lengthy period of time would bring the entire region of 12 million people to a halt. The Los Angeles region is at risk not only because the existing infrastructure constitutes a single point of failure capable of threatening the region, but also because commuting long distances to work using that infrastructure is a widespread and accepted cultural norm. The failure of transportation would strike at the heart of a nondiscretionary social institution. The collapse of two bridges on the Santa Monica Freeway during the 1994 Northridge earthquake was minor given the hundreds of miles of freeway in the region, yet the cost to the city's economy was at least $1 million per day during the reconstruction, even after every available alternative transport mode and scheme was implemented.

In summary, technological dependency is not necessarily something to be avoided; in fact, it is probably impossible to avoid altogether. What must be considered is the exposure brought by dependency on technologies with a recognizable probability of failure, no workable substitutes at hand, and high institutional and social costs as a result of failure.

Conclusions and Implications

Research Issues

Computerization Is a Complex Social Phenomenon. The process of automation involves more than the acquisition and implementation of discrete components of technology. Automation is a social phenomenon involving a "package." The adoption and diffusion of information technology are influenced by both demand-pull and supply-push factors. Demand forces dominate the evolution of large, complex, custom applications, while supply forces appear to exert a major influence on the evolution of smaller packaged applications.

The Impacts of Computers Are Seldom as Predicted. Common predictions about the effects of using information technology frequently fail to materialize as expected. The failure of a prediction is not a signal that the outcome is negative. Rather, it is a sign that the impacts are richer and more complex than anticipated. Computerization has not resulted in widespread job displacement of middle managers because it has actually increased their job scope and roles in many cases. And, while management information system skill bureaucracies do not fit the ideal-type service bureaucracy, they frequently produce leading-edge applications of the technology. The important lesson from the research, then, is that failures of expectation and prediction are commonplace in the world of automation. The technology and its applications are best characterized as evolutionary
in impact rather than revolutionary. Indeed, many organizational managers desire stability and work against surprises. Therefore, new information technology is generally introduced slowly so that it can be adapted to meet the organization's needs, and so that the staff can adapt to the technology's introduction.

Technology Is Political. Rational perspectives on change seldom acknowledge the explicitly political character of technology. They emphasize organizational efficiency, concentrate on the positive potential of technology, and assume organization-wide agreement on the purposes of computing use. In contrast, political perspectives see efficiency as a relative concept, embrace the notion that technology can have differential effects on various groups, and reflect the belief that organizational life is rife with social conflict rather than consensus. From a political perspective, organizations are seen as adopting computing for a variety of reasons, including the desire to enhance their status or credibility, or simply in response to the actions of other organizations. Moreover, applications of the technology can cause intra-organizational conflicts. Decisions about technology are inherently political, and the politics behind them may be technocratic, pluralistic, or reinforcing, with different consequences for different groups in each case.

Political perspectives are essential for understanding technology's role in organizations. Technocratic politics helps explain the relationships between the technologists and end-users; pluralistic politics helps explain the relationships among various user interests vying for access to computing resources; and reinforcement politics helps explain the effects of computing on power and authority in organizations. Reinforcement politics has proven to be important in explaining decisions about computerization in organizations, wherein the technology is used primarily to serve the interests of the dominant organizational elites. Reinforcement occurs sometimes through the direct influence of the elites, but more often it occurs through the actions of lower-level managerial and technical staff in anticipation of the interests and preferences of the elites. The political mechanisms used to determine the course of organizational automation will vary, depending on the broader political structure of the organizations themselves, and these mechanisms tend to remain stable over time.

Management Matters in Complex Ways. Prescriptive literature is full of admonitions about the importance of management in effective use of information technology. However, empirical research into the role of management and the efficacy of management policies is lacking. Research of the Irvine School has demonstrated the crucial role of management action in determining the course of automation, even in cases where major environmental changes were present. Moreover, there are distinct patterns of management action that yield different outcomes. Effective management of computers and communications technologies is much more difficult than suggested, however. Specific policies are contingent in their effects on the state of computing management as well as the characteristics
of the organization. Policies recommended in the practitioner literature have proven to be associated with serious problems in the computing environment, and it is unclear whether the policies are not working, whether they have not yet had time to work, or whether they work only under special conditions.

Methodology

Research Requires the Use of Multiple Perspectives. Review of the research shows that systematic research into social impacts requires understanding and use of multiple disciplines for viewing the interaction of technology, organizations, and society. The work reviewed has used perspectives from the social sciences (political science, economics, sociology, psychology, communications, and management) and from the social analysis of computing in the information and computer sciences. Perhaps more important than the multidisciplinary character of this research, however, is the value of drawing on multiple intellectual perspectives when exploring fundamental causes of social change.

All meaningful explanations of the social aspects of the use of information technology proceed from an ideological base. All scholars have interests and values that influence the theories and explanations they construct. These interests are important not only in prescriptive work; they also figure markedly in the descriptive and explanatory work in the field. By recognizing that explanations are at least in part ideological, and that ideology is an essential and required component of social analytic work, we are able to "triangulate" on a set of facts from several explanatory positions. This approach permits explaining social phenomena more comprehensively and precisely by gathering insight from various points of view, and using contrasting elements from various perspectives to test the intellectual coherence of alternative perspectives. The multiple-perspectives approach leads to increased self-consciousness during observation and explanation, and to increased precision, because explicit perspectives can be examined in light of the facts and of other perspectives for explaining the facts.

The dominant analytical perspectives in the computer and communications field have traditionally been tied to the supply-push world of technical development, coupled with a rational-economic interpretation of managerial behavior. These explanatory perspectives have considerable power and have yielded useful results. However, they have distinct limits. Technological determinism and narrow managerial rationalism do not explain the variance observed in the patterns and processes of adoption and routinization of information technology in various tasks, and they fall far short of explaining the considerable differences in successful use of the technology across organizations. Indeed, such perspectives are at a loss to explain the fact that "success" in the use of information technology is singularly elusive. As economist Robert Solow has stated so succinctly, the effects of the information revolution have shown up everywhere but in the productivity statistics. There certainly are technical and economic-rational elements to be considered
in understanding use of information technology in organizations. Missing, however, are the more finely grained explanations of volition in shaping the behaviors of those who adopt and use the technology, or who react to the effects of its use. While it is clear that information technology has brought major opportunities for change to organizations, it is the individuals and the features of the organizations within which they work that determine whether given technologies are adopted and how they will be absorbed into the complex of factors operating in modern organizations. Organizations are political, social, and managerial constructions that involve interactions among competing and cooperating groups, each of which seeks to pursue some mix of its own and common interests, within the framework of broader organizational and social constructions of what is appropriate and expected. Since the true consequences of using information technology are unforeseeable, the actions of individuals in organizations are always based to some extent on faith, social pressure, perceived political advantage, and other factors, in addition to "cost-benefit" calculi covering applications to given activities.

Research Requires a Critical Perspective. Research indicates that there is often a gulf between expectations and subsequent experience with the use of information technology. It is important, therefore, that research proceed from a critical stance. It should be concerned with challenging existing ideas, examining expectations about technology and organizations, and counteracting unsubstantiated biases in both. It should focus particularly on the important role played by ideology and expectations in the use of information technology. The expectations of managers and others in organizations influence the choices they make in adopting and using technology. Managers who believe in technological solutions are likely to introduce information technology on faith, while discounting other considerations. And experiences with technology shape future expectations about the efficacy of technology in meeting organizational needs. The ongoing relationship between expectations and outcomes is a crucial part of understanding the dynamics of the use of information technology in organizations.

In taking a critical stance, it is useful to start from common expectations and accepted explanations, and then attempt to corroborate them with empirical evidence. When the corroboration is incomplete, explanations can be modified, expanded, or displaced in order to develop a more accurate fit of theory with the facts. The combination of the critical stance and the multiple-perspectives approach reveals biases inherent in popular claims and provides leverage to think critically about alternative explanations.

Social Analysis Requires Innovation in Research Design. The Irvine School has produced methodological as well as substantive contributions. Most are innovations in research design that are especially suited to social analysis. The basic research strategy of the group is that the scale of research has to match the
scope of the problem one seeks to address. Large, complex, and multifaceted problems require similarly scaled approaches. Given customary constraints (shortage of knowledge, resources, and talented people), one is challenged to focus both energy and effort. Five recommendations can guide research:

• The first is to focus on leading adopters of the technology when studying the effectiveness of policies for managing computing. This focus enables determination of what works and what does not in the process of innovating, and can lead to advice that will bring others up to the level of the leading performers.
• The second, when studying policies, is to sample sites at the extremes of policy application (e.g., high and low centralization, insignificant and extensive user training). This approach maximizes the variance on the policies and provides a better indication of the basic direction of the relationships.
• The third is to use census surveys to investigate the extent of a technology's diffusion, the extent of its use, and the nature of its organizational impact. In addition to eliminating sampling bias, a census provides a good indication of the distribution of patterns of diffusion throughout a population of organizations.
• The fourth is to concentrate on long-term study of organizational and social impacts. Such impacts cannot be studied over the short term because changes occur slowly, the effects of the use of technology are indirect more often than direct, and the organization and the technology are interactive.
• The fifth is to use a mix of methods—quantitative and qualitative secondary data analysis, survey research, longitudinal research, international comparative research—and a mix of measures in an effort to achieve better measurement and to triangulate the results of various studies.