Illustrative Examples and
This chapter discusses a selection of social science studies that have provided useful insights for understanding the impacts of computing and communications and shaping public policy. The aim is to give a flavor of the results produced by earlier studies and to introduce some areas viewed as especially promising for future research, including points raised and issues discussed at the June 1997 workshop and in position papers submitted by the participants. It is not intended to be comprehensive; the points and issues discussed illustrate the range and value of social science research and provide a basis for framing important research questions. The chapter concludes with an illustrative set of broad topics for ongoing research drawn from the discussion presented below.
Since the range of potential impacts associated with information technology is vast, the examples and issues outlined below are organized according to the domains in which they are especially important: private life, including households and community; social infrastructure; and business, including labor and organizational process. Cutting across all of these are issues integral to life in an information economy and society, among them protection of intellectual property, pricing of information, and electronic commerce. Another significant impact of computing and communications is the changing boundaries between these domains: between people and organizations, organizations and nations, and the private and public sectors.
2.1 Households And Community
Americans are rushing to furnish their homes with a host of devices for sending, receiving, and processing huge quantities of information through diverse
media across multitudes of channels. Figure 2.1 shows trends in acquisition of devices such as personal computers compared with ownership of two consumer staples, refrigerators and automobiles. If one could lift the roof from the characteristic U.S. home, one would see that it looks increasingly like a multiplex theater. What once took place in the town square, in the neighborhood tavern, on market day, or in the library can now occur as easily in the study or in the bedroom. Computers and advanced communications are also playing increasingly significant roles in community organizations and in education.
2.1.1 Computer Use in the Home
Computer use in the home is a relatively recent phenomenon, and one that has changed considerably in the past two decades. At first, a majority of use was work-related. Today computers are more accepted as a household technology, with an increasing amount of software and other development targeted to the home. (For further discussion and a model for the interaction of the household and technology, see Venkatesh, 1996.)
Descriptive studies of computer use in the home are relatively rare and almost always very "thin," that is, based on a small number of survey questions.
More intensive and extensive study of computer use in the home is required to understand what people use the computer for; how computer use substitutes for other activities; and how it affects family dynamics, children's educational performance, adults' employment activities, and so forth. But even the best descriptive studies inevitably confound household computer use and its effects with effects stemming from household income and educational status. Households with greater resources are much more likely to have and use computers, and they are likely to use them in different ways. In this situation it is difficult to understand how much of any described effect is due to the technology and how much is due to ancillary resources the household brings to bear on such challenges as understanding how to use software, troubleshoot technical problems, select software for children, and incorporate computers into family activities. A good way to untangle the effects of technology from the effects of other household resources such as income and education is to conduct field experiments in which households are given current technology and well-designed training and support to compensate at least in part for limited income and educational resources.
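The confounding problem described above can be made concrete with a small simulation. In the sketch below, all numbers are invented for illustration: a household outcome depends strongly on income and only weakly on computer ownership, while ownership is itself more likely at higher incomes. A naive comparison of owners with nonowners then absorbs the income effect, whereas random assignment of the technology, as in a field experiment, recovers the assumed causal effect.

```python
import math
import random

random.seed(0)

# Hypothetical simulation of confounding: a standardized household outcome
# depends strongly on income and only weakly on computer ownership, while
# ownership itself is more likely at higher incomes. All coefficients are
# invented for illustration.
TRUE_EFFECT = 0.3  # assumed causal effect of having a computer

def draw_household(p_ownership):
    income = random.gauss(0, 1)                     # standardized income
    has_pc = random.random() < p_ownership(income)
    outcome = 1.0 * income + TRUE_EFFECT * has_pc + random.gauss(0, 1)
    return has_pc, outcome

def mean(xs):
    return sum(xs) / len(xs)

def estimate(p_ownership, n=20_000):
    """Difference in mean outcomes between computer owners and nonowners."""
    owners, others = [], []
    for _ in range(n):
        has_pc, outcome = draw_household(p_ownership)
        (owners if has_pc else others).append(outcome)
    return mean(owners) - mean(others)

# Observational survey: ownership rises with income, so the naive
# owner-vs-nonowner comparison absorbs the income effect.
naive = estimate(lambda inc: 1 / (1 + math.exp(-2 * inc)))

# Field experiment: computers are assigned at random, breaking the
# link between income and ownership.
experimental = estimate(lambda inc: 0.5)

print(f"naive: {naive:.2f}  experimental: {experimental:.2f}  true: {TRUE_EFFECT}")
```

In this toy setup the naive observational estimate is several times the assumed true effect, while the randomized comparison lands close to it, which is the logic behind giving matched households the technology rather than merely surveying existing owners.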
The Homenet Project (Kraut et al., 1996; Kiesler et al., 1997), organized by social scientists at Carnegie Mellon University, is a field experiment documenting the use and effects of household computers in more than 100 households in Pittsburgh, Pennsylvania. Families were selected for demographic diversity, and a matched sample of eligible but not-selected families was also tracked. Each selected family was given a computer, modem, extra telephone line, full Internet accounts for each family member above age 8 who wanted one, software, training, online support, and access to an evening telephone help desk. In exchange for receiving technology and technical support, families agreed to participate in a variety of data collection efforts, including surveys, home interviews, and automated logging of software use.
Data collection and analysis are still under way, but the researchers have already been able to document important findings:
- Even with hardware and software designed for ease of use, personal training, and personal support, people found the technology hard to understand and use. Significantly, many of those who stopped (or never started) use blamed themselves rather than the technology for their problems.
- Generational effects persist even when both older and younger generations have the same access to the same technology. People in the household under the age of 19 use the computer more than people older than 19.
- Use of electronic mail is a better predictor of later e-mail and Web use than is use of the Web.
- Household income and educational levels do not predict Internet use when all the people compared have adequate technology and support.
2.1.2 Differential Impacts of Technology
It is rhetorically convenient to talk as though technology is used by everyone in the same way and affects everyone similarly, regardless of their life circumstances. Thus, such generalizations as "e-mail flattens organizational hierarchies" or "people who spend time online reduce their face-to-face interaction" are common.
Historians of earlier technologies such as the telephone have noted that people use the same technology differently and that it has different effects, depending on a person's age, gender, income level, geographic location, and other circumstances (see, e.g., Fischer, 1992; Mueller and Schement, 1996).
Numerous researchers have reached the same conclusion about computers. Attewell and Battle (1997) showed that equivalent technological capability in homes is associated with higher school test scores when family income is higher. The Homenet study (Kraut et al., 1996; Kiesler et al., 1997; described above in section 2.1.1) demonstrates that the same technology in the home is used differently by males and females, and also by teenagers and adults. A RAND study of retirees (Bikson et al., 1991) showed that the same technology is used differently by recent retirees and same-age counterparts who have continued to work. Section 2.3.4 gives examples of the differential impact of e-mail use in scientific communities.
An important related question is why some people who once used the Internet later stopped. Answering it requires demographic studies of the populations of network users and nonusers, and particularly of the growing population of people who experimented with Internet use but did not become long-term users. One study of this topic, using Nielsen data from more than 14,000 households, found that Internet "drop-outs" were less likely than those who continued using the Internet to have developed social relationships and roles online (Chung, 1998).
Research on differential impacts holds an important message for policy makers and others who hope to understand the interactions between technology and society as a single, uniform impact: they will forever be disappointed or deluded. It is vital to recognize that the "same" technology has different effects in different social and organizational circumstances. Indeed, one of the most important contributions that social science research can make is in exploring how social and organizational conditions, such as income, age, sex, or work status, affect and are affected by how technology is used.
The Internet offers a new locus for communication and participation. According to a Business Week/Harris poll released April 28, 1997, of the 89 percent of those surveyed who used e-mail, nearly one-third considered themselves part
of an online community. Forty-two percent of those involved in an online community said that it was related to their profession, 35 percent said that their community was a social group, and 18 percent said that it revolved around a hobby.
The shift away from traditional notions of public space may threaten older forms of community. Polls show, for example, that more New Jerseyans know the names of the mayors of New York and Philadelphia than know the names of the mayors in their own towns. Although regions vary, this decline in localism seems to be a characteristic of the U.S. political landscape. Large media networks collect audiences by concentrating on stories that appeal to large blocks of viewers and readers. Thus suburban and rural citizens are quite likely to recognize the name of a city official for whom they cannot in fact vote. Individuals who commute to distant workplaces and whose personal networks are spread geographically are further disconnected. The possibility that localism may become increasingly irrelevant to increasing numbers of Americans signals social and political change of a profound nature. For as long as community has remained intact, for example, libraries and churches and schools have functioned to bring people together, to educate newcomers, and to reinforce the virtues of citizenship.
Today the number of potential secondary anonymous relationships has increased vastly as individuals seek to accomplish tasks by relying on mediated information received from strangers. Home-centered, individualistic, information-heavy approaches to personal and professional life offer people opportunities to bypass both the traditional community and the public sphere.
Hard evidence on the issue of localism and engagement with the community is very mixed. For example, Americans today change homes and communities at about half the rate that they did in the mid-19th century, and at a lower rate than they did in the 1950s. It is possible to argue that people are less involved on a daily basis with their neighbors, and more with people elsewhere, than they were a century ago, but the degree of that change has yet to be estimated. Such change may also be a result of other phenomena in the early 20th century (rural to urban migration, the streetcar in cities, and the automobile in rural areas) rather than new communications.
In a sense, the questions first raised by the University of Chicago school of sociology (e.g., Park, 1916, 1955) in the early part of this century persist in their relevance: How does community form out of the ferment of diverse cultural experiences? How does democracy emerge from the diverse cultural experiences of immigrants? At the end of the 20th century such questions are still being asked; but whereas the Chicago school focused on the role of the newspaper as an agent for assimilation and teaching democracy, the question today is under what conditions new information technology and media will bring Americans together or pull them apart. Box 2.1 illustrates some interesting areas meriting further exploration.
Interface between the household and the community. How does the transformation of household functions enabled by information technologies alter an individual's expectations of community?
Political values. Does identification with networked communities affect Americans' construction of democratic participation, responsibilities, and obligations? Will Americans devalue political values associated with geographic community as they integrate into networked communities?
Virtual communities. Are they communities? In what ways do people enact the rights and responsibilities of citizenship in virtual communities?
Networked communities and the elderly. To what extent does participation in networked communities enrich the lives of the elderly and/or contribute to alienation from geographic communities?
Families. Does Internet use by families contribute to the establishment and maintenance of family networks? How fragile are these networks?
Friendship. Does making friends in cyberspace enrich or fragment emotional life? Does dependence on cyber friends result in lower motivation to develop friendships with those close by?
Computer networks as social networks. How social are computer networks? What needs do they meet or fail to meet?
Increased use of computing technologies in K-12 education is giving rise to important new areas for social science research. The Internet has penetrated rapidly and extensively into U.S. public schools. A U.S. Department of Education survey found that as of fall 1996, 65 percent of schools had access to the Internet; penetration had increased by 15 percentage points in each of the prior 2 years (Heaviside et al., 1997). The Office of Technology Assessment (OTA) estimated that in 1995, U.S. schools had 5.8 million computers for use in instruction, about one for every nine students (Office of Technology Assessment, 1995).
However, the presence of computers for instruction does not necessarily translate into student use of computers for instruction. The OTA reported that despite the presence of close to 6 million computers for instruction in the nation's schools (in 1995; presumably there are more now), students spent only about 2 hours a week using them. Like factories at the introduction of the electric dynamo or businesses at the introduction of computing technology, schools and teachers may not yet have learned how to modify work practices and organizational structures to take advantage of computing and communications technology. Schools have in general not found it easy to use technologies effectively for improving teaching and learning.1 Nevertheless it is important for policy makers, educators, and parents to understand what could be accomplished with computing technology in schools under optimal conditions.
Although a variety of proposals have been advanced to increase the availability of computers and Internet connectivity, and substantial investments have been made in purchasing technology, relatively little attention has been paid to how these tools will be used once they are in place. Because of the decentralized nature of U.S. education, it is difficult to understand for the nation as a whole the breadth and depth of change in educational practice and outcomes associated with the increasing presence of computing and communications technology in schools and classrooms. While many state departments of education and local districts are implementing new programs with a technology component, efforts to design and employ measures of effectiveness that would allow policy makers and parents to compare across projects are generally lacking.
A recent report of the President's Committee of Advisors on Science and Technology, Panel on Educational Technology, stresses the importance of experimental research in exploring what educational approaches are most effective (PCAST, 1997; see Box 2.2). The report notes that research on educational technology has received minimal funding relative to total national spending on K-12 education, and it urges increased investment. One of the major research categories proposed is the need for rigorous empirical study of which approaches to using information technology in schools are most effective.
The starting point for empirical study is a descriptive inventory of projects with comparable measures of effectiveness, which would provide an exceptionally useful knowledge base. Such a study can take advantage of natural variation across states and school districts and would not require active intervention.2
This mapping of the range of endeavors under way would lay the foundation for the second phase, a more intensive study of how best to use computers in education. It would be worth considering how to organize, fund, and research a small number of schools as demonstration sites where work practices and organizational structures are radically redesigned to improve teaching and learning through technology. To achieve a fair demonstration, schools would have to be paired with a second set of schools matched according to student and staff demographics and capabilities. The second set would receive economic resources comparable to those of the first set that they could deploy in a range of other ways. Although natural variation among schools would be sufficient for the descriptive phase, this active intervention is required for the second phase in order to derive useful conclusions in the short run.
1.Basic research in various learning-related disciplines (including cognitive and developmental psychology, neuroscience, artificial intelligence, and the interdisciplinary field of cognitive science) and fundamental work on various educationally relevant technologies (encompassing in particular various subdisciplines of the field of computer science).
2.Early-stage research aimed at developing innovative approaches to the application of technology in education which are unlikely to originate from within the private sector, but which could result in the development of new forms of educational software, content, and technology-enabled pedagogy, not only in science and mathematics (which have thus far received the most attention), but in the language arts, social studies, creative arts, and other content areas.
3.Rigorous, well-controlled, peer-reviewed, large-scale (and at least for some studies, long-term), broadly applicable empirical studies designed to determine not whether computers can be effectively used within the school, but rather which approaches to the use of technology are in fact most effective and cost-effective in practice.
SOURCE: Reprinted from PCAST (1997), p. 53.
The politics and economics of designing and running such demonstration studies would be enormously complex and contentious. Yet school districts and teachers are currently making decisions about how to allocate both money and time for technology-related efforts without the benefit of good information about the potential consequences of their decisions. A series of discussions is called for that would involve both the public sector and the private sector, and would include educators, parents, technologists, and researchers, in exploring the feasibility and usefulness of such demonstration projects.
A significant opportunity to study the use of information technology in the public schools is presented by the Schools and Libraries Universal Service Fund, which was established as part of the Telecommunications Act of 1996. With funding of up to $2.25 billion per year, the program will provide discounts on telecommunications services, Internet access, and networking, with the largest discounts going to rural and inner-city communities. By enabling a large number of schools to acquire new technology, this program in effect creates a large-scale "laboratory" where the sorts of research described above could be conducted.
2.2 Social Infrastructure: Universal Service
Formulating public policy on aspects of social infrastructure such as universal access to telephony and other communications services requires decision making about how large amounts of money are allocated and how broad segments of society are served. Although the debate about such questions may often take on a political cast, both empirical research and the application of social science theory offer much to help guide public policy making and the investment of public resources.
Since the value of a network, such as the public telephone network or the Internet, depends on the total number of people connected to it (a phenomenon known as "network externalities"), it is often argued that access to networks should be universally provided. Universal service has long been part of U.S. telecommunications policy, and there are those who argue that universal service is an appropriate public policy goal for Internet access (see, for example, Anderson et al., 1995).3
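The notion of network externalities lends itself to a toy calculation. In the sketch below, every number is hypothetical: each subscriber is assumed to place a value v on being able to reach each other subscriber, so the total value of an n-person network grows roughly as n(n - 1), and a new subscriber creates value for existing members that the subscriber has no reason to take into account.

```python
# Toy model of network externalities (all values hypothetical): each of
# n subscribers values reaching each of the other n - 1 subscribers at
# v dollars.
def total_network_value(n, v=1.0):
    return v * n * (n - 1)

def private_value_of_joining(n, v=1.0):
    """What the (n + 1)th subscriber gains: reaching n existing members."""
    return v * n

def external_value_of_joining(n, v=1.0):
    """What existing members gain when the (n + 1)th subscriber joins."""
    return (total_network_value(n + 1, v)
            - total_network_value(n, v)
            - private_value_of_joining(n, v))

for n in (10, 100, 1000):
    print(n, private_value_of_joining(n), external_value_of_joining(n))
```

In this toy model the new subscriber captures only half of the value created by joining; the other half accrues to existing members. That uncounted half is the externality that underlies arguments for public support of universal access.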
Whether or not one agrees that universal service for networks is an appropriate objective of public policy, the historical evidence suggests that there would be widespread popular support for applying universal service policies to new networks made possible by advances in technology. In several instances, political demand for universal service has induced Congress to ensure universal service at uniform rates.
For example, a postal service available to all was established by the Constitution. Initially (in 1792), postal rates for a first-class letter depended on the distance it was to be carried: 6 cents for fewer than 30 miles, 8 cents for 31 to 60 miles, and so on through nine rate classes to the highest rate of 25 cents for more than 450 miles. In 1845 the rate structure was collapsed to only two categories, 5 cents for not more than 300 miles, and 10 cents for more than 300 miles. In 1863 a uniform rate (3 cents) regardless of distance and free intracity delivery were established (U.S. Bureau of the Census, 1975). Rural free delivery began in 1896. Subsidized Parcel Post became effective in 1913, effectively connecting rural residents to the advantages of city department stores through mail-order houses like Sears-Roebuck and Montgomery Ward. A premium for airmail delivery was dropped in 1978.
For services provided by private businesses, government regulation was often used to ensure universal service and nondiscriminatory rate structures. Railroad rates were regulated by the Interstate Commerce Commission (ICC) beginning in 1887. Interstate telephone rates were regulated by the ICC and later the Federal Communications Commission beginning in 1919. Intrastate telephone calls were made subject to state regulatory authority. Telephone companies were required to charge a uniform fee for service connection. Cable television rates and access have also been regulated.
Government ownership, government subsidies and loans, and direct government programs have been used to ensure universal network services. Land grants and other government assistance brought railroads to every city in the country. The Rural Electrification Administration was established in 1935 to extend electrical service to areas where high construction costs and low population density had made private service unprofitable. The federal highway program and later the federal Interstate Highway System connected every congressional district to the national transportation network.
Education can also be thought of as a good with substantial network externalities. In the United States, elementary education has been provided universally (and compulsorily after the 1880s), and secondary education has been provided universally since the mid-1940s. College education has been subsidized by the state and federal governments since the Land Grant Universities were established shortly after the Civil War. Increasing fractions of the population have benefited from government-subsidized higher education. Special programs have been introduced to assist the children from low- and middle-income families to pay the cost of college.
Two points about these government efforts to foster or mandate universality for network goods need to be stressed. First, all of these congressional and state efforts were designed to accomplish (as much as possible) universal geographic connectivity. Thus letters with 32-cent stamps are delivered to remote sites in the Alaskan north, in mountainous wilderness, and on small, but inhabited, islands. Even Hawaii has an interstate highway! Rural residents received telephone and electrical service just as their city cousins did. Second, the principle of universality was to extend to people in all income classes, rich and poor alike. This has often gone beyond establishing uniform rates for service to the creation of subsidized "lifeline" rates for basic service at prices presumably available to even the poorest families.
The political logic behind these moves is threefold. First, they have been defended as required by the principle of democracy. Individuals cannot effectively participate in the democratic process if they do not have equal and unrestricted access to the main methods of communication and transportation. Thus as increasing fractions of the population become connected to a network, those left unconnected become an increasing burden on the democratic principle, and the cost of subsidizing their inclusion becomes smaller and smaller. Sooner or later the political calculus tips the balance toward a policy of guaranteeing universal service. The second principle that has been applied is the desirability of equal opportunity. As economic development proceeded, both high-income occupations and low-cost access to the most diverse array of consumer goods being produced became concentrated in the urban areas. Federal action was seen to be required to keep rural Americans abreast of these advances. Farmers, too, it was argued, should share in the opportunities and wealth created by the new technologies. The third argument used to defend special programs for the poor was the
argument that connection to a network was essential or at least very helpful for self-advancement. Basic education is necessary to become employable. More education is probably better. Basic telephone and electrical service is probably necessary to hold a good job and to seek out better opportunities. The political fear is that unless government redistributive actions are taken to include the poor in the network, their lack of connectivity will doom them and their children to permanent poverty.
Although it is certainly true that widespread availability may increase the value of a network, it is not necessarily the case that such access will occur only with government provision or subsidies. After all, many goods with network externalities, such as fax machines and videocassette players, are provided by the private sector. Indeed, only a couple of years after the Anderson et al. (1995) report, for-profit firms such as Hotmail4 began offering free e-mail, supported by advertising.
Basic telephone service has long been regarded as a social good that required a deliberate policy effort to achieve universal access. However, a close reading of history suggests another possible conclusion. According to Mueller (1997), penetration of basic telephone service could easily be comparable to today's rates, even if there had been no policies of subsidized access. Various comments to the FCC in its recent docket on universal service reform indicated that the current structure of pricing in telephony is costing the United States billions of dollars, with very little impact on penetration rates for basic telephone service. These deadweight losses arise because the prices of services such as long-distance calling, for which demand is sensitive to price, are set well above cost, and the prices of price-insensitive services, such as basic service, are often below cost, in direct violation of the economic principles of efficient pricing to cover joint and common costs ("Ramsey pricing"; see Kahn, 1970).
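The inverse-elasticity ("Ramsey") rule just cited can be illustrated with a short sketch. The elasticities and the Ramsey number below are hypothetical; the rule says that each service's percentage markup over marginal cost should be inversely proportional to its price elasticity of demand, so efficiency puts the larger markup on price-insensitive basic service, the opposite of the historical practice described above.

```python
# Hypothetical illustration of Ramsey pricing (see Kahn, 1970): to recover
# joint and common costs with the least distortion of demand, set
# (p - mc) / p = k / elasticity for each service, where the Ramsey number
# k is chosen to meet the overall revenue requirement.
def ramsey_markup(elasticity, k):
    """Markup over marginal cost as a fraction of price."""
    return k / elasticity

K = 0.1  # hypothetical Ramsey number set by the revenue requirement
services = {
    "basic access (price-insensitive)": 0.2,   # assumed elasticity
    "long distance (price-sensitive)": 1.5,    # assumed elasticity
}

for name, elasticity in services.items():
    print(f"{name}: markup = {ramsey_markup(elasticity, K):.1%} of price")
```

With these assumed numbers, efficient pricing would load a 50 percent markup onto inelastic basic access and under 7 percent onto elastic long distance, whereas the historical structure did the reverse, which is the source of the deadweight losses the text describes.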
Advocates of universal service for the Internet or telephony typically make their case on grounds of geography or of income. One can well see why interested parties might argue for geographic subsidization: economic theory suggests that most of the benefits of providing services to isolated areas will be captured by those who own land in those areas. Land with electricity, telephone, and road service is certainly more valuable than land with none of these features, and it is, of course, appealing to those who own the land to have someone else pay for such improvements.
Geographical concerns also flow from the interest in social and economic development in rural areas. This was a past concern in the United States for telephony, and it remains an issue for expansion of new broadband services to rural areas. Rural access to even basic telephony remains a major issue in many developing countries. Whether cross-subsidies are the appropriate means to fund the expansion of rural telecommunications services is an area of ongoing public policy debate.
With respect to income arguments for universal service subsidies, it is also
important to understand clearly that cost may not be the only reason that the poor do not have access to goods such as telephone service. Mueller and Schement (1996) found that a higher fraction of households below the poverty line in Camden, New Jersey, had cable TV service than had telephones. The most important reason for people choosing not to have telephones was that their friends and relatives would make long-distance calls and leave them liable for paying the bill. According to this study, the monthly charge for basic access was not a significant factor in their choice of whether or not to purchase telephone service.
Public policy debate surrounding the 1996 Telecommunications Act raises new and unresolved research questions. For example: Is special funding for schools and libraries necessary, and if so what is the most efficient means for providing it? Should "carrier-of-last-resort" obligations be allocated via auctions (see section 2.4.8, "Auctions")? Some studies that might shed light on this last question include international comparative studies of the effects of various policy approaches and the impact of subsidies in these policy regimes on consumer behavior.
2.3 Business, Labor, And Organizational Processes
2.3.1 Location: Internationalization and Telecommuting
One important way in which information technology is affecting work is by reducing the importance of distance.5 In many industries, the geographic distribution of work is changing significantly. For instance, some Silicon Valley firms have found that they can overcome the tight local market for software engineers by sending projects to India or other nations where wages are much lower. Furthermore, such arrangements can take advantage of time differences so that critical projects can be worked on nearly around the clock. As the day ends in San Jose it is just beginning in Bangalore, India, and so teams can hand off their work electronically to colleagues thousands of miles away (Economist, 1996). Firms can outsource their manufacturing to other nations and rely on telecommunications to keep marketing, R&D, and distribution teams in close contact with the manufacturing groups. Thus the technology can enable a finer division of labor among countries, which in turn affects the relative demand for various skills in each nation.
Although there are a number of case studies examining this phenomenon, quantitative evidence is desirable. For example, how many programmers are there in India who are actually working for Silicon Valley firms? And to what extent is this phenomenon of redistribution of employment linked specifically to information technology as opposed to other factors?
While the international redistribution of work is clearly significant, an equally important, if less obvious, effect of the technology is in enabling a redistribution of work within countries such as the United States. This shift began with the
advent of inexpensive telephony, nationwide toll-free telephone numbers, and the ability to network dispersed call centers to corporate databases (CSTB, 1994a). Technology has enabled many financial services firms, for instance, to move jobs out of lower Manhattan: some data entry work is being shifted to New Jersey, while large call centers are being set up in North Dakota and other midwestern states. Other industries are similarly affected. A majority of the large national hotel chains have set up their reservations centers in Omaha, Nebraska, leveraging large computer databases and cheap long-distance telephone service.
Even as it becomes easier to transmit data across both state and international borders, tracking or regulating these activities remains very difficult. A given collection of bits being sent from the United States to India may encode a child's drawing being sent to a proud grandmother or the latest version of a million-dollar software program. Furthermore, as discussed above, the technology enables various types of work and employment to be decoupled from one another. The net result is that the firms have greater freedom to locate their economic activities, creating greater competition among regions in infrastructure, labor, capital, and other resource markets. Most interestingly, it also opens the door for more "regulatory arbitrage": firms can increasingly choose which tax authority and other regulations apply. The emergence of a significant Web-based off-shore gambling industry is only one example of this phenomenon.
The reduced importance of distance also has implications for the residence patterns of Americans and for the demand for transportation services. Telecommuting is being touted in California as a way to ease traffic congestion (Smart Valley, 1994; see also CSTB, 1994c). Telecommuting may significantly reduce the need for workers to locate their residences within automobile or public transit range of their employer's workplace.
As creative workers of all types and service workers ranging from telephone receptionists and data entry operators to management consultants, accountants, and lawyers find that they can realistically do most of their work at home rather than in a centralized workplace, the demand for homes in climatically and physically attractive regions could increase dramatically. Box 2.3 discusses possible consequences of such a shift.
The ability to do most of one's work at home will reduce but not eliminate the need for face-to-face meetings with fellow employees and customers. Telecommuters are likely to need to travel more for business meetings than are similarly employed office workers. Typically these trips will be long distance, not just because the favored residential locations of telecommuters are likely to be remote, but also because the enterprise employing the telecommuter is likely to be less centralized than under the current system. The demand for business travel and accommodations should increase. The location of these periodic business meetings will increasingly be urban areas such as Boston, New York, Washington, New Orleans, Chicago, Los Angeles, San Francisco, and Seattle, and quite possibly resort business centers. Those urban centers should thrive, while smaller or less attractive cities will share the fate of the older suburbs and struggle with declining populations, jobs, and property values.

BOX 2.3

One can anticipate a shift of population away from the metropolitan areas to bucolic agricultural settings (rural Vermont, the California wine country, fishing villages), to resort areas (Aspen, Monterey, Sedona), and to the sunbelt and beachfront. Just as the automobile, superhighways, and trucking helped shift population out of the central city to the suburbs in the 1950s, the computer, the information superhighway, and modems will help shift population from the suburbs to more remote areas.

The consequences of a shift in employment from the suburbs to more remote areas would be profound. Property values would rise in the favored destinations and fall in the suburbs. Individuals and groups interested in preserving the rural, historical, or charming aspects of life and the environment in the newly attractive areas could face new pressures. Since most of those able to relocate with the advent of realistic telecommuting would be among the better educated and higher paid, the demand for high-income and high-status services in these areas should soar. Gourmet restaurants, clothing boutiques, high-income grocery stores, office supply depots, topflight medical services, and venues for artistic and cultural events should thrive. But so, too, would services of all types expand. These new magnet areas would need more gas stations, gardeners, churches, housekeepers, construction workers, and postal employees. This would create new and expanding job opportunities for the local population, but more significantly the telecommuters would bring with them many others from the cities and suburbs who would not themselves be telecommuting. This economic boom for rural and resort communities would be mirrored by a decline in property values and population in the least attractive areas of the central cities and suburbs. The exodus and the resulting decline in the tax base would reduce local government revenues, perhaps leading to a decline in governmental services, and would exacerbate the economic differences between the mobile segment of the population and those who remain in the older suburbs.
Telecommuting will enable not only shifts of hundreds or thousands of miles, but also shifts of only a few miles within cities. Information technology could also create employment opportunities for people in the inner city who may lack adequate means of transportation to the outlying areas where new jobs are being created.
There is a need for interdisciplinary research to further examine the impacts of widespread telecommuting (see Box 2.4). The political, social, and economic consequences seem likely to be quite significant, and the potential for a sudden rise in telecommuting within the next few years seems high. Insights might well be gained by a renewed study of the causes and consequences of urbanization and suburbanization in our past.

BOX 2.4

By reducing the fixed cost of employment, widespread telecommuting should make it easier for individuals to work on flexible schedules, to work part time, to share a job (perhaps with their spouse), or to hold two or more jobs simultaneously. Since changing employers would not necessarily require changing one's place of residence, telecommuting should increase job mobility and speed career advancement. Some think that this increased flexibility will reduce job stress and increase job satisfaction. That would be a welcome outcome in itself, and since job stress is a major factor governing health, there may be additional benefits in the form of reduced health costs and mortality rates. These impacts also seem ripe for research by social scientists, particularly demographers, and by medical researchers.

If the increased flexibility of work schedules is great enough, there may be some profound effects on retirement and saving behavior as well. With a reduced and less stressful workload, the elderly may wish to continue working well past the conventional retirement age. If so, they may choose to save less during their peak earning years because their need for asset income in late life would be reduced. Those who enter telecommuting early enough to purchase or build residences in remote areas before property values rise there will experience substantial capital gains on their homes and for this reason also might feel less need to save from current income. The implications of such developments for company pension systems and the social security system need study.
It is clear that widespread use of telecommuting, even in its nascent stages, will have a major impact on how organizations function. The evolution of organizational forms in response to increased use of networking needs detailed study. For example, how is work being done at a distance monitored and evaluated? One implication is that work tasks could be designed in such a way that there is an identifiable output that the home worker or the telecommuter is delivering.
2.3.2 Labor and Information Technology
Labor Market and Information Technology
The popular press is filled with anecdotal evidence about the negative effects of information technology on employment. However, relatively few attempts have been made to address these issues scientifically.
Wolff (1996) conducted a careful analysis of the effect of computerization on the composition of the labor force during the period from 1950 to 1990 using U.S. census data.6 The first step in his analysis was to classify the census data into
267 occupations and 64 industries, which were then further refined into the categories "knowledge workers," "data workers," and "goods and service workers." "Information workers" refers to the sum of knowledge and data workers.
Wolff (1996) found that from 1950 to 1990, employment of knowledge workers grew at 3.1 percent a year, while that of data and service workers grew at 2.6 percent a year. Employment of goods producers increased at a rate of only 0.3 percent per year. Overall, in 1950, 37 percent of the employed labor force was information workers; by 1990, 55 percent of the labor force was information workers.
What accounts for this dramatic change in labor market structure? Wolff (1996) showed that the total growth in employment could be decomposed into three effects: a "substitution effect" that describes the extent to which industries have substituted information labor for other types, a "productivity effect" that accounts for changes in productivity growth among different industries, and an "output effect" that accounts for the change in the composition of final output.7
Wolff (1996) found that the substitution effect accounted for about half of the growth of the share of knowledge workers in employment. Almost none was due to output effects. In other words, most of the increase in knowledge workers was due to the fact that employers substituted knowledge workers for other kinds of labor, coupled with the fact that industries that used knowledge workers intensively (e.g., service industries) had lower rates of growth in labor productivity than did goods-producing industries. Analysis of this sort is very important in understanding labor force changes and can be a significant input into policy analysis.
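The logic of such a decomposition can be sketched with a small, purely illustrative calculation. The industry shares and within-industry worker shares below are invented for exposition (they are not Wolff's data, which covered 64 industries and 267 occupations), and only the substitution and industry-mix terms of his three-part decomposition are shown:

```python
# Illustrative shift-share decomposition of the change in the
# information-worker share of employment, in the spirit of Wolff (1996).
# All numbers are hypothetical.

industries = ["goods", "services"]

# share of each industry in total employment
emp_share_1950 = {"goods": 0.60, "services": 0.40}
emp_share_1990 = {"goods": 0.35, "services": 0.65}

# information-worker share WITHIN each industry
info_share_1950 = {"goods": 0.20, "services": 0.60}
info_share_1990 = {"goods": 0.35, "services": 0.75}

def economywide_share(emp, info):
    """Information-worker share of total employment."""
    return sum(emp[i] * info[i] for i in industries)

total_change = (economywide_share(emp_share_1990, info_share_1990)
                - economywide_share(emp_share_1950, info_share_1950))

# "substitution" (within-industry) effect: hold the industry mix fixed
# at its 1950 weights and let only within-industry shares change
substitution = sum(emp_share_1950[i] * (info_share_1990[i] - info_share_1950[i])
                   for i in industries)

# industry-mix ("output") effect: hold within-industry shares at their
# end-period values and let only the industry mix change
industry_mix = sum((emp_share_1990[i] - emp_share_1950[i]) * info_share_1990[i]
                   for i in industries)

# the two terms sum exactly to the total change
assert abs(total_change - (substitution + industry_mix)) < 1e-12
print(f"total change: {total_change:.3f}")
print(f"  substitution effect: {substitution:.3f}")
print(f"  industry-mix effect: {industry_mix:.3f}")
```

Wolff's full analysis separates out a productivity term as well, but the identity illustrated here, total change equals within-industry substitution plus between-industry shift, is the core of the method.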
Wage Inequality and Information Technology
One of the most striking changes in the economic landscape of the United States over the past 20 to 30 years has been a dramatic increase in inequality of earnings. A growing number of researchers suspect that technological change in general, and computerization in particular, may be part of the explanation.
Although there was relatively little change in the dispersion of wages in the 1950s and 1960s, starting in the 1970s wage inequality increased rapidly. Per capita income and family income show similar patterns. Interestingly, the changes in inequality are evident across virtually every income subgroup: wages at the 95th percentile increased relative to those at the 90th percentile, which in turn outpaced those at the 85th percentile, and so on down to the poorest people in the country (see Figure 2.2).
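The kind of percentile-by-percentile comparison described above can be made concrete with a short computation. The wage sample here is synthetic (drawn from a log-normal distribution), not actual census data:

```python
# Illustrative computation of percentile wage ratios, the kind of summary
# statistic used to document a "fanning out" of the wage distribution.
import random

random.seed(0)
# synthetic wage sample; real studies use census or CPS microdata
wages = sorted(random.lognormvariate(10, 0.6) for _ in range(10_000))

def percentile(sorted_sample, p):
    """Nearest-rank percentile of an already-sorted sample."""
    k = round(p / 100 * len(sorted_sample)) - 1
    return sorted_sample[max(0, min(len(sorted_sample) - 1, k))]

# dispersion at the top of the distribution versus the middle
p95_p90 = percentile(wages, 95) / percentile(wages, 90)
p90_p50 = percentile(wages, 90) / percentile(wages, 50)

print(f"95th/90th percentile wage ratio: {p95_p90:.2f}")
print(f"90th/50th percentile wage ratio: {p90_p50:.2f}")
```

Tracking such ratios (95/90, 90/50, and so on) across successive cross sections is one standard way of documenting whether the distribution is widening at every point, as Figure 2.2 suggests.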
Although the rise in inequality has been well documented in the academic literature (see Levy and Murnane, 1992, and Gottschalk, 1997, for reviews), there is not yet a consensus as to its causes. However, much of the increase in inequality seems to be related to a growing premium for skilled workers throughout the economy. For instance, the premium paid to college-educated workers and those with more experience has grown significantly. Since skilled workers were already
at the top of the wage distribution, any increase in their relative wages will tend to increase overall inequality. Furthermore, because the supply of college-educated workers has grown over the past several decades, the fact that their relative wages have increased indicates that the demand for such workers has grown even faster.
The increased demand for these workers cannot easily be explained by changes in the composition of industry output or other observable factors. "Technical change," typically a residual in wage equations, has been left as the best explanation. More direct evidence that technical change is behind the growth in inequality was provided by Berman et al. (1994), who found that the rate of investment in computers was positively correlated with higher demand for skilled workers relative to less skilled workers. This finding runs counter to the claim that computers "deskill" work. Although deskilling clearly occurs in certain applications, the findings of Berman et al. suggest that it is not the norm on average.
The mechanism by which computers increase the relative demand for skilled work is still not well understood. There are at least five distinct possibilities:8
Computers may be disproportionately automating routine, rote jobs, thus reducing the demand for low-skill workers, but not much affecting high-skill workers. Indeed, Mark (1987) found that industries with significant changes in technology typically experienced significant reductions in the number of basic production workers but not in the number of more skilled workers.
Computers may complement highly skilled work, making it more valuable. For instance, after controlling for observable characteristics, Krueger (1993) found that workers using a computer were paid, on average, 20 percent higher wages.
The direction of causality could instead go in the opposite direction, with high-paid workers getting computers first, as a signal to their colleagues of their skills. Feldman and March (1981) put forth such a thesis years ago.
Computers may facilitate a winner-take-all effect by reducing communications and search costs. As a consequence, the "best" experts in each field can compete over broader geographic regions, displacing the local expert and capturing higher earnings in the process. This explanation seems most applicable to the few "superstars" at the top of the earnings profile.
Computers may enable broader reorganization of work, which may change the relative demand for various skills. For instance, decentralization of decision making, use of lateral communications, and greater outsourcing of activities may reduce the need for people who can follow instructions carefully and increase the need for people who can solve problems independently.
Existing measures of income inequality do not consider the nonmarket goods and services that people consume. However, a significant share of an individual's well-being may be derived from factors that are not counted in traditional income. For instance, if employees find that work done with computers is more (or less) enjoyable than it was before the work was computerized, then this perception may exacerbate (or mitigate) the increase in inequality. Unfortunately, there is relatively little data on the extent to which people value nonmarket services and virtually none on how computerization might affect such services.
Effects of Information Technology on Labor and Skill Demand
The evidence on information technology and wage inequality suggests that computers may be partly responsible for the relative increase in the demand for skilled, educated workers. However, thus far, the measures of "skill" and "education" are fairly coarse. Further research is needed to enhance understanding of what skills information technology complements (Levy and Murnane, 1997). Traditional skills, such as math and reading skills, appear to be important, but
other skills, such as the ability to work well with others, may also be significant factors.
If computers increase the demand for educated workers, then a national policy of increasing investment in education, especially at the bottom of the educational scale, will help with two national goals that historically have been in opposition: increasing economic output and decreasing inequality. Simply spending more on education or requiring students to spend more time in school may help with these problems, but a major research gap exists in understanding how to make education more efficient. Relatively little is known about what types of courses and what methods of instruction are most effective in providing the education and skills sought by the marketplace. One does not have to believe that economics is the only driver of educational choice by policy makers and students to believe that it is one important consideration. (See also the discussion of the role of information technology in education in section 2.1.4.)
2.3.3. Organizations and Processes
As noted in Chapter 1, while the rapid pace of technological change in computing and communications has been astounding, it has been hard (at least until very recently) to find evidence for improvements in productivity as a result of these dazzling technological innovations.9 A fundamental problem is that many of the variables typically measured are becoming less relevant in the emerging information economy. For instance, Walter Wriston, the former chief executive officer of Citibank, has quipped, "When I was a kid in the bank, the key economic indicator we looked at was freight-car loadings. Who the hell cares about them now?" (Stewart, 1997).
In today's economy, technology, knowledge, skills, and organizational competencies are often more important resources than land, labor, and traditional capital. Economic output is becoming increasingly "unmeasurable" as gross domestic product shifts from mining, manufacturing, and agriculture to services (Griliches, 1994). Even in manufacturing, intangible components of value, such as product variety, customization, timeliness, and quality have become more important. Similarly, intangible assets, ranging from software, copyrights, and patents to worker skills, customer relationships, and organizational knowledge, are increasingly recognized as important inputs.
Direct measurement of these intangible assets remains elusive, but one indicator is the substantial increase in the ratio of the financial valuation of firms (stock market equity and value of outstanding debt) to the book value of these firms (plant, property and equipment, cash, inventories, and other measured assets) over the past decade. Apparently, investors in the financial markets believe that an increasingly small fraction of firms' true assets are accounted for by
traditional metrics. Interestingly, a recent study (Brynjolfsson and Yang, 1997) found that changes in the value of firms' computer assets correlated strongly with changes in the implied extent of intangible assets.
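The indicator described here, the gap between a firm's financial valuation and its book value, amounts to simple arithmetic. The two firms and all figures below are hypothetical and are meant only to show the calculation:

```python
# Back-of-the-envelope indicator of intangible assets: the gap between a
# firm's financial valuation (equity plus outstanding debt) and the book
# value of its measured assets. All figures are hypothetical.

firms = {
    # name: (financial valuation, book value), both in $ billions
    "FirmA": (120.0, 30.0),
    "FirmB": (45.0, 40.0),
}

# ratio of financial valuation to book value
valuation_ratio = {name: fv / bv for name, (fv, bv) in firms.items()}

# valuation not accounted for by measured assets
implied_intangibles = {name: fv - bv for name, (fv, bv) in firms.items()}

for name in firms:
    print(f"{name}: valuation/book = {valuation_ratio[name]:.2f}, "
          f"implied intangibles = ${implied_intangibles[name]:.0f} billion")
```

A rising valuation-to-book ratio across many firms is what suggests that conventional accounting is capturing a shrinking fraction of firms' true assets.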
However, there is no reason that the "unmeasurable" outputs and inputs must remain so. As the benefits to businesses of intangibles such as those mentioned above have grown, and as the costs of overhead have come to dominate direct labor and materials costs, researchers such as Robert Kaplan have helped transform managerial accounting to better reflect these realities (Kaplan, 1989). A similar rethinking of our national economic accounts is in order.
The issue of unmeasured improvements in quality, of particular importance in evaluating the output of services, has also received considerable attention from economists. Section 1.1.1 describes how improvements in computing technology have reduced the real cost of computing. Recently, economists have begun making the same sorts of adjustments in other cases, such as for pharmaceuticals (Berndt et al., 1996), as well as for various particular types of medical care such as heart procedures. When patient survival rates and quality of life are considered, they typically find that "real" medical prices have actually been falling, or at least not rising as rapidly as previously thought, and that "real" output and productivity have been growing impressively (Cutler and McClellan, 1996; Cutler et al., 1996). Similar work could be done for information technology, a task made easier now that advances in IT enable better tracking of quality improvements.
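The quality-adjustment logic can be illustrated with a hypothetical medical procedure; none of the figures below come from the studies cited:

```python
# Sketch of quality adjustment: the nominal price of a (hypothetical)
# heart procedure rises, but once the improvement in outcomes is factored
# in, the quality-adjusted ("real") price falls.

nominal_price_1980 = 20_000.0
nominal_price_1990 = 30_000.0

# crude quality index: expected quality-adjusted life years (QALYs)
# gained per procedure
qaly_gain_1980 = 2.0
qaly_gain_1990 = 4.0

# price per unit of quality, the quality-adjusted price
price_per_qaly_1980 = nominal_price_1980 / qaly_gain_1980
price_per_qaly_1990 = nominal_price_1990 / qaly_gain_1990

nominal_change = nominal_price_1990 / nominal_price_1980 - 1
real_change = price_per_qaly_1990 / price_per_qaly_1980 - 1

print(f"nominal price change: {nominal_change:+.0%}")
print(f"quality-adjusted price change: {real_change:+.0%}")
```

The same arithmetic underlies quality-adjusted price indexes for computing, where performance per dollar serves as the quality measure.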
Finally, even as information technologies are helping to make organizations more economically efficient than ever before, many people are questioning whether other important goals and values are being overlooked. For instance, social fragmentation, increased inequality, environmental degradation, and greater emotional stress are sometimes associated with advances in technology. A single-minded focus on economic efficiency, narrowly defined, may come at the expense of other goals valued by society. Indeed, technologies need not serve only as generators of wealth. They also can serve other societal functions such as increased equity.
Further compounding the question, technology is giving us new options by relaxing many long-standing constraints. Thus, there is a strong need today for clear thinking about what goals organizations can and should serve. Perhaps a portion of research funding should be reserved for work that contemplates not only how technology can make our lives better, but also what society means by "better."
Historical Perspective: The Computer and the Electric Dynamo
Although the various scientific and engineering disciplines excel at producing technological developments, understanding how or whether these developments lead to greater productivity is quite a different matter. Replacing old information technology with new while retaining the same work practices and
organizational structure may have little impact on productivity. The true measure of the technology's worth can be evaluated only when work practices and organizational structures are revamped to best take advantage of the flexibility and power of today's computing and communications technologies.
The subtle process of extracting productivity gains out of technological advancements has been the subject of much scrutiny by social scientists. One particularly relevant and illuminating study is by David (1990). This historical analysis discusses the electric dynamo and its role in an earlier "productivity paradox." At the turn of the century, electrification was seen, much as computers are today, as a transformational technological advance whose impact would soon be widely felt. However, factory electrification did not have much impact on productivity growth in manufacturing before the 1920s.
The proximate source of the delay in the exploitation of the productivity improvement potential incipient in the dynamo revolution was, in large part, the slow pace of factory electrification. The latter, in turn, was attributable to the unprofitability of replacing still serviceable manufacturing plants embodying production technologies adapted to the old regime of mechanical power derived from water and steam. (David, 1990)
The first phase of electrification (from the mid-1890s to 1920) mainly utilized the group drive system of power transmission, with one motor powering many pieces of equipment. This way of shifting to electric motors entailed minimal changes to the basic factory design, and essentially replaced the old mechanical power system with an electric one. However, in the 1920s, the "unit drive" approach, with individual motors powering each piece of equipment, was widely adopted. The benefits of this approach were not limited to the immediate savings associated with greater energy efficiency. In fact, the greatest benefits derived from the ability to build lighter, more modular, single-story factories using this new technology. Learning how best to utilize this flexibility was not immediate:
Although all this was clear enough in principle, the relevant point is that its implementation on a wide scale required working out the details in the context of many kinds of new industrial facilities, in many different locales, thereby building up a cadre of experienced factory architects and electrical engineers familiar with the new approach to manufacturing. (David, 1990)
The analog of this story for the computer has yet to be written. While there are encouraging reports (see CSTB, 1994a; Brynjolfsson and Hitt, 1996, 1997) that perhaps the productivity paradox is no more, there remains substantial work to be done in understanding what work practices and organizational structures are optimal in this new computer age. How will the modern counterpart to David's "cadre of factory architects" be developed? That is the challenge facing the social scientists studying the new industrial frontier.

BOX 2.5

Perhaps the most important potential impact of computers and communications on organizations is a shift in the very concept of "organization" as an economic and social entity. Once considered to be semipermanent and routinized by definition, ideal organizations increasingly have come to be seen as flexible, change-oriented, and able to shift their boundaries, alliances, and partnerships rapidly to meet changing conditions. Computer and communication technologies increasingly permit anytime, anywhere communication, synchronous and asynchronous collaboration, and tight linkages in operational processes within and between organizations (e.g., manufacturers and their suppliers and distributors, and manufacturers and the direct buying public). The concept of the "adhocracy," a fluid organization in which members come and go as interests change, has emerged as a competitor to the concept of bureaucracy. After many decades in which increasing vertical integration of production and growth were totems of success, many organizations have divested themselves of every function that was not a core competence and that could possibly be "outsourced" or bought on the market. Small really did become beautiful, at least in principle. Young entrepreneurs who started little companies in their garages built novel ideas into huge companies and fortunes, capturing the imagination of the world. And the mighty, such as AT&T, IBM, and GM, appeared shaken as the world they had built started to collapse around them.

Yet, as recent history has shown, organizations such as AT&T, IBM, and GM have by no means been pushed aside by the changes of the information age. They have adopted and adapted the technologies and harnessed them in ways that have allowed radical downsizing of work forces while retaining, and in some cases enhancing, top management control over firms' performance and profitability. And the start-up companies created in garages have found it necessary to adopt time-honored aspects of organizational hierarchy in order to function effectively. This lesson from recent history reveals an important but frequently overlooked aspect of the information revolution: its revolutionary character is being channeled through pathways established by powerful social and institutional forces that are not necessarily swept aside by the effects of technology, no matter how powerful those effects are.

Much of the rhetoric about profound change in organizations has been speculative and undisciplined, based more on idealized views of what organizations ought to be than on the practical realities that shape organizational form and function. One can construct scenarios of organizational demassing and decentralization, but one can just as easily construct sound arguments that computer and communication technologies give new life to the traditional bureaucracy. Functions normally carried out by middle managers (information gathering, decision making within directives, communications with lower-level staff, and monitoring and upward reporting of activities carried on below) can be replaced to some extent with technology. The resulting "flattening" of the organization through the elimination of middle managers has been said to bring greater "empowerment" of remaining employees. But technological change can just as easily allow significant increases in organizational centralization, tighter monitoring of employee activity, more effective enforcement of compliance with the desires of top management, and the redesign of tasks in ways that make it difficult for employees to act outside of prescribed patterns.

There are reasons to be confident about profound changes under way in the character and concept of organizations as a result of new computer and communications technologies. At the same time, it is important not to let ideological enthusiasm substitute for careful reasoning and empirical research.

SOURCE: John Leslie King and Kenneth L. Kraemer, "Computer and Communication Technologies: Impacts on the Organization of Enterprises and the Establishment of Civil Society" (see Appendix C of this volume)
Information Technology and Organizational Structure
Both firms and markets can be thought of as sophisticated institutions for processing information about desires, costs, capabilities, and constraints (Galbraith, 1977; Hayek, 1945; Radner, 1992; Sah and Stiglitz, 1986). Thus, given the fact that the cost of computer processing of information has declined by several thousandfold over the past three decades, it would be surprising if these institutions did not change. Several researchers have developed models that predict how changing information processing and communication costs are likely
to affect firms and markets (e.g., Malone, 1987; Brynjolfsson, 1996a; Bakos, 1997).
Not only are markets and firms changing, but new structures, some never before feasible or even imagined, may also emerge. Research is needed to extend existing theories about organizations and to learn from the organizational changes and experiments that are already happening (see Box 2.5). Results of such research would provide a sound basis for corporate "re-engineering" efforts that seek to make better use of information technology.
Recent research on the effects of information systems has highlighted the importance of complementarities. Theory (Milgrom and Roberts, 1990), case studies (Brynjolfsson et al., 1997; Orlikowski, 1992), and econometric/statistical analyses (Brynjolfsson and Hitt, 1997) indicate that the effects of information technology depend significantly on other organizational factors such as organizational form, communications practices, and the education and training of the
work force. Therefore, research on the organizational impacts of IT should consider these complementary factors whenever possible. Understanding the nature of the complementarities is critical to being able to make predictions about the social and economic impacts of IT. For instance, if IT is complementary to an organizational structure for which teams and decentralized decision making by workers are important, then as IT becomes cheaper, one would expect that there would be increasing demand for people who work well in teams and who have the skills and education needed to make them effective decision makers.
One of the difficulties researchers face is that the data on the organization of work are fragmented and disorganized. There is a clear need for a compilation of what data already exist in this area. A project that sought to catalog, or better yet, assemble and make available, the key data sets on this topic could reduce duplication of efforts, speed research, and help settle debates that stem from misunderstandings about the basic evidence.
Time series data, even short time series, would be especially valuable in clarifying the role of technology in some of the organizational changes that are observed. For instance, better time-budget studies could help address the ongoing debate as to whether the average worker is spending more hours or fewer hours at work. Data gathered from employers indicate that the work week has shrunk, but employees report that they are working more hours. The discrepancy may be owing to employees working more at home or working for multiple employers, or it may have some other cause.
Another possible source of data is exit interviews, during which workers could be asked how technology has changed the nature of the work they do. Firms and social scientists could collaborate on survey design and data collection; results would benefit the participating companies as well as the broader research community.
In recent years, a number of striking examples have emerged of how information technology can be combined with the redesign of organizational processes or the invention of new processes to transform the way organizations work. Although initially uncommon and perceived as radical, ideas such as just-in-time inventory control and concurrent engineering10 have become accepted as "best practice" (Carter and Baker, 1991). How can the new organizational possibilities enabled by the continuing, dramatic improvements in information technology be developed, understood, and exploited? Given time, managers and employees of companies will certainly develop new ways of working that take advantage of these new opportunities. For more rapid progress on these problems, however, it is useful to develop a more systematic foundation for understanding organizational processes. In order to understand successful organizational
practices, one must be able to recognize and represent organizational practices that are observed, and imagine alternative ones.
While data sets exist on individuals, firms, industries, and nations, there is a substantial gap in the availability of data at the level of business processes. Business processes are a distinct unit of observation from firms or individuals. Not only does an individual firm involve numerous business processes, but many business processes also cut across the firm's boundaries. The creation of a database with information on business processes that included numerous examples of how different groups and companies perform similar functions could enable new research directions. The database should provide a taxonomy for categorizing business processes and ideally should be designed so that multiple researchers can access, add to, and comment on the data.
In fact, a prototype for such a database, a process handbook, has been created by Thomas Malone and colleagues at the Massachusetts Institute of Technology. The handbook is intended to help people imagine new organizations and organizational practices, redesign existing organizations, share ideas and "best practices" about organizational processes, and generate or select software to support or analyze these processes.
One key feature of this representation technique is that it not only breaks down activities into process subparts ("subactivities") but also adds the concept of specialization: differentiating a process into various specific ways of doing the process ("specializations"). For example, specializations of the process labeled "sell something" include "sell by retail store" and "sell by mail order." The technique's second key feature is that it characterizes dependencies between processes and ways of managing these dependencies, also known as "coordination" mechanisms (e.g., Malone and Crowston, 1994). A dependency exists, for example, when an item produced by one process must be made available to another at the right time. Two alternative coordination approaches to handling this dependency would be to produce the good to order or to produce the item to be held in inventory until it is required.
The work provides an approach to analyzing processes at various levels of abstraction, thus capturing both the details of specific processes and the "deep structure" of their similarities. A primary advantage of the approach is that it allows people to represent explicitly the similarities (and differences) among related processes and to find or generate sensible alternatives for how a given process could be performed.
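The decomposition-plus-specialization idea, and the treatment of dependencies as objects with attached coordination mechanisms, can be conveyed in a short sketch. The Python below is illustrative only; the class names, the inheritance rule for subactivities, and the coordination labels are assumptions for exposition, not the actual schema of the process handbook.

```python
# Minimal sketch of a process-handbook-style representation.
# Names and structure are illustrative, not the handbook's actual schema.

class Process:
    def __init__(self, name):
        self.name = name
        self.subactivities = []    # decomposition: the parts of this process
        self.specializations = []  # specific ways of doing this process

    def add_subactivity(self, part):
        self.subactivities.append(part)
        return part

    def add_specialization(self, variant):
        # A specialization inherits its parent's subactivities by default;
        # a real system would let the variant override or refine them.
        variant.subactivities = list(self.subactivities)
        self.specializations.append(variant)
        return variant

class Dependency:
    """A flow dependency: an item produced by one process must be made
    available to another, managed by a named coordination mechanism."""
    def __init__(self, producer, consumer, coordination):
        self.producer = producer
        self.consumer = consumer
        self.coordination = coordination  # e.g., "make to order", "make to stock"

# "Sell something" decomposed into subactivities and specialized two ways.
sell = Process("sell something")
sell.add_subactivity(Process("identify customers"))
sell.add_subactivity(Process("deliver product"))
retail = sell.add_specialization(Process("sell by retail store"))
mail_order = sell.add_specialization(Process("sell by mail order"))

# The inventory example from the text: one way to manage the dependency.
dep = Dependency(Process("produce item"), Process("use item"), "make to stock")
```

Representing specializations explicitly is what lets a user of such a database move between levels of abstraction: generic processes capture the "deep structure," while specializations hold the details of particular organizations.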
Results from this project suggest that such databases are both technically feasible and managerially useful (Malone et al., 1997). The process handbook project has developed a series of software tools for storing and manipulating processes, which have been used to represent more than 2,000 activities of both generic processes and specializations of these processes from specific organizations.
Because empirical research in this area is often constrained by lack of data
on organizational processes used by firms, the further development of data sets like the process handbook could engender significant new research on the impacts of information technology that would not otherwise be possible.
2.3.4 Social Science and the Workplace
Differential Impacts Within the Workplace and Professional Communities
Section 2.1.2 discusses the differential impacts of information technology in the home. Not surprisingly, the same phenomenon of differential impacts owing to the use of information technology is also found in the workplace.
Scientists are one set of information technology users for whom the differential impacts in a professional setting have been studied. In structured interviews with 67 scientists in 1991 and 1992, Walsh and Bayma (1997) showed that scientists in four scientific fields (mathematics, experimental biology, chemistry, and particle physics) used computer networks quite differently. They identified four attributes of scientific disciplines that predicted lesser or greater use: (1) size of discipline (smaller disciplines used computer networks more than larger ones); (2) market penetration (disciplines in which property rights and financial rewards to research findings were more closely linked used computer networks more); (3) locus of information (disciplines in which most research findings are produced at the laboratory bench used computer networks less); and (4) technical limitations (disciplines that rely on photographs and drawings used computer networks less).11
Even within a single scientific field, e-mail can differentially benefit scientists who otherwise could be at an information disadvantage. For example, according to one study, physical oceanographers who work at inland universities or laboratories derived more benefit from each additional e-mail message sent or received than did oceanographers who worked at coastal universities or laboratories, when standard measures of scientific productivity, such as number of journal articles published, were considered (Hesse et al., 1993).
Differential impacts have been observed for shift workers as well. Shift workers in a municipal government derived more benefit from each additional message sent (although not received) than did day workers, as gauged by standard measures of organizational commitment and well-being (Huff et al., 1989).
Because e-mail omits direct information about social status, such as age, physical appearance, and gender, that inevitably accompanies face-to-face communication, it can differentially benefit people who otherwise could be socially marginalized. Younger oceanographers derived more benefit from each additional e-mail message than did older, more established ones (Hesse et al., 1993). Physically disabled members of a multiyear program of computer communication derived more benefit than did nondisabled members (Earls, 1990). Members
of a retirement-planning task force who were themselves recent retirees derived more benefit from use of e-mail than did their still-employed counterparts (Bikson et al., 1991).12
Romm and Pliskin (1997) have examined the use of e-mail in the workplace. They suggest that several characteristics of e-mail (speed of delivery, ease of sending to a large number of addressees, ease of adding commentary to messages and forwarding them on, and ease of control over which versions of messages are sent to which recipients) make it a powerful political tool. Used deliberately in this way as a tool for "virtual politicking," e-mail can increase the power of employees relative to that of management. The adoption of information technology to improve the performance of organizations will also have significant, differentially distributed impacts on the individuals within those organizations.
Technical Support Communities
Social science methodology associated with the study of communities has proven useful in improving business performance. In the mid-1980s, anthropologist Julian Orr conducted extensive field work among Xerox Corporation service repair technicians (Orr, 1990). One of his findings was that technicians never relied exclusively, or sometimes even at all, on the company-provided service manuals when troubleshooting machine problems. Often the manuals were out of date or did not address local, idiosyncratic problems. Rather than using manuals, or in addition to using them, they relied on war stories passed from technician to technician in an oral storytelling culture. Orr pointed out the value of these stories to corporate management, noting that they represented an important intellectual resource that the company should capitalize on.
Partly inspired by Orr's work, a team of Xerox developers spent a long time observing and talking with service technicians, learning what would be useful to them from their point of view. Based on their fieldwork they built a system to leverage technicians' local knowledge through a community-validated "tips" database. A "tip" is a problem-cause-solution case that is voluntarily written and submitted by anyone in the field service organization and validated by technical specialists. When the tip is released to the field, it carries in it the name of both the submitting technician and the validator. In one field trial of the tips system with 1,300 field support people, Xerox found that about 15 percent of the employees submitted tips and that the tips database was accessed more than 1,000 times a day (Bell et al., 1997).
The technology per se in this system is not at the forefront of computer science. Both developers and managers attribute the success of this system in part to the effort to take seriously social science ideas about community. They learned that local knowledge conveyed in the community vernacular by community members is useful to technicians troubleshooting unfamiliar problems. And so the system was designed to support vernacular content. They learned that
community knowledge could spread much more rapidly than standard corporate publication or validation cycles. And so the system was designed to include many human validators in order to ensure very short validation cycles. Initially, developers and managers worried that they might have to provide economic incentives for technicians to contribute tips. But they learned that technicians value the social validation that comes from other community members who appreciate their tips. And so the system was designed to ensure that people contributing tips could be maximally visible to others in their community. In the first deployment of the tips system, the corporation collected data suggesting that use of the tips database was responsible for a significant improvement in this group's service performance. The corporation has subsequently begun deploying variants of this system in many of its service organizations.
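The structure of the tips system described above (problem-cause-solution cases, attributed submission, and specialist validation before release to the field) can be outlined in a few lines of code. This Python sketch is illustrative only; the field names and workflow are assumptions, not the schema of the Xerox system.

```python
# Illustrative sketch of a community-validated "tips" database.
# Field names and workflow are assumptions, not the actual Xerox design.

from dataclasses import dataclass

@dataclass
class Tip:
    problem: str
    cause: str
    solution: str
    # The submitter's name stays attached: social recognition, not cash,
    # was the incentive that mattered.
    submitter: str
    validator: str = ""
    released: bool = False

class TipsDatabase:
    def __init__(self):
        self.pending = []   # submitted, awaiting a technical specialist
        self.released = []  # validated and visible to the field organization

    def submit(self, tip):
        self.pending.append(tip)

    def validate(self, tip, validator):
        # Validation releases the tip, carrying both names with it.
        tip.validator = validator
        tip.released = True
        self.pending.remove(tip)
        self.released.append(tip)

db = TipsDatabase()
tip = Tip(problem="intermittent paper jam", cause="worn feed roller",
          solution="replace roller and clean the feed sensor",
          submitter="tech_rivera")
db.submit(tip)
db.validate(tip, validator="specialist_okafor")
```

The design choice of many human validators rather than a central publication cycle is what keeps the time from submission to release short.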
2.4 Information Economy And Society
Advances in information technology raise a number of research issues involving the economic, social, and cultural aspects of how society accesses, uses, and values information. For example, the ability to store and distribute more information more rapidly is leading to concerns about information overload and raising new questions about how information is evaluated by users. Increasingly, information is exceeding the bounds inherent in the written and printed word. New types of media and new technologies for accessing information pose challenges to educators, creators and distributors of information, and policy makers, among others.
2.4.1 Protection of Intellectual Property
Increasing representation of a wide variety of content (e.g., text, images, video, audio) in digital form has resulted in markedly easier and cheaper duplication and distribution of information, with mixed effects on the provision of content. On the one hand, content can be distributed at a dramatically lower unit cost. On the other hand, distribution of content outside of channels that respect intellectual property rights can reduce the incentives of creators and distributors to produce and make content available in the first place. Information technology has raised a host of questions about intellectual property protection, and quite a number of solutions have been proposed. Making appropriate choices requires attention to a range of considerations and perspectives and can be informed by economic and historical analysis.
Economic Analysis of Intellectual Property Rights
The classic economic study of the trade-off between innovation and intellectual property protection is that of Nordhaus (1969), who examined the optimal
length of a patent. Longer-lived patents give producers more incentive to innovate but also lead to longer periods of monopolization. Using a simulation model, Nordhaus studied the benefit-cost trade-off between these two effects and concluded that a patent life of around 20 years was a reasonable middle ground. Subsequently, economists have examined other dimensions of patent policy, such as the scope of patents, the standard for novelty, and so on.
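The flavor of this benefit-cost trade-off can be conveyed with a toy calculation. In the sketch below, the functional forms, the deadweight-loss weight, and the discount rate are all illustrative assumptions, not Nordhaus's actual model: longer patent life induces more innovation (with diminishing returns) but also extends the period of monopoly pricing, so welfare peaks at an interior patent length.

```python
# Toy patent-length trade-off (illustrative assumptions, not Nordhaus's model).
import math

def welfare(T, r=0.05, loss_weight=0.9):
    """Net social welfare as a function of patent life T (years)."""
    innovation = math.sqrt(T)  # induced innovation, diminishing returns in T
    # Discounted social value of the innovation, enjoyed in perpetuity.
    benefit = innovation / r
    # Discounted deadweight loss from monopoly pricing during the T
    # protected years, scaled by how much innovation was induced.
    deadweight = loss_weight * innovation * (1 - math.exp(-r * T)) / r
    return benefit - deadweight

# Welfare rises, peaks, then falls: an interior optimum, as in Nordhaus.
best_T = max(range(1, 61), key=welfare)
```

Under these particular assumptions the optimum falls at an intermediate patent life; the qualitative point, an interior optimum balancing incentive against monopoly loss, survives many choices of functional form, while the specific number does not.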
Whereas patent rights apply to inventions, copyright applies to artistic or literary expression (be it in print, audio, or visual form) and is the dominant form of intellectual property protection for electronic content. Numerous studies have looked at the economic impact of patents, but far fewer such studies have been done on copyright, even though there is currently much legal and policy activity in this area. In part this discrepancy reflects differences in data availability: patent data is available electronically in a centralized database, whereas copyright is granted automatically and does not require the copyright holder to register formally in all instances.13
Even relatively simple cost-benefit examination of some of the issues would contribute to a better understanding of the effects and implications of particular approaches raised in policy debates about intellectual property rights: for example, a new form of copyright protection for data in databases that the Europeans are asking the United States to consider.
Economic considerations are also important in evaluating methods for enforcing requirements attached to the use of copyrighted materials, including technological approaches. One specific approach is the use of secure hardware that enforces specific terms and conditions (Stefik, 1995). However, both theoretical examination (Shy, 1998) and practical experience with devices that provide protection against unauthorized copying suggest that this strategy is problematic in a highly competitive environment. If copy protection imposes inconvenience on users, then new products that do not incorporate copy protection can successfully compete against those that do. This occurred with spreadsheets during the mid-1980s when a rash of new entrants caused Lotus to remove its key-disk copy protection scheme. Knowing how intellectual property protection strategies and market structure interrelate is clearly important for understanding how secure hardware might work in protecting intellectual property rights.
Among the variety of alternatives for protecting intellectual property are fixed licensing rates, media taxes, and statistical sampling. The costs and benefits of these approaches clearly depend on the costs of monitoring use, but little theoretical or empirical work has been done that examines this issue in detail.
Historical Perspectives on Copyright
The challenge of setting appropriate intellectual property policy is not new. Indeed the history of copyright's use is filled with examples that are relevant to today's concerns. Now, as in the past, there are those who question whether
copyright is appropriate at all in the information age. Some would argue that "information wants to be free." But what would be the consequences of all information being free? Hesse (1991) examined experiences following the elimination of copyright in post-revolutionary France.
The French monarchy used copyright as a tool for censorship. Just as in England, the French kings granted monopoly rights to publishers in exchange for the right to censor publications. One of the first acts of the French revolutionaries was to eliminate copyright in 1789. The revolutionaries thought that all people would then be free to publish whatever they wanted without government censors looking over their shoulder.
The consequences were disastrous: literature, especially serious literature, disappeared. The only material published was newspapers, pornography, scandal sheets, and seditious tracts, printed on cheap paper suitable to be read only once and then thrown away. In 1789 more than 100 French novels were published; in 1794 only 16 were published. This collapse of French literary culture was enough to frighten even the revolutionaries, and the government quickly reinstituted copyright and initiated a program to subsidize the production of cultural works.
But within a few years budget cuts forced the elimination of the subsidies. The French government had set the term of copyright to run for 10 years after the author's death. But given the short life spans typical of the period, this term generally allowed for only a single edition to be published. Publishers therefore gravitated toward works in the public domain, as a contemporary observer noted:
Modern publishing consists of all the books that are reprinted endlessly, which are no one's property, and which anyone can make use of … [The publishers] all print the same works … and end up remaindering them. But the public does not even profit from the low prices because the editions are abridged, inaccurate, and poorly produced, which harms the art and the honor of French publishing in the eyes of Europe. (Pierre-Cesar Briand, 1810, as quoted in Hesse, 1991)
The treatment of copyright immediately after the French Revolution shows that the absence of copyright protection can be disastrous. But there are also cases in which information providers have been too conservative in the management of their intellectual property.
Rise of the Novel. The first modern libraries for middle-class readers were created in the late 1700s, soon after the invention of the English novel. English bookstores could not keep up with the demand for novels and romances, and so they started renting them out. These circulating libraries, as they were called, were denounced by the literate classes as "slop shops of literature." They were
also unpopular with publishers and booksellers, but for a different reason. As an observer put it at the time:
… when circulating libraries were first opened, the booksellers were much alarmed; and their rapid increase added to their fears, and led them to think that the sale of books would be much diminished by such libraries. (Knight, 1854)
However, in the long run there is no doubt that the sale of books was not diminished by the circulating libraries, but rather was much enhanced. Before the advent of the circulating libraries there was no low-cost, entertaining literatureand so the common folk had little reason to learn to read. In 1800 there were only 80,000 readers in England. By 1854, that number had increased more than 60-fold to 5 million readers (Knight, 1854). The publishers who served the new mass market for books thrived, while those who sold only to the elite disappeared.14
As the market for books grew, people started to buy rather than rent. As Knight reported:
… thousands of books are purchased each year by such as have first borrowed them at those libraries, and after reading, approving of them, have become purchasers. (Knight, 1854)
The presence of the circulating libraries did kill the old publishing model, but at the same time it enabled the creation of the new model of mass-market books. The publishers and booksellers who recognized this dynamic prospered; those that continued to push for the old model went out of business.
The for-profit, circulating libraries continued to survive in England well into the 1950s. What ended them was not a lack of demand for reading material, but rather the paperback bookan even cheaper way of providing literature to the masses.
Rise of the Video. The pattern seen in the rise of the novel occurred also in the market for prerecorded videos in the 1980s. Initially the video machine was no threat to the film industry because it was so expensive. In the early 1980s video machines cost more than $1,000, and video tapes sold for around $100. Videos were a medium only for the rich, just as books were in 1800.
The video rental stores changed all that. Like the circulating libraries 200 years earlier, they brought a new form of entertainment to the masses and broke the market out of its low-level equilibrium. Once ordinary families could rent both the machine and the tapes, the industry generated enough cash flow to invest in new production technologies. By the mid-1980s the average middle-class family could afford a video machine, and video rental stores were thriving.
Hollywood did not like the video rental business. Studios tried to get around the first-sale doctrine with various leasing arrangements, but these schemes were
not acceptable to the video store owners. But despite Hollywood's objections to video rentals, they ended up being very profitable for the movie studios. The availability of inexpensive videos meant that people watched many more movies. By the 1990s, video machines were selling for less than $200, and 85 percent of U.S. families owned one.
Eventually the Hollywood producers realized that people would actually buy a video if the price was right. Since 1990, the video rental market has been flat, and all the action is in the sales market. In the last 15 years, video purchase prices have dropped more than 90 percentand Hollywood is making money like never before.
Far from killing Hollywood, video was Hollywood's savior. Just as in the case of circulating libraries, video rental created a huge new market for both renting and buying the product. The companies that recognized the implications of the new technology succeeded beyond their wildest dreams, and those that did not have vanished.15
2.4.2 Free Speech and Content
Many contentious issues surround free speech and regulation of content on the Internet, and there continue to be calls for mechanisms to control objectionable content. Since the Communications Decency Act was struck down by the Supreme Court in 1997, many groups have become interested in developing other ways of rating or labeling content. Emotions run high on both sides of this issue, but attention to several empirical questions would be of use in finding a sensible solution.
First, very little objective information is publicly available on the kinds of content accessible on the Web, even though content-monitoring software companies such as Surfwatch and CyberPatrol16 are in a position to have such data. Little is known about how much access children have to "objectionable" sites. Certainly some surveys are in order.
Second, definitions of "unacceptable" content are subjective. Dealing with indecent material involves understanding not only current views on such material but also how those views evolve over time. What are institutional and individual attitudes toward filtering? What do parents, schools, libraries, and other institutions want? In the case of the Internet, where content flows globally, it is important to understand how different localities define objectionable content.
There are larger political issues involved as well. The same technology that allows for content filtering with respect to decency can be used to filter political speech. In some countries (e.g., Germany), hate speech and Nazi symbols are outlawed. Approaches designed to limit indecent content could be used to restrict access to political material as well. How effective would such approaches be? How desirable would they be?
Since censorship of indecent material does not appear to be an option in the
United States, a well-known policy response is labeling. The idea is that consumers will be better informed in their decisions to avoid (or seek out) objectionable content. An interesting question is what constitutes the "best" form of such labels from the viewpoint of cognition and use. The considerable debate about the type of labels used for the V-chip, for example, encompasses whether they should be based on age or type of content, whether they should be one dimensional or multidimensional, whether they should employ three levels of rating or seven, and so on. The debate has been conducted with little in the way of scientific investigation of alternatives to inform the participants.
One interesting proposal is the Platform for Internet Content Selection (PICS; Resnick and Miller, 1996). PICS is a set of protocols that defines the communication of "ratings." The protocol set is very broadly designed so that any site can declare itself a "rater" and provide ratings to the public.17 PICS provides an open standard that may be very helpful in dealing with the issue of content in a flexible way, but there are many questions about how this technology might be used.
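The division of labor that PICS envisions, in which independent raters publish labels while users choose thresholds and the client software filters, can be illustrated with a short sketch. The label structure, category names, and scales below are invented for exposition and are not actual PICS syntax.

```python
# Sketch of third-party content labeling in the spirit of PICS.
# Label structure, categories, and scales are invented, not PICS syntax.

# Any organization can act as a "rater," publishing labels for URLs
# along whatever dimensions it chooses.
labels_from_one_rater = {
    "http://example.org/page1": {"violence": 1, "language": 0},
    "http://example.org/page2": {"violence": 0, "language": 3},
}

def acceptable(url, labels, thresholds):
    """Client-side filtering: accept a page only if every dimension the
    user cares about is rated at or below the chosen threshold.
    Treatment of unrated pages is a policy choice; here they are blocked."""
    rating = labels.get(url)
    if rating is None:
        return False
    return all(rating.get(dim, 0) <= limit for dim, limit in thresholds.items())

# A user's (or parent's) preference profile.
prefs = {"violence": 1, "language": 2}
page1_ok = acceptable("http://example.org/page1", labels_from_one_rater, prefs)
page2_ok = acceptable("http://example.org/page2", labels_from_one_rater, prefs)
```

Because the protocol standardizes only how labels are communicated, the open questions raised in the text, such as who rates, who pays for rating, and whose vocabulary prevails, are left entirely to the market and to policy.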
For one thing, some incumbents in the industry already have proprietary labeling schemes. What are their incentives to move to the PICS standard? The literature on economic considerations in standards setting (see Besen and Farrell, 1994, for a survey) would be useful in stimulating careful thinking about this issue.
What is the cost recovery model for labeling services? Is there any reason to believe that competitive forces will yield an appropriate social outcome, or are the opportunities for "free riding" (enjoying the benefits without incurring the costs) so strong that a market for labeling cannot be sustainable? How frequently is material added to or updated on the Internet, so that reexamination is appropriate?
2.4.3 Privacy

The rapid increase in computing and communications power has raised considerable concern about privacy (Box 2.6 discusses one aspect). Privacy issues arise in the public and private sectors as well as in the conduct of social science itself.
An initial set of questions concerns public attitudes toward privacy. Classic studies of this subject include Baker and Westin (1972) and Westin and Louis Harris & Associates (1981). A recent survey of Web users included several questions measuring the views of survey respondents on such issues as the need for new laws to protect privacy on the Internet and the rights of content providers to resell information about users (Kehoe et al., 1997). Kang et al. (1995), which provided an overview of issues relating to the Internet, also called for better data.18 A more recent study looked specifically at the privacy concerns and experiences of computer users (Louis Harris & Associates and Alan F. Westin, 1997).19 This survey found that while many people are concerned about the confidentiality and security of personal information online, there are very few
Decreases in the cost of video, audio, and other sensor technology, as well as cheaper data storage and information processing, make it likely that it will become practicable for both governments and private data-mining enterprises to collect enormously detailed dossiers on all citizens. This prospect raises a host of issues requiring research and debate. Among them:
• Who currently collects what data about individuals? How is it used? How is it shared? What are the trends?
• What are the existing default rules in different jurisdictions relating to the collection of information? Does the nature of default rules meaningfully alter outcomes? Do prohibitions on data collection (e.g., data protection laws) affect outcomes? To what extent are existing rules vulnerable to foreign "data havens" and other regulatory arbitrage? To what extent do or will consumers choose alternatives to the default rules when such an option is available?
• What are the possible political, social, and economic consequences of extensive individual profiling? Is extensive profiling likely? Is the absence of a great deal of the privacy now taken for granted compatible with freedom? What difference does it make if the profiling is undertaken by (or available to) democratic governments? Non-democratic governments? Private industry? What would the economic and social consequences be of making profiling data available to some? To all? At a cost? At no cost? Would it be socially valuable to prohibit the creation of individualized dossiers? In an era of distributed databases, would it be technically practical to enforce such a prohibition?
• To what extent do different types of electronic cash and electronic commerce enable or disable profiling? To what extent do concerns about the control of electronic money laundering imply the power to restrict free speech or anonymous commerce? To what extent does the protection of free speech and a private social and economic space require the protection of anonymous speech and/or anonymous commerce? What are the current national policies regarding anonymous speech and commerce? In a networked world, what are the external and extraterritorial effects of one nation's policies regarding anonymous speech and commerce?
SOURCE: Michael Froomkin, "Five Critical Issues Relating to Impacts of Information Technology" (see Appendix B of this volume).
reports of actual breaches of confidentiality. Whereas only 5 percent of Internet users said that they had been a victim of what they regarded as an invasion of their privacy, 54 percent of Internet users reported that they were concerned that information about which sites they visited would be linked to their e-mail address and disclosed without their consent or knowledge. The report also found lower trust in online institutions and communication: computer users had less confidence in online businesses than in other institutions and were more concerned
about the confidentiality of e-mail than that of other common means of communication. However, with increasing familiarity comes greater trust: those who used e-mail regularly were less than half as likely to be concerned about the confidentiality of this form of communication.
Absent better understanding of the nature and extent of public concern, the public debate appears to rest on assertions by vocal advocates. More data and analysis would support more effective debate and decision making.
In a related example, a computer consultant in Oregon paid the state $222 for the complete motor vehicles database, which he then posted to a Web site. The database allows anyone with knowledge of a particular Oregon license plate number to look up the vehicle owner's name, address, birth date, driver's license number, and vehicle title information (McCall, 1996).20 Also, state and local governments are themselves finding that their data can be a source of revenue, through the sale of either customized search services or entire databases (Chandrasekaran, 1998). Already, regulations and legislation that address concerns about personal dossiers are emerging at both state and federal levels for the specific case of medical records. A recent CSTB report (CSTB, 1997a) examines trends and issues relating to the protection of medical information.
"Informed consent" in surveys and experiments is a dimension of privacy that strikes close to home for social scientists. Quite strong safeguards are in place for social science work involving human subjects, but some of these practices are difficult to apply to the Internet. For example, the fact that data are being collected can easily be concealed from subjects. One source of useful data is retrospective examination of existing records, such as server logs or Usenet postings, where a social science experiment was not the original intent of the data collection. Just as in the case of private data, cross-tabulation of innocuous data sets can identify seemingly anonymous subjects. Certainly, social scientists must develop a code of practices, ethics, and perhaps regulations that will help deal with these issues.
Another dimension of privacy is "annoyance." A recent report on junk e-mail
by World Research, Inc. (1997) announced that half of the more than 1,000 respondents in a voluntary online survey said that they "hate" junk mail, and another quarter said that they found it "bothersome." Three-quarters of the respondents felt it should be regulated. The interesting question is what form of regulation (if any) would be appropriate. A number of congressional bills have been introduced to address this issue, such as S.771, which proposed requiring that advertisements be self-labeled. It would be interesting to investigate how effective such solutions have been in controlling physical junk mail, magazine ads, "infomercials," and so on.
There have been suggestions that industry self-regulation could be an effective tool for protecting privacy. A National Telecommunications and Information Administration report contains a number of papers exploring the prospects for and limitations of self-regulation (NTIA, 1997). One approach, offered by Laudon (1996), proposes a market for personal information in which individuals would have the right to sell or prevent the sale of information about themselves. Varian (1996a) has examined some economic aspects of such a market, but much work remains to be done.
Cryptography is a technological approach to protecting privacy. Cryptography policy is being widely debated; see CSTB (1996b) for a thorough study. The online Information Economy Page on Security, Privacy, and Encryption21 is also a valuable resource. However, there are comparatively few studies of the implications of cryptography policy. One area where social science can contribute is in characterizing the role of encryption in the commission and detection of crime. For example, it has been claimed that use of cryptography presents a serious barrier to criminal investigations. Yet a study by Denning and Baugh (1997) found that use of encryption was not currently obstructing a large number of investigations. However, it also found that the use of encryption by criminal elements was growing rapidly and could become a problem in the future, suggesting the need for further study.
2.4.4 Information Use and Value
It is popular wisdom that people today suffer information overload. If true, overload has implications for those studying such issues as the value of information. Characterizing and quantifying overload also can shape the design of new information technology tools. Several questions need answering to determine the truth of this assertion. In what particular sense are people dealing with more information than in the past? Is digital information more complex, harder to apprehend, less compressible? Are there new social or psychological phenomena emerging?
One issue is whether in fact more information is being produced. A line of studies going back through Pool (1984) and Machlup (1982) looked at production of information, but there seems to be a shortage of current data on measures of production, particularly in the electronic environment. It would be useful to update the Pool and Machlup studies.
Second, people may be spending more of their time absorbing information. There is certainly a need for detailed studies of how people spend time with regard to various information sources, and in particular what they are doing with these sources. Are they using them as a basis for decision making or are they collecting largely irrelevant information just because it is available and they think they should know about it?
Use of the Internet is a prime example of where research into time use is valuable. A 1997 Price Waterhouse Consumer Technology Survey (Price Waterhouse, 1997) polled 1,010 American consumers via telephone and found that 43 percent of the time spent accessing the Internet from home was used for obtaining information, and 34 percent was used to send or receive e-mail. This categorization is a start, but clearly it is important to know what kind of information people are accessing on the Internet and how it is being used. Time-use diaries (see section 3.1.3) are another important source of information.
There is also the issue of how individual differences, expertise, and intent may determine how much information is selected and extracted. These questions need to be examined in comparative studies of changing uses of information and effects on productivity within specific contexts and domains. For example, comparative studies to determine the effects of the availability of electronic preprints in particular scientific disciplines as well as other information-sharing and dissemination practices of various disciplines could yield useful insights.
Further elucidation of questions concerning information overload can come from microlevel studies of technology, information viewing practices, and information-seeking behavior. Collecting better information on the use of library materials in the electronic environment could represent an important opportunity. However, it is worth noting that most online systems are not instrumented to allow such data collection, except for a few locally developed systems like the University of California library information system, UC MELVYL.22 The paper by Amy Friedlander, "Impacts of Information Technology: Behaviors and Metrics," in Appendix B describes additional approaches to the question of library information use.
Current use is not the only measure of value in a library context. There is also value associated with future, potential use such as having access to archival material and preserving the scholarly record. The paper in Appendix B by Alexander Field, "Critical Issues Relating to Impacts of Information Technologies," raises questions about what is needed to ensure retrievability of material in archives in the information age.
Branding, Credibility, and Authority
Branding23 and authority, the credibility associated with the name of a publisher or author, are interrelated in critical ways with the information overload question and with user strategies to manage it. Authority is also critical in social and societal uses of information. Little is understood about where authority comes from and how it operates in the information context. How do people assign credibility, and how is this changing, in an increasingly information-rich and competitive environment? What kinds of credibility systems could be invented or developed? How does branding work in the digital environment, and how do brand identities transfer from other media? How does the credibility established by peer review conducted prior to publication in scientific journals translate to the digital environment, where researchers can reach a wide audience without publishing in traditional journals?
In a highly competitive environment, the costs for information production and distribution are sometimes driven to the lowest possible level. To understand how authority works in this situation requires comparative data on the quality and cost of information. Also, research is needed to develop a theory of production and publishing strategies in the new Web environment that includes both "push" and "pull" technologies,24 micropayments, advertising support, and other alternatives. Together, these constitute a much richer set of options than that provided by current media. It is also important to understand interactions affecting how people search for information. For example, a CommerceNet/Nielsen survey in the spring of 1997 found that 71 percent of frequent users found Web sites through search engines, 9.8 percent used friends and relatives, 8.5 percent used newspapers and magazines, and 8.4 percent used links from other sites (CommerceNet/Nielsen Media Research, 1997).
On the question of authority, a candidate for case study is legal information, which formerly was a monopolistic market but now is fragmenting as new players become involved. A baseline needs to be established, followed by studies of how authority changes with fragmentation into a competitive marketplace (see Berring, 1995, which describes authority in legal information). How does the market structure affect the creation of authority? Is it better to have a single, authoritative source (and pay monopoly prices) or to have competing authorities and face problems of choice and accuracy? The paper by Michael Froomkin in Appendix B discusses some additional questions about the economics of trust.
Medicine and finance offer two opportunities to study credibility and how it is established. In both cases, changes in technology and government policy have made a great deal of data available to the public (via MEDLINE25 and EDGAR26). At the same time many nontraditional sources of information can be found via the Web and other media, some very good and some very much outside the mainstream, if not downright fraudulent. How are people using these information sources, and how do they assign credibility to them? A recent study of credibility
in Usenet groups for the support of people with medical or psychological ailments or disorders found that members relied on predictable strategies to establish the legitimacy of their questions and the authority of their replies. Members (or interlopers) who did not use these strategies were either ignored or censured by the group (Galegher et al., 1998).
How does technology change relationships with professionals, for example, physician-patient relationships? What are the costs of errors in judging credibility? One study of the effects of an e-mail "listserve" support group found that members who reported the greatest benefits were those who were also using professional medical services (Cummings et al., 1998).
On the Internet there is a growing range of sources of information whose credibility varies. How does this variability change social behavior? Does it contribute to social fragmentation? Does this variability when combined with the global nature of the Internet reinforce fringe beliefs that would not be self-sustaining in an environment determined by geography? Are there indicators of the current level of common knowledge or experience (beyond viewing of network television) that can be tracked to gain insight into these changes? How will people recognize and manage bias in information, such as that which may be present in advertiser-sponsored content?
The Information Gap
The typical formulation of the gap separating information "haves" and "have-nots"27 is highly biased toward a definition of literacy as the ability to read and write written text, and even somewhat biased toward scholarly communication. Vast amounts of audio and video information are becoming available, searchable, and retrievable. Currently the enabling technologies for these activities are expensive and are not available as consumer products. However, at some point the infrastructure necessary to support the transmission of digital multimedia information, such as broadband networks to the home, will be available.
It is important to characterize more general forms of literacy and relate these to the educational process. How is the definition of literacy changing? How is the value of different forms of literacy (e.g., the ability to facilitate a discussion, or create slides for a presentation) shifting? Television has been a pervasive medium of communication for several decades. What new questions do novel, more interactive media present that television did not? It is probably necessary to go beyond using such terms as "literacy" or "numeracy" to indicate a wider set of skills, and instead characterize the skills specifically.
What part can the educational system and universities play in serving the information poor? How should this change models of distance learning and distance education? It would be useful to establish a baseline on current distance education practices and effectiveness by careful studies of the National Technological University in the United States, the Open University in the United Kingdom,
and other similar organizations. Can distance learning be used to narrow the gap between the information "haves" and "have-nots," or will it only widen it? What would be some of the characteristics of a system of distance education targeted toward the information poor?
How are differences in wealth related to possession of needed skills or access to information? Should the focus be on increasing skills or on reducing the need for skills by providing more accessible technologies and content? For example, what happens if and when people move away from text-dominated computing and communication? It is important to understand and establish the validity of interrelationships among access to information, the characteristics of the technology used to access the information, and economic opportunities.
2.4.5 Pricing Models and Content
Different approaches to paying for information often involve different incentives for what types of information are created and sold. For instance, Spence and Owen (1977) showed that when television was paid for by advertising as opposed to pay-per-view fees, there was an incentive to develop more programs that would appeal to a wide audience, rather than programs that were intensely valued by a small group of people. The reason was that advertising rates are based on the size of the audience and not on how much enjoyment each viewer gets from a show. In contrast, a highly focused show that appeals to a narrow audience might be able to recover its costs more easily in a pay-per-view system because it could charge a higher price per viewer. Quite diverse content distributed by cable television, such as the History Channel, the Cooking Channel, and the like, is supported by finely tuned advertising. This line of reasoning helps to explain why broadcast television, which until recently depended almost exclusively on advertising revenues, is often perceived to appeal to the least common denominator. Similar trade-offs are likely to apply in other product markets such as the market for content on the Internet. For example, if online content is supported primarily by advertising, one might expect that it will devolve to the lowest common denominator. Compared with broadcast television, the Web offers much greater opportunity for niche markets, analogous to specialized cable channels, and perhaps niche advertising. A model to describe the quality and diversity of possible content when cost-recovery is generated by such niche advertising would be quite interesting.
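The incentive difference can be made concrete with a small numerical sketch (all of the figures below, such as the audience size, the per-viewer advertising rate, and the viewers' valuations, are hypothetical, chosen only to illustrate the trade-off described above):

```python
# Hypothetical market of 1,000 viewers and two candidate shows.
AUDIENCE = 1_000
AD_RATE = 0.50  # advertisers pay $0.50 per viewer, regardless of enjoyment

# A broad show: 60% of viewers watch, but each values it at only $1.
broad_viewers, broad_value = int(0.60 * AUDIENCE), 1.00
# A niche show: only 5% watch, but each values it intensely, at $20.
niche_viewers, niche_value = int(0.05 * AUDIENCE), 20.00

# Advertising revenue depends only on audience size, so the broad
# show wins under an advertising-supported model...
ad_broad = broad_viewers * AD_RATE    # $300
ad_niche = niche_viewers * AD_RATE    # $25
assert ad_broad > ad_niche

# ...while pay-per-view revenue can reflect intensity of preference,
# so the niche show recovers more under per-viewer pricing.
ppv_broad = broad_viewers * broad_value  # $600
ppv_niche = niche_viewers * niche_value  # $1,000
assert ppv_niche > ppv_broad
```

Under advertising the niche show cannot monetize how much its small audience cares; under pay-per-view it can, which is the Spence and Owen point in miniature.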
Furthermore, even when goods are supported by direct consumer payments, different incentives arise depending on whether the payments are for individual goods and services or for bundles of goods and services. Information goods that are profitable as part of a bundle may be unprofitable when sold separately, and vice versa.
2.4.6 Pricing Information
The emergence of the Internet as a way to distribute digital information such as software, news stories, stock quotes, music, photographs, video clips, and research reports has created new opportunities for the pricing of information goods. Providers of digital information goods are unsure about how to price them and are struggling with a variety of revenue models (CSTB, 1996c). Because perfect copies of these goods can be created and distributed at virtually no cost, some of the old rules, such as "price should equal marginal cost," do not apply, as noted by Varian (1995a,b).
The Internet has also created new opportunities for repackaging content through bundling, site licensing, subscriptions, rentals, differential pricing, and per-use fees. All of these schemes can be thought of as either aggregating or disaggregating information goods along some dimension. For instance, aggregation can occur across products, as when software programs are bundled for sale in a software "suite" or when access to the full contents of an online service is provided for a fixed fee. Aggregation can also occur across consumers, as when a site license is provided to multiple users for a fixed fee, or over time, as when subscriptions are made available.
Many observers have predicted that software and other types of content will be increasingly disaggregated and metered, as on-demand software "applets" or as individual news stories and stock quotes. For instance, Robert Metcalfe has written: "When the Internet finally gets micromoney systems, we'll rent tiny bits of software for seconds at a time. Imagine renting a French spelling checker for one document once" (Metcalfe, 1997). The main rationale for this prediction is that the current advantage obtained from bundling many goods to save on transaction and distribution costs will no longer apply, given that both of these types of costs are often much lower on the Internet.
However, recent theoretical work suggests that in some cases aggregation can also be a surprisingly effective pricing strategy (Bakos and Brynjolfsson, 1997a,b). Not only can it increase the seller's profits from a set of digital information goods, but it may also benefit the consumer as well. As a result, instead of subdividing goods into smaller pieces to be rented separately to individuals, it is sometimes more efficient to bundle many digital goods together. The reason is that by enabling a form of price discrimination (the charging of different prices to different consumers based on their valuation of the quantities they consume), aggregation can make it easier for the seller to extract value from a given set of goods (Box 2.7). For the case of bundling, this type of aggregation has been studied in a number of articles in the economics literature (e.g., McAfee et al., 1989; Schmalensee, 1984). The analysis shows that the benefits of aggregation depend critically on the low marginal cost of reproducing digital information and the nature of the correlation in valuations for the goods: aggregation is less
The impact of aggregation on the profitability of selling information goods can be illustrated by graphically analyzing the effect of bundling on the demand for information goods. Consumers will choose either 0 or 1 unit of an information good, such as a music video or a journal article, depending on how their valuation of the good compares to its price. A possible aggregate demand curve for such a good is depicted in Figure 2.3.
Perfect price discrimination (charging different prices to different consumers, based on their valuations of a good) will maximize the seller's profits and will eliminate the deadweight loss shown in Figure 2.3 (Varian, 1995a). If the seller cannot price discriminate, however, the only single price that will eliminate the inefficiency from the deadweight loss will be a price equal to the marginal cost, which is close to zero. Such a low price will not generate sufficient revenues to cover the fixed cost of production and is unlikely to be the profit-maximizing price.
Aggregation can sometimes overcome this dilemma. Consider again a journal article and a music video, and suppose that each is valued by consumers at between $0 and $1, generating linear aggregate demand curves like the one shown in Figure 2.3. Suppose further that a consumer's valuation of one good does not correlate with his or her valuation of the other, and that access to one good does not make the other more or less attractive.
What happens if the seller aggregates the two goods and sells them as a bundle? Some consumers (those who valued both goods at $1) would be willing to pay $2 for the bundle; others (those who valued both goods at almost $0) would not be willing to pay even a penny. The total area under the demand curve for the bundle of the two information goods, and hence the total potential surplus, is exactly equal to the sum of the areas under the separate demand curves. However, most interestingly, bundling changes the shape of the demand curve, making it flatter (more elastic) in the neighborhood of $1 and steeper (less elastic) near either extreme, as shown in Figure 2.4. As more goods are added, this effect becomes more pronounced. For example, Figure 2.5 shows the demand curve for a bundle of 20 information goods, each of which has an independent, linear demand ranging from $0 to $1.
A profit-maximizing firm selling a bundle of 20 goods will set the price slightly below the bundle's mean value of $10, and almost all consumers will find it worthwhile to purchase the bundle. In contrast, only half the consumers would have purchased the goods if they had been sold individually at the profit-maximizing price of 50 cents each, and so selling the goods as a bundle leads to a smaller deadweight loss and greater economic efficiency. Furthermore, the seller will earn higher profits by selling a single bundle of 20 goods than by selling each of the 20 goods separately. Thus, the shape of the bundle's demand curve is far more favorable both for the seller and for overall economic efficiency.
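A back-of-the-envelope simulation reproduces the comparison between separate sales and the 20-good bundle (a sketch only: the simulated population size and the grid of candidate prices are arbitrary choices, not taken from the text):

```python
import random

random.seed(0)
N_CONSUMERS = 20_000
N_GOODS = 20

# Each consumer values each good independently and uniformly on [0, 1],
# so a consumer's value for the bundle is the sum of 20 such draws.
bundle_values = [sum(random.random() for _ in range(N_GOODS))
                 for _ in range(N_CONSUMERS)]

def revenue_per_consumer(price, values):
    """Revenue per consumer when everyone valuing the item at or
    above the posted price buys one unit."""
    buyers = sum(1 for v in values if v >= price)
    return price * buyers / len(values)

# Sold separately, each good earns p * (1 - p), maximized at p = 0.50
# for revenue 0.25 per good: 20 goods yield $5.00 per consumer, and
# only half the consumers buy any given good.
separate_revenue = N_GOODS * 0.50 * 0.50

# Sold as a bundle: search a grid of candidate prices for the best one.
best_price, best_revenue = max(
    ((p / 10, revenue_per_consumer(p / 10, bundle_values))
     for p in range(50, 101)),  # candidate prices $5.00 .. $10.00
    key=lambda pair: pair[1])

print(f"separate: ${separate_revenue:.2f} per consumer; "
      f"bundle: best price ${best_price:.2f}, ${best_revenue:.2f} per consumer")
```

With these draws the bundle earns well above the $5.00 from separate sales, and over 90 percent of consumers buy. Note that with only 20 goods the profit-maximizing price sits noticeably below the $10 mean; as more goods are bundled, the law of large numbers pushes the optimal price toward the mean and the buying fraction toward 1.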
Why does the shape of the demand curve change as goods are added to a bundle? The law of large numbers implies that the average valuation for a bundle of goods with valuations drawn from the same distribution will be increasingly concentrated near the mean valuation as more goods are added to the bundle. For example, some people subscribe to America Online for the news, some for stock quotes, and some for horoscopes. It is unlikely that a single person places a very high value on every single good offered; instead most consumers will assign high
values to some goods and low values to other goods, leading to moderate values overall. However, if some consumers tend to have systematically higher valuations for all types of goods, then the moderating effect of bundling will be muted and, in some cases, unbundling will be preferred. In general, a strategy of mixed bundling, which involves offering both the complete bundle and various subbundles, can be shown to be the dominant strategy.
Similar effects result in other types of aggregation, such as aggregation across consumers, as in the case of selling a single site license for use by multiple consumers. The law of large numbers, which underlies these aggregation effects, is remarkably general. For instance, it holds for almost any initial distribution with a finite variancenot just the linear demand assumed for the examples above. Furthermore, the law does not require that the valuations be independent of each other or even that the valuations be drawn from the same distribution.
However, theoretical analysis shows that when marginal costs are high, then disaggregation may be more profitable than aggregation. Because the marginal costs of reproducing goods are so much lower on the Internet than they are in most other contexts, bundling may become much more attractive as Internet commerce grows. The policy implications of such changes, including their potential effects on competition and innovation, remain issues for future research.
attractive when marginal costs are high or when valuations are highly correlated (Bakos and Brynjolfsson, 1997a,b).
Thus, strategies involving bundling, site licensing, and subscriptions can each be understood as responses to the radical decline in costs for the reproduction of information made possible by digitization and distribution through the Internet. Increased use of micropayments can be seen as a consequence of radically lower transaction and distribution costs. Information goods that had previously been aggregated to save on transaction or distribution costs may be disaggregated as predicted by Metcalfe (1997), but new aggregations of goods may emerge to exploit the potential for price discrimination.
Experimentation with various approaches continues, and it is premature to conclude that one approach such as microcharging is best, or to try to predict even in what circumstances it may be preferred. Collection of data, analysis, and further theoretical work would all be helpful.
2.4.7 Network Externalities
Economists say that a network externality exists when one consumer's demand for a product or service depends on how many other consumers purchase that service. For example, consider a consumer's demand for a fax machine. People want fax machines so that they can communicate with each other. If no one you communicate with has a fax machine, it certainly is not worthwhile for you to buy one. Modems have a similar property: a modem is useful only if there is another modem somewhere that you can communicate with.
Network externalities are ubiquitous in computing and communications. The demand for e-mail depends on how many other users there are; the demand for a Web browser depends on how many servers there are; and even the demand for a word processing package will depend on how many other users of that package there are.28
Network externalities were first modeled by Rohlfs (1974) in an attempt to understand why AT&T's Picturephone was not successful. However, there is little interest in failures, and Rohlfs's article did not attract wide notice until almost 10 years after it appeared. Today much more is known about this phenomenon. Nicholas Economides has studied network economics extensively and maintains a Web site that contains a bibliography of his and other work on this topic.29 Also see Katz and Shapiro (1994) for a nice overview of network externalities and their implications.
In each of the examples above (fax, e-mail, Web, word processing), the use grew slowly at first and then suddenly surged ahead. Figure 2.6 shows the price and number of fax machines shipped over a period of 12 years.
This qualitative behavior can be reproduced by some very simple economic models whose essential feature is multiple equilibria. If everyone expects a product to be a failure, then no one will buy it and the product will fail. But if
everyone expects a product to succeed, many people will want to buy it and the product will succeed. Which of these outcomes occurs depends on whether the number of early adopters exceeds a particular critical mass that is a function of the parameters of the model. In a stochastic model, the probability that this will happen depends on the magnitude of the random fluctuations in the number of adopters.
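The multiple-equilibria story can be sketched in a few lines of code (a stylized Rohlfs-type model; the uniform distribution of consumer tastes and the particular price are illustrative assumptions, not from the text):

```python
def adoption_path(x0, price, steps=100):
    """Consumers of type theta, uniform on [0, 1], adopt when
    theta * x >= price, where x is the expected share of adopters.
    Next period's share is therefore max(0, 1 - price / x)."""
    x = x0
    for _ in range(steps):
        x = max(0.0, 1.0 - price / x) if x > 0 else 0.0
    return x

PRICE = 0.2
# Fixed points of x = 1 - PRICE / x are x = (1 +/- (1 - 4*PRICE)**0.5) / 2:
# about 0.276 (the unstable critical mass) and 0.724 (the high equilibrium).
assert adoption_path(0.25, PRICE) == 0.0                # below critical mass: collapse
assert abs(adoption_path(0.30, PRICE) - 0.7236) < 1e-3  # above critical mass: takeoff
```

Starting just below the roughly 28 percent critical mass, adoption unravels to zero; starting just above it, adoption converges to the high equilibrium, matching the expectations-driven account in the text.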
Because of the phenomenon of critical mass, it is very important to try to stimulate growth early in the life cycle of a product. Today it is quite common to see producers offering very cheap access to a piece of software or a communications service in order to create a new market where none existed before. A critical question is how big the market has to be before it can take off on its own. Theory can provide little guidance here; the appropriate strategy depends on the nature of the good and the costs and benefits that users face in adopting it.
2.4.8 Auctions

Auctions, one of the oldest market institutions,30 have played an important role in the development of wireless communications. The modern study of auctions by economists dates back to Vickrey (1961), whose work, which was later awarded a Nobel prize, was little read until the early 1970s, when the U.S. Department of the Interior auctioned off the right to drill for oil in offshore tracts. Following the auction, several economists became interested in the optimal strategies associated with such auctions and examined ways that auctions might be designed to achieve some given end (e.g., profit maximization, or efficient allocation of resources).
In recent years Congress has authorized the Federal Communications Commission (FCC) to allocate the radio spectrum via auction, a policy recommended by economists Leo Herzel and Ronald Coase over the period from 1957 to 1959
(Coase, 1959). These auctions are generally regarded as having been quite successful.31 See McMillan (1994) for a readable introduction to how the FCC auctions were conducted.
The economic analysis starts by considering two sorts of auctions: common-value auctions and private-value auctions. In a common-value auction, such as the auctioning of offshore oil drilling rights, the item that is being bid for is worth some particular amount, but the bidders may have different opinions about how much that amount is. In a private-value auction, the item in question is worth different amounts to different people. Most auctions of ordinary consumer goods such as works of art and antiques are of the private-value type. For more on the theory and practice of auctions, see the survey by Milgrom (1989) and the references cited therein. See also the discussion in Box 2.8.
2.4.9 Electronic Commerce
Electronic commerce is different from physical commerce because technology changes the modes of communication, ultimately affecting the flow of information. The reduced cost of communicating, transmitting, and processing information is at the core of these differences. The marginal cost of disseminating information electronically to new or existing customers is lower than with more conventional methods, since the cost of an additional Web query or e-mail message is close to zero. Similarly, customers can use the Internet to search across competing sellers, which can be done directly by visiting various sellers' Web sites and inquiring about prices, products, and availability. Increasingly, searches can also be facilitated by using "intelligent agents" or intermediaries that can gather and aggregate the necessary information on behalf of the customer. As a result, geographic and informational barriers that dampen competition among sellers may become increasingly irrelevant.
Bakos (1997) has analyzed the implications of reduced search costs for competition, efficiency, and the division of surplus between buyers and sellers. His model indicates that when electronic marketplaces reduce the costs to the consumer of searching for the lowest price, there will be (1) an improvement in overall economic efficiency and (2) a shift in bargaining power from sellers to buyers. As a result buyers will be strictly better off, but the effect on sellers is ambiguous. A change from very high to moderate search costs will tend to make sellers better off, as new markets emerge. For instance, a market for specialty car parts might be unsustainable without a technology like the Internet to lower the transaction costs involved in finding buyers and sellers. The creation of such a market provides new opportunities for sellers. However, Bakos's model indicates that if search costs continue to fall, sellers may be made worse off since buyers can more easily find the seller that offers the lowest price. Since all sellers charging more than this lowest price will lose business, competition will tend to drive down prices until they reach the marginal cost of the product, leaving no
A sensible strategy in a common-value auction, it would seem, would be to estimate the value of the item in question, add on a profit margin, and then bid that amount. However, if everyone uses such a procedure, it follows that the winner will tend to be the bidder with the highest estimate, which is then likely to be an overestimate of the true value. Hence the "winner" will usually end up overbidding, a phenomenon known as the winner's curse.
Avoiding the winner's curse involves bidding down from one's estimated value, with the reduction depending on the number of other bidders. If one's estimate is higher than the estimates of 2 other bidders it may be reasonably close to the true value; but if it tops the estimates of 100 other bidders, it is almost certainly an overbid!
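A small simulation illustrates the curse and why the discount should grow with the number of rivals (the true value, the noise range, and the bidder counts below are all hypothetical):

```python
import random

random.seed(2)
TRUE_VALUE = 100.0

def average_winning_estimate(n_bidders, n_auctions=20_000):
    """Each bidder's estimate is the true value plus uniform noise on
    [-20, 20]; each naively bids his or her estimate, so the winning
    bid is simply the highest estimate among the bidders."""
    total = 0.0
    for _ in range(n_auctions):
        total += max(TRUE_VALUE + random.uniform(-20, 20)
                     for _ in range(n_bidders))
    return total / n_auctions

few, many = average_winning_estimate(3), average_winning_estimate(100)
# The winning estimate systematically exceeds the true value, and the
# overestimate grows with the number of rival bidders.
print(f"3 bidders: {few:.1f}; 100 bidders: {many:.1f}; true value: 100.0")
```

With 3 bidders the naive winner overpays by roughly $10 on average; with 100 bidders the winning estimate sits near the top of the noise range, about $20 above the true value, which is why a rational bidder must shade the bid further down as the field grows.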
Economists have developed a number of statistical and game theoretical models of bidding behavior in such markets that have been applied successfully in practical contexts such as auctions of parts of the radio spectrum.
The most common form of private-value auction is the English auction, in which bids are successively raised until only one bidder is left who then claims the item at the last price bid. In this kind of auction, the person who is willing to bid the highest gets the item, but the price paid will generally be slightly above the bid of the second-highest bidder.
In a sealed-bid auction, each consumer submits a bid sealed in an envelope. The bids are opened, and the item is awarded to the highest bidder at the price he bid. The optimal strategy in the sealed-bid auction is to try to guess the amount the other consumers will bid and then enter a bid slightly above the highest of these, assuming that the item is attractive to the bidder at that price. Thus bidders will not, in general, want to reveal their true valuation for the item being auctioned off. Furthermore, the outcome of the sealed-bid auction will depend on each bidder's beliefs about the others' valuations. Even if these beliefs are correct on average, there will be cases in which the bidders guess incorrectly and the item is not awarded to the person who values it most.
A variation on the sealed-bid auction, known as the "Vickrey auction" after the economist who first analyzed its properties, eliminates the need for strategic play. The Vickrey auction simply awards the item to the highest bidder, but at the second-highest price that was bid. It turns out that in such an auction there is no need to play strategically: the optimal bid is simply the item's true value to the bidder.1
It is also worth observing that the revenue raised by the Vickrey auction will be essentially the same as that raised by the ordinary English auction, since in each case the person who assigns the highest value gets the item but has to pay only the second-highest price. (In the English auction, the person willing to bid the highest gets the item but has to pay only the price bid by the person with the second-highest value, plus the minimal bid increment.)
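Both claims, that truthful bidding is optimal in a Vickrey auction and that its revenue matches the English auction's, can be checked mechanically. The sketch below (Python; the five private values are randomly generated purely for illustration) awards the item by the second-price rule and verifies that no deviation from a truthful bid improves bidder 0's payoff:

```python
import random

def vickrey(bids):
    """Second-price sealed-bid auction: the highest bidder wins and pays
    the second-highest bid (ties go to the lower-indexed bidder)."""
    order = sorted(range(len(bids)), key=lambda i: bids[i], reverse=True)
    return order[0], bids[order[1]]  # (winner, price paid)

rng = random.Random(1)
values = [round(rng.uniform(10, 100), 2) for _ in range(5)]  # private values

# With truthful bidding, the item goes to the highest-value bidder at the
# second-highest valuation, i.e., the English-auction price (up to the
# minimal bid increment).
winner, price = vickrey(values)
assert winner == values.index(max(values))
assert price == sorted(values)[-2]

# Truthful bidding is weakly dominant: no alternative bid by bidder 0
# yields a higher payoff than bidding values[0].
def payoff(bid):
    w, p = vickrey([bid] + values[1:])
    return values[0] - p if w == 0 else 0.0

truthful = payoff(values[0])
assert all(payoff(b / 2.0) <= truthful + 1e-9 for b in range(241))
print("Vickrey price:", price, "(the second-highest valuation)")
```

The dominance check sweeps bidder 0's bid over a grid from 0 to 120 and confirms that no deviation beats bidding the true value.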
1The essence of the argument can be seen in a two-bidder example. Let v1 be the true value of bidder 1, and let b1 and b2 be the bids of the two bidders. Then the expected payoff to bidder 1 is Prob(b1 ≥ b2) × (v1 − b2). If v1 > b2, bidder 1 would like this probability to be equal to 1, which he can assure by reporting b1 = v1; if v1 < b2, he would like it to be 0, which reporting b1 = v1 also assures.
surplus for the sellers (Bakos, 1997). The dynamics of "friction-free" capitalism are not attractive to sellers of commodity products who had previously depended on geography or customer ignorance to insulate them from the low-cost seller in the market. As geography becomes less important, new sources of product differentiation, such as customized features, service, or innovation, will become more important, at least for those sellers who do not have the advantage of the lowest cost of production.
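The comparative statics of Bakos's argument can be illustrated with a deliberately crude simulation in which lower search costs are proxied by the number of price quotes a buyer compares before purchasing; the marginal cost and seller markups below are invented for illustration:

```python
import random

def average_paid_price(n_quotes, prices, n_buyers=10_000, seed=0):
    """Each buyer samples `n_quotes` sellers at random and buys from the
    cheapest seller sampled; lower search costs mean more quotes compared."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_buyers):
        total += min(rng.sample(prices, n_quotes))
    return total / n_buyers

marginal_cost = 10.0
# Posted prices: marginal cost plus assorted (invented) markups.
prices = [marginal_cost + m for m in (0.5, 1, 2, 3, 5, 8, 12, 15, 20, 30)]

for n_quotes in (1, 2, 5, 10):
    print(f"{n_quotes:2d} quotes compared -> average price paid "
          f"{average_paid_price(n_quotes, prices):.2f}")
```

As buyers compare more quotes, the average transaction price falls toward the lowest posted markup over marginal cost, squeezing seller surplus in just the way the model predicts.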
Is this kind of dynamic already emerging in Internet commerce? Although there is much speculation about the effect that the Internet will have on prices, thus far there has been virtually no systematic evidence obtained or analysis done. However, one exploratory study by Bailey and Brynjolfsson (1997) did not find much evidence that prices on the Internet were any lower or less dispersed than prices for the same goods sold via traditional retail channels. Their analysis was based on data from 52 Internet and conventional retailers for 337 distinct titles of books, music compact disks, and software. Bailey and Brynjolfsson provided several possible explanations for their unexpected findings, including the possibility that search on the Internet during the sample period was not as easy as is sometimes assumed, that the demographics of the typical Internet user encouraged a higher price equilibrium, that many of the Internet retailers were still experimenting with pricing strategies, and that Internet retailers were differentiating their products (e.g., by offering options for delivery or providing customized recommendations), which added value. Because of the rapid pace of change in Internet commerce, it is not clear whether their findings will apply to current and future periods. However, they have suggested the need for close examination of the common assumption that the Internet will be simply a "friction-free" version of the traditional retail channels.
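One simple measure of price dispersion used in comparisons of this kind is the coefficient of variation of posted prices. The sketch below applies it to hypothetical price lists; the figures are invented, not taken from the Bailey and Brynjolfsson sample:

```python
from statistics import mean, stdev

def price_dispersion(prices):
    """Coefficient of variation (standard deviation / mean), a simple
    scale-free measure of how widely posted prices vary across retailers."""
    return stdev(prices) / mean(prices)

# Hypothetical posted prices for one book title at several retailers.
internet = [21.95, 19.99, 22.50, 20.75]
conventional = [23.95, 24.50, 22.99, 23.50]
print(f"Internet channel CV:     {price_dispersion(internet):.3f}")
print(f"Conventional channel CV: {price_dispersion(conventional):.3f}")
```

Comparing such coefficients across channels for matched titles is essentially the exercise the study performed.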
Despite the uncertainties about electronic commerce and relatively few attempts to look at the broad picture, there is a great deal of private-sector interest. Electronic commerce is also receiving increasing attention from policy makers. The Clinton Administration's Framework for Global Electronic Commerce (1997; available online at ‹http://www.whitehouse.gov/WH/New/Commerce/index.html›) highlights both the economic potential of electronic commerce via the Internet and the need for government to avoid undue regulatory restrictions and not to subject Internet transactions to additional taxation.
Right now society is in a period of intense speculation and experimentation. Experimentation involves a risk owing to path dependence: technological choices made in the past may constrain what technological options will be compatible in the future. Standards developed now for electronic payment may remain in use well into the future, and careful thought should be given to their implications. For example, some of the architectural design of the Visa-MasterCard "Secure Electronic Transactions (SET)" technical standard was necessitated by the need to conform to current cryptographic export control policies. Yet these policies are today very much in flux and may be entirely different in a few years. Migrating the SET standard so that it is consistent with these new policies could be very costly, if not impossible.
Even if cryptography policy changes, society may be locked into design choices already made. Thus it is critically important that any such standards be examined by those with expertise in technology, economics, business, and law; no one discipline suffices to provide the necessary expertise.
One important insight about electronic commerce that follows from a legal and economic analysis has to do with assignment of liability, that is, with who ends up bearing the costs of unexpected outcomes. If the goal is to minimize overall transaction costs, liability should be assigned most heavily to those who are best placed to reduce the costs of transactions.
Consider, for example, the rule in the United States that the consumer is liable for only the first $50 in losses from fraudulent credit card use. This assignment of liability has led to the development of highly sophisticated statistical profiling of consumer purchases that allows companies to detect fraudulent activity, thereby reducing the total costs of transactions. If the liability had instead rested entirely with consumers, one might have expected to see them being more careful in protecting their credit cards, but there would have been little reason for banks to invest in risk management technology. Another example is the difference between U.S. and U.K. assignment of liability for automatic teller machine (ATM) fraud. In the United States the burden of proof lies with the bank; in the United Kingdom it lies with the customer. This has led U.S. banks to invest in video cameras at ATMs, whereas U.K. banks typically have not made such investments.
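This liability-driven incentive is what pushed card issuers toward statistical risk management. At its very simplest, purchase profiling resembles the toy sketch below, which flags a charge far outside a cardholder's historical pattern; the purchase history, amounts, and three-standard-deviation threshold are invented, and production systems model far richer features:

```python
from statistics import mean, stdev

def flag_unusual(history, new_amount, threshold=3.0):
    """Toy purchase-profiling rule: flag a new charge whose amount lies
    more than `threshold` standard deviations from the cardholder's
    historical mean.  Real fraud-detection systems are far more
    sophisticated (merchant type, location, velocity, and so on)."""
    mu, sigma = mean(history), stdev(history)
    z = (new_amount - mu) / sigma
    return abs(z) > threshold, z

history = [22.50, 41.00, 18.75, 35.20, 27.99, 30.40, 25.10, 38.60]
for amount in (33.00, 950.00):
    flagged, z = flag_unusual(history, amount)
    print(f"charge ${amount:7.2f}: z-score {z:6.1f}, flagged: {flagged}")
```

The point of the example is institutional, not technical: the party that bears the loss is the party with the incentive to build and refine such filters.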
The issue of liability is critical for electronic commerce. A survey released in March 1997 by CommerceNet/Nielsen Media Research (1997) found "a lack of trust in the security of electronic payments as the leading inhibitor preventing people from actually purchasing goods and services online." This is remarkable considering that the standard $50 limit still applies to online credit purchases. One might conjecture that credit card companies are not interested in a marketing effort to educate the public on this issue until they understand their own potential liabilities for fraud and misuse. There is also a need to understand the psychological and social dimensions of "trust," since trust is a critical component of any sort of commercial transaction.
The information economy calls for new economic institutions such as "certificate authorities" that certify the connection between legal identities and possession of cryptographic keys, that is, a public-key infrastructure. Large certificate issuers include VeriSign, which has close ties to the credit card issuer Visa, and GTE, which has close ties to MasterCard. The economics of this industry are uncertain and clearly depend critically on the issue of liability assignment.
Another factor that is potentially delaying the growth of electronic commerce is intellectual property protection. Some of the broader issues are dealt with in section 2.4.1, "Protection of Intellectual Property," but some of the specifically commerce-oriented issues are mentioned here.
The first such issue is the role of copy protection, a technical means of making it more difficult to create additional functional copies of software. In a competitive environment, any copy protection that inconveniences users is difficult to maintain; copy-protected mass-market software was effectively competed away during the mid-1980s. See Shy (1998) for an economic analysis.
More generally, numerous copy protection schemes have been proposed to help safeguard intellectual property, suggesting that there will almost certainly be a standards battle for supremacy in this market. Besen and Farrell (1994) have provided a survey of the economic analysis of standards conflicts that sets forth the current state of the art in this area. More work in this area would be valuable.
Electronic commerce also raises significant antitrust issues. There are large economies of scale in distribution: a single general-purpose online bookstore or CD store can serve a very large market. There are also potential demand-side economies of scale in payment mechanisms and software, which can lead to a winner-take-all market structure with a single firm (or small set of firms) dominating the market.
There have been a number of interesting studies of market structure in this context (see, e.g., Katz and Shapiro, 1994, for a survey); however, much more work is needed. The role of antitrust policy in an industry with strong network externalities and standardization issues is especially important to understand. A dominant firm brings the benefits of standardization, but presumably also imposes inefficiencies due to its monopoly position. The social trade-off between these benefits and costs is critically important and is the subject of much current debate. Some dispassionate analysis would be highly welcome.
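The winner-take-all tendency under strong network externalities can be illustrated with a small positive-feedback simulation in the spirit of Arthur's increasing-returns models; the adoption rule, the exponent, and all parameters below are invented for illustration:

```python
import random

def simulate_adoption(n_adopters=5_000, alpha=2.0, seed=None):
    """Positive-feedback adoption: each new adopter joins one of two
    platforms with probability proportional to (installed base)**alpha.
    An exponent alpha > 1 stands in for strong demand-side economies of
    scale.  Returns the leading platform's final market share."""
    rng = random.Random(seed)
    counts = [1.0, 1.0]  # seed each platform with one adopter
    for _ in range(n_adopters):
        wa = counts[0] ** alpha
        p_a = wa / (wa + counts[1] ** alpha)
        counts[0 if rng.random() < p_a else 1] += 1
    return max(counts) / sum(counts)

shares = [simulate_adoption(seed=s) for s in range(100)]
print(f"mean final share of the leading platform: "
      f"{sum(shares) / len(shares):.2f}")
```

Which platform wins is decided by small random early leads rather than by intrinsic quality, which is precisely why the social trade-off between standardization benefits and monopoly costs is so hard to evaluate.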
There has been much speculation about the macroeconomic effects of electronic commerce, such as the loss of economic sovereignty. Most economic analysis has focused on moving from multiple currencies to a single currency (as in the European Union context), but the emergence of currencies issued by private companies and barter arrangements is a distinct possibility. Economic monetary history would likely shed some light on how an economy functions in the presence of multiple private currencies, since that circumstance was common up until the turn of the last century.
There is also the question of who will appropriate the benefits of electronic commerce. Varian (1996a) has argued that price discrimination will become a widely used approach to selling information. (One form of price discrimination is enabled by bundling, discussed in section 2.4.6, "Pricing Information.") He cites earlier studies suggesting that price discrimination will be benign from the viewpoint of overall welfare, but price discrimination may certainly affect the division of economic gains between consumers and firms. These earlier studies typically assumed a monopolistic market structure, which may or may not be appropriate for electronic commerce. Thus extending
these models to more competitive market structures would enhance understanding of the likely impact of electronic commerce on consumers.
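A stylized two-consumer, two-good example in the spirit of the bundling literature shows how bundling can serve as a form of price discrimination; the dollar valuations are invented and negatively correlated, the case in which bundling helps the seller most:

```python
# Hypothetical willingness to pay (dollars) of two consumers for two
# information goods; valuations are negatively correlated, the case in
# which bundling is most effective for the seller.
valuations = {               # (good A, good B)
    "consumer 1": (8, 2),
    "consumer 2": (2, 8),
}

def best_separate_revenue(valuations):
    """Best revenue from posting a single price for each good separately."""
    revenue = 0
    for good in range(2):
        wtps = [v[good] for v in valuations.values()]
        # Try each willingness-to-pay level as the posted price.
        revenue += max(p * sum(1 for w in wtps if w >= p) for p in wtps)
    return revenue

def best_bundle_revenue(valuations):
    """Best revenue from selling the two goods only as a bundle."""
    totals = [sum(v) for v in valuations.values()]
    return max(p * sum(1 for t in totals if t >= p) for p in totals)

print("best revenue, separate prices:", best_separate_revenue(valuations))  # 16
print("best revenue, pure bundle:    ", best_bundle_revenue(valuations))    # 20
```

Selling separately, the seller must either price high and sell one unit or price low and sell two of each good; the bundle at $10 extracts each consumer's full $10 of surplus, shifting gains from consumers to the firm even though total welfare is unchanged.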
2.5 Illustrative Broad Topics For Ongoing Research
Workshop discussions and position papers yielded numerous suggestions for research topics, a number of which are discussed above. From these topics, spanning a wide range of interdisciplinary subjects from economic productivity to communities in the information age, the workshop steering committee selected an illustrative set of promising areas for research, listed below.
•Interdisciplinary study of information indicators. The idea of developing a method for quantifying certain aspects of society in the United States is as old as the Constitution. Over the last two decades, researchers have recognized and begun to analyze the increasing role that information plays in all aspects of society. These efforts have proved most fruitful when measuring the contribution of information to the economy,32 the size of the information work force,33 and the level of penetration of the information infrastructure.34 In most of these analyses, the conclusions drawn have been consistent with the view that society is in the process of a fundamental change through the rapid development and implementation of information technologies and the products and services associated with them.
Some of these studies raise, indirectly, the question of whether a set of indicators representing the information activities of society, such as public discourse and democratic processes, could be used to improve understanding. This approach was pioneered by Borko and Menou (1983). In essence, looking at society from an information perspective leads us to perceive society as composed of information structures and communication behaviors. In other words, activities that lead to the construction of environments for producing, receiving, distributing, and processing information reflect the creation of information structures, while activities that involve the transmission of information reflect communication behaviors. Box 2.9 lists some notional indicators.
The dramatic information-centric changes that have occurred across all societies in recent decades suggest the value of measuring the social forces enabled by the development of information structures and the prevalence of communication behaviors. More fully developed, a set of quantitative information indicators would offer opportunities for comparatively measuring community information assets, public participation, interconnectedness, social capital, information poverty, and universal service.
It would be useful for the nation to invest in an interdisciplinary study of information indicators. The perspectives of many disciplines come to bear on the question of measuring impact. An exploration of how different disciplines do or do not reach consensus about how to measure impacts, and the extent to which consensus is desirable, is called for. From such an exchange can come broadly
Books produced (general/textbooks)
Cable TV access/trunk lines
Number of cinema seats
Number of computer systems/databases
Number of database subscribers
Number of journal articles/technical reports
Number of libraries/archives
Number of modems
Number of movies released
Number of newspapers
Number of online subscribers
Number of personal computers
Number of registered computer users
Number of satellite dishes
Number of telephones
Number of TVs/radios
Number of periodicals published (general/scientific)
Number of public telephones
Number of radio/TV channels
Number of telephone access/trunk lines
Circulation of library volumes
Domestic/international mail traffic
First-class letters mailed
Hours spent accessing the Internet
Hours spent listening/viewing radio/TV
accepted measures of access, use, and the impact of information and information technology. One particular outcome could be the aggregation of the kinds of micro indicators listed in Box 2.9 into broadly accepted macro information indicators such as the following:
Interconnectivity index. A measure of the facility of electronic communication, and an evaluation of the development of this dimension of the information infrastructure;
Information quality of life index. Similar to an index produced by the Organisation for Economic Cooperation and Development, an index that would attempt to evaluate the qualitative levels of communication available to individuals;
Leading information indicators. An index that would attempt to predict the growth of the information infrastructure;
Home media index. An index of the state of penetration of communications technologies in the home that might qualify as a leading index of the potential for future consumption of information; and
Marginalization index. An index that would measure the extent to which specific populations are excluded from participation in the information infrastructure.
Were such a set of indicators developed, funding agencies like the National Science Foundation might have a standardized tool in hand through which to assess the outcomes of the research that they sponsor.
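To make the aggregation step concrete, one purely illustrative recipe is to express each micro indicator as a fraction of a benchmark, cap it at 1.0, and average; the indicator names, benchmark levels, and equal weighting below are assumptions for the sketch, not values from the report:

```python
def interconnectivity_index(indicators, reference):
    """Illustrative aggregation of per-capita micro indicators into a
    single macro index: each indicator is expressed as a fraction of a
    benchmark level, capped at 1.0, and the fractions are averaged with
    equal weights.  The names and benchmarks here are invented."""
    ratios = [min(indicators[k] / reference[k], 1.0) for k in reference]
    return sum(ratios) / len(ratios)

# Hypothetical per-capita benchmarks and one community's actual levels.
reference = {"telephones": 0.65, "personal_computers": 0.40,
             "modems": 0.25, "online_subscribers": 0.30}
community = {"telephones": 0.58, "personal_computers": 0.22,
             "modems": 0.10, "online_subscribers": 0.06}

print(f"interconnectivity index: "
      f"{interconnectivity_index(community, reference):.2f}")
```

Real indicator construction would of course have to confront weighting, normalization, and comparability questions that this sketch assumes away; settling those questions is exactly what the proposed interdisciplinary study would address.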
•Impacts of information technology on labor market structure. Information technology has been linked to wage inequality and other changes in the structure of the labor market (more detail is provided in section 2.3). Understanding the extent to which, and the mechanisms by which, computers may contribute to increased wage inequality is important in determining the nature and extent of public policy responses. This research should acknowledge that computers, by themselves, are not causal agents. Rather, it is the entire constellation of economic and organizational strategies, managerial perspectives, and work practices within which computing technology is embedded that affects wage inequality.
One possible response is improved training of workers for IT-related jobs. Understanding the needs for education and training requires better definition of the skills required to make use of IT. Results from such research would benefit both policy makers and the private sector as they seek to better match education and training to workplace skill requirements.
•Productivity and its relationship to work practices and organizational structures for the use of information technology. Extracting the benefits of new technologies depends in part on organizational adaptation to them. As discussed in more detail above, industrial exploitation of the benefits of the electric dynamo in the early part of this century required new approaches to manufacturing. Organizations using information technology today are at a similar learning stage.
A major impediment to determining optimal work practices and organizational structures has been the lack of a clear picture of what data already exist. Developing an inventory of existing data sets would help speed research in this area. There are a number of places where specific research needs are already apparent, such as the collection of time-series data to help clarify the role of technology in organizational changes.
Understanding the productivity benefits of information technology, and thereby illuminating the so-called productivity paradox, also is worthy of continued research. Important questions include how to better quantify what have been considered "unmeasurable" economic inputs, such as organizational knowledge, and "unmeasurable" outputs, such as product quality, associated with computers.
As recognition grows that productivity gains from information technology increasingly depend not just on the introduction of new technology but also on finding new ways and organizational structures to use it, it is worth noting that advances in the technology have owed much to government-supported computer science research. Advances in economic productivity would benefit from analogous research on how to better use information technology in the workplace. This is one facet of the broader question of learning how to better use information technology to achieve a host of social and economic goals. There are already moves to increase research in this domain; one example is the National Science Foundation's interdisciplinary Knowledge and Distributed Intelligence initiative.
•Intellectual property issues. Information technology raises many new questions about optimal protection of intellectual property rights, posing challenges to policy makers revising intellectual property law or international agreements as well as to commercial interests considering particular intellectual property protection schemes. Many new schemes have been advanced for protection of intellectual property, and more needs to be known to choose among them. While considerable research has been conducted on the effect of different patent regimes on innovation, little has been done on the consequences of different copyright protection schemes (see section 2.4.1). Theoretical work and empirical research on different copyright protection regimes will help inform future actions to protect intellectual property.
•Social issues addressed at the protocol level. The Internet has given rise to many new social issues in intellectual property, privacy, and data filtering. Addressing these social issues at the protocol level, through policies, rules, and conventions for the exchange and use of information, is a promising area for interdisciplinary research. Examples include:
PICS, the Platform for Internet Content Selection, which implements a set of protocols for rating Web sites (Resnick and Miller, 1996);
P3P,35 a project for specifying privacy practices;
Language specifying the terms and conditions by which intellectual property is managed; and
Open Profiling Standard,36 a method for individual users to selectively release information about themselves under specific conditions.
Each of these projects involves both technological and social dimensions. For example, PICS raises technical issues about how best to encode and represent ratings for Web sites; cognitive issues about how elaborate the rating schemes should be; and economic issues about how rating bureaus can recover costs. Another issue is how users can evaluate the trustworthiness of the labels provided by ratings services.
1. See Tyack and Cuban (1995) for an analysis of why earlier technologies for improving teaching and learning never achieved their promise. See also references in CSTB (1994b, 1996c).
2. "Active intervention" refers to deliberate intervention, such as the introduction of new technology or educational practices, for the purposes of research.
3. While this discussion focuses on this question in a U.S. domestic context, in much of the rest of the world, socioeconomic disparities and the gap between urban and rural access are much greater.
4. See ‹http://www.hotmail.com›.
5. Note, however, that not all forms of communication necessarily reduce localness. For example, Wiley and Rice (1933) postulated that the telephone, a point-to-point medium, reinforces locality whereas broadcast media tend to diminish the importance of locality.
6. This work updated earlier work conducted at a time when computing was less prevalent.
7. The "output effect" also includes changing tastes or desires, e.g., the changes in preference for cars rather than horses or for word processors rather than typewriters.
8. One might expect software development to contribute to increased demand for skilled work, but recent work by Brynjolfsson (1997) found that it was not a major factor, at least not in most industrial countries. Although the U.S. software industry is fairly large and growing, it is still not large enough to explain any significant share of the effect.
9. See Brynjolfsson (1993), Attewell (1994), Sichel (1997), and CSTB (1994a) for empirical studies of the productivity paradox. See CSTB (1994a), Baily and Chakrabarti (1988), Brynjolfsson (1993), Wilson (1995), and Brynjolfsson and Yang (1996) for reviews.
10. Concurrent engineering refers to the practice in which personnel from every phase of product development (e.g., design, production engineering, quality control, and service) collaborate in product development beginning at the earliest stages.
11. Note that improvements in the technology for transmitting and manipulating image data increasingly remove this limitation.
12. With the exception of the retirement-planning task force, the studies of differential benefits cited in this section used survey analysis of naturally occurring differential use. Statistical techniques were used to control for the effects of other variables, but because people were not randomly assigned to the use (or nonuse) of technology, strict causal claims are not warranted. In the retirement-planning study, still-employed and recently retired people were randomly assigned to task forces with and without access to technology. Because of the random assignment, causal claims are warranted.
13. U.S. Copyright Office records on documents registered for copyright are available via the Library of Congress Information System (LOCIS) for 1978 onward.
14. Figures on literacy are not always reliable, in part because the definition of literacy is somewhat vague. The numbers given in this discussion were taken from contemporary accounts.
15. Early expectations were that interactive cable services providing video on demand (VOD) or near-VOD would be lucrative and popular. However, early experiments by the cable industry showed that consumer response to VOD was unlikely to generate sufficient revenue to justify investment in interactive cable systems. Investment in two-way capabilities in the cable industry today is predicated on a market for broadband data delivery (including Internet as well as telephony and video conferencing) to both the home and small businesses, in addition to video programming.
16. See ‹http://www.cyberpatrol.com/›.
17. This set of protocols was adopted as a standard by the consortium that sets standards for the World Wide Web.
18. EPIC (see ‹http://www.epic.org/privacy/privacy_resources_faq.html›) contains an extensive list of online resources on privacy issues.
19. This study is based on a sample of 1,009 computer users derived from a sample representative of 2,204 persons, age 18 or over, living in households with telephones and located in the 48 contiguous states.
20. Note that federal legislation passed in 1994 (which did not come into effect until 1997) allows people to restrict the release of personal information from state motor vehicle records.
21. See ‹http://www.sims.berkeley.edu/resources/infoecon/Security.html›.
22. See ‹http://www.melvyl.ucop.edu/›.
23. Branding is an effort to transform something perceived as generic into something with which people associate a brand name. A recent example is the "Intel Inside" campaign, which built up significant brand awareness for CPUs, something that the average individual cared little about.
24. "Push" technologies send information to an intended consumer without that consumer having requested it, while "pull" technologies send information only in response to a specific request. Radio and television broadcasting and e-mail are examples of push technologies because they transmit information regardless of whether anyone specifically requested it; the World Wide Web is an example of pull technology, since a page must be requested before it is sent. Note that push technologies can be used over the Internet as well; examples include the PointCast system, which delivers customized news to users' computer desktops.
25. The National Library of Medicine's MEDLINE system makes extensive bibliographic information covering the fields of medicine and health care available free of charge to the public through a Web site.
26. The Electronic Data Gathering, Analysis, and Retrieval system makes available to the public through a Web site much of the information companies are required to submit to the U.S. Securities and Exchange Commission.
27. Such a gap exists, for example, between various socioeconomic groups, between urban and rural areas, and between industrialized and developing countries.
28. Also see Markus (1987) on the theory of critical mass for interactive media.
29. See ‹http://raven.stern.nyu.edu/networks›.
30. Herodotus describes the use of auctions in Babylon as early as 500 BC. It is remarkable that a venerable economic institution like an auction has found a receptive audience on the Internet. The Internet Auction List (‹http://www.usaweb.com›) lists more than 50 sites that have regular online auctions, and more are being added every day. Computer equipment, air tickets, and Barbie dolls are being bought and sold daily via Internet auctions. Even advertising space is being sold via auction on AdBot (‹http://www.adbot.com›).
31. There have, however, been problems due to overbidding (the so-called "winner's curse" phenomenon, described in Box 2.8) and signaling. Signaling can occur in multiround auctions when the bid values are used to signal the intent of the bidder, in violation of the rule against there being any collaboration or collusion between auction participants. For example, a bid of $1,000,202 might indicate that a bidder has a particular interest in the market with telephone area code 202.
32. See Jussawalla et al. (1988); Machlup (1962); and Porat (1977).
33. See Bell (1973); Katz (1988); Machlup (1962); and Schement (1990).
34. See Dordick and Wang (1993); Ito (1981); and Kuo (1989).
35. See ‹http://www.w3.org/Privacy/Overview.html›.
36. See ‹http://www.w3.org›.