
2 Illustrative Examples and Unanswered Questions

This chapter discusses a selection of social science studies that have provided useful insights for understanding the impacts of computing and communications and shaping public policy. The aim is to give a flavor of the results produced by earlier studies and to introduce some areas viewed as especially promising for future research, including points raised and issues discussed at the June 1997 workshop and in position papers submitted by the participants. It is not intended to be comprehensive; the points and issues discussed illustrate the range and value of social science research and provide a basis for framing important research questions. The chapter concludes with an illustrative set of broad topics for ongoing research drawn from the discussion presented below.

Since the range of potential impacts associated with information technology is vast, the examples and issues outlined below are organized according to the domains in which they are extraordinarily important: private life, including households and community; social infrastructure; and business, including labor and organizational process. Cutting across all of these are issues integral to life in an information economy and society—among them protection of intellectual property, pricing of information, and electronic commerce. Another significant impact of computing and communications is the changing boundaries between these domains—between people and organizations, organizations and nations, and the private and public sectors.

2.1 Households and Community

Americans are rushing to furnish their homes with a host of devices for sending, receiving, and processing huge quantities of information through diverse media across multitudes of channels.



Figure 2.1 shows trends in acquisition of devices such as personal computers compared with ownership of two consumer staples—refrigerators and automobiles. If one could lift the roof from the characteristic U.S. home, one would see that it looks increasingly like a multiplex theater. What once took place in the town square, in the neighborhood tavern, on market day, or in the library can now occur as easily in the study or in the bedroom. Computers and advanced communications are also playing increasingly significant roles in community organizations and in education.

[FIGURE 2.1 Penetration of various household devices in the U.S. market over the 20th century. SOURCE: Data from Belinfante (1991), Electronic Industries Association (1984-1990, 1992), Television Bureau of Advertising (1991), and U.S. Bureau of the Census (1986, 1990-1992).]

2.1.1 Computer Use in the Home

Computer use in the home is a relatively recent phenomenon, and one that has changed considerably in the past two decades. At first, a majority of use was work-related. Today computers are more accepted as a household technology, with an increasing amount of software and other development targeted to the home. (For further discussion and a model for the interaction of the household and technology, see Venkatesh, 1996.)

Descriptive studies of computer use in the home are relatively rare and almost always very "thin," that is, based on a small number of survey questions.

More intensive and extensive study of computer use in the home is required to understand what people use the computer for; how computer use substitutes for other activities; and how it affects family dynamics, children's educational performance, adults' employment activities, and so forth. But even the best descriptive studies inevitably confound household computer use and its effects with effects stemming from household income and educational status. Households with greater resources are much more likely to have and use computers, and they are likely to use them in different ways. In this situation it is difficult to understand how much of any described effect is due to the technology and how much is due to ancillary resources the household brings to bear on such challenges as understanding how to use software, troubleshoot technical problems, select software for children, and incorporate computers into family activities.

A good way to untangle the effects of technology from the effects of other household resources such as income and education is to conduct field experiments in which households are given current technology and well-designed training and support to compensate, at least in part, for limited income and educational resources. The Homenet project (Kraut et al., 1996; Kiesler et al., 1997), organized by social scientists at Carnegie Mellon University, is a field experiment documenting the use and effects of household computers in more than 100 households in Pittsburgh, Pennsylvania. Families were selected for demographic diversity, and a matched sample of eligible but not-selected families was also tracked. Each selected family was given a computer, modem, extra telephone line, full Internet accounts for each family member above age 8 who wanted one, software, training, online support, and access to an evening telephone help desk. In exchange, families agreed to participate in a variety of data collection efforts, including surveys, home interviews, and automated logging of software use. Data collection and analysis are still under way, but the researchers have already documented important findings:

• Even with hardware and software designed for ease of use, personal training, and personal support, people found the technology hard to understand and use. Significantly, many of those who stopped (or never started) use blamed themselves rather than the technology for their problems.

• Generational effects persist even when older and younger generations have the same access to the same technology. People in the household under the age of 19 use the computer more than people older than 19.

• Use of electronic mail is a better predictor of later e-mail and Web use than is use of the Web.

• Household income and educational levels are not valid predictors of Internet use when all the people compared have adequate technology and support.

2.1.2 Differential Impacts of Technology

It is rhetorically convenient to talk as though technology is used by everyone in the same way and affects everyone similarly, regardless of their life circumstances. Thus, such generalizations as "e-mail flattens organizational hierarchies" or "people who spend time online reduce their face-to-face interaction" are common. Historians of earlier technologies such as the telephone have noted that people use the same technology differently and that it has different effects, depending on a person's age, gender, income level, geographic location, and other circumstances (see, e.g., Fischer, 1992; Mueller and Schement, 1996). Numerous researchers have reached the same conclusion about computers. Attewell and Battle (1997) showed that equivalent technological capability in homes is associated with higher school test scores when family income is higher. The Homenet study (Kraut et al., 1996; Kiesler et al., 1997; described in section 2.1.1) demonstrates that the same technology in the home is used differently by males and females, and by teenagers and adults. A RAND study of retirees (Bikson et al., 1991) showed that the same technology is used differently by recent retirees and same-age counterparts who have continued to work. Section 2.3.4 gives examples of the differential impact of e-mail use in scientific communities.

An important related question is why some people who used the Internet at some time then stopped using it. Demographic studies of the populations of network users and nonusers are required, and the growing population of people who experimented with Internet use but did not become long-term users deserves analysis. One study of this topic, using Nielsen data from more than 14,000 households, found that Internet "drop-outs" were less likely than those who continued using the Internet to have developed social relationships and roles online (Chung, 1998).

Research on differential impacts holds an important message for those, such as policy makers, who wish to understand the interactions between technology and society as a single, uniform impact: they will forever be disappointed or deluded. It is vital to recognize that the "same" technology has different effects in different social and organizational circumstances. Indeed, one of the most important contributions that social science research can make is in exploring how social and organizational conditions—such as income, age, sex, or work status—affect and are affected by how technology is used.

2.1.3 Community

The Internet offers a new locus for communication and participation. According to a Business Week/Harris poll released April 28, 1997, of the 89 percent of those surveyed who used e-mail, nearly one-third considered themselves part

of an online community. Forty-two percent of those involved in an online community said that it was related to their profession, 35 percent said that their community was a social group, and 18 percent said that it revolved around a hobby.

The shift away from traditional notions of public space may threaten older forms of community. Polls show, for example, that more New Jerseyans know the names of the mayors of New York and Philadelphia than know the names of the mayors of their own towns. Although regions vary, this decline in localism seems to be a characteristic of the U.S. political landscape. Large media networks collect audiences by concentrating on stories that appeal to large blocks of viewers and readers; thus suburban and rural citizens are quite likely to recognize the name of a city official for whom they cannot in fact vote. Individuals who commute to distant workplaces and whose personal networks are spread geographically are further disconnected. The possibility that localism may become increasingly irrelevant to increasing numbers of Americans signals social and political change of a profound nature. For as long as community has remained intact, institutions such as libraries, churches, and schools have functioned to bring people together, to educate newcomers, and to reinforce the virtues of citizenship. Today the number of potential secondary anonymous relationships has increased vastly as individuals seek to accomplish tasks by relying on mediated information received from strangers. Home-centered, individualistic, information-heavy approaches to carrying out personal and professional lives offer people opportunities to bypass both the traditional community and the public sphere.

Hard evidence on the issue of localism and engagement with the community is mixed. For example, Americans today change homes and communities at about half the rate that they did in the mid-19th century, and even less often than they did in the 1950s. It is possible to argue that people are less involved on a daily basis with their neighbors—and more with people elsewhere—than they were a century ago, but the degree of that change is as yet unestimated. Such change may also be a result of other phenomena of the early 20th century—rural-to-urban migration, the streetcar in cities, and the automobile in rural areas—rather than of new communications.

In a sense, the questions first raised by the University of Chicago school of sociology (e.g., Park, 1916, 1955) in the early part of this century persist in their relevance: How does community form out of the ferment of diverse cultural experiences? How does democracy emerge from the diverse cultural experiences of immigrants? At the end of the 20th century such questions are still being asked; but whereas the Chicago school focused on the role of the newspaper as an agent for assimilation and teaching democracy, the question today is under what conditions new information technology and media will bring Americans together or pull them apart. Box 2.1 illustrates some interesting areas meriting further exploration.

BOX 2.1 Community in the Information Age

Interface between the household and the community. How does the transformation of household functions enabled by information technologies alter an individual's expectations of community?

Political values. Does identification with networked communities affect Americans' construction of democratic participation, responsibilities, and obligations? Will Americans devalue political values associated with geographic community as they integrate into networked communities?

Virtual communities. Are they communities? In what ways do people enact the rights and responsibilities of citizenship in virtual communities?

Networked communities and the elderly. To what extent does participation in networked communities enrich the lives of the elderly and/or contribute to alienation from geographic communities?

Families. Does Internet use by families contribute to the establishment and maintenance of family networks? How fragile are these networks?

Friendship. Does making friends in cyberspace enrich or fragment emotional life? Does dependence on cyber friends result in lower motivation to develop friendships with those close by?

Computer networks as social networks. How social are computer networks? What needs do they meet or fail to meet?

2.1.4 Education

Increased use of computing technologies in K-12 education is giving rise to important new areas for social science research. The Internet has penetrated rapidly and extensively into U.S. public schools. A U.S. Department of Education survey found that as of fall 1996, 65 percent of schools had access to the Internet; penetration had increased by 15 percentage points in each of the prior 2 years (Heaviside et al., 1997). The Office of Technology Assessment (OTA) estimated that in 1995, U.S. schools had 5.8 million computers for use in instruction—about one for every nine students (Office of Technology Assessment, 1995). However, the presence of computers for instruction does not necessarily translate into student use of computers for instruction. The OTA reported that despite the presence of close to 6 million computers for instruction in the nation's schools (in 1995; presumably there are more now), students spent only about 2

hours a week using them. Like factories at the introduction of the electric dynamo or businesses at the introduction of computing technology, schools and teachers may not yet have learned how to modify work practices and organizational structures to take advantage of computing and communications technology. Schools have in general not found it easy to use technologies effectively for improving teaching and learning.1 Nevertheless, it is important for policy makers, educators, and parents to understand what could be accomplished with computing technology in schools under optimal conditions. Although a variety of proposals have been advanced to increase the availability of computers and Internet connectivity, and substantial investments have been made in purchasing technology, relatively little attention has been paid to how these resources will be used once they are in place.

Because of the decentralized nature of U.S. education, it is difficult to gauge for the nation as a whole the breadth and depth of change in educational practice and outcomes associated with the increasing presence of computing and communications technology in schools and classrooms. While many state departments of education and local districts are implementing new programs with a technology component, efforts to design and employ measures of effectiveness that would allow policy makers and parents to compare across projects are generally lacking.

A recent report of the President's Committee of Advisors on Science and Technology, Panel on Educational Technology, stresses the importance of experimental research in exploring which educational approaches are most effective (PCAST, 1997; see Box 2.2). The report notes that research on educational technology has received minimal funding relative to total national spending on K-12 education, and it urges increased investment. One of the major research categories proposed is rigorous empirical study of which approaches to using information technology in schools are most effective. The starting point for empirical study is descriptive inventories of projects with comparable measures of effectiveness, which would provide an exceptionally useful knowledge base. Such a study can take advantage of natural variation across states and school districts and would not require active intervention.2 This mapping of the range of endeavors under way would lay the foundation for the second phase, a more intensive study of how best to use computers in education.

It would be worth considering how to organize, fund, and research a small number of schools as demonstration sites where work practices and organizational structures are radically redesigned to improve teaching and learning through technology. To achieve a fair demonstration, such schools would have to be paired with a second set of schools matched according to student and staff demographics and capabilities; the second set would receive economic resources comparable to those of the first set, which they could deploy in a range of other ways. Although natural variation among schools would be sufficient for the descriptive phase, this active intervention is required for the second phase in order to derive useful conclusions in the short run.
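
As a minimal sketch of the pairing step described above (an illustration added here; the covariates, weights, and data are hypothetical, and the report does not prescribe a particular matching procedure), demonstration schools could be matched to comparison schools by nearest-neighbor distance on standardized demographic measures:

    import math

    # Hypothetical demonstration schools and candidate comparison schools.
    demo_schools = [
        {"id": "D1", "enrollment": 450, "pct_free_lunch": 0.62, "pct_new_teachers": 0.18},
        {"id": "D2", "enrollment": 900, "pct_free_lunch": 0.25, "pct_new_teachers": 0.09},
    ]
    candidates = [
        {"id": "C1", "enrollment": 480, "pct_free_lunch": 0.60, "pct_new_teachers": 0.20},
        {"id": "C2", "enrollment": 950, "pct_free_lunch": 0.30, "pct_new_teachers": 0.08},
        {"id": "C3", "enrollment": 300, "pct_free_lunch": 0.80, "pct_new_teachers": 0.30},
    ]
    COVARIATES = ["enrollment", "pct_free_lunch", "pct_new_teachers"]

    def scale(schools):
        """Standard deviation of each covariate, so that units are comparable."""
        sds = {}
        for c in COVARIATES:
            vals = [s[c] for s in schools]
            mean = sum(vals) / len(vals)
            sds[c] = math.sqrt(sum((v - mean) ** 2 for v in vals) / len(vals)) or 1.0
        return sds

    def distance(a, b, sds):
        """Euclidean distance in standardized covariate space."""
        return math.sqrt(sum(((a[c] - b[c]) / sds[c]) ** 2 for c in COVARIATES))

    sds = scale(demo_schools + candidates)
    available = list(candidates)
    for school in demo_schools:
        match = min(available, key=lambda c: distance(school, c, sds))
        available.remove(match)  # pair without replacement
        print(school["id"], "paired with", match["id"])  # D1-C1, D2-C2

In a real design, of course, the covariate list would be far richer and the matching contested; the point of the sketch is only that the pairing step is mechanical once the covariates are agreed on.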

BOX 2.2 Research Recommendations of the President's Committee of Advisors on Science and Technology, Panel on Educational Technology

1. Basic research in various learning-related disciplines (including cognitive and developmental psychology, neuroscience, artificial intelligence, and the interdisciplinary field of cognitive science) and fundamental work on various educationally relevant technologies (encompassing in particular various subdisciplines of the field of computer science).

2. Early-stage research aimed at developing innovative approaches to the application of technology in education which are unlikely to originate from within the private sector, but which could result in the development of new forms of educational software, content, and technology-enabled pedagogy, not only in science and mathematics (which have thus far received the most attention), but in the language arts, social studies, creative arts, and other content areas.

3. Rigorous, well-controlled, peer-reviewed, large-scale (and at least for some studies, long-term), broadly applicable empirical studies designed to determine not whether computers can be effectively used within the school, but rather which approaches to the use of technology are in fact most effective and cost-effective in practice.

SOURCE: Reprinted from PCAST (1997), p. 53.

The politics and economics of designing and running such demonstration studies would be enormously complex and contentious. Yet school districts and teachers are currently making decisions about how to allocate money and time for technology-related efforts without the benefit of good information about the potential consequences of their decisions. A series of discussions involving both the public and private sectors—and including educators, parents, technologists, and researchers—is called for to explore the feasibility and usefulness of such demonstration projects.

A significant opportunity to study the use of information technology in the public schools is presented by the Schools and Libraries Universal Service Fund, which was established as part of the Telecommunications Act of 1996. With funding of up to $2.25 billion per year, the program will provide discounts on telecommunications services, Internet access, and networking, with the largest discounts going to rural and inner-city communities. By enabling a large number of schools to acquire new technology, this program in effect creates a large-scale "laboratory" where the sorts of research described above could be conducted.

2.2 Social Infrastructure: Universal Service

Formulating public policy on aspects of social infrastructure such as universal access to telephony and other communications services requires decision making about how large amounts of money are allocated and how broad segments of society are served. Although the debate about such questions may often take on a political cast, both empirical research and the application of social science theory offer much to help guide public policy making and the investment of public resources.

Since the value of a network—such as the public telephone network or the Internet—depends on the total number of people connected to it (a phenomenon known as "network externalities"), it is often argued that access to networks should be universally provided. Universal service has long been part of U.S. telecommunications policy, and there are those who argue that universal service is an appropriate public policy goal for Internet access (see, for example, Anderson et al., 1995).3
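
The logic of network externalities can be made concrete with a standard back-of-the-envelope calculation (an illustration added here, not an argument made at the workshop). If any two of a network's n subscribers can communicate, the number of possible pairwise connections is

    \binom{n}{2} = \frac{n(n-1)}{2},

so the potential value of the network grows roughly with the square of its membership, while the cost of adding a subscriber is roughly constant. The marginal subscriber thus creates a benefit on the order of n that accrues largely to the other subscribers rather than to the one who joins; it is this uncompensated benefit to others that constitutes the externality.
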
Whether or not one agrees that universal service for networks is an appropriate objective of public policy, the historical evidence suggests that there would be widespread popular support for applying universal service policies to new networks made possible by advances in technology. The political demand for universal service has repeatedly induced Congress to ensure universal service at uniform rates. For example, a postal service available to all was established by the Constitution. Initially (in 1792), postal rates for a first-class letter depended on the distance it was to be carried: 6 cents for less than 30 miles, 8 cents for 31 to 60 miles, and so on through nine rate classes to the highest rate of 25 cents for more than 450 miles. In 1845 the rate structure was collapsed to only two categories: 5 cents for not more than 300 miles and 10 cents for more than 300 miles. In 1863 a uniform rate (3 cents) regardless of distance was established, along with free intracity delivery (U.S. Bureau of the Census, 1975). Rural free delivery began in 1896. Subsidized parcel post became effective in 1913, connecting rural residents to the advantages of city department stores through mail-order houses like Sears, Roebuck and Montgomery Ward. The premium for airmail delivery was dropped in 1978.

For services provided by private businesses, government regulation was often used to ensure universal service and nondiscriminatory rate structures. Railroad rates were regulated by the Interstate Commerce Commission (ICC) beginning in 1887. Interstate telephone rates were regulated by the ICC, and later the Federal Communications Commission, beginning in 1919, while intrastate telephone calls were made subject to state regulatory authority. Telephone companies were required to charge a uniform fee for service connection. Cable television rates and access have also been regulated.

Government ownership, government subsidies and loans, and direct government programs have also been used to ensure universal network services. Land grants and other government assistance brought railroads to every city in the country. The Rural Electrification Administration was established in 1935 to extend electrical service to areas where high construction costs and low population density had made private service unprofitable. The federal highway program and later the federal Interstate Highway System connected every congressional district to the national transportation network.

Education can also be thought of as a good with substantial network externalities. In the United States, elementary education has been provided universally (and compulsorily after the 1880s), and secondary education has been provided universally since the mid-1940s. College education has been subsidized by the state and federal governments since the land-grant universities were established shortly after the Civil War. Increasing fractions of the population have benefited from government-subsidized higher education, and special programs have been introduced to help children from low- and middle-income families pay the cost of college.

Two points about these government efforts to foster or mandate universality for network goods need to be stressed. First, all of these congressional and state efforts were designed to accomplish (as much as possible) universal geographic connectivity. Thus letters with 32-cent stamps are delivered to remote sites in the Alaskan north, in mountainous wilderness, and on small, but inhabited, islands. Even Hawaii has an interstate highway! Rural residents received telephone and electrical service just as their city cousins did. Second, the principle of universality was to extend to people in all income classes, rich and poor alike. This has often gone beyond establishing uniform rates for service to the creation of subsidized "lifeline" rates for basic service at prices presumably affordable to even the poorest families.

The political logic behind these moves is threefold. First, they have been defended as required by the principle of democracy: individuals cannot effectively participate in the democratic process if they do not have equal and unrestricted access to the main methods of communication and transportation. Thus as increasing fractions of the population become connected to a network, those left unconnected become an increasing burden on the democratic principle, and the cost of subsidizing their inclusion becomes smaller and smaller. Sooner or later the political calculus tips the balance toward a policy of guaranteeing universal service. The second principle that has been applied is the desirability of equal opportunity. As economic development proceeded, both high-income occupations and low-cost access to the most diverse array of consumer goods became concentrated in urban areas. Federal action was seen to be required to keep rural Americans abreast of these advances; farmers, too, it was argued, should share in the opportunities and wealth created by the new technologies. The third argument used to defend special programs for the poor was the

argument that connection to a network is essential, or at least very helpful, for self-advancement. Basic education is necessary to become employable, and more education is probably better. Basic telephone and electrical service is probably necessary to hold a good job and to seek out better opportunities. The political fear is that unless government redistributive actions are taken to include the poor in the network, their lack of connectivity will doom them and their children to permanent poverty.

Although it is certainly true that widespread availability may increase the value of a network, it is not necessarily the case that such access will occur only with government provision or subsidies. After all, many goods with network externalities are provided by the private sector, including fax machines, the video player/cassette market, and so on. Indeed, only a couple of years after the Anderson et al. (1995) report, for-profit firms such as Hotmail4 began offering free e-mail, supported by advertising.

Basic telephone service has long been regarded as a social good that required a deliberate policy effort to achieve universal access. However, a close reading of history suggests another possible conclusion. According to Mueller (1997), penetration of basic telephone service might well have been comparable to today's rates even if there had been no policies of subsidized access. Various comments to the FCC in its recent docket on universal service reform indicated that the current structure of pricing in telephony is costing the United States billions of dollars, with very little impact on penetration rates for basic telephone service. These deadweight losses arise because the prices of services such as long-distance calling, for which demand is sensitive to price, are set well above cost, while the prices of price-insensitive services, such as basic service, are often below cost, in direct violation of the economic principle of efficient pricing to cover joint and common costs ("Ramsey pricing," under which markups are set highest on the services whose demand is least sensitive to price; see Kahn, 1970).

Advocates of universal service for the Internet or telephony typically make their case on grounds of geography or of income. One can well see why interested parties might argue for geographic subsidization: economic theory suggests that most of the benefits of providing services to isolated areas will be captured by those who own land in those areas. Land with electricity, telephone, and road service is certainly more valuable than land with none of these features, and it is, of course, appealing to those who own the land to have someone else pay for such improvements. Geographic concerns also flow from the interest in social and economic development in rural areas. This was a past concern in the United States for telephony, and it remains an issue for the expansion of new broadband services to rural areas. Rural access to even basic telephony remains a major issue in many developing countries. Whether cross-subsidies are the appropriate means to fund the expansion of rural telecommunications services is an area of ongoing public policy debate.

With respect to income arguments for universal service subsidies, it is also

(Coase, 1959). These auctions are generally regarded as having been quite successful.31 See McMillan (1994) for a readable introduction to how the FCC auctions were conducted.

The economic analysis starts by considering two sorts of auctions: common-value auctions and private-value auctions. In a common-value auction, such as the auctioning of offshore oil drilling rights, the item being bid for is worth some particular amount, but the bidders may have different opinions about what that amount is. In a private-value auction, the item in question is worth different amounts to different people. Most auctions of ordinary consumer goods such as works of art and antiques are of the private-value type. For more on the theory and practice of auctions, see the survey by Milgrom (1989) and the references cited therein. See also the discussion in Box 2.8.

2.4.9 Electronic Commerce

Electronic commerce is different from physical commerce because technology changes the modes of communication, ultimately affecting the flow of information. The reduced cost of communicating, transmitting, and processing information is at the core of these differences. The marginal cost of disseminating information electronically to new or existing customers is lower than with more conventional methods, since the cost of an additional Web query or e-mail message is close to zero. Similarly, customers can use the Internet to search across competing sellers, which can be done directly by visiting various sellers' Web sites and inquiring about prices, products, and availability. Increasingly, searches can also be facilitated by "intelligent agents" or intermediaries that gather and aggregate the necessary information on behalf of the customer. As a result, geographic and informational barriers that dampen competition among sellers may become increasingly irrelevant.

Bakos (1997) has analyzed the implications of reduced search costs for competition, efficiency, and the division of surplus between buyers and sellers. His model indicates that when electronic marketplaces reduce the costs to the consumer of searching for the lowest price, there will be (1) an improvement in overall economic efficiency and (2) a shift in bargaining power from sellers to buyers. As a result buyers will be strictly better off, but the effect on sellers is ambiguous. A change from very high to moderate search costs will tend to make sellers better off as new markets emerge. For instance, a market for specialty car parts might be unsustainable without a technology like the Internet to lower the transaction costs involved in finding buyers and sellers; the creation of such a market provides new opportunities for sellers. However, Bakos's model indicates that if search costs continue to fall, sellers may be made worse off, since buyers can more easily find the seller that offers the lowest price. Since all sellers charging more than this lowest price will lose business, competition will tend to drive down prices until they reach the marginal cost of the product, leaving no

BOX 2.8 Simple Insights Learned About Auctions

Common-value Auctions

A sensible strategy in a common-value auction, it would seem, would be to estimate the value of the item in question, add on a profit margin, and then bid that amount. However, if everyone uses such a procedure, it follows that the winner will tend to be the bidder with the highest estimate—which is then likely to be an overestimate of the true value. Hence the "winner" will usually end up overbidding, a phenomenon known as the winner's curse. Avoiding the winner's curse involves bidding down from one's estimated value, with the reduction depending on the number of other bidders: if one's estimate is higher than the estimates of 2 other bidders, it may be reasonably close to the true value; but if it tops the estimates of 100 other bidders, it is almost certainly an overbid! Economists have developed a number of statistical and game-theoretic models of bidding behavior in such markets that have been applied successfully in practical contexts such as auctions of parts of the radio spectrum.

Private-value Auctions

The most common form of private-value auction is the English auction, in which bids are successively raised until only one bidder is left, who then claims the item at the last price bid. In this kind of auction, the person who is willing to bid the highest gets the item, but the price paid will generally be only slightly above the bid of the second-highest bidder.

Sealed-bid Auctions

In a sealed-bid auction, each consumer submits a bid sealed in an envelope; the bids are opened, and the item is awarded to the highest bidder at the price he bid. The optimal strategy in a sealed-bid auction is to try to guess the amounts the other consumers will bid and then enter a bid slightly above the highest of these, assuming that the item is still attractive at that price. Thus bidders will not, in general, want to reveal their true valuation for the item being auctioned off. Furthermore, the outcome of the sealed-bid auction will depend on each bidder's beliefs about the others' valuations. Even if these beliefs are correct on average, there will be cases in which the bidders guess incorrectly and the item is not awarded to the person who values it most.

Vickrey Auctions

A variation on the sealed-bid auction—known as the "Vickrey auction," after the economist who first analyzed its properties—eliminates the need for strategic play. The Vickrey auction simply awards the item to the highest bidder, but at the second-highest price that was bid. It turns out that in such an auction there is no need to play strategically: the optimal bid is simply the true value to the bidder.1 It is also worth observing that the revenue raised by the Vickrey auction will be essentially the same as that raised by the ordinary English auction, since in each case the person who assigned the highest value gets the item but only has to pay the second-highest price. (In the English auction, the person willing to bid the highest gets the item but has to pay only the price bid by the person with the second-highest value, plus the minimal bid increment.)

1 The essence of the argument can be seen in a two-bidder example. Let v1 be the true value of bidder 1, and let b1 and b2 be the bids of the two bidders. Bidder 1 wins, and pays b2, whenever his bid exceeds b2, so his expected payoff is Prob(b1 ≥ b2)(v1 − b2). If v1 > b2, bidder 1 would like this probability to be equal to 1—which he can assure by reporting b1 = v1. If v1 < b2, bidder 1 would like the probability to be zero—which he can also ensure by reporting b1 = v1. Either way, it is optimal for bidder 1 to report the true value.
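
Both insights in Box 2.8 lend themselves to quick numerical checks. The following simulation (added here for illustration; the report contains no code, and the value and noise parameters are arbitrary) exhibits the winner's curse among naive common-value bidders and confirms that truthful bidding maximizes expected payoff in a Vickrey auction against a uniformly distributed rival bid:

    import random

    random.seed(0)
    TRUE_VALUE = 100.0

    # Winner's curse: each bidder naively bids an unbiased noisy estimate of
    # the common value; the winner is the highest estimator, so the winning
    # bid systematically exceeds the true value, more so with more bidders.
    def winners_curse(n_bidders, n_trials=10_000):
        total, overbids = 0.0, 0
        for _ in range(n_trials):
            win = max(random.gauss(TRUE_VALUE, 10.0) for _ in range(n_bidders))
            total += win
            overbids += win > TRUE_VALUE
        return total / n_trials, overbids / n_trials

    for n in (2, 10, 100):
        avg, frac = winners_curse(n)
        print(f"{n:3d} bidders: mean winning bid {avg:6.1f}, overbid in {frac:.0%} of trials")

    # Vickrey auction: the winner pays the rival's bid, so shading one's bid
    # up or down from the true value only forgoes profitable wins or adds
    # unprofitable ones.
    def vickrey_payoff(my_bid, my_value=100.0, n_trials=200_000):
        total = 0.0
        for _ in range(n_trials):
            rival = random.uniform(0.0, 200.0)
            if my_bid > rival:  # win and pay the second price (the rival's bid)
                total += my_value - rival
        return total / n_trials

    for bid in (80.0, 100.0, 120.0):
        print(f"bid {bid:5.1f}: expected payoff {vickrey_payoff(bid):5.2f}")

Under these parameters the mean winning bid climbs well above the true value of 100 as the number of bidders grows, and the truthful Vickrey bid of 100 outperforms both shaded bids (an expected payoff of roughly 25 versus 24).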

surplus for the sellers (Bakos, 1997). The dynamics of "friction-free" capitalism are not attractive to sellers of commodity products who had previously depended on geography or customer ignorance to insulate them from the low-cost seller in the market. As geography becomes less important, new sources of product differentiation, such as customized features, service, or innovation, will become more important, at least for those sellers who do not have the advantage of the lowest cost of production.

Is this kind of dynamic already emerging in Internet commerce? Although there is much speculation about the effect that the Internet will have on prices, thus far there has been virtually no systematic evidence obtained or analysis done. However, one exploratory study by Bailey and Brynjolfsson (1997) did not find much evidence that prices on the Internet were any lower or less dispersed than prices for the same goods sold via traditional retail channels. Their analysis was based on data from 52 Internet and conventional retailers for 337 distinct titles of books, music compact disks, and software. Bailey and Brynjolfsson offered several possible explanations for this unexpected finding: search on the Internet during the sample period may not have been as easy as is sometimes assumed; the demographics of the typical Internet user may have encouraged a higher price equilibrium; many of the Internet retailers were still experimenting with pricing strategies; and Internet retailers were differentiating their products (e.g., by offering options for delivery or providing customized recommendations), which added value. Because of the rapid pace of change in Internet commerce, it is not clear whether their findings will apply to current and future periods. However, they suggest the need for close examination of the common assumption that the Internet will be simply a "friction-free" version of the traditional retail channels.

Despite the uncertainties about electronic commerce and the relatively few attempts to look at the broad picture, there is a great deal of private-sector interest, and electronic commerce is receiving increasing attention from policy makers. The Clinton Administration's Framework for Global Electronic Commerce (1997; available online at ‹http://www.whitehouse.gov/WH/New/Commerce/index.html›) highlights both the economic potential of electronic commerce via the Internet and the need for government to avoid undue regulatory restrictions and not to subject Internet transactions to additional taxation.

Right now society is in a period of intense speculation and experimentation. Experimentation involves risk owing to path dependence: technological choices made in the past may constrain what technological options will be compatible in the future. Standards developed now for electronic payment may remain in use well into the future, and careful thought should be given to their implications. For example, some of the architectural design of the Visa-MasterCard "Secure Electronic Transactions (SET)" technical standard was necessitated by the need to conform with current cryptographic export control policies. Yet these policies are today very much in flux and may be entirely different in a few years. Migrating the SET standard so that it is consistent with new policies could be very costly, if not impossible.

Even if cryptography policy changes, society may be locked into design choices already made. Thus it is critically important that any such standards be examined by those with expertise in technology, economics, business, and law—no one discipline suffices to provide the necessary expertise.

One important insight about electronic commerce that follows from legal and economic analysis has to do with the assignment of liability, that is, with who ends up bearing the costs of unexpected outcomes. If the goal is to minimize overall transaction costs, liability should be assigned most heavily to those who are best placed to reduce those costs. Consider, for example, the rule in the United States that the consumer is liable for only the first $50 in losses from fraudulent credit card use. This assignment of liability has led to the development of highly sophisticated statistical profiling of consumer purchases that allows companies to detect fraudulent activity, thereby reducing the total costs of transactions. If the liability had instead rested entirely with consumers, one might have expected to see them being more careful in protecting their credit cards, but there would have been little reason for banks to invest in risk management technology. Another example is the difference between U.S. and U.K. assignments of liability for automatic teller machine (ATM) fraud. In the United States the burden of proof lies with the bank; in the United Kingdom it lies with the customer. This has led U.S. banks to invest in video cameras at ATMs, whereas U.K. banks typically have not made such investments.

The issue of liability is critical for electronic commerce. A survey released in March 1997 by CommerceNet/Nielsen Media Research (1997) found "a lack of trust in the security of electronic payments as the leading inhibitor preventing people from actually purchasing goods and services online." This is remarkable considering that the standard $50 limit still applies to online credit purchases. One might conjecture that credit card companies are not interested in a marketing effort to educate the public on this issue until they understand their own potential liabilities for fraud and misuse. There is also a need to understand the psychological and social dimensions of "trust," since trust is a critical component of any sort of commercial transaction.

The information economy calls for new economic institutions such as "certificate authorities" that certify the connection between legal identities and possession of cryptographic keys—a public-key infrastructure. Large certificate issuers include VeriSign, which has close ties to the credit card issuer Visa, and GTE, which has close ties to MasterCard. The economics of this industry are uncertain and clearly depend critically on the issue of liability assignment.

Another factor that is potentially delaying the growth of electronic commerce is intellectual property protection. Some of the broader issues are dealt with in section 2.4.1, "Protection of Intellectual Property," but some of the specifically commerce-oriented issues are mentioned here.

The first such issue is the role of copy protection, a technical means of making it more difficult to create additional functional copies of software. In a competitive environment, copy protection that inconveniences users is difficult to maintain: copy-protected mass-market software was effectively competed away during the mid-1980s. See Shy (1998) for an economic analysis. More generally, numerous copy protection schemes have been proposed to help safeguard intellectual property, suggesting that there will almost certainly be a standards battle for supremacy in this market. Besen and Farrell (1994) have provided a survey of economic analysis of conflicts about standards that sets forth the current state of the art in this area. More work in this area would be valuable.

Electronic commerce also raises significant antitrust issues. There are large economies of scale in distribution—a single general-purpose online bookstore or CD store can serve a very large market. There are also potential demand-side economies of scale in payment mechanisms and software, which can lead to a winner-take-all market structure with a single firm (or small set of firms) dominating the market. There have been a number of interesting studies of market structure in this context (see, e.g., Katz and Shapiro, 1994, for a survey); however, much more work is needed. The role of antitrust policy in an industry with strong network externalities and standardization issues is especially important to understand. A dominant firm brings the benefits of standardization but presumably also imposes inefficiencies due to its monopoly position. The social trade-off between these benefits and costs is critically important and is the subject of much current debate; some dispassionate analysis would be highly welcome.

There has been much speculation about the macroeconomic effects of electronic commerce, such as the loss of economic sovereignty. Most economic analysis has focused on moving from multiple currencies to a single currency (as in the European Union context), but the emergence of currencies issued by private companies and of barter arrangements is a distinct possibility. Economic monetary history would likely shed some light on how an economy functions in the presence of multiple private currencies, since that circumstance was common up until the turn of the last century.

There is also the question of who will appropriate the benefits of electronic commerce. Varian (1996a) has argued that price discrimination will become a widely used approach to selling information. (One form of price discrimination is enabled by bundling, discussed in section 2.4.6, "Pricing Information.") He cites earlier studies suggesting that the welfare effects of price discrimination will be benign from the viewpoint of overall welfare, but price discrimination may certainly affect the division of economic gains between consumers and firms. These earlier studies typically assumed a monopolistic market structure, which may or may not be appropriate for electronic commerce. Thus extending

these models to more competitive market structures would enhance understanding of the likely impact of electronic commerce on consumers.

2.5 Illustrative Broad Topics for Ongoing Research

Workshop discussions and position papers yielded numerous suggestions for research topics, a number of which are discussed above. From these topics—spanning a wide range of interdisciplinary subjects from economic productivity to communities in the information age—the workshop steering committee selected an illustrative set of promising areas for research, listed below.

• Interdisciplinary study of information indicators. The idea of developing a method for quantifying certain aspects of society in the United States is as old as the Constitution. Over the last two decades, researchers have recognized and begun to analyze the increasing role that information plays in all aspects of society. These efforts have proved most fruitful in measuring the contribution of information to the economy,32 the size of the information work force,33 and the level of penetration of the information infrastructure.34 In most of these analyses, the conclusions drawn have been consistent with the view that society is in the process of a fundamental change driven by the rapid development and implementation of information technologies and the products and services associated with them.

Some of these studies raise, indirectly, the question of whether a set of indicators could usefully represent the information activities of society, such as public discourse and democratic processes. This approach was pioneered by Borko and Menou (1983). In essence, looking at society from an information perspective leads us to perceive society as composed of information structures and communication behaviors: activities that construct environments for producing, receiving, distributing, and processing information reflect the creation of information structures, while activities that involve the transmission of information reflect communication behaviors. Box 2.9 lists some notional indicators.

The dramatic information-centric changes that have occurred across societies in recent decades suggest measuring the social forces enabled by the development of information structures and the prevalence of communication behaviors. More fully developed, a set of quantitative information indicators offers opportunities for comparatively measuring community information assets, public participation, interconnectedness, social capital, information poverty, and universal service.

It would be useful for the nation to invest in an interdisciplinary study of information indicators. The perspectives of many disciplines bear on the question of measuring impact. An exploration of how different disciplines do or do not reach consensus about how to measure impacts, and of the extent to which consensus is desirable, is called for. From such an exchange can come broadly accepted measures of access, use, and the impact of information and information technology.

BOX 2.9 A Primitive List of Information Indicators

Information Structures

Books produced (general/textbooks)
Cable TV access/trunk lines
Number of cinema seats
Number of computer systems/databases
Number of database subscribers
Number of journal articles/technical reports
Number of libraries/archives
Number of modems
Number of movies released
Number of newspapers
Number of online subscribers
Number of personal computers
Number of registered computer users
Number of satellite dishes
Number of telephones
Number of TVs/radios
Number of periodicals published (general/scientific)
Number of public telephones
Number of radio/TV channels
Number of telephone access/trunk lines

Communication Behaviors

Circulation of library volumes
Domestic/international mail traffic
First-class letters mailed
Hours spent accessing the Internet
Hours spent listening/viewing radio/TV

One particular outcome could be the aggregation of the kinds of micro-indicators listed in Box 2.9 into broadly accepted macro information indicators such as the following:

—Interconnectivity index. A measure of the facility of electronic communication, and an evaluation of the development of this dimension of the information infrastructure;
—Information quality of life index. Similar to an index produced by the Organisation for Economic Co-operation and Development, an index that would attempt to evaluate the qualitative levels of communication available to individuals;
—Leading information indicators. An index that would attempt to predict the growth of the information infrastructure;

—Home media index. An index of the state of penetration of communications technologies in the home that might qualify as a leading index of the potential for future consumption of information; and
—Marginalization index. An index that would measure the extent to which specific populations are excluded from participation in the information infrastructure.

Were such a set of indicators developed, funding agencies like the National Science Foundation might have a standardized tool in hand through which to assess the outcomes of the research that they sponsor.

• Impacts of information technology on labor market structure. Information technology has been linked to wage inequality and other changes in the structure of the labor market (more detail is provided in section 2.3). Understanding the extent to which and the mechanism by which computers may contribute to increased wage inequality is important in determining the nature and extent of public policy responses. This research should acknowledge that computers, by themselves, are not causal agents; rather, it is the entire constellation of economic and organizational strategies, managerial perspectives, and work practices within which computing technology is embedded that affects wage inequality. One possible response is improved training of workers for IT-related jobs. Understanding the needs for education and training requires better definition of the skills required to make use of IT. Results from such research would benefit both policy makers and the private sector as they seek to better match education and training to workplace skill requirements.

• Productivity and its relationship to work practices and organizational structures for the use of information technology. Extracting the benefits of new technologies depends in part on organizational adaptation to them. As discussed in more detail above, industrial exploitation of the benefits of the electric dynamo in the early part of this century required new approaches to manufacturing, and organizations using information technology today are at a similar learning stage. A major impediment to determining optimal work practices and organizational structures has been the lack of a clear picture of what data already exist; developing such an inventory would help speed research in this area. There are a number of places where specific research needs are already apparent, such as the collection of time-series data to help clarify the role of technology in organizational changes.

Understanding the productivity benefits of information technology—illuminating the so-called productivity paradox—also is worthy of continued research. Important questions include how to better quantify what have been considered

OCR for page 21
• Intellectual property issues. Information technology raises many new questions about optimal protection of intellectual property rights, posing challenges to policy makers revising intellectual property law or international agreements, as well as to commercial interests considering particular protection schemes. Many new schemes have been advanced for the protection of intellectual property, and more needs to be known to choose among them. While considerable research has been conducted on the effect of different patent regimes on innovation, little is known about the consequences of different copyright protection schemes (see section 2.4.1). Theoretical and empirical research on different copyright protection regimes would help inform future actions to protect intellectual property.

• Social issues addressed at the protocol level. The Internet has given rise to many new social issues in intellectual property, privacy, and content filtering. Addressing these social issues at the protocol level—through policies, rules, and conventions for the exchange and use of information—is a promising area for interdisciplinary research. Examples include:

—PICS, the Platform for Internet Content Selection, which implements a set of protocols for rating Web sites (Resnick and Miller, 1996);

—P3P,35 a project for specifying privacy practices;

—Languages specifying the terms and conditions under which intellectual property is managed; and

—The Open Profiling Standard,36 a method by which individual users can selectively release information about themselves under specific conditions.

Each of these projects involves both technological and social dimensions. PICS, for example, raises technical issues about how best to encode and represent ratings for Web sites, cognitive issues about how elaborate the rating schemes should be, and economic issues about how rating bureaus can recover their costs. Another issue is how users can evaluate the trustworthiness of the labels provided by rating services. (A minimal filtering sketch appears at the end of this section.)
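The three short sketches that follow illustrate points raised in this section. Each is a hedged, minimal illustration under stated assumptions, not an implementation drawn from any of the studies or standards cited.

First, a sketch of the proposed marginalization index, assuming hypothetical survey data giving the fraction of each population group with access to several information services. The group names, service list, and weights below are invented for illustration and do not reflect any actual survey instrument.

    # Hedged sketch: a toy "marginalization index" computed from hypothetical
    # survey shares. All groups, services, and weights are invented.

    # Fraction of each group reporting access to each service (hypothetical).
    access = {
        "urban": {"telephone": 0.98, "cable": 0.75, "computer": 0.45, "internet": 0.25},
        "rural": {"telephone": 0.94, "cable": 0.55, "computer": 0.30, "internet": 0.10},
    }

    # Relative importance of each service for participation in the
    # information infrastructure (hypothetical weights summing to 1).
    weights = {"telephone": 0.4, "cable": 0.1, "computer": 0.3, "internet": 0.2}

    def marginalization_index(group_access, weights):
        """Return a value from 0.0 (full participation) to 1.0 (fully
        excluded): the weighted share of services the group lacks."""
        return sum(w * (1.0 - group_access[svc]) for svc, w in weights.items())

    for group, shares in access.items():
        print(group, round(marginalization_index(shares, weights), 3))

A home media index could be built analogously from the same penetration shares, weighting access itself rather than its absence.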
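Second, the measurement problem behind the productivity paradox can be made concrete with a stylized growth-accounting sketch. Suppose output follows a Cobb-Douglas form Y = A * K^a * C^b * L^(1-a-b-g) * O^g, where C is computer capital and O is intangible organizational capital; an analyst who cannot observe O will misattribute its contribution to the residual A. The functional form, factor shares, and all quantities below are assumptions made for illustration, not estimates from the literature cited.

    # Hedged sketch: how omitting an "unmeasurable" input (organizational
    # capital O) distorts the measured productivity residual.

    def output(A, K, C, L, O, a=0.3, b=0.1, g=0.1):
        # Cobb-Douglas with ordinary capital K, computer capital C,
        # labor L, and intangible organizational capital O.
        return A * K**a * C**b * L**(1 - a - b - g) * O**g

    # Two hypothetical years: computer and organizational capital both double,
    # while true multifactor productivity A stays fixed at 1.0.
    Y0 = output(A=1.0, K=100, C=10, L=50, O=5)
    Y1 = output(A=1.0, K=100, C=20, L=50, O=10)

    def residual(Y, K, C, L, a=0.3, b=0.1):
        # The residual inferred by an analyst who observes K, C, and L
        # but not O, and so assigns O's share to labor.
        return Y / (K**a * C**b * L**(1 - a - b))

    # Prints roughly 1.07: measured "productivity" appears to rise about
    # 7 percent even though true A never changed.
    print(residual(Y1, 100, 20, 50) / residual(Y0, 100, 10, 50))

In this stylized example the growth of the unmeasured organizational input is misread as a gain from technology, which is one way "unmeasurable" inputs confound attempts to quantify the payoff from computing.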
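Third, a minimal sketch of protocol-level filtering in the spirit of PICS, in which a user agent compares a simplified rating label against locally configured thresholds. The rating service URL, category vocabulary, and numeric limits here are invented; actual PICS-1.1 label syntax is defined by the World Wide Web Consortium standard noted above (see note 17).

    # Hedged sketch: a user agent filtering a document against a
    # simplified, PICS-like label. All names and numbers are invented.

    # A rating label a site might publish (hypothetical rating service).
    label = {
        "service": "http://ratings.example.org/v1",   # hypothetical URL
        "for": "http://www.example.com/page.html",
        "ratings": {"language": 2, "violence": 0, "nudity": 0},
    }

    # The user's (or parent's) locally configured maxima per category.
    thresholds = {"language": 1, "violence": 2, "nudity": 0}

    def acceptable(label, thresholds):
        """Accept only if every category the user cares about is rated
        and within limits; unrated categories are rejected, one possible
        (conservative) policy choice."""
        ratings = label["ratings"]
        return all(cat in ratings and ratings[cat] <= limit
                   for cat, limit in thresholds.items())

    print(acceptable(label, thresholds))   # False: "language" exceeds the limit

Even this toy version surfaces the social questions raised above: who defines the category vocabulary, how elaborate it should be, and why a user should trust the numbers in a label.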

Notes

1. See Tyack and Cuban (1995) for an analysis of why earlier technologies for improving teaching and learning never achieved their promise. See also references in CSTB (1994b, 1996c).

2. "Active intervention" refers to deliberate intervention—such as the introduction of new technology or educational practices—for the purposes of research.

3. While this discussion focuses on this question in a U.S. domestic context, in much of the rest of the world socioeconomic disparities and the gap between urban and rural access are much greater.

4. See ‹http://www.hotmail.com›.

5. Note, however, that not all forms of communication necessarily reduce localness. For example, Willey and Rice (1933) postulated that the telephone, a point-to-point medium, reinforces locality, whereas broadcast media tend to diminish the importance of locality.

6. This work updated earlier work conducted at a time when computing was less prevalent.

7. The "output effect" also includes changing tastes or desires, e.g., the shift in preference from horses to cars or from typewriters to word processors.

8. One might expect software development to contribute to increased demand for skilled work, but recent work by Brynjolfsson (1997) found that it was not a major factor, at least not in most industrial countries. Although the U.S. software industry is fairly large and growing, it is still not large enough to explain any significant share of the effect.

9. See Brynjolfsson (1993), Attewell (1994), Sichel (1997), and CSTB (1994a) for empirical studies of the productivity paradox. See CSTB (1994a), Baily and Chakrabarti (1988), Brynjolfsson (1993), Wilson (1995), and Brynjolfsson and Yang (1996) for reviews.

10. Concurrent engineering refers to the practice in which personnel from every phase of product development—e.g., from design to production engineering, quality control, and service—collaborate in product development beginning at the earliest stages.

11. Note that improvements in the technology for transmitting and manipulating image data increasingly remove this limitation.

12. With the exception of the retirement-planning task force, the studies of differential benefits cited in this section used survey analysis of naturally occurring differential use. Statistical techniques were used to control for the effects of other variables, but because people were not randomly assigned to the use (or nonuse) of technology, strict causal claims are not warranted. In the retirement-planning study, still-employed and recently retired people were randomly assigned to task forces with and without access to technology; because of the random assignment, causal claims are warranted.

13. U.S. Copyright Office records on documents registered for copyright are available via the Library of Congress Information System (LOCIS) for 1978 onward.

14. Figures on literacy are not always reliable, in part because the definition of literacy is somewhat vague. The numbers given in this discussion were taken from contemporary accounts.

15. Early expectations were that interactive cable services providing video on demand (VOD) or near-VOD would be lucrative and popular. However, early experiments by the cable industry showed that consumer response to VOD was unlikely to generate sufficient revenue to justify investment in interactive cable systems. Investment in two-way capabilities in the cable industry today is predicated on a market for broadband data delivery (including Internet access as well as telephony and video conferencing) to both homes and small businesses, in addition to video programming.

16. See ‹http://www.cyberpatrol.com/›.

17. This set of protocols was adopted as a standard by the consortium that sets standards for the World Wide Web.

18. EPIC (see ‹http://www.epic.org/privacy/privacy_resources_faq.html›) maintains an extensive list of online resources on privacy issues.

19. This study is based on a sample of 1,009 computer users drawn from a representative sample of 2,204 persons, age 18 or over, living in households with telephones in the 48 contiguous states.

20. Note that federal legislation passed in 1994 (which did not take effect until 1997) allows people to restrict the release of personal information from state motor vehicle records.

21. See ‹http://www.sims.berkeley.edu/resources/infoecon/Security.html›.

22. See ‹http://www.melvyl.ucop.edu/›.

23. Branding is an effort to transform something perceived as generic into something with which people associate a brand name. A recent example is the "Intel Inside" campaign, which built up significant brand awareness for CPUs, something the average individual previously cared little about.

24. "Push" technologies send information to an intended consumer without that consumer having requested it, while "pull" technologies send information only in response to a specific request. Radio and television broadcasting and e-mail are examples of push technologies because they transmit information regardless of whether anyone specifically requested it; the World Wide Web is an example of a pull technology, since a page must be requested before it is sent. Note that push technologies can be used over the Internet as well; examples include the PointCast system, which delivers customized news to users' computer desktops.

25. The National Library of Medicine's MEDLINE system makes extensive bibliographic information covering the fields of medicine and health care available free of charge to the public through a Web site.

26. The Electronic Data Gathering, Analysis, and Retrieval (EDGAR) system makes available to the public through a Web site much of the information companies are required to submit to the U.S. Securities and Exchange Commission.

27. Such a gap exists, for example, between various socioeconomic groups, between urban and rural areas, and between industrialized and developing countries.

28. Also see Markus (1987) on the theory of critical mass for interactive media.

29. See ‹http://raven.stern.nyu.edu/networks›.

30. Herodotus describes the use of auctions in Babylon as early as 500 BC. It is remarkable that an economic institution as venerable as the auction has found a receptive audience on the Internet. The Internet Auction List (‹http://www.usaweb.com›) lists more than 50 sites that hold regular online auctions, and more are being added every day. Computer equipment, air tickets, and Barbie dolls are bought and sold daily via Internet auctions. Even advertising space is sold by auction, on AdBot (‹http://www.adbot.com›).

31. There have, however, been problems due to overbidding (the so-called winner's curse, described in Box 2.8) and signaling. Signaling can occur in multiround auctions when bid values are used to signal the intent of the bidder, in violation of the rule against any collaboration or collusion between auction participants. For example, a bid of $1,000,202 might indicate that a bidder has a particular interest in the market with telephone area code 202.

32. See Jussawalla et al. (1988); Machlup (1962); and Porat (1977).

33. See Bell (1973); Katz (1988); Machlup (1962); and Schement (1990).

34. See Dordick and Wang (1993); Ito (1981); and Kuo (1989).

35. See ‹http://www.w3.org/Privacy/Overview.html›.

36. See ‹http://www.w3.org›.