Part 1
Setting the Stage



The National Academies | 500 Fifth St. N.W. | Washington, D.C. 20001
Copyright © National Academy of Sciences. All rights reserved.




The Changing Nature of Telecommunications/Information Infrastructure


Introduction to Part 1
Alfred V. Aho

The telecommunications/information infrastructure in the United States has been evolving steadily for more than a century. Today, sweeping changes are taking place in the underlying technology, in the structure of the industry, and in how people are using the new infrastructure to redefine how the nation's business is conducted. The two keynote papers in this section of the report set the stage by outlining the salient features of the present infrastructure and examining the forces that have led to the technology, industry, and regulatory policies that we have today.

Robert Lucky discusses how the nation's communications infrastructure evolved technically to its current form. He outlines the major scientific and engineering developments in the evolution of the public switched telecommunications network and looks at the current technical and business forces shaping tomorrow's network. Of particular interest are Lucky's comments contrasting the legislative and regulatory policies that have guided the creation of today's telephone network with the rather chaotic policies of the popular and rapidly growing Internet.

Charles Firestone outlines the regulatory paradigms that have molded the current information infrastructure and goes on to suggest why these paradigms might be inadequate for tomorrow's infrastructure. He looks at the current infrastructure from three perspectives—the production, electronic distribution, and reception of information—and proposes broad goals based on democratic values for the regulatory policies of the different segments of the new information infrastructure.
The four papers that follow examine the use of the telecommunications infrastructure in several key application areas, identify major obstacles to the fullest use of the emerging infrastructure, and discuss how the infrastructure and concomitant regulatory policies need to evolve to maximize the benefits to the nation.

Colin Crook discusses how the banking and financial services industries rely on the telecommunications infrastructure to serve their customers on a global basis. He notes that immense sums of money are moved electronically on a daily basis around the world and that the banking industry cannot survive without a reliable worldwide communications network. He underscores the importance of an advanced public telecommunications/information infrastructure to the nation's continued economic growth and global competitiveness.

Edward Shortliffe notes that the use of computers and communications is not as advanced in health care as in the banking industry. He gives examples of how the use of information technology can both enhance the quality of health care and reduce waste. He stresses the importance of demonstration projects to help prove the cost-effectiveness and benefits of the new technology to the health care industry.

Robert Pearlman notes that, at present, K-12 education lacks an effective information infrastructure at all levels—national, state, school district, and school site. He presents numerous examples of how new learning activities and educational services on an information superhighway have the potential for improving education. He outlines the major barriers that need to be overcome to create a suitable information infrastructure for effective K-12 schooling in the 21st century.

In the final paper, Clifford Lynch looks at the future role of libraries in providing access to information resources via a national information infrastructure. He examines the benefits of and barriers to universal access to electronic information. He notes that a ubiquitous information infrastructure will cause major changes in the entire publishing industry and that intellectual property rights to information remain a major unresolved issue.

The Evolution of the Telecommunications Infrastructure
Robert L. Lucky

I have been asked to talk about the telecommunications infrastructure—how we got here, where we are, and where we are going. I don't think I am going to talk quite so much about where we are going but rather about the problems. I will discuss what stops us from going further, and then I will make some observations about the future networking environment. First, I will review the history of how we got where we are in the digitization of telecommunications today, and then I would like to address two separate issues that focus on the problems and the opportunities in the infrastructure today. The first issue is the bottleneck in local loop access, which is where I think the challenge really is. Then I will discuss two forces that together are changing the paradigm for communications—what I think of as the "packetizing" of communications. The two forces that are doing this are asynchronous transfer mode (ATM) in the telecommunications community and the Internet from the computer community. The Internet will be the focus of many of the subsequent talks in this session, since it seems to be the building block for the national information infrastructure (NII).

HOW THE TELECOMMUNICATIONS NETWORK BECAME DIGITAL

It is not as if someone decided that there should be an NII, and it has taken 100 years to build it. The history of the NII is quite a tangled story. First, there was the Big Bang that created the universe, and then Bell invented the telephone. If you read Bell's patent, it actually says that you will use a voltage proportional to the air pressure in speech. Speech is, after all, analog, so it makes a great deal of sense that you need an analog signal to carry it. In the end Bell's patent is all about analog. Since that invention, it has taken us over 100 years to get away from the idea of analog.
A long time went by and the country became wired. In 1939, Reeves of ITT invented pulse code modulation (PCM), but it was 22 years before it became a part of the telephone plant, because nobody understood why it was a good thing to do. Even in the early 1960s a lot of people did not understand why digitizing something that was inherently analog was a good idea. No one had thought of an information infrastructure. Computer communications was not a big deal. The world was run by voice. But since then the telephone network has been digitized. As it happened, this was accomplished for the purposes of voice, not for computer communication.

So in 1960 we had analog voice and it fit in a 4-kilohertz channel, and we stacked about 12 of these together like AM radio and sent it over an open wire. That was the way communication was done. But if you take this 4-kilohertz voice channel and digitize it, it is 10 times bigger in bandwidth. Why do we do that? That seems like a really dumb idea. But you gain something, and that is the renewability of the digital form. When it is analog and you accumulate distortion and noise comes along, it is like Humpty Dumpty—you can't put it back together again. You suffer these degradations quietly. The digits can always be reconstructed, and so in exchange for widening the bandwidth, you get the ability to renew it, to regenerate it, so that you do not have to go 1,000 miles while trying to keep the distortion and noise manageable. You only have to go about 1 mile, and then you can regenerate it—clean it up and start afresh. So that was the whole concept of PCM, this periodic regeneration. It takes a lot more bandwidth, but now you can stack a lot more voice signals, because you don't have to go very far.

The reason that the network is transformed is not necessarily to make it more capable but to make it cheaper. PCM made transmission less expensive, since 24 voice channels could be carried, whereas in the previous analog systems only 6 voice channels could be carried. So digital carriers went into metropolitan areas starting in the early 1960s. At that time we had digital carriers starting to link the analog switches in the bowels of the network. More and more the situation was that digits were coming into switches designed to switch analog signals. It was necessary to change the digits to analog in order to switch them. Engineers were skeptical. Why not just reshuffle the bits around to switch them?

THE ADVENT OF DIGITAL SWITCHING

So the first digital switches were designed. The electronic switching system (ESS) number four went out into the center of the network where there were lots of bits and all you had to do was time shifting to effect switching. That seemed natural because all the inputs were digital anyway.
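The numbers behind the PCM trade-off can be sketched in a few lines. The values below are standard telephony figures (a 4-kilohertz channel sampled at the Nyquist rate with 8-bit samples), used here as an illustration rather than quoted from the talk.

```python
# Standard PCM arithmetic behind the roughly tenfold bandwidth expansion
# described above, and the 64-kbps voice channel that reappears later.
ANALOG_CHANNEL_HZ = 4_000                  # 4-kilohertz analog voice channel
SAMPLE_RATE_HZ = 2 * ANALOG_CHANNEL_HZ     # Nyquist: sample at twice the bandwidth
BITS_PER_SAMPLE = 8                        # 8-bit companded samples

pcm_bit_rate = SAMPLE_RATE_HZ * BITS_PER_SAMPLE
print(pcm_bit_rate)   # 64000 bits per second per voice channel

# A T-1 digital carrier multiplexes 24 such channels (plus framing), versus
# the 6 voice channels of the analog carrier systems it displaced.
t1_payload = 24 * pcm_bit_rate
print(t1_payload)     # 1536000 bits per second of voice payload
```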
It was some time before we got around to the idea that maybe the local switches could be digital, too, because the problem in the local switches is that, unlike the tandem switches, they have essentially all of their input signals in analog form.

I was personally working on digital switching in 1976. I did not start that work, but I was put in charge of it at that time. I remember a meeting in 1977 with the vice president in charge of switching development at AT&T. It was a very memorable meeting: we were going to sell him the idea that the next local switch should be digital. We demonstrated a research prototype of a digital local switch. We tried to explain why the next switch in development should be digital like ours, but we failed. They said to us that anything we could do with digits, they could do with analog and it would be cheaper—and they were right on both scores. Where we were all wrong was that it was going to become cheaper to do it digitally, and if we had had the foresight we would have seen that intelligence and processing would be getting cheaper and cheaper in the future. But even though we all knew intellectually that transistor costs were steadily shrinking, we failed to realize the impact that this would have on our products. Only a few short years later those digital local switches did everything the analog switches did, and they did more, and they did it more cheaply.

The engineers who developed the analog switches pointed to their little mechanical relays. They made those by the millions for pennies apiece, and those relays were able to switch an entire analog channel. Why would anyone change the signals to digits? So the development of analog switching went ahead at that time, but what happened quickly was the advent of competition enabled by the new digital technology.
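The "time shifting to effect switching" in a digital switch can be sketched as a time-slot interchange: samples arriving in one frame are buffered and read out in the order a connection map dictates. This is a toy illustration; the function name and frame layout are hypothetical, not drawn from any real switch's design.

```python
# Minimal sketch of time-slot interchange (TSI), the core operation of a
# digital time-division switch: connect channels by reordering time slots.
def tsi_switch(frame, connection_map):
    """frame: one sample per incoming time slot.
    connection_map[out_slot] = index of the input slot whose sample
    should appear in that output slot."""
    return [frame[in_slot] for in_slot in connection_map]

# Three channels in one frame: output slot 0 taps input 2, output slot 1
# taps input 0, and output slot 2 taps input 1.
frame = [10, 20, 30]
print(tsi_switch(frame, [2, 0, 1]))   # [30, 10, 20]
```

The point of the sketch is that switching becomes a memory read in a permuted order, which is why it seemed natural once the inputs were already digital.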
There was a window of opportunity where the competition could come in and build digital switches, so AT&T was soon forced to build a digital switch of its own, even though it had a big factory that made those nice little mechanical relays. The next major event was that fiber came along in 1981. Optical fiber was inherently digital, in the sense that you really could not at that time send analog signals over optical fiber
without significant distortion. Cable TV is doing it now, but in 1980 we did not know how to send analog signals without getting cross-talk between channels. So from the start optical fiber was considered digital, and we began putting in fiber systems because they were cheaper. You could go further without regenerating signals than was possible with other transmission media, so even though the huge capacity of optical systems was not needed at first, it went in because it was a cheaper system. It went in quickly beginning in 1981, and in the late 1980s AT&T wrote off its entire analog plant. The network was declared to be digital.

This was incredible because in 1984—at divestiture—we at AT&T believed that nobody could challenge us. It had taken us 100 years to build the telephone plant the way it was. Who could duplicate that? But what happened was that in the next few years AT&T built a whole new network, and so did at least two other companies! And, in fact, just a few years ago, you would not have guessed that the company with the third most fiber in the country was a gas company—Wiltel, or the Williams Gas Company. So everybody could build a network. All of a sudden it was cheap to build a long-distance network and a digital one, inherently digital because of the fiber.

In the area of digital networking we have been working for many years on integrated services digital network (ISDN). We all trade stories about when we went to our first ISDN meeting. I said, "Well, I went to one 25 years ago," and a colleague said, "27," so he had me—that kind of thing. ISDN is one of those things that still may happen, but in the meantime we have another revolution coming along beyond the basic digitization of the network—the packetizing of communication, with ATM and the Internet. The Internet began growing in the 1970s, and now we think of it as exploding.
Between the Internet and ATM, something is happening out there that is doing away with our fundamental concept for wired communications. First we did away with analog, and then we had streams of bits, but now we are doing away with the idea of a connection itself. Instead of a circuit with a continuous channel connecting sender and receiver, we have packets floating around disjointedly in the network, shuttling between switching nodes as they seek their separate destinations. This packetization transforms the notion of communication in ways that I don't think we have really come to grips with yet.

Where we stand in 1993 is this. All interexchange transmission is digital and optical. So is undersea transmission, which is currently the strongest traffic growth area: between nations we have increasing digital capability and much cheaper prices. The majority of the switches in the network are now digital—both the local and the tandem switches. But there is a very important point here. In these digital switches a voice channel is equated with a 64-kilobits-per-second stream. It is not as if there were an infinite reservoir to do multimedia switching and high-bandwidth applications, because the channel equals 64 kilobits per second in these switches. They are not broadband switches in terms of either capacity or flexibility.

Another important conceptual revision that has occurred during the digital revolution has been one involving network intelligence. The ESS number five, AT&T's local switch, has a 10-million-instructions-per-second (MIPS) processor as its central intelligence. That was a big processor at the time the switch was designed, but now the switch finds itself connected to 100-MIPS processors on many of its input lines. The bulk of intelligence has migrated to the periphery, and the balance of the intelligence has been seeping out of the network.
I always think of Ross Perot's giant sucking noise or, as George Gilder wrote, the network as a centrifuge for intelligence. This has been a direct outgrowth of the personal computer (PC) revolution.

THE BOTTLENECK: LOCAL LOOP ACCESS

Let us turn now to loop access, where I have said that the bottleneck exists. Loop access is still analog, and it is expensive. The access network represents about 80 percent of the total

investment in a network. That is where the crunch is, and it is also the hardest to change. There is a huge economic flywheel out there to change access, whereas the backbone, as we have seen, can be redone in a matter of a few years at moderate expense. There are many things happening in the loop today, but I see no silver bullet here. People always say there ought to be some invention that is going to make it cheap to get from the home into the network, but the problem is that in the loop there is no sharing of cost among subscribers. In the end you are on your own, and there is no magic invention that makes this individual access cheap.

Today everybody is trying to bring broadband access into the home, and everybody is motivated to do this. The telephone companies want a new source of growth revenue, because their present business is not considered a good one for the future. The growth rate is very small and it is regulated, and so they see the opportunity in broadband services, in video services, and in information services, and they are naturally attracted to these potential businesses. On the other hand, the cable television companies are coming from a place where they want to get into information services and into telephony services. So from the perspective of the telephone companies, not only is the conventional telephone business seen as unattractive, but there is also competition coming into it, which makes it even less attractive.

There are many alternatives for putting broadband service into the home. There are many different architectures and a number of possible media. Broadband can be carried into the home by optical fiber, by coaxial cable, by wireless, and even by the copper wire pairs that are currently used for voice telephony. The possibilities for putting fiber into homes are really a matter of economics.
The different architectural configurations differ mainly in what parts of the distribution network are shared by how many people. There is fiber to the home, fiber to the curb, fiber to the pedestal, and fiber to the whatever! The fact is that when we do economic studies of all these, they don't seem to be all that different. It costs about $1,100 to put a plain old telephone service (POTS) line into a home and about $500 more to add broadband access to that POTS line. You can study the component costs of all these different architectures, but it just does not seem to make a lot of difference.

It is going to be expensive on some scale to wire the country with optical fiber. The current estimate is that it would be on the order of $25 billion to wire the United States. This is incremental spending over the next 15 years to add optical fiber broadband. Moreover, we are probably going to do that not only once, but twice, or maybe even three times. I picture standing on the roof of my house and watching them all come at me. Now we hear that even the electric power utilities are thinking of wiring the country with fiber. It is a curious thing that we are going to pay for this several times, but it is called competition, and the belief is that competition will serve the consumer better than would a single regulated utility.

Because the investment in fiber is so large, it will take years to wire the country. Figure 1 shows the penetration of fiber in offices and the feeder and distribution plants by year, and the corresponding penetration of copper. You can see the kind of curves you get here and the range of possible times that people foresee for getting fiber out to the home. If you look at about a 50 percent penetration rate, it is somewhere between the year, say, 2001 and 2010.
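A rough consistency check of these figures takes only a few lines. The household count below is an outside, order-of-magnitude assumption for the early 1990s, not a number from the talk.

```python
# Rough check of the loop-cost figures quoted above (1993 dollars).
POTS_LINE_COST = 1_100               # installing a plain old telephone service line
BROADBAND_INCREMENT = 500            # adding broadband access to that POTS line
NATIONAL_ESTIMATE = 25_000_000_000   # quoted incremental fiber spend over ~15 years

# At the quoted $500-per-home increment, $25 billion covers about 50 million homes.
homes_covered = NATIONAL_ESTIMATE // BROADBAND_INCREMENT
print(homes_covered)   # 50000000

# Against an assumed ~95 million U.S. households, that is roughly half the
# country, which lines up with the ~50 percent penetration horizon from Figure 1.
US_HOUSEHOLDS = 95_000_000   # illustrative assumption, not from the text
penetration = homes_covered / US_HOUSEHOLDS
print(round(penetration, 2))   # 0.53
```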
This decade-long horizon for reaching 50 percent fiber penetration seems inconsistent with what we would like to have for the information revolution, but that is what is happening right now with the economic cycle running its normal course.

The telephone company is not the only one that wants to bring fiber to your home. The cable people have the same idea, and Figure 2 shows the kind of architecture they have. They have a head end with a satellite dish and a video-on-demand player, or more generally a multimedia server. They have fiber in the feeder portion of their plant, and they have plans for personal communication system (PCS) ports to collect wireless signals. They have their own broadband architecture that will be in place, and, of course, we see all the business alliances that are going on between these companies right now. We read about these events in the paper every morning.

FIGURE 1 Evolution of fiber in the local exchange network. Broad solid line, fiber; thin solid line, copper. CATV, cable television. Courtesy of Bellcore.

FIGURE 2 Fiber/coaxial bus cable broadband network. VOD, video on demand; PCS, personal communication system. Courtesy of Bellcore.

Everybody thinks that the money will be in the provision of content and that the actual distribution will be a commodity that will be relatively uninteresting. If you look at the cable companies versus the telephone companies, cable has the advantage of lower labor costs and simpler service and operations. Their plant is a great deal simpler than that of the telephone companies. They have wider bandwidth with their current system and lower upgrade costs. They are also used to throwing away their plant and renewing it periodically. Disadvantages of cable franchises have more to do with financial situations, spending, and capital availability.

THE PACKETIZING OF COMMUNICATIONS: ATM

Asynchronous transfer mode (ATM) is a standard for the packetizing of communications, where all information will be carried in 48-byte packets with 5-byte headers. This fixed cell size is a compromise: it is too small for file transfers and too big for voice. It is a miraculous agreement that we do it this way and that, if everybody does it this way, we can build an inexpensive broadband infrastructure using these little one-size-fits-all cells. But it seems to be taking hold, and both the computer and communications industries are avidly designing and building the new packet infrastructure.

ATM has an ingenious concept called the virtual channel that predesignates the flow for a particular logical stream of packets. You can make a connection and say that all the following cells on this channel have to take this particular path, so that you can send continuous signals like speech and video. ATM also integrates multimedia and is not dependent on medium and speed. Since ATM local area networks (LANs) are wired in tree-like "stars" with centralized switching, their capacity scales with bandwidth and user population, unlike the bus-structured LANs that we have today.
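The 48-byte-payload, 5-byte-header cell format lends itself to a small sketch of the segmentation overhead behind the "too small for file transfers" complaint. The helper functions below are illustrative, not part of any ATM standard's API.

```python
# Sketch of ATM cell framing: every message is chopped into fixed 53-byte
# cells (48 bytes of payload behind a 5-byte header), padding the last cell.
CELL_PAYLOAD = 48
CELL_HEADER = 5
CELL_SIZE = CELL_PAYLOAD + CELL_HEADER   # 53 bytes on the wire per cell

def cells_needed(message_bytes):
    # Ceiling division: partial payloads still occupy a whole cell.
    return -(-message_bytes // CELL_PAYLOAD)

def overhead_fraction(message_bytes):
    # Fraction of transmitted bytes spent on headers and padding.
    sent = cells_needed(message_bytes) * CELL_SIZE
    return 1 - message_bytes / sent

# A 1,500-byte data frame needs 32 cells; headers and padding consume
# roughly 12 percent of what goes on the wire.
print(cells_needed(1500))                 # 32
print(round(overhead_fraction(1500), 3))  # 0.116
```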
Moreover, ATM is an international standard and offers an open-ended growth path, in the sense that you can upgrade the speed of the system while conforming to the standard. I think it is a beautiful idea, as an integrating influence and as a transforming influence in telecommunications.

THE NEW INFRASTRUCTURE

Now let us turn to the other force—the Internet. The Internet will be the focus of a lot of our discussions, because when people talk about the NII, the consensus is growing that the Internet is a model of what that infrastructure might be. The Internet is doubling in size every year. But whether it can continue to grow, of course, is an issue that remains to be discussed, and there is much that can be debated about that.

To describe the Internet, let me personalize my own connection. I am on a Bellcore corporate LAN, and my Internet address is rlucky@bellcore.com. At Bellcore we have a T-1 connection, 1.5 megabits per second, into a midlevel company, JVNC-net at Princeton. Typically, these midlevels are small companies, making little profit. They might work out of a computer center at a university and then graduate to an independent location, but basically it is that kind of affair. They start with a group of graduate students, buy some routers, and connect to the national backbone that has been subsidized by the National Science Foundation.

THE USER VIEW OF INTERNET ECONOMICS

As for the user view of the Internet, here is what I see. I pay about $35,000 for routers and for the support of them in my computing environment at Bellcore. But people say that we have to have this computer environment anyway, so let's not count that against the Internet. That is an interesting issue actually. I don't think we should get hung up on any government subsidies relating to Internet operations, because if we removed all the subsidies it really would not change my cost for the Internet all that much. All we have to do is take the LANs we have right now, the LAN network that interconnects them locally, and add a router (actually a couple of routers) and then lease a T-1 line for $7,000 a year from New Jersey Bell. That puts us into JVNC-net. We pay $37,000 a year to JVNC-net for access to the national backbone network. For this amount of expenditure about 3,000 people have access to the Internet at our company—all they can use, all they want.

A friend recently gave me a model for Internet pricing based on pricing at restaurants. One model is the a la carte model; that is, you pay for every dish you get. The second model is the all-you-can-eat model, where you pay a fixed price and eat all you want. But there is a third model that seems more applicable to the Internet. If you ask your kids about restaurant prices, they will say that it does not matter—parents pay. Perhaps this is more like the Internet!

The pricing to the user of the Internet presents a baffling dilemma that I cannot untangle. It looks like it is almost free. The Internet is a new kind of model for communications, new in the sense that while obviously it has been around for a while, it is very different from the telecommunications infrastructure. If I look at it, what I see inside the Internet is basically nothing! All the complexity has been pushed to the outside of the network.
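The per-user arithmetic behind that "almost free" impression can be tallied directly from the figures above. Treating the $35,000 router figure as an annual cost is an assumption; the talk leaves its period ambiguous.

```python
# Back-of-envelope per-user cost of Bellcore's Internet access, using the
# figures quoted in the talk. The router figure is assumed to be annualized.
router_support = 35_000   # routers plus their support (period assumed annual)
t1_lease = 7_000          # leased T-1 line from New Jersey Bell, per year
midlevel_fee = 37_000     # JVNC-net fee for national backbone access, per year
users = 3_000             # people at Bellcore with unmetered access

total = router_support + t1_lease + midlevel_fee
per_user = total / users
print(total)                # 79000
print(round(per_user, 2))   # 26.33 — about $26 per person per year, flat rate
```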
The traditional telecommunications approach has big switches, lots of people, equipment, and software taking care of the inside of the network. In the Internet model you do not go through big switches—the network just has routers. Cisco, Wellfleet, and others have prospered in this new business. Typically, a router might cost about $50,000. This is very different from the cost of a large telecommunications switch, although the two are not truly comparable in function. With a central network chiefly provisioned by routers, all the complexity pushes to the outside. Support people in local areas provide such services as address resolution and directory maintenance at what we will consider to be the periphery of the network. The network is just a fast packet-routing fabric that switches you to whatever you need.

THE CONTRAST IN PHILOSOPHY BETWEEN THE INTERNET AND TELECOMMUNICATIONS

The contrasting approaches and philosophies are as follows. The telecommunications network was designed to be interoperable for voice. The basic unit is a voice channel. It is interoperable anywhere in the world. You can connect any voice channel with any other voice channel anywhere. Intelligence is in the network, and the provisioning of the network is constrained by regulatory agencies and policies.

The Internet, by contrast, is interoperable for data. The basic unit is the Internet protocol (IP) packet. That is in its own way analogous to the voice channel; it is the meeting place. If you want to exchange messages on the Internet, you meet at the IP level. In the Internet the intelligence is at the periphery, and there have not yet been any significant regulatory constraints governing the building and operation of the network.

Pricing is a very important issue right now on the Internet. The telecommunications approach to pricing is usage based, with distance, time, and bandwidth parameters. Additionally, there are settlements between companies when a signal traverses several different domains.

that currently fund libraries are quite diverse, economically, demographically, and culturally. One wonders if such diversity will continue to be found in the electronic communities that fund electronic libraries or whether we will see a much greater homogeneity among the patrons of a given "library" on the network.

Realistically, another aspect of this problem is that the economically disadvantaged or those uncomfortable with technology are least likely to own the network connections and information technology needed to access libraries over the network. They will continue to visit the library physically and to use it as a place where they can obtain not only access to information generally but also the information technology necessary to use electronic information resources on the network (which might include important community resources, such as listings of employment and educational opportunities or information about social services). The more prosperous, technologically literate people may have abandoned the geographically local public library for remote electronic libraries on the network, thus eroding the community commitment to continue to support the local library.

DIGITAL LIBRARIES AND LIBRARY SERVICES

I am not fond of the term "digital library." The term "library" is used to refer to at least three things: a collection, the building that houses that collection, and the organization responsible for it all. As organizations, libraries acquire, organize, provide access to, and preserve information. These are the primary functions of a library, though in many cases they have been augmented with more extensive responsibilities for training and teaching, for providing or facilitating access to social services, or for managing certain types of government or institutional information resources.
When one considers libraries as organizations, it is clear that they deal with information in all formats—print, microforms, sound recordings, video, and electronic information resources. From this perspective, the term "digital library" doesn't make much sense and provides a very one-sided view of the mission and operation of a library. It is certainly true that an increasing part of the collection of information that many libraries acquire, organize, provide access to, and preserve will be in electronic form, and as this proportion increases it will have wide-ranging effects on all aspects of library planning, operation, policy, and funding. How quickly this transition will move and how soon a critical mass of information necessary to serve various classes of library users will actually be available in electronic form are subject to considerable debate. I think that it will take longer than many people believe.

There are a number of obvious situations in which much of the primary data in various areas is available and heavily used in electronic form, including space sciences, remote sensing, molecular biology, law, and parts of the financial industries. In some cases these data can only be meaningfully stored and used in electronic forms. But even in many of these areas much of the journal literature (as opposed to primary information) is still available only in print.

Already there are massive data and information archives available on the network, containing everything from planetary exploration data to microcomputer software. While these are often referred to (rather grandly) as digital libraries, they are really not, in many cases, part of any actual library's collection and are not really managed in the way in which a library would manage them. In a number of cases they are actually the volunteer efforts of a few interested and energetic individuals, and there is no institutional commitment to maintain the resources.
It is important to recognize how little of the existing print literature base is currently available through libraries in electronic form, even to the limited and well-specified user communities that are typically served by the still relatively well funded academic research libraries. Libraries have been investing heavily in information technology over the past two decades, to be sure, but most of this investment has been spent on modernizing existing systems rather than acquiring

electronic content. The most visible results of this program of investment are the on-line catalogs of various libraries that can be freely accessed across the Internet today (although they often contain a mix of databases, such as the catalog of the library's holdings, which are available to anyone, and other licensed information resources, such as databases of journal citations or full text, which are limited to the library's direct user community). These information retrieval systems allow a user to find out what print-based information a library holds; while they are very heavily used, their end result is normally citations to printed works rather than electronic primary information. Even an institution such as the University of California, which operates a sophisticated system-wide information retrieval system and has been aggressively investing in electronic information resources for some years, offers access to only a few hundred journals and virtually no books in electronic format, even though it holds active subscriptions to over 100,000 journals and its on-line catalog contains entries for over 11 million books. (In many cases the journal articles become available to the UC community only a month or more after print publication; in most cases the electronic form of an article does not include graphs, equations, or illustrations.) The conversion of the major research libraries to electronic form is years away and is less a technological problem than a business and legal issue. Interestingly, it seems likely that conversion of critical masses of information to electronic form may occur first for specific, often modest communities that are willing to pay substantially for access to the information resources and that are willing to pay personally (or at least organizationally), rather than through the support of an intermediary agency like a library.
In these situations it also seems fairly common for individual users to be willing to pay for access to information transactionally (i.e., by the article retrieved or viewed or by the connect hour) rather than under the flat-fee license model favored by libraries. In many cases primary rights-holders feel much more comfortable with an arrangement that gives them income for each use of the material, rather than trying to decide "what it's worth" in advance as they would have to do in a flat-rate contract agreement. Indeed, this conversion of information resources to electronic form is already well advanced in the legal community. This area has moved quickly because it seems that the user community is not price sensitive and because of some unusual situations with regard to concentrated ownership of much of the key material by corporations that are both rights-holders and providers of access services directly. And, as has already been discussed, it may prove impractical or even impossible, for a variety of reasons, for libraries to provide much patron access to these electronic information resources once they are established. Certainly this has been the case in areas such as law (with the exception of law school libraries, which benefit from special arrangements intended to familiarize future graduates with the electronic resources). It is important to recognize that these commercial information providers are not libraries, though they may offer access to immense databases that represent key resources in a given discipline. They acquire and mount information to make a profit and remove it if it does not generate sufficient revenue. They will not preserve little-used information just because it is an important part of the historical or cultural record. And many of these providers have followed a marketing strategy that emphasizes sales to a limited market at very high prices rather than a much larger volume market at much lower prices per unit of information. 
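The trade-off described above between transactional (per-article or per-connect-hour) pricing and the flat-fee licenses favored by libraries comes down to expected usage. The sketch below makes the break-even arithmetic concrete; all prices and usage figures are illustrative assumptions, not numbers from the text.

```python
# Illustrative sketch of the transactional vs. flat-fee trade-off: a
# flat-rate license is the better deal only when expected usage is high
# enough. All prices and usage levels below are assumed for illustration.

def transactional_cost(uses, price_per_use):
    """Pay-per-article (or per-connect-hour) total cost."""
    return uses * price_per_use

def cheaper_plan(expected_uses, price_per_use, flat_fee):
    """Pick the cheaper arrangement for an expected level of usage."""
    if transactional_cost(expected_uses, price_per_use) < flat_fee:
        return "transactional"
    return "flat-fee"

# A lone specialist retrieving a handful of articles is better off paying
# per use; a library serving a whole campus is better off with the license.
print(cheaper_plan(expected_uses=20, price_per_use=10.0, flat_fee=5_000))
print(cheaper_plan(expected_uses=2_000, price_per_use=10.0, flat_fee=5_000))
```

This also illustrates why rights-holders prefer per-use income when usage is uncertain: the flat fee forces both parties to guess "what it's worth" in advance.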
In considering the conversion of the existing print base to electronic form, one basic issue is the negotiation of licenses with rights-holders. In most disciplines there are many publishers who contribute to the print base. The notion of a major research library negotiating and managing contracts with thousands of publishers is clearly absurd, as is the notion of negotiating a contract for a $100/year journal. (It will cost much more than that to negotiate the contract in the absence of a "standard" contract that all parties could simply accept unmodified. Such standard contracts do not exist today, and there has been little success in developing one, despite the efforts of organizations such as the Coalition for Networked Information.) A number of companies have begun to act as rights aggregators for libraries: University Microfilms Inc. and Information Access Company, for example, will license to libraries full-text databases containing material they

have, in turn, licensed from the primary publishers. But coverage of these full-text databases is limited, and the material is not sufficiently timely to substitute for print subscriptions (or for electronic access directly from the publisher). Until libraries can acquire electronic information under some framework that matches the simplicity and uniformity characterizing the acquisition of most printed material today, the growth of electronic library collections will be slow. The limited amount of library collections available in electronic form is already beginning to have effects that cause concern. Network-based access to electronic information is unquestionably convenient, and users tend to use whatever electronic information resources are available without considering that they may be incomplete or may not represent the highest-quality or most current work on a given subject. To some users, electronic library collections are already coming to define the literature in a given area; these users have been too quick to embrace the transition to electronic formats. In the next decade we will be dealing with a mixed transitional environment for libraries, in which some parts of the collection exist in electronic formats but a large part continues to be available only on paper. This will greatly increase the complexity of managing the provision of library services, defining priorities, and developing policies and strategies to position libraries to function effectively in their role as information providers to the public, both in today's geographically bound, print-based world and in the future environment of electronic information on the NII.
CONCLUSION: THE BROADER CONTEXT OF INFORMATION PUBLISHING ON THE NII

It is important to recognize that libraries are part of an enormously complex system of information providers and consumers that also includes publishers, government at all levels, scholars and researchers, the general public, scholarly societies, individual authors, various rights-brokers, and the business community. Even the much more constrained and relatively homogeneous scholarly communications and publishing system is vastly complex. Libraries play an integral part, but only a part. There may be an expanding division between the world of mass-market information, much of which is bought and sold as a commodity in a commercial environment, and scholarly information, which is primarily produced and consumed by nonprofit institutions that place great value on the free flow of information (although there are currently many for-profit organizations involved in the scholarly publishing system). There are vast changes taking place throughout this entire system as a result of the possibilities created by networks and information technology. Organizations are reassessing their roles: scholarly societies that once viewed themselves very much like commercial publishers are now rethinking their relationships with their disciplinary knowledge base, their authors, their readers, and the libraries that have traditionally purchased much of their output. Individual scholars are exploring the possibilities of various types of network-based publishing outside the framework established by traditional publishers, and in some cases also outside customary social constructs such as peer review. Indeed, processes such as publication and citation that were relatively well understood in the print world and that are central to numerous areas of our society are still very poorly defined or understood in the network context. Libraries are still basically oriented toward artifact-based collections and toward print.
They are struggling to respond to the new variations on publishing and information distribution that are evolving in the networked environment and to determine what their roles should be with regard to these developments. Considering that libraries have played a relatively minor role in anything involving television or radio, I would suggest caution in predicting their roles in a system of information creation, distribution, and use that is being transfigured by the introduction of networks and information technology based solely on the effects of these technologies on existing

library operations and services rather than on the system as a whole. I do not believe that libraries are going to disappear, but they may change substantially in their roles, missions, and funding in the coming years. Certainly there is going to be a growing demand for properly trained, innovative, technologically literate librarians in many new roles in the networked information environment. It is clear that the visions of information access through the NII are attractive and represent worthy societal goals. In many regards they build on the tradition of public libraries and their essential role in our society as it has developed during the 20th century. But without outright subsidy of access to electronic information or major changes in the framework that determines the terms and conditions under which the public has access to such information, these visions of the NII's potential may be difficult to achieve, and it may be unrealistic to assume that libraries have the resources to step up to these challenges. It seems clear that libraries have a number of potential—perhaps central—roles to play in implementing the public policy objectives that have been articulated for the NII. But they may not be the route to achieve all of these objectives, and in any event their roles and efforts will need to be complemented by investments from other sources, such as the federal government, in the development of public access information resources. We need to be very clear about who the user communities are, who is to be subsidized, what is being subsidized, and how that subsidy will be financed.

Discussion

ROBERT KAHN: Bob Lucky, if your model of the Internet being free is really going to hold for the rest of the country, how are we going to get it into the 70 million homes in America at $37,000 plus $7,000 each?

ROBERT LUCKY: I don't know. That is where the cost is going to be. The model is really based on business use, where we pay an average of $5 per person per month for everything that we can associate with the Internet, while the telephone might be closer to $100 when you throw in the long-distance charges and so forth. But in trying to get the Internet into homes, particularly if there is a separate line involved, you face this enormous cost and bottleneck of that access, and so I simply do not know.

PAUL MOCKAPETRIS: I was struck by your comments about the difference between the phone company model and the Internet model. You said that in the phone company case the local loop is what is expensive and the long distance is not. In the Internet case, you said you pay for access but then the Internet is free, so it seems like those two are the same model in the sense that the local loop is the thing that dominates the cost, and I was just curious whether I was wrong or not.

LUCKY: I am not sure I understand, but it is sort of the same form as Bob Kahn's question. The costs are in that access. In business you have a model that is able to share that cost among many people, because you have one T-1 line into a big building and then you have the local area network where most of the costs are. But most of us associate those costs in the local area with our computing environment and not with communication. I spend millions of dollars keeping up all the local area networks and all the people who run them and so forth, but I say, I have to do that for computers anyway, and the communication on the Internet just comes for free from that.
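Lucky's cost-sharing argument reduces to simple amortization arithmetic: a business spreads one access line and its support over many users, while a home line serves one household. The sketch below is illustrative only; the line and support costs and the building population are assumed figures chosen to roughly reproduce the $5-per-person business number he cites, not values from the discussion.

```python
# Back-of-envelope sketch of shared vs. dedicated Internet access costs.
# All dollar figures are illustrative assumptions; only the rough $5/month
# business result corresponds to a number cited in the discussion.

def per_user_monthly_cost(shared_line_cost, shared_support_cost, users):
    """Amortize a shared access line and its support costs over all users."""
    return (shared_line_cost + shared_support_cost) / users

# A T-1 line plus networking support, shared by a 1,000-person building.
business = per_user_monthly_cost(
    shared_line_cost=3_000,     # monthly T-1 charge (assumed)
    shared_support_cost=2_000,  # networking share of LAN upkeep (assumed)
    users=1_000,
)

# A home user shares the line with no one.
home = per_user_monthly_cost(
    shared_line_cost=30,        # dedicated second phone line (assumed)
    shared_support_cost=20,     # access provider charge (assumed)
    users=1,
)

print(f"business: ${business:.2f}/user/month, home: ${home:.2f}/user/month")
```

The point of the sketch is the ratio, not the particular dollar amounts: dividing by 1,000 users versus dividing by 1 is what makes business access look "almost free."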
A totally different model applies for the home, because it is one on one, and I don't know how you would do that unless you overlay it on your voice line. I mean, if you share that line and you have a flat rate, then again you could conceivably have Internet access that is rather cheap.

MOCKAPETRIS: I have one other question. A lot of people have different metrics about what part of the Internet is commercial and what part is not. What is the right way to count? We have numbers about how many hosts are registered that are not entirely sound in some ways—sort of like sampling license plates and seeing how many are commercial and how many are government. What would be the appropriate way to think about that? Packets or packet miles?

LUCKY: I do not know; we do not know. I do not think you can count packets and distinguish which is which right now, so again I am sorry not to be able to answer these questions. In fact, no one knows how many Internet users there really are. People say 20 million, but all we know for sure is the number of registered hosts. I have no idea, even in our company; I said 3,300

people. We have 3,300 addresses, but we think maybe only 1,200 are active. We really do not have good statistics on a lot of this stuff.

GEORGE GILDER: How many hosts?

LUCKY: About 2 million hosts.

GILDER: The concept of universal service confuses me. It took 50 years and about a trillion bucks to get universal service, something like that. To proceed forward with some requirement for universal service would stifle all progress. You cannot instantly create universal service. Universal service is something that can happen when you get sufficient volume so that incremental costs are very low. You have to start as an elite service almost necessarily. I wonder, when you speak of universal service, just what kind of requirement is meant or how that can be defined.

CHARLES FIRESTONE: I think that what you are going to see is a dynamic concept of universal service. You are right—something that is new and innovative but eventually becomes necessary or heavily penetrated might become universal service in a future year. One of the big issues we are going to face is figuring out what should be included in the bundle of universal service (just as we need to figure out what should be included in the bundle of privacy rights); right now, I think, it is just a connection, a dial-tone connection. One of the issues, for example, is touch-tone telephone service. Is that something that in some states is considered universal service? People have to have access to touch-tone, or they have to have a touch-tone connection into their homes, because otherwise they cannot avail themselves of the services I think you have in mind. But eventually there may be some governmental service—I am thinking down the line—some access to local information, your community. I know, for example, that Santa Monica has had the PEN [Public Electronic Network] system.
Now, universal service may only mean having free access to the Santa Monica City Council or their local network—an ability to access it without having to pay extra for it. But it is something that has to be dynamic, and I am sure it will be. The other question is the application to libraries. What is the library equivalent in the electronic era? How do you connect into information, and what information should be a public resource as opposed to something that is available strictly on a pay-by-the-drink or per-bit basis? That is something that I think we as a society are going to have to come to grips with.

MICHAEL ROBERTS: There is a lot of clamor these days for improving the security of the network, especially the Internet, so I think Colin Crook has got it right: especially in a network that has to have universal access, you assume the network is at some level insecure and you secure the applications. The question is, for public-sector areas that are critical, such as health care, libraries, and also intellectual property, what sort of process should we be thinking about from a policy standpoint in securing those applications? Is it possible, for instance, to have the sort of certification of applications that the financial people can do very privately and behind closed doors applied to private-sector applications? Is it possible to apply product-liability sorts of considerations to that area even in the public sector?

EDWARD SHORTLIFFE: I have some thoughts about it. One of the intriguing problems about privacy and confidentiality of health care data is that the issue is not necessarily the security of an application per se but what somebody who has access to the data does with them. In other words, there is a potential for abuse by people who are authorized to access patient data. That is why legal remedies with criminal penalties are required when someone misuses privileged medical information to which they have access.
There are no national standards in this area at present. As a result, one of the big emphases I have heard in the medical area is that we need to begin to introduce some uniform penalties, probably with preemptive federal legislation about the misuse of data to which people do have valid rights of access. As for the more generic issue of trying to prevent people from breaking into data sets, instituting appropriate certification of software has a valid role. What you are talking about is

simply having databases out on the network with lots of different applications that could access them. Of course, you run into the question of what should be the nature of the varied access methods you could use to get to those data sets, independent of the specific applications that may have been written to access them.

MARY JO DEERING: I have a question for Ted Shortliffe, whose clear vision I have always appreciated, but also for people in the audience on both the engineering and content sides. It picks up on something that Charlie Firestone said this morning about the process of disintermediation going on in society with regard to information. The same thing is going on in health, actually, and it is a process parallel to the deinstitutionalization of health care. Health care reform is really going to continue pressure in those directions, with an emphasis away from hospitals and high-end acute care providers and toward primary care, preventive medicine, and home care, for that matter. My request specifically to Ted right now concerns that wonderful sketch [Figure 3], where you actually began to paint what the health information infrastructure would look like. It did not really include any of the linkages that would reflect that type of new reality in health care—the linkages that would be necessary among nonhospital institutions, nonspecialty providers, and perhaps the consumer. I think we would all like to see what that would look like.

SHORTLIFFE: You are absolutely right. As you know, we had a meeting on the subject of the NII and health care last week that the National Research Council sponsored ["National Information Infrastructure for Health Care," October 5–6, 1993] at which I think the message was driven home loud and clear. I personally am in primary care, and therefore I am sensitive to it as well. Good point.
KAHN: I have a two-part question, one part for Cliff Lynch and one part for Ted Shortliffe. I know, Cliff, that you are very well aware of the importance not only of accessing information but also of delineating what you can do with it when you get it—not only whether you can copy it or distribute it or make derivative works, but all the things that are typically covered under copyright. In your talk you really did not get into that, and I am wondering what your own views are as to how that particular aspect of library development is going to proceed. How are we going to know what we can do with this stuff?

CLIFFORD LYNCH: I think that this issue of what you can do with information is already a major problem. As soon as you move into a mode where you are licensing information rather than purchasing it, and licensing information from different sources with different contractual provisions—and, just to add to the fun, without a standard taxonomy of things you can do—you are in a situation where it is very difficult to know what you can do beyond reading the information once, purging it from memory, and never printing it. I think this is a particularly troublesome trend as we start thinking about much more intelligent ways of handling information. As people start building personal databases and using intelligent agents, "knowbots," and things of that nature that correlate information from multiple sources and refine it, this trend may become a major barrier to making intelligent use of information. It is something I am really concerned about. Another dimension of this is multimedia. I keep hearing about multimedia, but at least on bad days I think the only ones who will be able to afford it are groups like major motion picture studios, because only they can afford enough lawyer time to clear the rights.
ROBERT PEARLMAN: This issue is very, very big in the education sector, because one of the things that kids are doing these days is making multimedia reports, combinations of video and text and graphics, and they just grab images and text from everywhere, and it is a tremendous ethical problem. But what has happened is that an industry is developing now, and CD-ROMs are out that basically provide images that you can use. In other words, an industry is growing up that says, "Sure, National Geographic will sell it to you at a big price, but we will give you similar images

that you can buy and use for practically nothing," so there may be this kind of development in other sectors as well as competing sources of information.

KAHN: We have heard from Bob Lucky that the Internet, to certain kinds of businesses, essentially looks almost free, and from Bob Pearlman that to the educational community it is virtually uneconomic at the moment. The other part of my question, for Ted Shortliffe, is where the technology fits in the case of the medical community. I think we heard from Ted that there is a strong motivation on the part of physicians in the hospitals to gain access to this technology. The medical profession itself, and I believe you, could provide a good, strong justification for that. In fact, I see it coming. The question I have is whether there is any economic justification for the end patient—namely, the user of the health care system—to have some coupling to the Internet and, if so, at what level of capability?

SHORTLIFFE: This relates to Mary Jo Deering's question. First, when you look at what the health care field is spending on computing right now, the amount of money it would take for any given institution to hook up to the Internet is minuscule compared to what it is spending overall on information technology, so I don't see cost as a barrier per se. It is more the perceived benefit and how you actually would make use of the national network, given the lack of standards for actual connectivity and data sharing. The same kind of argument could be made as we begin to see the evolution of new health care delivery plans. It may well become economically beneficial for health care providers to pay for linkages into the homes in order to provide patients with access to information that would keep them from becoming more expensive users of the health care system. A lot of visits to doctors are unnecessary.
If people only had more and easier access to the kind of information they might need, you can imagine how this might reduce overuse of certain kinds of facilities. The problem with that approach is that you are hypothesizing not only the availability of the technology but also the education of the end-user patients and health care users, which right now is unlikely. The biggest users of health care are the people least likely to have the facilities in their homes at present and the ones least likely to have the education that would allow them to make optimal use of such technology. So we are talking about a major social issue: enabling patients in their homes, and nonpatients who simply need access to health information, to make good use of the kind of information that might be made available.

KAHN: Is that a practical suggestion you just put out—that the doctors or hospitals literally pick up that mantle somehow?

SHORTLIFFE: It is not going to be the doctors. If you think the doctors are in charge of the health care system, you are a few years behind. It is going to be health plans, as they begin to look at how they can compete for patients in large areas, especially in the big metropolitan areas. How this will play out in more rural areas is another matter and, I think, a great worry to people, because the emphasis tends to be on the competitive marketplaces around the big cities. However, the health plans will pay for these technologies if they find it is to their competitive advantage to do so.

LINDA ROBERTS: I am trying to find the common thread in all that we have heard this morning. It strikes me that in most cases what you have been talking about is doing what we do better. But I think there really is an opportunity to do better things in every one of the sectors that has been talked about this morning. I am thinking about the library as one example.
We really have public libraries as a compromise that, if I understand it, was reached between the publishers and the communities that really wanted everybody to have their own library. There were people who had libraries and who did not need public libraries, and they were the exception rather than the rule. What is so interesting about what could happen in the future is that there really could be a much more decentralized system of libraries. Everybody could have their own in their own home, and what is even more fascinating about some of the things that are

happening in education when you talk to teachers is that a lot of the material that could be in these libraries does not necessarily have to be controlled and produced by the publishers out there. So it seems to me that one of the things we really have to think about for the future is what we haven't been doing that we ought to be doing—what might create new opportunities for business and industry, but might also create new opportunities for learning more broadly.

VINTON CERF: With respect to something Bob Pearlman said—that somehow it was not economic for the schools to be part of the network environment—I am a little puzzled, because I assume that we spend, as a country, a great deal of money on education, so it is not as if there are zero dollars out there. A lot of dollars are spent on education, so obviously a trade-off is being made about the utility of being part of a network environment versus spending money on other things. Maybe you can help us understand a little about how the educational dollar is spent. Is it mostly for personnel? Do we find networking not so useful until all of the teachers are trained in its use, as opposed to throwing equipment into the schools and expecting somebody to do something useful with it with no adequate software, and so on? I want to suggest that we are like parents of teenagers when the kids are out late and we do not know where they are and we're worried. There are a thousand different possible things that could have happened to them, even though it is probably the case that most of those things are mutually exclusive, and yet we still worry about all thousand of them. We worry about how this technology will finally reach critical mass—will finally get to the point where there are enough people who have access to it, for good, sound economic reasons, that we can start doing some of the things we have heard about this morning.
I would like to suggest that we actually do not know yet which of these things will trigger the general availability of all of this technology. The computer scientists got it first because they had to have the stuff to write programs. Then they got to do all these other neat things with the networks. Bob, what in your mind is the triggering event for making this economically interesting for the schools?

PEARLMAN: This, of course, is a very complicated question. If you look clearly at technology in U.S. schools, it has not resulted in many economies. For instance, in the mid-1980s some companies tried to market what were called integrated learning systems to the schools, on the basis that they would actually save on the use of teachers. In general they did not really save on the use of teachers, which is where the real economies are. In schooling, about 80 percent of school budgets go to personnel, so the only way to save money on any reasonable scale is by saving on personnel. Some save on, say, custodial or food-service personnel, but in the main you have to save on teacher personnel, and the only way you really get at that is by totally reorganizing schooling. I was associated with one of the New American Schools design awardee groups in Cambridge for the last year and a half. We came up with a design that we think in the long run is going to save money. It requires quite an up-front investment in communications technology. For our kids to be able to work much more on their own, with teacher-advisors managing their affairs and with mentors, we had to establish up front an infrastructure—in our design, a local area network in the school connected to the Internet—with those up-front costs, and with connections in the community, which meant that the community had to be wired properly. All of these up-front costs had to be borne by somebody.
With that kind of structure in place, we felt that all sorts of economies could occur in school personnel, but we had to test the proposition. The problem is that, from the point of view of a school that exists right now, we are talking only about an additional add-on cost. I don't mean that people really cannot afford to get onto a network; it is just hard for them to justify it. There are no real economies nationally. You get on the Internet, and what do you see? You are really just bringing in more programs—more kinds of things to look at, more curriculum possibly. But you are not really enabling a saving at that site.
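Pearlman's point that meaningful savings must come from personnel follows from simple budget arithmetic. In the sketch below, only the roughly 80 percent personnel share comes from his remarks; the budget total is an assumed figure for illustration.

```python
# Illustrative sketch of the school-budget arithmetic: if ~80 percent of a
# budget is personnel, cuts confined to the non-personnel side can never
# save more than the remaining ~20 percent. The budget total is assumed.

PERSONNEL_SHARE = 0.80           # roughly the figure cited in the discussion
budget = 10_000_000              # assumed annual budget

personnel = budget * PERSONNEL_SHARE
everything_else = budget - personnel

# Even eliminating ALL non-personnel spending (impossible in practice)
# caps savings at 20 percent of the budget.
max_nonpersonnel_savings = everything_else / budget

# A modest 5 percent efficiency gain on personnel saves as much as a drastic
# 20 percent cut to everything else combined.
personnel_gain = 0.05 * personnel
nonpersonnel_cut = 0.20 * everything_else

print(max_nonpersonnel_savings)
print(personnel_gain, nonpersonnel_cut)
```

This is why, in his telling, technology that merely adds network access on top of the existing organization of schooling registers only as an add-on cost.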

What we really need is some real, serious innovation in the organization of schooling. That may take a lot of forms, and the charter school development may help. I am not a partisan of Whittle's efforts, but I am not happy that he cannot make it in the new design world. In fact, I would like to see some private player actually try to do it, because the question is how to put together the package: how to develop a school that is more economical when it is tied to a system, meaning many, many schools, whether in one location or around the country, that actually work together and produce some savings because they are sharing curriculum, curriculum costs, development costs, and teachers through the media. That is what has to happen. Although we have things called school districts, or states that pretend to be school districts, they don't really bring about efficiencies in the way that a corporation like Citibank is trying to accomplish, looking at how to make a network that really makes all of that blend together. So my answer is only that we are going to have to have lots of experiments and new designs for schooling over the next several years in order to actually exploit information technology properly.

CAROL HENDERSON: I want to pick up on an earlier question by Bob Kahn and tie together a couple of strands that we heard this morning, connecting the health care sector and the library sector. We often hear about patient records and telemedicine, but I do not know that it is widely realized how often people go from the doctor's office to the public library to try to get information about what it is they have just heard, what their child has been diagnosed with, or what they are looking at in terms of caring for their aged parents down the line, and so on.
When you look at the kinds of questions they ask, they really want to mine the medical field's information, but they want to do it, in a sense, outside the medical field, because they want to know whether there are alternative kinds of treatment and what the literature says about the drug they are supposed to be taking if they are also taking something else. The usefulness of a neutral source of information about information is something that I think is very valuable, and perhaps we should not give up on it too easily. Access for patients, or for people before they become patients, to preventive health care information and to information about their conditions is something that libraries can be a big help with.

I think perhaps we also passed over the idea of libraries as community institutions and as information providers. It is often the libraries that have mounted databases not just about their collections but also about community information and referral sources. Libraries are often the central source of information on where in a community you go for various government and social services across agencies, as well as an institution that, for instance, reaches out to newer immigrant groups and helps get them into the mainstream.
