

The National Academies | 500 Fifth St. N.W. | Washington, D.C. 20001
Copyright © National Academy of Sciences. All rights reserved.



Below are the first 10 and last 10 pages of uncorrected machine-read text (when available) of this chapter, followed by the top 30 algorithmically extracted key phrases from the chapter as a whole.


Introduction and Context

WHAT IS THE INTERNET?

The Internet is a diverse set of independent networks, interlinked to provide its users with the appearance of a single, uniform network. Two factors shield the user from the complex realities that lie behind the illusion of seamlessness: the use of a standard set of protocols to communicate across networks and the efforts of the companies and organizations that operate the Internet's different networks to keep its elements interconnected. The networks that compose the Internet share a common architecture (how the components of the networks interrelate) and software protocols (standards governing the interchange of data) that enable communication within and among the constituent networks.1 The nature of these two abstract elements, architecture and protocols, is driven by the set of fundamental design principles adopted by the early builders of the Internet. Because an appreciation of these principles is important to understanding what makes the Internet what it is, several of them are discussed at length below. Those who design and operate the Internet generally characterize the Internet in terms of these principles, which is not surprising given that the Internet derives from work done by researchers in computer science and engineering fields that rely on abstraction as a technique for managing the complexity of what computer scientists study or build.

1Some would argue that the term "Internet" embraces the entire interconnected data world rather than just the IP-based infrastructure. That broader definition includes networks using other protocols that interface with the IP-based Internet.

The success of the abstracted interface through which users encounter the Internet contributes to the Internet illusion. Software such as Web browsers makes use of names for things attached to the Internet (www.example.com, for instance) that hide the nature of the networks to which both they and the user are connected. This, of course, has enormous advantages for users, who need not worry about the complexities of the networks they are making use of. Making things appear simple, however, can lead to unmet expectations. For example, a user who attempts but repeatedly fails to connect to a Web site, say www.example.com, will look to the Internet service provider (ISP) to resolve the problem. However, the odds are good that the computer named www.example.com will turn out not to be attached to the network of the user's provider. The provider, in fact, may not have a direct connection to the provider servicing www.example.com and may not even be able to tell the user what the problem is, where it is located, or how likely it is to happen again. If a user considers connecting to that site to be mission-critical, such a response is likely to be very frustrating. Advances in technology and services can, however, improve the quality of the illusion substantially, as can a better understanding by users of how the Internet is constructed. We will return to the technological, economic, and policy issues surrounding interconnection in Chapter 3.

One consequence of the Internet illusion is that the ordinary Internet user is likely to assume that a connection to the Internet via a given provider of Internet services amounts to a direct connection to the totality of the Internet.
But in reality, the user has only contracted for Internet service with one of a number of ISPs, enterprises that provide Internet connectivity to end users or other ISPs. Each ISP controls and operates only a fraction of the global network system. To reach all of the end destinations, content, or services that a user wishes to reach, an Internet service provider may have to forward the user's communication through several other networks, none of which it controls. Interconnections are made largely through network links that are bilaterally coordinated between ISPs.

It is important to distinguish between the public2 Internet, which is normally what is meant when "Internet" is written with a capital I, and the Internet's core technologies (standard protocols and routers), which are frequently called "IP technology" in reference to the key protocol used on the Internet. Throughout this report, these terms, Internet and IP, will be used in this way. The public Internet is distinguished by global addressability (any device connected to the Internet can be given an address, and each address will be unique) and routing (any device can communicate with any other). In practice, however, as a consequence of interventions imposed by ISPs and local network managers, such as the deployment of firewalls and other technologies for filtering communications traffic, not all data are allowed to pass to all devices and not all devices are assigned public addresses. IP technologies are also employed in private networks that have full, limited, or even no connectivity to the public Internet; the distinction between public and private blurs because, to the extent that private networks acquire connections to the Internet, they by definition become part of it.

2"Public" is used here in the same sense it has in the context of the public telephone network. It does not denote public ownership; it denotes instead a network to which anyone can connect and in which any customer can exchange traffic with any other. The line between private and public is not always sharp; in particular, the physical networks they use are not necessarily distinct.

If one looks at the elements that physically make up the Internet, one sees two categories of objects. The networks that make up the Internet are composed of communications links, which carry data from one point to another, and routers, which direct the communications flow between links and, ultimately, from senders to receivers. Communications links may use different kinds of media, from telephone lines to cables originally deployed for use in cable television systems to satellite and other wireless circuits. Internal to networks, especially larger networks in more developed parts of the world, are links that can carry relatively large amounts of traffic, typically via optical fiber cables.
The largest of these links are commonly said to make up the Internet's "backbone," though this definition is not precise and even the backbone is not monolithic.3 Links closer to users, especially homes and small businesses, typically have connections with considerably less capacity. Large organizations, on the other hand, tend to have high-capacity links. Over time the effective capacity of links within the network has been growing. Links to homes and small businesses, the so-called "last mile," have until recently, with the emergence of cable modem, digital subscriber line (DSL), and other technologies, been largely constrained to the relatively low speeds obtainable using analog modems running over conventional phone lines. Analog modems remain the dominant mode of home access.

3There is no easy way to specify which networks comprise the Internet backbone. For instance, in some countries a rather modest link may serve as the local backbone. Nor do all connections between providers take place through the backbone; there is no assurance that any particular data packet will flow through any part of the Internet backbone.
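The distinction drawn above between globally addressable public space and the private, firewalled address space many devices actually occupy can be illustrated with Python's standard ipaddress module. This is an illustrative sketch; the specific addresses are arbitrary examples (one belongs to a well-known public DNS service, the other to the RFC 1918 private range).

```python
import ipaddress

# A globally routable address: in principle, reachable from anywhere
# on the public Internet.
public = ipaddress.ip_address("8.8.8.8")

# An RFC 1918 private address, typical of hosts behind firewalls and NAT;
# such addresses are not routed on the public Internet.
private = ipaddress.ip_address("10.0.0.7")

print(public.is_global)    # True: part of the public address space
print(private.is_private)  # True: reserved for private networks
```

The same module distinguishes the other special-purpose ranges (loopback, link-local, and so on) that further qualify the "any device can reach any device" ideal.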

Routers are computer devices located throughout the Internet that transfer information across the Internet from a source to a destination. Routing software performs several functions. It determines the best routing paths, based on some set of criteria for what is best, and directs the flow of groups of data (packets) through the network. Path determination at each step along the way depends on information that each router has about the paths from its location to neighboring routers as well as to the destination; routers communicate to one another some of this path information. A number of routing algorithms that determine how routers forward packets through the network are in use; routing protocols mediate the interchange of path information needed to carry out these algorithms.

The Internet can be divided into a center, made up of the communications links and routers operated by Internet service providers, and edges, made up of the networks and equipment operated by Internet users. The line between center and edge is not a sharp one. Users who connect via dial-up modems attached to their computers clearly sit at the very edge. In most business settings, as well as in an increasing number of homes, LANs sit between the ISP and the devices that the Internet connects. These LANs, and the routers, switches, and firewalls contained within them, sit near the edge, generally beyond the control of the ISP,4 but not at the very edge of the network.

Software applications running on these computing devices, today typically PCs, use Internet protocols to establish and manage information flows that support applications over the Internet. Much as a common set of standard protocols lies at the core of the Internet, common standards and a common body of software are features of many applications, the most common being those that make up the World Wide Web (the Web).
The Web adds its own protocols for information exchange that build on top of the fundamental Internet protocols, and it also provides a standard way of presenting information, be it text or graphics. More specialized software, which also makes use of the Internet's basic protocols and frequently is closely linked to Web software, supports such applications as real-time audio or video streaming, voice telephony, text messaging, and a whole host of other applications. In light of the prominence of the Web today, Web-based applications and the content and services provided by them are sometimes viewed as synonymous with the Internet; the Internet, however, is a more general-purpose network over which the Web is layered.

4Though ISPs do sometimes provide firewalls for their customers.
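The path-determination function described earlier in this section, in which a router computes best paths from exchanged path information, can be sketched with a classic shortest-path computation. The topology, router names, and link costs below are hypothetical; real link-state routing protocols such as OSPF use variants of this idea over much richer state.

```python
import heapq

# A sketch of shortest-path route computation, the kind of calculation a
# router's path-determination step performs. Graph, names, and costs are
# all hypothetical.
def best_paths(links: dict[str, dict[str, float]], source: str) -> dict[str, float]:
    """Dijkstra's algorithm: lowest cost to reach every node from source."""
    cost = {source: 0.0}
    queue = [(0.0, source)]
    while queue:
        c, node = heapq.heappop(queue)
        if c > cost.get(node, float("inf")):
            continue                      # stale queue entry, already improved
        for neighbor, link_cost in links.get(node, {}).items():
            new_cost = c + link_cost
            if new_cost < cost.get(neighbor, float("inf")):
                cost[neighbor] = new_cost
                heapq.heappush(queue, (new_cost, neighbor))
    return cost

# Hypothetical topology: router A reaches C more cheaply via B than directly.
links = {
    "A": {"B": 1.0, "C": 5.0},
    "B": {"C": 1.0},
    "C": {},
}
print(best_paths(links, "A"))  # {'A': 0.0, 'B': 1.0, 'C': 2.0}
```

The exchange of "some of this path information" between neighbors is exactly what keeps each router's copy of such a computation usable as the topology changes.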

Following usage from the telecommunications industry, the essential physical components of the network, communications links and routers (including links that are parts of other networks, such as the telephone lines used by dial-up modems or DSL or the high-capacity fiber-optic cables shared among Internet, other data, and voice communications services), can be referred to as "facilities." Internet service providers use these facilities to provide connectivity using the Internet protocols. What is done with the facilities and basic connectivity comes under the heading "services." These services, which include such things as access to content (e.g., viewing Web sites, downloading documents, or listening to audio), electronic commerce (e.g., shopping, banking, and bill paying), or telephony, are enabled by both devices and software in the hands of users and service providers. Some services are enabled merely by installing software on user computers, while others rely on functionality implemented in computers and software attached to the Internet by a third party. In either case, the general-purpose nature of the Internet has meant that there does not have to be any arrangement between the Internet service provider and the provider of a particular service. While this statement generally holds true today, we are seeing the emergence of exceptions to it in the form of application-specific delivery networks (e.g., Akamai) that employ devices located throughout the network, generally near the edges. Chapter 3 discusses these trends and their implications for the future development of the Internet.

A multitude of businesses are based on selling various combinations of these elements. For instance, many Internet service providers (ISPs) integrate connectivity with content or services for their customers.
Some ISPs rely in part or in toto on access facilities (e.g., dial-up modem pools) owned and operated by other providers, while others operate most or all of these facilities themselves. Also, ISPs may opt to own and operate their own communications links, such as fiber-optic cables, and networks, or they may run Internet services over links and networks owned and operated by other communications companies (just as companies have resold conventional voice telephony services for years).

The tale is well told about how, over the past decade, the Internet evolved from a novel, but still developing, technology into a central force in society and commerce,5 and the committee will not belabor the point here. Suffice it to say that the transformations resulting from the Internet, along with expectations for continued growth in its size, scope, and influence, have given rise to widespread interest and concern on the part of government and society. A more realistic and better-informed appraisal of Internet issues has become imperative now that governments at all levels seek to control its evolution and use and dedicated issue-advocacy groups have begun to proliferate. This report, written by a group of experts in a number of areas (technologies, operation, and management of the Internet; associated communications infrastructures, such as the public switched telephone network; and related policy and social issues), is intended to explain key trends in the Internet's evolution and their implications for policy. It focuses on trends that are often misunderstood or incompletely treated by the mass media, and it highlights specific areas of policy that warrant more or better consideration. The remainder of this chapter characterizes the Internet's special design attributes and outlines several key trends in facilities and services.

5See, for example, Computer Science and Telecommunications Board (CSTB), National Research Council. 1996. The Unpredictable Certainty: Information Infrastructure Through 2000. Washington, D.C.: National Academy Press.

SUCCESS BY DESIGN: ABSTRACT FEATURES AND PRINCIPLES

Why has the Internet been so successful? Much of the answer lies in the combination of two factors: functionality and lower costs. The new functionality stems from the Internet's unique design principles and features that make connection, interconnection, and innovation in both facilities and services relatively easy. The Internet's characteristics have also made it possible to use the underlying communications infrastructure more efficiently, thereby setting a lower price point for the communications it enables. Both factors have generated a pattern of innovation in Internet technologies and uses.

Its relatively rapid responsiveness to users and other design attributes distinguish the Internet from other parts of the information infrastructure, such as the public switched telephone network (PSTN) or the television networks (cable and broadcast).
The design of those other networks is more focused on the center, and greater functionality is located within the networks. They have been more centrally developed and managed and historically have limited what users can do with them. In contrast, as detailed below, the Internet's design is effectively neutral to what services operate across the network. This enables a relatively unrestricted set of applications to run over it without the need for changes to be made within the network.

Much of the design of the Internet can be traced to the principles adopted by the research community that undertook its early development. These principles and the resulting architecture have been codified in research papers and in a special set of documents describing the Internet's design known as requests for comments (RFCs), a name reflecting the interactive and iterative nature of Internet technology development. Especially notable are the articulation of the end-to-end arguments6 and RFC 1958.7

These and other documents embody some value judgments and reflect the fundamental political and ethical beliefs of the scientists and engineers who designed the Internet: the Internet architecture reflects their desire for as much openness, sharing of computing and communications resources, and broad access and use as possible. For example, the value placed on connectivity as its own reward favors gateways and interconnections over restrictions on connectivity, but the technology can be used permissively or conservatively, and recent trends show both. Another value underlying the design is a preference for simplicity over complexity.

These values have been advanced through the architectural view embodied in voluntary standards set by such bodies as the Internet Engineering Task Force (IETF),8 which has been the dominant standards-setting body. Within this body, there has been open competition between compatible implementations. Other standards-setting bodies have also contributed to the establishment of key standards. One such body is the World Wide Web Consortium, which has worked on standards related to the Web. Another is the International Telecommunication Union (ITU). To date, Internet standards generally tend to be developed on a perceived-need basis and respond to technological developments; they also continue to be linked to the activities of the network research community.

The design values of the Internet have been reinforced by the environment in which the Internet was developed. In its early years as a cooperative research project, it was isolated from some of the stresses and strains associated with commercial marketplace interactions.
Today the IETF, like other organizations associated with the Internet, must respond to the economic forces of a robust marketplace. Whether and how the traditional Internet design values will be maintained is an important issue for the future of the Internet.

6See J.H. Saltzer, D.P. Reed, and D.D. Clark. 1984. "End-to-End Arguments in System Design," ACM Transactions on Computer Systems 2(4):277-288, November.
7Internet Architecture Board. 1996. Architectural Principles of the Internet, Brian Carpenter, ed., Request for Comments (RFC) 1958, June. Available online.
8While the IETF does apply an architectural view to the development of Internet standards, it does not have anything to do with controlling how the networks that make up the Internet are actually built and configured.

The Internet's "Hourglass" Architecture

As an open data network,9 the Internet can operate over different underlying technologies, including those yet to be introduced, and it can support multiple and evolving applications and services. In this layered architecture, bits are bits, and the network does not favor by its design or effectiveness any particular class of application.10 Evidence of this openness lies in the fact that the Internet's essential design predated a number of communications technologies (e.g., LANs, ATM, and frame relay) and applications and services (e.g., e-mail, the World Wide Web, and Internet radio) in use today and that within the Internet all of these technologies and services, both new and old, can coexist and evolve. The shape of an hourglass inspired its selection as a metaphor for the architecture: the minimal required elements appear at the narrowest point, and an ever-increasing set of choices fills the wider top and bottom, underscoring how little the Internet itself demands of its service providers and users.11

As a consequence of this hourglass-shaped architectural design, innovation takes place at the edge of the network, through software running on devices connected to the network and using open interfaces. By contrast, the PSTN was designed for very unintelligent edge devices (telephones) and functions by means of a sophisticated core that provides what are termed "intelligent facilities." Edge-based innovation derives from a fundamental design decision made very early in the development of the Internet and embodied in what is called the end-to-end argument in systems design.12 Aimed at simplicity and flexibility, this argument says that the network should provide a very basic level of service, data transport, and that the intelligence, the information processing needed to provide applications, should be located in or close to the devices attached to the edge of the network.
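The layering just described can be pictured as successive encapsulation: each layer wraps the payload handed down from above without needing to know its contents. The sketch below uses simplified, hypothetical header formats, not real packet layouts.

```python
# Simplified, hypothetical encapsulation: each layer wraps the payload from
# the layer above; neither layer needs to know the other's internals.
def transport_wrap(payload: bytes, port: int) -> bytes:
    return f"T|port={port}|".encode() + payload     # toy transport header

def network_wrap(segment: bytes, dst: str) -> bytes:
    return f"N|dst={dst}|".encode() + segment       # toy network header

message = b"hello"                                  # application data
packet = network_wrap(transport_wrap(message, 80), "192.0.2.1")
print(packet)  # b'N|dst=192.0.2.1|T|port=80|hello'
```

The application layer produced only `b"hello"`; everything else was added by lower layers, which is why an application can run unchanged over any transmission technology that can carry the outermost wrapper.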
Underlying the end-to-end argument is the idea that it is the system or application, not the network itself, that is in the best position to implement appropriate protection. If the network or network provider tries to take on this task, it is likely to implement something that is too heavy-handed and performance-inhibiting for some applications and too light for others. Both the sender and receiver are held ultimately responsible for assuring the reliability of communications services (e.g., making sure that what is received is complete and in order), so as to protect end users against the vagaries of the networks that lie between them.13 End systems are also responsible for protecting themselves; an end system must be able, for example, to authenticate the sender of a message requesting, say, the deletion of a file located on the system.14

9Computer Science and Telecommunications Board (CSTB), National Research Council. 1994. Realizing the Information Future: The Internet and Beyond. Washington, D.C.: National Academy Press.
10The layering principle is a powerful one and appears in other contexts. One sees operating system application interfaces, such as those provided by Windows or Unix, that allow a large number of application programs to run on diverse computing platforms, as well as bus protocols (e.g., PCI or USB) that allow a large number of peripheral devices to work with a variety of different computing platforms.
11Some caution is needed in interpreting the hourglass metaphor. The narrow "waist" at the middle of the hourglass is a metaphor for the minimally specified choice of technology at this point and is not intended to convey any sense of a choke point or bottleneck.
12This was first expressed in J.H. Saltzer, D.P. Reed, and D.D. Clark. 1984. "End-to-End Arguments in System Design," ACM Transactions on Computer Systems 2(4):277-288.

The original architects of the Internet made a key design decision to use the principle of layering to separate applications from the underlying transport infrastructure of the Internet. By hiding the realities of how the Internet is constructed (for instance, the topology of the network or the physical configuration of its elements, how routing is performed within the network, or how particular data transport services are implemented), the architecture enables people to write applications that run over it without having to possess any knowledge of these realities. In fact, without using specialized diagnostic tools, there is very little way for application software that makes use of the Internet to discover the detailed characteristics of the underlying networks. In general, even a poorly designed application can be added in a few sites at the edge of the network without putting the network at risk; this is how new applications can be experimented with, tested, and improved. This manifestation of the Internet illusion discussed above has been key to the explosion of new services and software applications of the Internet.
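The division of labor just described, in which the network offers only best-effort datagram delivery and end systems add whatever reliability they need, is visible in UDP, which exposes that datagram service almost directly. A minimal loopback sketch (the payload is arbitrary and the port is assigned by the operating system):

```python
import socket

# UDP exposes the Internet's best-effort datagram service almost directly:
# the network attempts to deliver each datagram but guarantees neither
# arrival nor ordering. Reliability, where an application needs it, is the
# end systems' responsibility (the end-to-end argument).
receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 0))      # let the OS assign a free port
receiver.settimeout(5.0)             # avoid hanging if nothing arrives
addr = receiver.getsockname()

sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"datagram 1", addr)   # fire and forget: no delivery guarantee

# On the loopback interface this datagram will, in practice, arrive.
data, _ = receiver.recvfrom(1024)
print(data)                          # b'datagram 1'

sender.close()
receiver.close()
```

TCP, by contrast, builds acknowledgment, retransmission, and ordering on top of this same unreliable service, entirely within the end systems.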
The combination of a standardized interface to the network and the location of intelligence at the edges means that developers can write and field new devices or new software without any coordination with network operators or users or any changes in the underlying transport network. Nor do new applications or changes need to be deployed all at once. Even though many developers of network applications do not understand or appreciate the technical and management challenges confronting those who build and operate Internet networks, they are still able to succeed in developing all kinds of popular new applications.

13One counterexample is denial-of-service attacks at the network level (i.e., "storms" of IP packets sent to a network or router), which can be argued to deserve remedy within the network itself.
14One can think of the result of combining the end-to-end argument and the hourglass architecture in another way. By providing an unreliable datagram delivery service, in which the network attempts to deliver a given datagram (piece of information) but does not guarantee such delivery, the Internet makes minimal assumptions about the characteristics of the underlying transmission networks and passes a minimal set of functions up to higher levels of the protocol. This design allows complex networks of connectivity to be overlaid across a highly diverse collection of communications elements.

Thus, not only do we see PCs and larger computer systems attached to the Internet, we now see televisions (e.g., WebTV), telephones, personal digital assistants (PDAs), and other devices being attached as well; the future is likely to see many other devices emerge (e.g., music appliances directly connected to the Internet). Of course, not all such applications represent improvements (and many will fade away over time), but the Internet supports rapid feedback and the evolution of new and improved features and function, both of which are associated with the Internet's culture of cumulative knowledge building.

The corollary to ease of innovation at the Internet's edges is that innovation at the center of the network is difficult and can be very slow, because building new features into the network requires the coordinated actions of many providers and users. The problem is exemplified by the difficulties of deploying new network-level features such as enhanced quality of service or IP multicast (both discussed further in Chapter 2). This is not to say that an increasing number of sophisticated things are not being implemented inside the Internet to optimize the delivery of various services. For example, algorithms for filtering and load balancing are found in some routers because they provide benefits in terms of service quality for certain traffic (perhaps at the cost of raw switching speed). Web caching entails adding devices throughout the network to improve network performance.
Such caching is achieved by moderating or redirecting specific types of network traffic in ways that can avoid congestion by making use of temporary local copies of frequently accessed information. Also, businesses that are building applications that require a great deal of network capacity or low-latency delivery of information, requirements not met very well on today's Internet, are coping by building their own application-specific delivery networks, which employ devices located throughout the edges of the network as a work-around. Installation requires cooperation (and may require colocation) with particular ISPs. These technologies are controversial from an architectural and robustness standpoint, as they disturb the end-to-end model. Robustness implications are discussed in Chapter 2 and architectural implications are discussed in Chapter 3.
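The caching idea sketched above, keeping temporary local copies of frequently accessed information so repeated requests need not travel upstream, can be illustrated with a minimal time-to-live cache. This is a toy sketch: the TTLCache class and the stand-in fetch function are hypothetical, not part of any real caching product.

```python
import time
from typing import Callable, Dict, Tuple

class TTLCache:
    """A toy Web-style cache: keep a local copy of fetched content and
    reuse it until it expires, avoiding repeated trips upstream."""

    def __init__(self, fetch: Callable[[str], bytes], ttl: float) -> None:
        self._fetch = fetch              # hypothetical upstream fetcher
        self._ttl = ttl
        self._store: Dict[str, Tuple[float, bytes]] = {}
        self.misses = 0

    def get(self, url: str) -> bytes:
        now = time.monotonic()
        entry = self._store.get(url)
        if entry is not None and now - entry[0] < self._ttl:
            return entry[1]              # fresh local copy: no upstream traffic
        self.misses += 1
        body = self._fetch(url)          # go upstream only on a miss or expiry
        self._store[url] = (now, body)
        return body

# Usage: the "origin" here is a local stand-in for a real fetch over the network.
cache = TTLCache(fetch=lambda url: f"body of {url}".encode(), ttl=60.0)
cache.get("http://example.com/")   # miss: fetched upstream
cache.get("http://example.com/")   # hit: served from the local copy
print(cache.misses)                # 1
```

Real Web caches add far more (validation, cache-control headers, eviction), but the congestion-avoiding effect, one upstream fetch serving many nearby requests, is exactly this.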

The Robustness Principle

The robustness principle15 is arguably the single most enabling characteristic of the Internet.16 It was initially adopted for the ARPANET in order to accommodate the unpredictably changing topologies anticipated for defense applications (i.e., dynamic network reconfiguration) and then for the Internet in order to accommodate interconnecting a diverse set of networks built by multiple implementors out of components using multiple implementations (i.e., heterogeneity of devices and technologies). In accommodating both requirements, the Internet accommodates decentralized management, growth, and, accordingly, evolution.

In practice, this robustness principle has taken several forms. One way of viewing robustness is that the rule for interpreting standards (and other specifications) that are not quite as precise as they might be in a perfect world should be for the sender to take the narrowest interpretation (i.e., the intersection of all possible interpretations) and for the receiver to be prepared for the broadest possible interpretation (i.e., the union of all possible interpretations).17 Robustness also entails conservative and careful design at the transport level that is able to deal with a

15The robustness being discussed here should not be confused with the same term used elsewhere in this report, especially Chapter 2, where it denotes lack of vulnerability to failures or attack.
16This principle was written down by Jon Postel in the 1979 Internet protocol specification: "In general, an implementation must be conservative in its sending behavior, and liberal in its receiving behavior" (Jon Postel. August 1979. Internet Experiment Note (IEN) 111 (the IP specification), p. 22). The same text appears in September 1981 in RFC 791, p. 23, and a variant appears in the TCP specification, under the heading "robustness principle": "TCP implementations should follow a general principle of robustness: be conservative in what you do, be liberal in what you accept from others" (Information Sciences Institute, University of Southern California. 1980. DOD Standard Transmission Protocol, RFC 761, January. Available online.). A conservative approach in Internet protocol design appeared earlier in Internet Engineering Note 12; however, that paper falls short of enunciating a "robustness principle." See Lawrence L. Garlick, Raphael Rom, and Jonathan B. Postel. 1977. Issues in Reliable Host-to-Host Protocols. Internet Engineering Note (IEN) 12. Augmentation Research Center, Stanford Research Institute, Menlo Park, Calif., June 8. Available online.
17It should also be noted that the robustness principle can be (and has been) cited as a justification for non-interoperability. For example, a vendor can release protocol elements that, by a narrow (or even reasonable) reading of the standards, are not compliant with the standard, and then claim that other implementations that do not interoperate with them are inadequate because they are not robust enough. The most literal interpretation of the robustness principle could also be taken as a requirement that each application or system must protect itself against any behavior whatsoever by others, but the general assumption is that others will behave in acceptable ways.
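Postel's rule, conservative in what you send and liberal in what you accept, is easy to illustrate with a toy parser and emitter for a "Name: value" header line. The format and function names here are hypothetical, chosen only to make the asymmetry visible.

```python
# A toy illustration of the robustness principle for a "Name: value" header.
# The receiver tolerates variation (case, stray whitespace) in its input;
# the sender emits only a single canonical form.

def receive_header(line: str) -> tuple[str, str]:
    """Liberal receiver: accept messy but recognizable input."""
    name, _, value = line.partition(":")
    return name.strip().lower(), value.strip()

def send_header(name: str, value: str) -> str:
    """Conservative sender: emit exactly one canonical form."""
    return f"{name.strip().lower()}: {value.strip()}\r\n"

# The receiver copes with sloppy peers...
assert receive_header("  Content-LENGTH :  42 ") == ("content-length", "42")
# ...while the sender never produces such variations itself.
assert send_header("Content-LENGTH", " 42 ") == "content-length: 42\r\n"
```

Footnote 17's caveat applies even to a sketch like this: a receiver this liberal can mask a peer's non-compliance, which is one way the principle gets cited to excuse it.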

... about monthly charges. The level of charge may vary according to provider promises of service quality. Other factors affecting price include the ISP's dependence on advertising as a source of revenue, the bundling of sales of equipment and Internet service (e.g., "free PC" deals), and the bundling of Internet access with content and special services. Business clients of ISPs follow a variety of pricing models, frequently based on usage. Pricing for interconnection within the Internet itself, that is, the charges that ISPs pay to other ISPs, also follows a variety of pricing models, ranging from flat (traffic-insensitive) rates for interconnection to barter arrangements with peers; these models are in flux. Notably, unlike the PSTN, very few pricing models are based on either distance or the exact volume of traffic carried. Interconnection prices are often privately negotiated rather than based on fixed rates. Internet interconnection contrasts with the PSTN, where terms for interconnection and financial settlement are well established and the subject of regulation.

Low barriers to entry for innovation. Consistent with the "Criteria for an Open Data Network,"21 the Internet is designed to be open from the standpoint of users, service providers, and network providers, and as a result it has been open to change in the associated industry base as well as in the technologies they supply and use. A wide range of applications and services, some leveraging the commonality of IP and others additionally leveraging standards layered on top of IP, most notably the Web interface, have flourished. As that industry base grows and matures, questions arise about whether innovation will face other kinds of entry barriers.

Tippy markets. The Internet is the epitome of a network market. By definition, participation in a large network market is more rewarding; the larger the network, the larger the number of users.
A small initial advantage in market share, often associated with being first, can snowball into a large advantage. These snowball effects are amplified on the Internet by the ease and negligible cost of distributing software through the network, which can promote much faster change than is typical for a product with physical distribution, on a scale of months or a couple of years rather than several years or a decade. The desire to tip the market, seen in many competitive markets, is epitomized by the struggle to build a leadership position in streaming audio and video. This tippiness of the Internet marketplace suggests a pattern of highly concentrated markets and market leaders who greatly outdistance their competitors, an outcome that

21Computer Science and Telecommunications Board (CSTB), National Research Council. 1994. Realizing the Information Future: The Internet and Beyond. Washington, D.C.: National Academy Press.

INTRODUCTION AND CONTEXT 43

would be at odds with the historic expectation of heterogeneity in technology implementation. Indications are, however, that these positions of leadership are unstable. At least sometimes, a sufficient investment of resources or other circumstances can allow newcomers to trump the incumbents and tip the market in another direction.22 In recent years, for example, Microsoft acquired much of the market for Web browsers, a market that was once dominated by Netscape Communications. Whether these patterns prove enduring and sustainable remains to be seen.

The Internet has been well served by an insistence that there is often more than one "right" answer to a question. Its designers argue that no single technology solves all the problems well, or even well enough, so no single technology should be considered the sole solution. Thus, when there is a common standard, there are frequently multiple independent implementations of it. No single vendor has cornered the market in good technology, and when one has gotten close to a monopolistic position, the traditional Internet community has been critical of the situation because of its potential for inhibiting continued innovation. It is not that developing proprietary extensions or protocols to implement optional features (which can generally coexist with standard protocols) is a problem per se. Rather, a monopolistic position would preclude the significant benefits of having the essential elements of the infrastructure (hardware or software) exist in multiple, independent implementations, available from multiple vendors. Competition has served the Internet well.

Internet Organizations

Several private, nonprofit organizations play critical roles with respect to the Internet. These include the Internet's principal standards-setting bodies: the Internet Engineering Task Force (IETF), the Internet Architecture Board (IAB), and the Internet Engineering Steering Group (IESG).
Along with the ever-growing number of other organizations involved in setting Internet standards, they are grappling with a growing number and diversity of stakeholders and with the ever-larger commercial stakes associated with the outcomes of their work. Another class of organizations deals with operational issues: for example, the North American Network Operators Group (NANOG) provides a forum for troubleshooting and exchanging technical and operational information.

22For an overview of network economics, see Hal Varian and Carl Shapiro. 1998. Information Rules: A Strategic Guide to the Network Economy. Boston, Mass.: Harvard Business School Press.

Most visible recently has been the Internet Corporation for Assigned Names and Numbers (ICANN), a newly formed body that has assumed overall responsibility for managing the Internet's addresses and names. Its work has received considerable attention and been the subject of vigorous debate as address and name management become more contentious and controversial activities. And, while ICANN was not established to take on the broader mission of Internet governance, it has not been able to avoid some international governance questions in the course of its work, leading observers to see its potential to play a larger role in the ambiguous arena of Internet governance. Regional address registries have responsibility for managing the pool of addresses delegated to each region of the world.

KEY TRENDS IN INTERNET DEVELOPMENT

The Internet has already gone through several iterations. New routing protocols have been deployed in bounded administrative domains, for example, and replaced with other protocols as technology has matured. IP addresses at one time had to be given out in blocks of fixed size, whereas today they are assigned in blocks defined by demonstrated needs. What has worked over a period of some 25 years has been continual, generally gradual change, characterized in most cases by continued interoperation between newer and older hardware and software. Sudden revolutionary changes (for instance, the sudden phasing out of one protocol in favor of another) have not worked as well.23 For this reason, it is unrealistic to believe that major infrastructure components, whether hardware or software, can be changed without a significant period of coexistence and interoperation. The history of the Internet argues for an expectation of change from time to time and for design choices that at each step include the ability to transition to the next step.
Advancing the Internet is about improvements in three areas: (1) the nature and business of supplying network facilities; (2) Internet connectivity; and (3) applications, content, and services. One of the things that is special about the Internet is that its architecture allows an Internet business to separate these three areas or to combine them in different ways.

23One notable "flag day" transition occurred in the ARPANET on January 1, 1983, when all hosts had to simultaneously convert from NCP to TCP. The transition, which came at a time when the Internet was far smaller, nonetheless required careful advance planning (Barry M. Leiner et al. 1998. A Brief History of the Internet. Version 3.1, February 20. Available online from ~).

Growth in Backbone Capacity

The heart of the Internet grows through the interactions of ISPs and major equipment manufacturers (principally router vendors and communications circuit suppliers). Increased capacity (speed, performance, and the accommodation of more users and more connections) is the watchword. In terms of fundamental communications, ever-increasing exploitation of optical fiber facilities has been the trend. Growth in Internet traffic (by a factor of roughly 2 every year) has been outstripping growth of computing speed (by the Moore's law factor of 2 every 18 months).24 To maintain this trend, equipment manufacturers are constantly challenged to improve the performance of communications equipment nearly twice as fast as the PC and PC-component manufacturers improve PCs. For staying on this curve, the equipment industry is highly dependent on the help of innovations from both industry- and government-funded research (the latter comes chiefly from the Defense Advanced Research Projects Agency (DARPA) and the National Science Foundation (NSF)).

It is generally believed that, given current technology and some sustained research support, equipment manufacturers should be able to continue to improve performance in the time frames required to keep on this performance trajectory. In 2005, if current trends persist, the fastest link will be roughly 2 terabits per second (Tbps), requiring routers that can move data at 100 Tbps rates internally, and 5 years later, links will be approaching the 100 Tbps level. At that point, routers that can handle petabits (1,000 terabits) per second will be required, and the requirement for extremely fast routers becomes a major challenge. Some predict that all-optical networking (unlike the networks today, which combine optical fiber with routers based on electronics) will provide a solution.
However, the channel switching speeds of today's optical technologies are far slower than the speeds of today's routers, suggesting that optical switching's importance may come from automating and speeding up the management of aggregated traffic flows.25

24See Lawrence G. Roberts. 2000. "Beyond Moore's Law: Internet Growth Trends." Computer 33(1):117-120.

25One thing that appears crucial is further development of optical multiplexing. One particular technique, known as wavelength division multiplexing (WDM), allows one to pack a great deal of data into a single fiber by using multiple lasers operating at different colors in parallel (it is much harder to use one laser to signal at very high bandwidths). One approach will be the use of link management techniques, whereby routers aggregate traffic to different destinations, so that traffic is placed onto a switched flow, bypassing intermediate routers. These large aggregates, not individual packets, would be switched optically. To switch a packet, the existing router technology works well. In this approach, different colors would be configured on a timescale of minutes to days to carry these aggregates from source to destination routers, bypassing intermediate routers and switches.
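The mismatch between the two doubling rates cited above can be checked with a little arithmetic. The three-year horizon below is an arbitrary illustration, not a figure from the report; only the doubling periods (12 months for traffic, 18 months for Moore's law) are taken from the text.

```python
# Growth factor after `years` for a quantity that doubles every
# `doubling_months` months.
def growth(doubling_months, years):
    return 2 ** (12 * years / doubling_months)

years = 3
traffic = growth(12, years)   # traffic doubles yearly
moore = growth(18, years)     # computing doubles every 18 months
print(f"Traffic grows {traffic:.0f}x in {years} years")    # 8x
print(f"Computing grows {moore:.0f}x in {years} years")    # 4x
# Per year, traffic grows 2x while computing grows 2**(12/18) ~ 1.59x,
# which is the gap communications-equipment makers must close.
```

The exponent ratio (12/18 vs. 12/12) is what drives the report's observation that network gear must improve roughly twice as fast as PCs to keep pace.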

Growth and Diversification of the ISP Market

The several thousand Internet service providers differ widely in size, type of service they provide, and type of interconnection they have with other service providers. As the market has grown in overall size, it has evolved to comprise both very large players (the tier 1 providers that constitute the Internet's backbone and the large, consumer-oriented ISPs) and many smaller players that focus on particular segments of the market. Some serve particular markets (e.g., consumers or businesses) while others provide such specialized services as hosting Web servers for other companies. Peering, transit, and other interconnection arrangements, which have both technical and economic dimensions, have played a vital role in enabling the interlinking that defines the Internet, and several issues related to these arrangements and their evolution are covered in Chapter 3.

Upgrading the Local Access Infrastructure

Today most home users access the Internet through narrowband connections made by modems using the public switched voice network. This approach has led to fairly ubiquitous access services from multiple providers that offer very similar pricing and features. Such access achieves relatively low data rates compared to what the telephone company's copper loops can provide and does not provide the continuous connectivity that Internet protocols were designed to take advantage of. The approach required little investment in new access infrastructure and could make use of the existing voice infrastructure fairly straightforwardly. Now, however, dial-up access, with its low bandwidth and need to complete a telephone connection each time access is desired, is increasingly seen as limited, though it remains the least common denominator for residential service today.
At the same time, a range of new applications that require higher bandwidth and/or continuous connectivity are being developed. Broadband access enables services that, because they require high capacity and limited delay, cannot be provided via dial-up access to the Internet; it also makes many existing services much faster and more responsive. Broadband also enables multiple applications to be run at once, such as simultaneous telephony and Web browsing. Software or music downloads that require minutes or hours over a dial-up connection take as little as a few seconds via a broadband connection. Another benefit of broadband Internet connections is that, rather than requiring a phone call and connection to be set up, they can be online all the time. This has two significant implications. Routine monitoring tasks can easily occur continuously (e.g., notifying users that they have new mail), and network interactions can take place immediately (without waiting the minute or two required to establish a dial-up connection), reducing the overhead required to retrieve information or conduct a transaction. Looking up a telephone number online, for example, instead of in a phone book, is a viable option when one does not need to wait for an Internet connection to be established; similarly, other activities become possible with better connectivity. While these advantages are compelling, it remains easier at this early stage of deployment to posit likely benefits than to quantify with confidence actual consumer demand.

While the wireline and wireless telephone companies still dominate the provision of voice services and broadcast entertainment still uses radio signals delivered over the airwaves or via cable, delivery of voice, video, and other services over the Internet is emerging. As an increasing number of users have broadband Internet connections, it is reasonable to project that the use of various IP-based telephony services is likely to increase substantially and that video applications will probably grow as well. A number of services that compete with existing broadcasting and entertainment businesses are emerging and are likely to increase in use, including Internet delivery of music and Internet "radio" broadcasting. Another emerging trend is the use of distributed, peer-to-peer applications (Napster and its offspring, particularly those that operate without any centralized server facility) to exchange content among Internet users, capabilities that harken back to the early days of the Internet, which was designed to support peer-to-peer connectivity. These developments will have several implications, including a change in the value consumers place on Internet access (especially broadband service) and potential stresses on the Internet itself.
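The minutes-versus-seconds contrast for downloads follows directly from link arithmetic. The file size and line speeds below are illustrative assumptions (a roughly 4 MB song, a 56 kbps modem, and a 5 Mbps broadband line), not figures from the report.

```python
# Time to move `size_bytes` over a link of `link_bps` bits per second,
# ignoring protocol overhead and congestion.
def transfer_seconds(size_bytes, link_bps):
    return size_bytes * 8 / link_bps

song = 4 * 1024 * 1024                       # ~4 MB in bytes (assumed)
dialup = transfer_seconds(song, 56_000)      # 56 kbps modem
broadband = transfer_seconds(song, 5_000_000)  # 5 Mbps line (assumed)
print(f"Dial-up:   {dialup / 60:.1f} minutes")
print(f"Broadband: {broadband:.1f} seconds")
```

Under these assumptions the dial-up transfer takes on the order of ten minutes and the broadband transfer a handful of seconds, consistent with the qualitative claim in the text.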
As an increasing number of people become familiar with broadband and its implications, the rate and patterns of broadband deployment, as well as the types of services being offered, have become the subject of public debate, with congressional and multilevel regulatory scrutiny in the United States and political activity by organizations representing consumer and industry perspectives.

Deployment is a nontrivial undertaking; it will require billions of dollars in investment to deploy broadband pervasively. Broadband technologies are being deployed at varying rates by a number of companies. Cable companies are deploying two-way hybrid fiber/coax infrastructures capable of providing high-speed Internet services. Both incumbent and competitive local exchange carriers are also investing in broadband access, primarily through a family of DSL technologies, which leverage existing copper wiring to provide high-speed Internet services. As with cable, the solution of leveraging copper plant is generally considered an interim step on the path to providing very high bandwidth connections to the home using fiber-optic cable, although the time line for this is both uncertain and likely to vary according to local circumstances.

Because two major facilities for broadband (cable and the incumbent local exchange carrier's copper loops) are owned by incumbent players in regulated industries, and the third option today (wireless) depends in part on spectrum allocation, deployment issues are tightly coupled to both the interests of incumbents and the evolution of the regulatory regimes that apply to these players. Thus, for example, cable's moves into Internet access and telephony have led to increasing political activity and government scrutiny of the terms and conditions of its Internet service offerings and associated competitive conduct.

The 1996 Telecommunications Act26 sought to promote competition and consumer choice as key enablers of high-quality, affordable broadband local access to the Internet. Efforts to enhance consumer choice fall into two general classes: facilities-based competition and unbundling of network elements through regulation. Facilities-based competition is competition among multiple access providers, each of which operates its own infrastructure. In such a regime, competition would exist, for example, between the copper pair infrastructure owned by the local exchange carriers, the hybrid fiber/coax infrastructure being deployed by cable operators, and wireless services. The premise of facilities-based competition is that a multiplicity of facilities-based providers and their heterogeneous business models will keep any one provider from dominating and creating a bottleneck for innovation and control of content on the Internet. Another approach to ensuring consumer choice included in the 1996 act is to use regulation to unbundle the elements of the incumbent carrier's networks, thereby enabling the entry of competitors.
For example, incumbent local exchange carriers are required to resell the copper lines that run to subscribers (the so-called "local loops") to other telecommunications providers, allowing these entrants to offer competitive voice service or other services such as DSL over these lines. More recently, there have been calls for various forms of unbundling of the cable infrastructure, an idea generally referred to as "open access" (Box 1.1). The architecture of the Internet fundamentally supports unbundling; the issues that arise with respect to unbundling include what particular approaches work technically in the context of particular access technologies, as well as a complex set of economic and policy issues.

26Telecommunications Act of 1996, Public Law No. 104-104, 110 Stat. 56 (1996).

Growing Role for Wireless Services

At the same time as cable and DSL technologies are starting to be deployed, there have been considerable interest and investment in building competitive Internet access via high-speed wireless networks. One can also expect the development and deployment of wireless services to provide mobile access. Deployment has benefited from FCC efforts to open up radio-frequency spectrum for such services, and it remains contingent on the ability to make spectrum available and to resolve issues related to the siting of transmission towers in local communities. Additionally, new satellite ventures are planning to deploy broadband communications delivered from space; these will be a boon to sparsely populated areas where any sort of terrestrial infrastructure deployment is problematic.

In addition to providing access in competition with wired technologies or access where terrestrial infrastructure is not cost-effective, wireless Internet can be expected to play a key role for a wide range of mobile applications. There are many instances, such as when a user is in a car or in a public space, where being connected through a wire is simply not a practical option. IP connectivity in such situations could lead to all sorts of new applications (and businesses), some of which have not yet been thought of and which might turn out to be as popular as normal e-mail or Web surfing itself. The popularity of the i-mode phone service provided in Japan by DoCoMo shows the potential for rapid adoption of wireless data services elsewhere.

Voice and Data Services

Many industry analysts predict that the rapid growth of data networks, particularly the Internet, will result in voice traffic increasingly being carried using IP technology (voice over IP, or IP telephony). While the time frame for completing such a transition is unclear today, it is clear that many service providers, equipment manufacturers, and customers are moving in this direction. The public switched telephone network (PSTN) is itself evolving to a more data-centric architecture, and the landscape of equipment suppliers is also rapidly changing. As use of these services grows, they will have significant impacts on the traditional, regulated voice service providers and may provoke calls for IP telephony to be subject to regulation akin to that in place for circuit-switched voice services. Chapter 4 examines these issues, as well as the more general question of what happens when Internet-based services compete with other communications industries.
Rise in the Use of Single-Purpose Devices

Today the majority of devices connected to the Internet are general-purpose computers. Most users access the Internet through general-purpose computers that are used for a multitude of tasks, and the servers used by providers of content and services over the Internet are also generally based on general-purpose computing systems. However, single-purpose devices offer several advantages for both uses. First, a carefully designed, single-purpose device can often be made much less expensively, with prices more in line with those of other consumer electronics devices than those of general-purpose computers. Second, single-purpose devices are likely to have fewer failure modes and harmful interactions with other devices. Third, single-purpose devices lend themselves to simpler interfaces and greater ease of use.27

For those providing content and applications over the Internet, possible single-purpose devices include network-attached file servers and specialized audio or video servers. For end users, a single-purpose device running a standard protocol can interact with any service that supports that protocol (e.g., consumers wishing to listen to music could use simple devices that stream audio or simple stand-alone music players that download music).

Looking ahead, it is reasonable to project that networked systems will include a diverse set of embedded systems in homes and commercial settings, and computers used in other infrastructures, such as electric power, will also be networked. International Data Corporation, for example, has forecast that the number of devices connected to the Internet will more than double every year for the foreseeable future and that non-PC devices will account for nearly half of Internet devices shipped by 2002.28 Acknowledging the limitations of market research, it is nonetheless reasonable to plan for their widespread use.

Such a future can have several significant implications for the Internet infrastructure. Widespread use will, for example, increase the draw on the IP address space. The existence of large numbers of more specialized devices aimed at narrower applications also may put pressure on the Internet model of using a single, standard protocol (as illustrated by the introduction of the Wireless Application Protocol as a solution for the mobile, wireless space). Finally, because they could be used for passive monitoring, deployment of a large number of smaller networked devices also raises privacy concerns, including the question of informed consent.
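The "draw on the IP address space" can be made concrete with a back-of-the-envelope calculation. The starting device count below is an assumption chosen for illustration; only the 32-bit size of the IPv4 address space and a doubling rate in the spirit of the IDC forecast are taken as given.

```python
# IPv4 addresses are 32 bits, for about 4.3 billion possible addresses.
ipv4_space = 2 ** 32

devices = 100_000_000   # assumed starting point: 100 million devices
years = 0
while devices < ipv4_space:
    devices *= 2        # "more than double every year" (conservatively: 2x)
    years += 1
print(f"At a yearly doubling rate, 100 million devices outgrow the "
      f"IPv4 address space in about {years} years")
```

Even before exhaustion, address pressure bites earlier in practice, since addresses are allocated in blocks rather than one at a time; the calculation only bounds how little headroom exponential device growth leaves.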
FUTURE EVOLUTION AND SUCCESS

Reflecting its widespread deployment and adoption,29 substantial commercial investment,30 and broad societal awareness, the Internet has become a mainline piece of the communications infrastructure. Expansion into the foreseeable future appears inevitable, and new technologies and new applications that leverage these technologies and new opportunities will continue to emerge. The Internet's design principles and the values that underlie these principles have been critical to the spectacular success of the Internet. However, from today's vantage point of relative maturity, there are questions that need to be asked about the underlying structures that have brought us to this point (the looseness of the Internet's internal coordination mechanisms, of the process by which Internet standards are developed, and of the interconnection arrangements that tie it together) and questions about the Internet's scalability and reliability. Because of the prominence of the Internet, as well as its potential for disruptive effects on both business and society, there are pressures for government to act in these and many other areas related to the Internet. Thus far, telecommunications regulators have been reluctant to intervene. Government involvement with the operation of the Internet has largely been limited to places where it was already involved, such as transitioning the administration of the Domain Name System, or places where Internet and more traditional telecommunications issues overlap, such as within the PSTN local exchanges. On the other hand, there have been both regulatory attention and legislative activity aimed at consumer protection, such as protection of personal privacy and protection against junk e-mail or spam, as well as measures aimed at content and conduct over the Internet (e.g., the Communications Decency Act and gambling) and measures aimed at enhancing Internet-based applications, such as legislation governing digital signatures.

Underlying this hands-off approach has been the belief that the Internet will continue to expand, mature, and evolve and that intervention could threaten that success. Indeed, indications are that much of this evolution can be expected to occur naturally, without recourse to remedial action by government or other players. However, the technical and policy challenges alluded to above raise questions about whether existing mechanisms are up to the task of supporting increasing demands and pressures and about what the role of government should and should not be. In addressing these questions, this report seeks to distinguish the issues that will probably be self-resolving from those whose resolution will require greater attention and/or new approaches.

27Conversely, a proliferation of single-purpose devices, especially in the absence of standardized interfaces, could complicate the user's experience.

28International Data Corporation (IDC). 1999. Death of the PC-Centric Era (IDC Executive Insights). Boston, Mass.: IDC. Available online at ~.

29Nielsen Media Research and CommerceNet reported in June 1999 that the number of U.S. and Canadian Internet users aged 16 and older was 92 million, an increase from 79 million in the preceding year's study (Associated Press, June 18, 1999).

30One measure comes from a Cisco-sponsored study conducted at the University of Texas at Austin (Anitesh Barua et al. 1999. Measuring the Internet Economy: An Exploratory Study, Technical report. Austin, Tex.: Center for Research in Electronic Commerce, Graduate School of Business, University of Texas. Available online at ~), which found that the revenue of companies in the Internet infrastructure business (ISPs, including backbone providers, network hardware and software companies, manufacturers of computers and servers, suppliers of security products, and manufacturers of optical fibers and associated hardware) totaled nearly $115 billion in 1998.