
The Internet's Coming of Age (2001)

Chapter: 3 Keeping the Internet the Internet: Interconnection, Openness, and Transparency

Suggested Citation: "3 Keeping the Internet the Internet: Interconnection, Openness, and Transparency." National Research Council. 2001. The Internet's Coming of Age. Washington, DC: The National Academies Press. doi: 10.17226/9823.


3 Keeping the Internet the Internet: Interconnection, Openness, and Transparency

What is referred to as "the Internet" is actually a set of independent networks interlinked to provide the appearance of a single, uniform network. Interlinking these independent networks requires interconnection rules, open interfaces, and mechanisms for common naming and addressing. (The issues associated with interlinking the Internet with the Public Switched Telephone Network are considered separately in Chapter 4.) The architecture of the Internet is also designed to be neutral with respect to applications and content, a property referred to here as transparency. This chapter examines the current and expected future state of these interconnections and interfaces.

INTERCONNECTION: MAINTAINING END-TO-END SERVICE THROUGH MULTIPLE PROVIDERS

The Internet is designed to permit any end user ready access to any and all other connected devices and users. In the Internet, this design translates into a minimum requirement that there be a public address space to label all of the devices attached to all of the constituent networks and that data packets originating at devices located at each point throughout the networks can be transmitted to a device located at any other point. Indeed, as viewed by the Internet's technical community in a document that articulates the basic architectural principles of the Internet,1 the basic goal of the Internet is connectivity. Internet users expect that their Internet service provider will make the arrangements necessary for them to access any desired user or service. And those providing services or content over the Internet expect that their Internet service providers will similarly allow any customer to reach them and allow them to reach any potential customer. (Subject, of course, to whatever controls are imposed at the behest of the subscriber for security purposes.) To support these customer expectations, an Internet service provider must have access to the rest of the Internet. Because these independent networks are organized and administered separately, they have to enter into interconnection agreements with one or more other Internet service providers. The number and type of arrangements are determined by many factors, including the scope and scale of the provider and the value it places on access for its customers. Without suitable interconnection, an Internet service provider cannot claim to be such a provider; being part of the Internet is understood to mean having access to the full global Internet.

In 1995, interconnection relied on public network access points where multiple providers could exchange traffic.2 Today, there is a much larger set of players and a much greater reliance on private interconnects, that is, direct point-to-point links between major network providers. Indeed, there are multiple arrangements for interconnecting Internet service providers, encompassing both public and private (bilateral) mechanisms, connections between commercial networks and public network facilities, and even arrangements for connecting networks defined by ownership or policy as "national" to the international Internet complex. Some of these international connections are constrained by concerns raised by national governments about specific kinds of content being carried over the Internet.

Connections among Internet service providers are driven primarily by economics (in essence, who may have access to whom, with what quality of access, and at what price), but all kinds of considerations are translated into policies, frequently privately negotiated, that are implemented in the approaches to interconnection and routing. A significant feature of today's competitive Internet service marketplace is that direct competitors must reach interconnection agreements with each other in order to provide the overall Internet service that their customers desire. These business agreements cover the technical form of interconnection, the means and methods for compensation for interconnection based on the services provided, the grades and levels of service to be provided, and the processing and support of higher level protocols. Interconnection also requires that parties to an agreement establish safeguards, chiefly in the form of rules and procedures, to ensure that one provider's network is not adversely affected by hostile behavior of customers of the other provider.

While, as evidenced by the Internet's continued growth as an interconnected network of networks, the existing interconnection mechanisms have proven adequate thus far, concerns have been expressed about interconnection. Interprovider, public-private, and international connections all raise questions of public policy, or Internet governance. This section focuses on interprovider connections because it is these connections that drive the shape and structure of the Internet.

Structure of the Internet Service Provider Industry

There are several thousand Internet service providers in the United States.3 These providers cover a range of sizes, types of services they provide, and types of interconnections they have with other service providers. The Internet service provider business has grown substantially, with entry by many new players, following the phasing out in the mid-1990s of the government-supported NSFNet backbone. Changes in the nature of these players are as significant as changes in the number. As the mix has evolved, so have business strategies. One sees ISPs chasing particular segments of the market (e.g., they specialize in consumers or businesses or they run Web server farms), trends toward consolidation through mergers and acquisitions, and moves to vertically integrate a full range of services, from Internet access to entertainment, news, and e-commerce. The interlinked networks that are the Internet form a complex web with many layers and levels; the discussion that follows should not be taken to suggest simplicity.4

1. B. Carpenter, ed. 1997. Architectural Principles of the Internet. RFC 1958. Network Working Group, Internet Engineering Task Force, June.
2. Private interconnections existed then as well, but since everyone was also connected via the government-funded NSFNet backbone, they were viewed as backdoor connections to handle instances of high traffic volume.
3. One source of information on Internet service providers is Boardwatch magazine's Directory of Internet Service Providers (Golden, Colo.: Penton Media, June 1999; available online from <http://boardwatch.internet.com/isp/summer99/introduction.html>), which lists 5078 ISPs in North America, a figure that covers a wide range of sizes and business models.
4. See, for example, the results of Bell Labs' Internet Mapping Project, which provides a visualization of data gathered in mid-1999 indicating the complexity of the Internet. A number of maps are available online at <http://www.cs.bell-labs.com/who/ches/map/gallery/index.html>.

A straightforward and useful way to categorize ISPs is in terms of the interconnection arrangements they have in place with other providers. The backbone service providers, which include commercial companies as well as several government-sponsored networks like DOE's ESNET, use trunk capacities that are measured in gigabits, or billions of bits, per second. Roughly a dozen of the ISP companies provide the backbone services that carry a majority of Internet traffic. These providers, termed "tier 1," are (recursively) defined as those providers that have full peering with at least the other tier 1 backbone providers. Tier 1 backbones by definition must keep track of global routing information that allows them to route data to all possible destinations on the Internet, that is, which packets go to which peers. They also must ensure that their own routing information is distributed such that data from anywhere else in the Internet will properly be routed back to their networks.

Tier 1 status is a coveted position for any ISP, primarily because there are so few of them and because they enjoy low-cost interconnection agreements with other networks. They do not pay for exchanging traffic with other tier 1 providers; the peering relationship is accompanied by an expectation that traffic flows, and any costs associated with accepting the other provider's traffic, between tier 1 networks are symmetrical. Tier 1 status also means, by definition, that an ISP does not have to pay for transit service.

Much of the Internet's backbone capacity is concentrated in the hands of a small number of tier 1 providers, and there is some question as to whether it is likely to become even more concentrated, in part through mergers and acquisitions. Concerns about market share in this segment have already emerged in the context of the 1998 merger between MCI and WorldCom, at that time the largest and second largest Internet backbone providers. In that instance, European Union regulators expressed concerns about the dominant market share that would have resulted from such a combination. In the end, to get approval for the merger, some of MCI's Internet infrastructure as well as MCI's residential and business customer base was sold off to Cable & Wireless and the merger went forward.5

Some of the advantage held by the very large players lies in their ability, owing to their large, global networks, to provide customers willing to pay for it an assured level and quality of service. These very large companies provide customers with solutions intended to allow those customers, in turn, to connect with higher levels of performance to other users in the same network, using such technologies as virtual private networks, and they also offer widely dispersed customers the convenience of one-stop shopping. Such large players also allow customers to interconnect to the public Internet but generally without making such service guarantees. Part of their dominant position also stems from their tier 1 status, which assures their customers (including tier 2 and tier 3 ISPs) of their ability to provide a high quality of access to the public Internet. In addition, tier 1 providers, by determining how and with whom they interconnect, affect the position of would-be competitors.

Below tier 1 sit a number of so-called tier 2 and tier 3 service providers, which connect corporate and individual clients (which, in turn, connect users) to the Internet backbone and offer them varying types of service according to the needs of the target marketplaces. This group spans a wide range of sizes and types of providers, including both a small set of very large providers aimed at individual/household customers (e.g., America Online) and a large number of smaller providers. These include providers of national or regional scale as well as many small providers offering dial-up service in only a limited set of area codes.6 A recent trend has been the emergence of so-called free ISPs, which provide residential Internet service at no charge, typically in exchange for a demographic profile of the customer and an agreement by the customer to view advertising material delivered along with the Internet service. This class also includes the networks operated by large organizations, including those of large corporations, educational institutions, and some parts of government. These ISPs cannot generally rely on peering alone and must enter into transit agreements and pay for delivery of at least some of their traffic. Some of these providers have not invested significantly in building their own facilities; instead they act as resellers of both access facilities (e.g., dial-up modem banks) and connectivity to the Internet backbone.

While industry analysts have long predicted increased consolidation and the demise of the smaller providers, recent trends indicate that the business remains open to a large number of players.7 However, optimism here is tempered by two considerations. First, many of the very small players are only active in small markets or geographical regions. Second, subscriber data show that a single player, America Online, with more than 20 million subscribers, has a significant share of the consumer market.8

Another area of interest is the emerging broadband market. The recent flap over open access illustrates the concerns that some have about the market share and the behavior of the providers of the communications links themselves (i.e., the facilities' owners), the Internet service providers, and the content providers, with which both facilities and service providers may have business arrangements.

Another recent trend has been the establishment of a new form of ISP, the hosting provider. This type of ISP operates both single-customer (dedicated) and shared-application servers, typically providing Web services on behalf of companies who would rather outsource the management of machine rooms and Internet connectivity. They offer customers a certain level of service (as seen by those throughout the Internet that make use of the customer's service) by arranging for (purchasing) transit services with a sufficient set of backbone connections.

Interconnection Mechanisms and Agreements

Internet interconnection arrangements in some ways echo those of telephony, since the public telephone network is also a collection of distinct networks linked together to provide a uniform service. However, telephony, unlike the Internet, leverages and reflects decades of state, federal, and international regulation and standards-setting that have shaped the terms and conditions of interconnection, including financial settlements. Internet interconnection, by comparison, is relatively new, and the technology, market structure, and arrangements are evolving.

Providing Internet-wide interconnectivity requires that the parties who own and operate the constituent networks reach agreement on how they will interconnect their networks. The discussion in this section looks at interconnection at three levels: the physical means of interconnection, the different patterns of traffic exchanged by providers (transit and peer), and the financial arrangements that underlie and support the physical means and different traffic patterns. The focus here is on teasing out the essential elements of interconnection, but this should not be taken to mean that interconnection is a simple matter. There are many players at many levels, and in each case there is more than one choice of physical interconnection, logical interconnection, and financial arrangement, and implementation of each choice depends on a complex set of negotiated agreements.

5. See, for example, Mike Mills. 1998. "Cable & Wireless, MCI Reach Deal; British Firm to Buy Entire Internet Assets." Washington Post, July 14, p. C1.
6. Matt Richtel. 1999. "Small Internet Providers Survive Among the Giants." New York Times, August 16, p. D1.
7. Boardwatch magazine's directory of Internet service providers in North America showed continual growth in the number of ISPs from February 1996 to July 1999. See Boardwatch magazine's Directory of Internet Service Providers. Golden, Colo.: Penton Media, June 1999. Available online from <http://boardwatch.internet.com/isp/summer99/introduction.html>.
8. Data from Telecommunications Report's online census, January 2000, reported in David Lake. 2000. "No Deposit, No Return: Hard Numbers on Free ISPs." The Industry Standard, March 27.

Physical Interconnection

Public exchanges are a way of making the interconnections between a number of providers more cost-effective. If n providers were individually to establish pairwise interconnections, they would require n(n-1)/2 direct circuits. A public exchange, where all n providers can connect at a common location, permits this to be done much more inexpensively, with n circuits and a single exchange point.
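
To make the arithmetic concrete, the following short Python sketch (purely illustrative and not from the report; the provider counts are arbitrary) compares the n(n-1)/2 circuits required for full pairwise interconnection with the n circuits required when every provider connects once to a common exchange point.

    def pairwise_circuits(n: int) -> int:
        # Direct circuits needed if every pair of providers interconnects: n(n-1)/2.
        return n * (n - 1) // 2

    def exchange_circuits(n: int) -> int:
        # Circuits needed if each provider instead connects once to a common exchange.
        return n

    for n in (4, 12, 50):
        print(f"{n:3d} providers: {pairwise_circuits(n):5d} pairwise circuits vs "
              f"{exchange_circuits(n):3d} circuits through one exchange")

For 12 providers, for example, full pairwise interconnection requires 66 circuits, while a single exchange point requires only 12 access circuits plus the exchange itself, which is the economy the text describes.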

A provider interconnects to an exchange point either physically, by installing its own equipment and circuit into a specific location (e.g., the MAE-West facility at NASA Ames Research Center or the Sprint NAP in Pennsauken, New Jersey), or logically, by using a leased network connection to an interconnect provider through an ATM or Ethernet network (e.g., the MAE-East ATM NAP in northern Virginia or the Ameritech ATM NAP in Chicago). These interconnect networks are usually operated by large access providers, who hope to derive considerable revenue by selling access lines to ISPs wishing to attach to each other through the access provider's facilities.9

In recent years, the public interconnects have acquired a relatively poor reputation for quality, in part owing to congested access lines from the exchanges to tier 1 providers, which results in packet loss, and in part owing to exchange point technology that cannot operate at speeds comparable to major backbone trunks. This trend is likely to accelerate as large backbones move to extremely high-speed wavelength division multiplexing (WDM)-based trunking, which exceeds the data rates that can be handled by today's exchange point technology.

Another option is to use a direct, point-to-point connection. One motivation for point-to-point connections is to bypass the bottleneck posed by a public exchange point when traffic volumes are large. Between large providers, connections are usually based on high-performance private interconnects, for example, point-to-point links at high speeds (DS-3 or higher). Direct connection can also provide for better management of traffic flows. The very large volume of traffic that would be associated with a major public access point can be disaggregated into smaller, more easily implemented connections (e.g., a provider manages 10 OC-3 connections to 10 different peers in different locations rather than a single OC-48 connection to a single exchange point that then connects to multiple providers). Another reason for entering into private connections is the desire to provide support for the particular service level agreements and quality-of-service provisions that two networks agree to in their peering or transit agreement.

Logical (Routing) Interconnection

When two or more ISPs establish an interconnection, they exchange route advertisements to specify which data packets are to be exchanged between them. Route advertisements describe the destination Internet addresses for which each provider chooses to accept packets from the other. These advertised routes are loaded, generally through automated mechanisms, into each other's routing tables and are used to determine where (including to which providers) packets should be routed based on their destination address.

There are two common options for how providers accept each other's traffic: transit and peer. In the transit model, the transit provider agrees to accept and deliver all traffic destined for any part of the Internet from another provider that is the transit customer. It is possible that two providers in a transit arrangement will exchange explicit routing information, but more typically the transit provider provides the transit customer with a default route to the transit network while the transit customer provides the transit provider with an explicit set of routes to the customer's network. The transit customer then simply delivers to the transit provider all packets destined for IP addresses outside its own network. Each transit provider establishes rules as to how another network will be served and at what cost. The transit provider will then distribute routing information from the transit customer to other backbones and network providers and will guarantee that full connectivity is provided. Address space for the customer provider may come from its transit provider or from its own independent address space should that provider have qualified for such allocation. (The issues surrounding address allocation and assignment are discussed in Chapter 2.)10

The preferred way for large providers today to interconnect is through peer arrangements. In contrast to transit arrangements, where one provider agrees to accept from the other traffic destined for any part of the Internet, in a peering relationship each provider only accepts traffic destined for the part of the Internet it provides. Peers exchange explicit routing information about all of their own addresses along with all of the addresses of their transit customers. Based on that routing information, each peer only receives traffic destined for itself and its transit clients. This exchange of routing information takes the form of automated exchanges among routers. Because the propagation of incorrect routing information can adversely affect network operations, each provider needs to validate the routing information that is exchanged.

For smaller providers the only option (if any) for physical interconnection is typically at a public exchange point. Location at a peering point implies that the peering relationship may still suffer from poor (or at least uncontrolled) service quality, since the exchange point or the connections to it may be congested; they may, however, be very cost-effective, especially for smaller providers. Once interconnectivity is established through a public exchange, providers may attempt to enter into a bilateral peering agreement with other providers located at the same interconnect. This can be a cost-effective means of bilateral peering, because connectivity to many other providers can be aggregated onto a single connection to the exchange.

9. If they provide direct connections to multiple provider networks, public exchanges can also turn out to be very efficient places to locate other services such as caches, DNS servers, and Web hosting services. And because public exchanges bring together connections to various providers, they are also useful places to conduct private bilateral connections through separate facilities.
10. Some providers or customers engage in the practice of multihoming, whereby they establish transit connections with multiple ISPs, generally to provide redundancy. This can introduce both technical and management issues, including how to allocate traffic among the multiple paths, that will not be discussed in detail here.
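
The routing distinction between transit and peering described above can be summarized in a small sketch. The Python below is illustrative only; the provider names, prefixes, and functions are invented for this example and are not drawn from the report or from any router implementation. A peer advertises only its own prefixes and those of its transit customers and accepts only traffic for those prefixes, while a transit provider typically hands its customer a default route and accepts traffic for the whole Internet.

    # Illustrative sketch of route advertisement under the peering and transit models.
    # Prefixes and provider names are hypothetical (RFC 5737 documentation addresses).

    DEFAULT_ROUTE = "0.0.0.0/0"   # "send me everything": the transit provider's offer

    class Provider:
        def __init__(self, name, own_prefixes, customer_prefixes=()):
            self.name = name
            self.own_prefixes = set(own_prefixes)
            self.customer_prefixes = set(customer_prefixes)

        def advertise_to_peer(self):
            # A peer advertises its own addresses plus those of its transit customers,
            # and nothing else (no default route, no routes learned from other peers).
            return self.own_prefixes | self.customer_prefixes

        def advertise_to_transit_customer(self):
            # A transit provider typically gives its customer a default route instead.
            return {DEFAULT_ROUTE}

        def accepts_as_peer(self, destination_prefix):
            # Under peering, traffic is accepted only if it terminates in this
            # provider's own network or in one of its transit customers' networks.
            return destination_prefix in self.advertise_to_peer()

    isp_a = Provider("ISP-A", {"198.51.100.0/24"}, {"203.0.113.0/24"})
    isp_b = Provider("ISP-B", {"192.0.2.0/24"})

    print(isp_a.advertise_to_peer())              # own plus customer prefixes only
    print(isp_b.advertise_to_transit_customer())  # {"0.0.0.0/0"}
    print(isp_a.accepts_as_peer("192.0.2.0/24"))  # False: not A's network or customer

In practice this logic is expressed in router configuration and BGP policy rather than application code, but the division of labor is the same: peers swap explicit customer routes, transit providers promise reachability to everything.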

Financial Arrangements for Interconnection

The issue of compensation for interconnection is a complex one. The essence of interconnection is the handing over of packets, according to the routing information that has been exchanged, to be routed onward toward their destination. Compensation reflects the costs associated with provisioning and operating sufficient network capacity between and within ISP networks. As a basic unit of interconnection, packets are somewhat akin to call minutes in voice telecommunications. However, architectural differences between the Internet and the PSTN make accounting in terms of packets much more complicated than call-minute-based accounting. Even if an infrastructure were to be put in place to count and charge on a packet-by-packet basis, the characteristics of packet routing would make it difficult to know what the cost associated with transmitting a given packet would be.11 As a result, interconnection schemes that are used in other contexts, such as the bilateral settlements employed in international telephony, are not used in the Internet, and interconnection has generally been established on the basis of more aggregated information about the traffic exchanged between providers. Some of these issues have to do with the cost of the interconnection, traffic imbalances (e.g., one provider originates more traffic than it terminates), and relative size (one provider offers greater access to users, services, and locations than the other).

Two financial models predominate; one is linked to the transit model and the other to the peer provider model discussed above. In the transit model, a transit customer buys transit service from a transit provider and pays for an access line to that larger provider's network. These arrangements take the form of bilateral agreements that specify compensation (if any) and the terms of interconnection, including service guarantees (level and quality of service) that each party makes. In the early days of the commercial Internet, providers did not pay for transit services. Before ISPs insisted on payment for transit, nonbackbone ISPs could become free riders in the so-called hot potato scenario, whereby a network would dump traffic for destinations beyond those advertised by a particular provider, thereby forcing the backbone ISP to carry traffic it had not agreed to carry. Private interconnects help prevent free riding, because it is more straightforward to identify this condition given a direct mapping between the link and a single provider.

In the peer model, two ISPs agree to a peer relationship based on a perception of comparable value. These agreements are generally barter agreements between peers that assume an exchange of a roughly comparable level of traffic or, on some other basis, that the costs and benefits of a peer relationship will be mutually beneficial. Peer barter arrangements echo what is called in telephony "sender keeps all" or "bill and keep": the network to which a customer connects keeps the fees paid by that customer for traffic carried on both its and another provider's network. Peering among the tier 1 providers is perhaps the most visible, but peering is also conducted among smaller players and at the regional or local level. Logical peering and financial peer relationships generally coincide, but there are exceptions. In some instances a customer will pay for a nontransit service that, logically though not financially, looks like peering. For example, ISP A may pay ISP B for access to B's customers but not B's peers.

The value attached to either transit or peer relationships is not based only on the number of bits exchanged, nor is it based solely on the origin, destination, or distance; it also reflects the value attached to particular content. Consider, for example, a large, consumer-focused ISP ("ISP A") and a major, popular content provider that is connected to the Internet through another provider ("ISP B"). ISP A will be judged by its customers based on the quality of service that it provides. To the extent that A's customers value content directly available from ISP B, customer judgment of ISP A will depend on the quality of the interconnect established between A and B. Thus ISP A may be willing to pay extra for higher capacity links to ISP B in order to ensure better performance for customers accessing the content provider. The complementary argument may also hold true: the content provider may well derive revenue from advertising that in turn depends on the return rate of viewers, so it (and, consequently, its ISP) will be willing to pay extra for interconnection relationships that ensure that customers of ISP A receive a good quality of service. (This is a major business consideration for the Internet hosting providers described below.) Accordingly, the performance that a consumer experiences with a particular piece of content depends in part on the capacity of the interconnects between the consumer's and content provider's computers, which in turn depends in part on the willingness and ability of the consumer and content provider (and their ISPs) to pay for those interconnections.

Chapter 2 discusses a number of issues surrounding quality of service (QOS) mechanisms, including the dim prospects for deployment of interprovider QOS; here we discuss some issues related to interconnection. If the stresses associated with the development and evolution of today's peering and transit agreements, which have generally only addressed much broader service level agreements, are any guide, establishing agreements that enable interprovider quality of service would prove difficult. Providing guarantees of better service to a subset of users means that resources are set aside that become unavailable for other users. This can only develop if higher grades of quality of service are sold at a premium price and if there are mechanisms to adequately compensate ISPs. If the necessary business agreements would take years to develop, then interprovider QOS would take years to deploy. Also, congested interconnections exacerbate quality-of-service differences between connections across a given provider's network, as compared with connections across multiple provider networks. They often result in companies connecting all their sites through a single provider's network rather than through a variety of providers and depending on this interprovider connectivity. They also result in large content-hosting providers almost always attaching to each of the major backbone networks (usually as a transit customer rather than a peer) to bypass interprovider interconnects and improve overall robustness of access for their customers.

11. Several of these characteristics are noted in a paper by Geoff Huston. 1999. Interconnection, Peering, and Settlements. Technical Report. Canberra, Australia: Telstra Corporation, Ltd., January. They include the following: packets may be dropped in the course of their transmission across the Internet; the paths that packets follow are not predetermined and can be manipulated by the end user; and complete routing information is not available at all points, so that the undeliverability of a packet may not be known until it approaches its destination.

Specific mechanisms for quality of service are starting to show up in parts of the Internet, but not as generally deployed, end-to-end services that any application can take advantage of to reach users Internet-wide. They are being offered only inside specific ISPs as product differentiators or being bundled with specific applications (such as Internet telephony). Thus there is pressure for alternatives to the baseline Internet. What content or service providers do today is enter into an agreement with a company that delivers specialized content services located throughout the Internet so as to improve the quality of the connection seen by end users. For example, RealAudio or Akamai will load streaming media content onto their servers. The Internet is being overlaid by these application-specific delivery networks. These overlay networks do not provide end-to-end connectivity between the original content or service provider and the end user and are open only to those providers who are willing and able to pay for specialized services.

Considerations Affecting Decisions to Enter into Peering Agreements

As noted above, peer status is advantageous to ISPs because it means that they will not have to pay other providers for transit and because their customers take it as evidence of a high service quality. In making financial arrangements to support its interconnection with the rest of the Internet, each provider is strongly motivated to maximize its revenue streams from its customers and minimize its expenses, including charges paid to other providers. There are, therefore, natural pressures for each provider to want to become a peer, for a peer to resist one of its customers asserting a peer relationship, and for it to resist one of its peers asserting that it should, in fact, be a customer rather than a peer.

The issue of who could attain peer status first received considerable publicity in 1997 as a result of announcements by UUNET and Sprint that they would no longer peer freely with any and all networks (along with an announcement by PSINet that it explicitly would agree to peer with smaller players), raising concerns in some circles about the implications for smaller networks. These concerns have lingered; reflecting the significant barrier to entry for an Internet provider that peering represents, smaller ISPs and new entrants have resorted to litigation to attempt to attain peering.

Part of the difficulty of assessing peering issues is the fact that the terms of peering agreements are private. There are some generally understood criteria used by backbone providers to determine whether or not another network qualifies for peering or is viewed as a potential customer for that ISP's transit services. These criteria generally include (1) having a national network with a dedicated transcontinental backbone of at least a certain speed, (2) exchanging a minimum amount of traffic with that ISP (usually with comparable amounts of traffic travelling in both directions), (3) providing around-the-clock operational support, and (4) agreeing to abide by certain rules and policies in how routing traffic is processed and/or filtered. However, these are only general considerations; as the standards and requirements are usually covered by nondisclosure agreements, the specific terms of peering agreements and their contents and components are not widely known. This private approach reflects in part the multidimensional, subjective determination of who is and who is not a peer.

ISPs are motivated to be conservative in this process. If an ISP publishes explicit rules, these are an invitation to lawsuits in the event that it either declines to peer with an entity that arguably meets the published rules or agrees to peer with an entity that arguably does not. By contrast, companies that really are peers, and thus clearly stand to benefit from peering, will usually realize this and conclude appropriate agreements if discussions can be carried out in a relatively private context. If an objective framework for deciding who are peers were to be developed, it would entail either the industry itself agreeing on one or a governmental (or, given the Internet's global reach, an intergovernmental) entity developing one, both of which are problematical and may lead to fewer peering arrangements, not more. Absent such a framework, peering will be based on the premise that two parties try to prove to each other that they are peers.

The economics of a proposed peering relationship is the dominant, but not the only, consideration that goes into a decision to peer or not. Fundamentally, agreements between tier 1 providers and smaller providers pose additional challenges because the asymmetrical traffic carried by the two classes raises questions about compensation for the costs associated with the connection and the termination of the smaller network's traffic. The absolute volume of traffic also matters: the expense associated with setting up dedicated links makes sense only when a significant amount of traffic is being exchanged. Few small providers are in a position to use the private peering approach, and large providers are unlikely to view private interconnects with small providers as attractive. The costs of establishing these links are also usually much less for a facilities-based provider; non-facilities-based providers are at a cost disadvantage in implementing such interconnects. There are also concerns on the part of tier 1 providers about the potential for free-riding in peering. For this reason, many large tier 1 backbone providers are reluctant to peer with smaller networks because doing so would open them up to this vulnerability. Interconnect technologies that provide more point-to-point control over traffic flows, such as ATM, offer some advantages in dealing with this problem but do not completely eliminate it.
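
The generally understood criteria listed earlier in this section lend themselves to a simple illustration. The Python sketch below is hypothetical: the thresholds, field names, and the qualifies_for_peering function are invented for this example and do not reproduce any actual provider's confidential policy. It only shows how a backbone might screen a peering applicant against such criteria before the subjective, privately negotiated evaluation begins.

    # Hypothetical screen against the four commonly cited peering criteria.
    # Thresholds are placeholders; real policies are private and more nuanced.

    def qualifies_for_peering(applicant: dict,
                              min_backbone_gbps: float = 1.0,
                              min_exchanged_mbps: float = 100.0,
                              max_traffic_ratio: float = 2.0) -> bool:
        # (1) national network with a fast dedicated transcontinental backbone
        has_national_backbone = (applicant["transcontinental_backbone"]
                                 and applicant["backbone_gbps"] >= min_backbone_gbps)
        # (2) minimum traffic, with roughly comparable flows in both directions
        enough_traffic = applicant["exchanged_mbps"] >= min_exchanged_mbps
        ratio = (max(applicant["inbound_mbps"], applicant["outbound_mbps"])
                 / max(min(applicant["inbound_mbps"], applicant["outbound_mbps"]), 1e-9))
        balanced = ratio <= max_traffic_ratio
        # (3) around-the-clock operational support; (4) agreed routing policies
        around_the_clock_noc = applicant["noc_24x7"]
        follows_routing_policy = applicant["accepts_routing_policy"]
        return all([has_national_backbone, enough_traffic, balanced,
                    around_the_clock_noc, follows_routing_policy])

    candidate = {
        "transcontinental_backbone": True, "backbone_gbps": 2.5,
        "exchanged_mbps": 400.0, "inbound_mbps": 250.0, "outbound_mbps": 150.0,
        "noc_24x7": True, "accepts_routing_policy": True,
    }
    print(qualifies_for_peering(candidate))  # True under these placeholder thresholds

As the text emphasizes, passing such a checklist is at best a starting point; the actual decision remains multidimensional and subjective.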

Peers need not be the same size, and there are cases where the major backbones will peer with smaller providers despite the asymmetry in traffic capacity. For instance, even if swapping traffic on a barter basis would not be supported based solely on the amount and balance of traffic exchanged, it may prove attractive to the backbone provider if the smaller provider has a network, albeit modest in size, that is national or worldwide in scope. It is possible in such circumstances to fashion a peering agreement in which the smaller provider interconnects with the backbone at enough places such that all the smaller provider's traffic stays within the smaller provider's own network most of the time, thus minimizing the cost to the larger peer.

Competitive positioning also enters into the equation. In general, there is an interest in retaining a competitive advantage over new entrants. The type of interconnections that a provider has in place is an important business consideration because it establishes the service quality that customers experience when transferring data across the provider's boundary. Indeed, many would-be ISP customers rely on the type of peering being used as an indicator of quality. Because private interconnects can provide a better service quality owing to their greater capacity, dedicated nature, and ability to more carefully manage the traffic across them, the existence of such interconnects is often seen by customers as a sign that a provider offers generally higher quality Internet service.13 Peer status is used as such an indicator at least in part because there are no agreed-on quantitative metrics and processes for evaluating the quality of Internet interconnections, particularly public metrics that detail the status of connectivity. The question of measuring quality is exacerbated by the dynamic growth in the volume of traffic throughout the Internet as well as by changes in the types of traffic being carried.

In addition, to alleviate concerns that a customer of today may become a competitor tomorrow, most tier 1 providers have rules in place disqualifying existing customers from becoming peers. Thus, a smaller provider just entering the market may find that it must by definition purchase transit in order to provide Internet access service, but that its status as a customer will prohibit it from attaining peer status in the future. These trends of course reinforce the position of the established larger players. Indeed, it has been asserted that in the past several years no ISP has been able to attain tier 1 status without doing so by purchasing an ISP that already had full peering status.14 As a result, in order to grow its traffic with a tier 1 provider so that it can attain peer status, a new entrant may have to discount its product substantially to entice new customers who will put up with a lower quality of service in the meantime. Alternatively, it could pay a substantial amount to a backbone provider in order to achieve high quality interconnection or purchase a provider that already has peering agreements.

Robustness considerations also come into play when providers consider entering into a peering relationship. Limits in the protocols used to exchange critical routing information as well as in the hardware and software used in the core of the Internet mean that full peering can lead to a breakdown of the routing system. Provider A may peer with provider B in such a way that provider B is only supposed to provide routing information for provider B's directly attached customers and those providers it is providing transit for. But a very simple error in a routing configuration may flood provider A's router with bad routing data such that a large amount of provider A's traffic would be inadvertently sent through provider B, which may not have the capacity to properly deliver that data, resulting in a service outage for provider A's users.15 Also, while routing information is exchanged and processed automatically, configuring the routing requires judgment, as does troubleshooting when problems arise. Problem resolution depends in part on informal interactions among network operators. ISPs must acquire the necessary skills, some of which are best obtained from prior employment at a provider that already has a network of the size, scope, and complexity of a tier 1 provider. As a consequence, an ISP may avoid entering into a peer relationship with another provider if it feels the other provider does not have the proper personnel or processes in place to prevent routing disturbances.

13. Large customers often specify tier 1 status as an element in requests for proposals for Internet service (at least in part because there are no well-agreed-to quantitative metrics and processes for evaluating the quality of Internet interconnection; this problem is exacerbated by the dynamic growth of traffic in backbones, such that connectivity that might have been considered good at one point in time may be wholly inadequate 6 months later).
14. The Cook Report on Internet, November 1999, p. 10. Available online at <http://www.cookreport.com/08.08.shtml>.
15. One example of this was reported in "Risks List." Risks Digest 19(12), May 2, 1997: "On 23 Apr[il] 1997 at 11:14 am EDT, Internet service providers lost contact with nearly all of the U.S. Internet backbone providers. As a result, much of the Internet was disconnected, some parts for 20 minutes, some for up to 3 hours. The problem was attributed to MAI Network Services . . . which provided Sprint and other backbone providers with incorrect routing tables, the result of which was that MAI was flooded with traffic." Available online at <http://www.infowar.com/iwftp/risks/risks-19/19_12.txt>.
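
The kind of routing mishap described above (and in note 15) is what per-peer route filtering and prefix limits are meant to catch. The following sketch is illustrative only; the prefix lists, limit, and function names are invented for this example and do not correspond to any particular router's configuration language or the practices of any named provider. It accepts from a peer only the routes that peer is expected to announce and shuts the session down if the number of announced routes explodes, as happens in a route leak.

    # Illustrative defensive filtering of routes received from a peer.
    # Expected prefixes and the max-prefix limit are hypothetical values.

    EXPECTED_FROM_PEER_B = {"192.0.2.0/24", "198.51.100.0/24"}  # B and B's customers
    MAX_PREFIXES_FROM_B = 100   # far above normal; exceeding it suggests a route leak

    def filter_routes(received):
        if len(received) > MAX_PREFIXES_FROM_B:
            # A sudden flood of routes (e.g., a peer re-announcing much of the Internet)
            # is treated as a misconfiguration: drop the session rather than the network.
            raise RuntimeError("max-prefix limit exceeded; tearing down peering session")
        unexpected = received - EXPECTED_FROM_PEER_B
        if unexpected:
            print(f"ignoring {len(unexpected)} unexpected route(s): {sorted(unexpected)}")
        return received & EXPECTED_FROM_PEER_B

    # Normal case: only the agreed routes are installed; the stray route is ignored.
    print(filter_routes({"192.0.2.0/24", "203.0.113.0/24"}))

Such filters are one concrete form of the routing-information validation mentioned earlier; they reduce, but as the text notes do not eliminate, the judgment and operational skill that peering requires.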

Evolution of Interconnection Models

The discussion above reflects the interconnection arrangements that have historically prevailed in the Internet industry. Recently there have been innovations in interconnection that provide alternatives to the conventional binary choice between attaining peer status or becoming a transit customer. Similar innovations have been tried before on a not-for-profit basis, such as in some of the public exchanges. What is new today are moves by both existing players and new entrants to use new business models for interconnection. The committee is aware of a number of instances where tier 1 providers have entered into arrangements that are somewhere between the pure peer interconnection and the pay-for-transit interconnection. As with conventional peering agreements, the terms are subject to nondisclosure, so it is difficult to characterize or examine them in detail. But their existence is one indication that the Internet industry is responding to market forces by providing such alternatives.

One prominent example of an entrant following a new business model is InterNAP, which has built a business around providing an alternative to conventional peering or transit. InterNAP establishes high-performance interconnection points in key locations and then connects these points to top ISPs, including a number of the tier 1 providers. The connection arrangement lies halfway between peering and transit. The business relationship resembles a transit model inasmuch as InterNAP pays for the connection/service and thus has a predictable service-level agreement. But the routing relationship is peering: it only forwards traffic into an ISP if it will be terminated there. It does have to pay for transit services in some cases but claims that a majority of the routes in the default-free zone are available to it through peer routing. InterNAP is able to pay a reasonable price for its connections because it is mostly delivering traffic into each ISP that would eventually get there anyway by some other route. With these interconnection relationships established, it then sells Internet access to smaller ISPs, who it claims receive a better quality of service than if they had purchased transit service from just one ISP or made use of a public exchange, and who experience less hassle than if they had tried to negotiate a number of peering relationships on their own. InterNAP also provides service to major Web hosts, who then can connect to most large ISPs without having to manage individual relationships with them.

Because it was historically seen as too cumbersome to charge on an individual basis for the transfer of data packets, the Internet has traditionally been based on the establishment of revenue-neutral boundaries at its center, where the major tier 1 ISPs connect, and on the selling of transit service to downstream providers on the basis of rough measures such as link capacity or average data rates. This simple model has worked reasonably well but does not give content providers or ISPs any way to make additional payments to support a desired service level. Nor are there processes in place that allow a content producer to transfer money through its ISP to the ISPs of its target consumers to reduce the costs that either ISP incurs in carrying its content to the target consumers.

In response to these perceived shortcomings in the Internet's interconnection arrangements, content cache providers are introducing new financial models. Akamai provides a good example. It places servers within the networks of ISPs, as near the consumer as possible. Content producers who want to ensure access by their customers pay Akamai to host their content, and Akamai (in some cases) pays the ISP to host the Akamai server. This new financial mechanism allows the content producer to pay the ISPs that serve end users. The arrangement has both advantages and disadvantages. One advantage is that it provides a nonconsumer source of revenue for ISPs. On the other hand, as discussed in more detail below, this sort of infrastructure is an application-specific delivery overlay network that a producer can only use by paying for it, so it is somewhat of a departure from the Internet's traditional architecture.

Monitoring Internet Interconnection

In contrast to telephony, which has been the subject for many years of economic oversight and regulation, the Internet is by and large unregulated, with the Federal Communications Commission having, thus far, demonstrated no interest in intervening in it. Another avenue for intervention is the application of antitrust law in particular instances; in the Internet context, this has taken the form of reviews of proposed mergers. It is the view of this committee that current policy should continue, as should monitoring.

attain tier 1 status. The emergence of alternatives to the pure peer or transit interconnection models suggests, however, that the marketplace may be finding ways to reduce these pressures by introducing interconnection models that better suit the business needs of parties that seek to establish an interconnection agreement.

The existence of public exchanges means that some form of connection to the Internet is generally available to all providers; this, in turn, means that concerns can focus on the nature, terms, and quality of interconnection rather than on participation. While these considerations are also key to interconnection in the PSTN, they are significantly more complex and dynamic in the Internet. At any given point in time there is a wide range of applications and services in use across the Internet, each with different implications for interconnection. And all indications are that new types of applications and services will continue to emerge on a regular basis. Related to this is the variability in the value attached to Internet data packets; in other words, the price that a party would, in principle, be willing to pay for transmission of an individual packet depends on a number of factors, including the application the packet is associated with, its content, and its points of origination and termination.

OPENNESS AND INNOVATION

The Internet was developed first as a joint research effort and then as a joint engineering effort by the research community. The standards development process, which came to be formalized through the Internet Engineering Task Force (made up of technical experts from academia and industry), emphasized standardization because it grew out of a highly diffuse but collaborative development environment. In that environment, new concepts would be experimented with and then a project would be launched that included parallel development of implementations and standards. Eventually, several companies would offer compatible products. These attributes were responsible for a healthy competition among designs for applications and the protocols they use and frequently for the development and availability of multiple implementations of products that would be available from multiple vendors. Crucially, these multiple implementations have been consistent with the development and adoption of a single standard for key functions.

It should be noted that the term "standard" refers to several different types of specifications, including the following:

1. An application programming interface (API) published by a software provider. A developer may need to enter into a contract to make use

of the API, and the specification is subject to change at any time at the discretion of the software provider;

2. A complete specification published by a corporation (e.g., Sun's Java language, the Microsoft Windows APIs, and the Microsoft/Intel-driven PC architecture). In such cases, the companies or organizations that develop the specification have at least some degree of control over what changes are made to the specification.

3. An open specification published by a neutral institution, such as the World Wide Web Consortium (W3C) or the IETF. Developed under appropriate procedures, this approach permits multiple actors to develop and control a standard without running afoul of antitrust laws.

4. A standard that is enforced by some regulatory authority, such as the National Television Systems Committee (NTSC). These are standards that have, until the development of the HDTV standard, been mandated for all U.S. television broadcasters. Many of them are developed in industry (by individual companies or industry consortiums) and are then adopted as official standards.

The core standards employed in the Internet tend to fall into the third category, although some fall into the second.

The terms "standard" and "open standard" are not synonymous. The Internet model for development is characterized by openness, which refers to the ability of multiple vendors to independently construct products that work with one another. Openness means that customers can mix products from one vendor with products from another (e.g., use one vendor's client software with another vendor's server) and that applications from one vendor operate over infrastructure provided by another. Openness relies crucially on the development and adoption of standards. An interface designed with only one vendor's products in mind can be readily implemented only in that single vendor's environment. From this perspective, privately designed interfaces that are publicly published are not open interfaces. Standardization processes enable multiple vendors to cooperate on the development of new elements that will allow them to develop new markets, markets that are much larger than would be the case if each developed its own competing technology.

16 Closely allied with open standards is the practice of open source, best known as the mechanism through which the Linux operating system is distributed and developed. Open source practices resemble the third type (open specification published by a neutral institution) but in a somewhat different fashion. All parties are free to implement their own modifications to the open source (with the use of the resulting code subject to whatever use agreement was attached to the original code base), but an individual or organization generally decides which modifications are incorporated into what is considered the standard code base.

Then the vendors compete by providing competitive products that build on the standardized elements. When this process works well, it results in greater benefits for both vendors and customers.

Critical Open Standards in the Internet

The Hourglass Architecture

The existence of an abstract bit-level network service as a separate layer in a multilayer suite of protocols provides a critical separation between the actual network technology and the higher-level services through which users actually interact with the Internet. Realizing the Information Future depicted this layered modularity as an hourglass, with an "open bearer service" at the narrow waist (Figure 3.1). In the Internet, this abstract, bit-level transport service is provided by the Internet Protocol (IP). At this level, bits are bits and nothing more. Above the waist, the glass broadens out to include a range of options for data transport and applications. Right above the IP layer is the transport layer, which is made up of the enhancements that transform the basic IP bit transport service into the range of end-to-end delivery services needed by the applications: reliable, sequenced delivery; flow control; and endpoint connection establishment. The Transmission Control Protocol (TCP), the most commonly used transport mechanism, is often lumped together with IP as TCP/IP. There is, however, an important distinction: IP defines those features that must be implemented inside the network, in the switches and routers, while the transport layer defines services that are the responsibility of the end node. The upper layers, above IP and transport, are where the applications recognized by typical users reside, such as e-mail, streaming audio and video, and the Web. The technologies below the waist make up the bit-carrying infrastructure and include both the communication links (copper wire, optical fiber, wireless links, and so on) and the communication switches (packet routers, circuit switches, and the like). Figure 3.2 shows where some familiar Internet technologies, protocols, and applications fit within the hourglass construct.

Imposing a narrow point in the protocol stack removes from the application builder the need to worry about details and evolution of the underlying network facilities and removes from the network provider the need to make changes in response to whatever standards are in use at the higher levels.

17 Computer Science and Telecommunications Board (CSTB), National Research Council. 1994. Realizing the Information Future: The Internet and Beyond. Washington, D.C.: National Academy Press.

[Figure 3.1 depicts the hourglass: Layer 4, applications; Layer 3, middleware services (such as repositories, directories, and multisite coordination); Layer 2, transport services and representation standards (fax, video, audio, text, and so on); the open bearer service interface at the narrow waist; and Layer 1, the ODN bearer service, running over LANs, point-to-point circuits, and other facilities.]

FIGURE 3.1 The hourglass model of Internet architecture. SOURCE: Computer Science and Telecommunications Board (CSTB), National Research Council. 1994. Realizing the Information Future: The Internet and Beyond. Washington, D.C.: National Academy Press.

[Figure 3.2 shows, from top to bottom, applications such as e-mail and telephony; application protocols such as SMTP and HTTP; the TCP and UDP transport protocols; IP at the narrow waist; link technologies such as Ethernet, PPP, and SONET; and, at the bottom, physical media such as copper and fiber.]

FIGURE 3.2 How some Internet-related technologies, protocols, and applications fit into the hourglass model. SOURCE: Adapted from a figure by Steve Deering, Cisco Systems.

This separation of IP from the higher-level conventions is one of the tools that ensure an open network; it hinders, for example, a network provider from insisting that only a controlled set of higher-level standards should be used on the network, a requirement that would inhibit the development and use of new services and might be used to limit competition.
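The practical effect of the narrow waist can be sketched in a few lines of code. The fragment below is illustrative only (the host name is a reserved example domain and the exchange is a minimal HTTP request, neither drawn from this report); its point is that an application written against the standard TCP/IP socket interface makes no reference to whatever copper, fiber, or wireless facilities carry the bits.

    import socket

    # A minimal client written against the TCP/IP socket interface. Nothing
    # below the IP layer is visible here; the same code runs unchanged over
    # dial-up, DSL, cable modem, Ethernet, or wireless access.
    HOST = "www.example.com"   # reserved example name, used only for illustration
    PORT = 80                  # conventional port for HTTP

    with socket.create_connection((HOST, PORT), timeout=10) as conn:
        conn.sendall(b"HEAD / HTTP/1.0\r\nHost: " + HOST.encode() + b"\r\n\r\n")
        print(conn.recv(200).decode(errors="replace"))   # first bytes of the reply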

The core functions, those that lie at or near the waist of the hourglass, are the most critical functions where openness must be guaranteed to enable innovation. When these core interfaces to the network are not open, multivendor application innovation is more difficult. Possible consequences include constrained user choice and deterioration in the quality of products that vendors offer.

Just which functions should be considered to lie in the waist of the hourglass, that is, implemented according to a single, Internet-wide standard and available throughout the Internet, is open to interpretation and debate, and there is no consensus among those who design, operate, or use the Internet. The core standards are understood by many to include more than just IP, but opinions differ as to what else should be included. Indeed, in the hourglass metaphor, the curved side walls of the glass do not draw a sharp distinction between what is in the waist and what lies above it. What other than IP is needed in practice?

· Domain Name System. The DNS, which provides a common set of names for hosts connected to the Internet, is generally viewed as an essential core function of the Internet (a brief illustration of a DNS lookup appears below). Some would also include additional network directory services in the category of core functionality. (While a single directory service has not been universally adopted, this is one of the solutions offered to deal with conflicts between the DNS name space and trademarked names; see Box 2.1 in Chapter 2.)

· Routing protocols. Providers must typically exchange routing information at interconnect points. Have the routing structure and routing protocols become critical enough for interoperability that they should be considered to lie at the core? This issue is especially important in the light of increasing doubts that today's routing architectures will continue to be adequate as the network continues to expand.

· Dynamic Host Configuration Protocol (DHCP). This network protocol enables a DHCP server to automatically assign an IP address to an individual device attached to a network. It might be the case that many applications that do not even require TCP functionality will still depend on DHCP to obtain an Internet address when they are started up.

Some also believe that significant benefits would result if standard mechanisms for authentication were widely available. This and other middleware functions are ones where application builders and users can both realize substantial benefits when standard solutions are deployed.
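As a small, hypothetical illustration of the DNS item in the list above, the fragment below asks the resolver for the addresses currently bound to a name; the application neither knows nor cares where or how the mapping is maintained. The host name is a reserved example domain, not one mentioned in the report.

    import socket

    # Resolve a stable, human-readable name to whatever addresses the DNS
    # currently associates with it.
    for family, _, _, _, sockaddr in socket.getaddrinfo("www.example.com", 80,
                                                        proto=socket.IPPROTO_TCP):
        print(family.name, sockaddr[0])   # address family and numeric address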

In recent years, the same processes that enabled growth and innovation in the network layers have started to have an even more dramatic effect on higher-level protocols and, accordingly, on user-visible applications. For instance, a protocol like HTTP also provides a type of core functionality, albeit in the narrower space of the World Wide Web rather than the Internet as a whole. The flap over instant messaging openness (e.g., AOL's Instant Messenger and Microsoft's MSN Messenger) illustrates the tensions that arise between those who argue for openness (through standardized interfaces that are open to all application developers) and those who seek to retain or increase their market share (through closed, proprietary protocols). In each of these instances, there are tensions between creativity and openness, typical of any standardization effort. In each, the affected parties (application developers, service providers, and consumers) must decide when and where one or the other should be emphasized.

While most of this discussion has examined the upper half of the hourglass, the innovation that the Internet's architecture enables at the "transmission" level is another crucial element of the Internet's success. Keeping IP service independent of the technology below it has several benefits. First, competition at the technology level, which can be expected to reduce cost and increase function, will be greater the less the service definition constrains innovation in communications technologies. The abstract interface means that users are free to select among competitive service providers. Underlying hardware (and the software required to enable it) can be changed without changing the application software. The consumer who uses a particular Web browser with his dial-up Internet service can use the same browser if he switches to a DSL or cable modem service and is able to shop around for better performance or price without incurring a switching cost for the applications he runs (although investments may have to be made in new hardware or software associated with the Internet service itself). Second, the technology independence also provides significant stability over time. IP can outlive any particular technology over which it is implemented, and IP can be implemented on top of new communications technologies as they emerge, as has happened already with Ethernet, ATM, frame relay, and cellular digital packet data (CDPD), to name a few.18

18 However, the emergence of new communications technologies has led to efforts to modify TCP in order to improve performance. For example, the throughput with standard TCP is reduced below the apparent capacity of the communications link when traffic flows over a satellite link because the standard TCP algorithm uses a probing algorithm that requires the sender to wait for old data to be acknowledged before increasing the data rate. Because the distance over which signals must travel is considerably greater for geostationary orbit satellites than for terrestrial links, one must wait correspondingly longer for the radio signals carrying the acknowledgment to be transmitted from receiver to sender. The same problem occurs, to a lesser extent, with any long-latency Internet link, and the satellite issue is recognized as one instance of a broader class of long-latency link performance problems. Another communications technology development driving efforts to revise TCP is the use of wireless data links for Internet traffic. In this case, the higher error rates and consequent packet loss associated with wireless transmission reduce throughput because the TCP algorithm interprets packet loss as an indication of network congestion and attempts to adapt to this apparent congestion by reducing the transmission rate. Efforts are under way in the IETF and other venues to develop modifications to TCP that accommodate these new technologies while remaining backward-compatible with existing TCP implementations.
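The performance penalty described in the footnote above can be illustrated with a rough, back-of-the-envelope calculation (the window size and round-trip times below are round illustrative numbers, not measurements from this report): because a standard TCP sender can have at most one window of unacknowledged data outstanding per round trip, its throughput is bounded by the window size divided by the round-trip time.

    # Illustrative only: throughput is at most window size / round-trip time.
    WINDOW_BYTES = 64 * 1024   # a classic default maximum TCP receive window
    links = {
        "terrestrial path (about 30 ms round trip)": 0.030,
        "geostationary satellite path (about 550 ms round trip)": 0.550,
    }
    for name, rtt_seconds in links.items():
        mbits_per_second = WINDOW_BYTES * 8 / rtt_seconds / 1e6
        print(f"{name}: at most roughly {mbits_per_second:.1f} Mbit/s")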

The Internet As a Platform for Application Innovation

The Internet is widely acknowledged to be a key platform for creative and innovative applications across the communications, information, commerce, and entertainment businesses. Much of this innovation rests on the hourglass architecture, discussed above, which encourages independent evolution of the network, services, and applications, enabling incremental support for new media (e.g., sound, animation, video) without changing the infrastructure visible to the application. This architecture allows applications to take advantage of network bandwidth innovations and permits users to run applications regardless of who their network provider is.

While some of these applications that run on top of IP are vendor-specific and proprietary, many others are themselves based on open standards. The processes established by the Internet Engineering Task Force for creating new protocols that rely on the core protocols are open to anybody and designed to be vendor-neutral. The open process by which protocols are developed also means that the protocols are very well documented, facilitating the development of applications and creating a large base of expertise.

These simple, standard interfaces also allow applications to aggregate other applications very simply. For example, an e-mail application (e.g., Hotmail) can be combined with an advertising application (e.g., DoubleClick) and a news service (e.g., Reuters) with relatively little work. This ease of aggregation also permits secondary opportunities to build services that Internet applications can reuse, such as news feeds; advertising; middleware services such as authentication and name registration; and infrastructure services such as online data storage and application hosting.

The rosy expectations for electronic commerce rest on the standardized, open Internet protocols and the ease with which applications can be developed and aggregated. E-commerce, particularly where the sale of

physical goods is involved, also depends on successful implementation of the back-office functions of inventory management, order fulfillment, and shipping. It has also leveraged other key business innovations such as just-in-time inventory and rapid package delivery services.19

These characteristics of the Internet have been instrumental in attracting the thousands of companies developing applications and services that rely on the Internet, leading to billions of investment dollars. The net result is a competitive industry that rapidly channels new ideas into products of value to end users and that has been rapidly creating a fountain of technology and customer assets. Also, by providing what appears to the user to be a single network, the Internet allows an application to reach nearly every customer and business, creating an enormous market opportunity through the network effect, which says that the value in connecting people and services is proportional to the square of the number of connected people and services.

In a reflection of its successes, the term "Internet" has attained a status akin to a valued brand name for both businesses and end users. Indeed, no other platform for computing and communications applications today shares all of these attributes. Investors, developers, and users alike have viewed the Internet as a place of enormous opportunity and a community rich in information and applications. The amount of private and corporate investment dollars poured into developing new Internet applications has been stunning. This climate set the stage for tension between, on the one hand, the potential for seemingly unbounded innovation in applications and services and, on the other, the potential for Internet-based businesses to foster market consolidation, to raise barriers to open access, and to drive other outcomes in their effort to make and maximize profits.

Evolution of Internet Standards Setting

Several trends have emerged that run counter to the openness paradigm that has characterized the Internet's development. Companies develop products and technologies in the hope of capturing a market. One trend is that technical issues are becoming complicated by the desire to achieve or exploit a competitive or proprietary advantage as well as quality. This may well be an inevitable consequence of the market forces involved, but given the benefits afforded by standards, maintaining a balance between standards and proprietary trends is important.

19 Because it depends on shipment of goods to individuals, which tends to be more expensive than bulk shipments to retail outlets, business-to-consumer commerce may also have benefited from the ability to offset the perceived costs by not collecting sales tax for out-of-state shipments. (See the section on taxation of Internet-based commerce in Chapter 5 for a discussion of these tax issues.)

Companies will push for standards when they need them for business but will keep many key aspects of the technology, such as specific data structures or algorithms, proprietary, often protecting them as patents or trade secrets. Another important factor is marketplace demands for speed of innovation; it costs time as well as other resources to develop a product that involves proposing a new standard, which means that a standards process can work against innovation and responsiveness to customer demand.

The growing stakes in the standards process itself threaten to overwhelm the traditional open standards mechanisms along the lines of those provided by the IETF (Box 3.1). They mean that the interests reflected by participation in the IETF are increasingly not only technical but also commercial, and participants are more political in what they do and do not say to influence standards setting. Companies also may seek to protect their ideas through patent protection. These factors make it more difficult for standards bodies to address and fill gaps in what the market has provided. The larger market and more widespread interest also mean that the number of participants has grown; it is very difficult for a working group of 100 or 200 people to do design work. As was the case in the past, much of the standards development work is done in smaller design teams within the working group and then vetted by the larger group. Nonetheless, the participation of many more individuals increases the likelihood that compromises will be made that degrade the quality and crispness of a standard.

Institutions have reacted to these challenges in many different ways. The IETF standards process underwent several revisions, all of which tended toward more formality in order to cope with the increased attendance. The International Telecommunication Union (ITU) has tried to streamline its already formal processes in order to shorten the standard-setting cycles. And various new forums have arisen that focus on specific subjects; they have adopted policies that expedite the development of standards. In fact, the IETF does not hold a monopoly on Internet standards development. When developing Internet standards, companies and industry groups are likely to select whichever standards body they believe will be the most effective avenue for their business plan, and they may pursue simultaneous standardization efforts in multiple forums. In addition to the IETF, several more traditional standards bodies, including the ITU, the International Organization for Standardization (ISO), the European Telecommunications Standards Institute (ETSI), the American National Standards Institute (ANSI), and the Institute of Electrical and Electronics Engineers (IEEE), are developing and adopting standards



related to the Internet. Also, there are a number of instances where more narrowly focused interests and a desire for faster standards development have led to the use of consortium-based alternatives to either the IETF or more traditional standards bodies. These groups, such as the World Wide Web Consortium or the Wireless Application Protocol (WAP) Forum, tend to be narrower in scope, less open, and more industry-centered. Internet standards are being developed in an active, diverse, and dynamic market space, a model that parallels the freewheeling creativity of the Internet.

There are two basic, conflicting views on Internet standards. One is that there should be exactly one standard for any function, and that this standard should be debated in an environment that guarantees fair representation of all parties and fair processing of all contributions. The second view is that there may well be many competing standards for the same function, and that market competition will select which standards best serve a given function. The telecommunications world embodied by the ITU traditionally adopted the first view. The reality of the Internet market, on the other hand, fosters the second view.

Today it can be argued that the market impact of standards from treaty bodies such as the ITU is essentially indistinguishable from the impact of those from other bodies. The acceptance and use of a standard has more to do with its applicability to marketplace demand or the ability of a dominant vendor to deploy code that becomes a de facto standard than with what standards body approved it. Examples such as Java, developed by Sun, or the initial Web protocols, which were developed by an informal group of research institutions, show that the market can also widely adopt more open solutions before they are blessed by any standards group.

Given incompatible options for protocols, the Internet market, as a tippy market, will pick one when the need arises. This does not mean that only one protocol will necessarily be adopted for a particular purpose: POP and IMAP are a compatible set of protocols for Internet mail, both of which can be employed locally without affecting the standard interface used by senders and recipients of e-mail. However, between incompatible suites, the market picks a solution: the very strong force of what economists call network externality means that the benefits of being able to communicate widely are so strong that they drive the widespread adoption of one of the alternatives. The force here is much stronger than it is in, say, operating systems, where consumers continue to sustain both Macintosh and Unix platforms despite the dominant share held by Windows. In networking, the need to communicate is the dominant factor, and losers are prone to fall by the wayside. For example, despite the fact that some believed that it was less capable, the Simple Network Management Protocol (SNMP), which was more widely used, won out over an

arguably better, competing protocol for network management, the Common Management Information Protocol (CMIP).

The choices made by the market vary. Sometimes it chooses a vendor's proprietary solution and in other cases it chooses an open standard over the vendor-controlled solution. A key example of the open standard winning was the choice of basic network transport protocol, in which the Internet's open standard, TCP/IP, beat out the proprietary Xerox Network Systems (XNS) standard despite the fact that Xerox and other vendors supported the latter. In the absence of an open standard, the market will also generally pick a winner. For example, when Sun developed the Network File System (NFS) and Remote Procedure Call (RPC) protocols (used to access files across networked computers and control the execution of programs on other computers), there was no open standard (e.g., IETF) alternative in development. After the fact, there were some weak calls for the development of an open alternative, but these never resulted in the development of an alternative standard.20

If a competent open standard is made available, it would be attractive in the market and could win out over proprietary standards. But if there is no competent standard, the market still will pick an alternative (the tipping phenomenon). Why are open standards less frequently developed than proprietary ones? Several factors contribute. Today, industrial development is so rapid that pressures to focus on products limit the amount of time technical staff in industry can spend on efforts aimed at the broader Internet community. Moreover, there is a fundamental tension between, on the one hand, having a freedom of choice that enables individual players to reap the benefits of innovation and, on the other, picking standards that benefit all. In essence, this is a prisoner's dilemma game. A common standard maximizes social welfare because the larger market engenders network externalities. But each player is tempted to diverge from the common standard if it believes it might be able to capture the entire market (or a large portion of it) for itself. At the same time as industry is less likely to support the development of open standards, government is investing less to support the work of an academic, noncommercial core of people who care about developing open standards. And, finally, incentives are drawing people from the research community into industry.

A situation where standards are more likely to be proprietary (or at least vendor-controlled) is not an obviously bad thing. Vendor-controlled standards can, like patents, be pro-innovation. If vendors are unable to reap the benefit of investment, investment would be stifled.

20 A working group in the IETF is developing an improved version of NFS in cooperation with Sun.

However, vendor interests in proprietary standards sometimes reflect less an interest in turning a vendor standard into a revenue stream through licensing than a desire to use vendor control of a standard to hold onto market share. Still, this is not a case where one situation is clearly bad and the other clearly good (e.g., open versus proprietary or licensing versus control); rather, an appropriate balance must be struck.

One contributor to continued vitality in the development of open standards is support for the networking research community. Government has supported open standards for the Internet not by directly setting or influencing standards but by providing funding for the networking research community. Such research leads both to innovative networking ideas and to specific technologies that can be translated into new open standards, which in turn can offer a richer set of alternatives in the marketplace.

END-TO-END TRANSPARENCY

Closely associated with the concept of openness, which speaks to the use of common standards for communications across the Internet, is the notion of end-to-end transparency. A product of two fundamental properties of the Internet (the hourglass, end-to-end architecture and the unique addressability of devices attached to the Internet), transparency is a defining characteristic of the Internet. The hourglass-like architecture, in which the Internet protocol provides the fundamental means of sending data across the Internet, allows any type of communication, application, or service to ride on top of the Internet. With suitable software running at each end and no knowledge other than each other's Internet address, any two devices connected to the Internet are able, in principle, to enter into any desired type of communication, provided there is enough network capacity and sufficiently low or predictable latency (delay) to support the application.

Crucially, this communication takes place as a result of actions by users at the edges of the network; new applications can be brought to the Internet without the need for any changes to the underlying network or any action whatsoever by Internet service providers. Indeed, over the life of this report, many new applications and associated communication protocols have emerged. A noteworthy example is the rapid emergence and ensuing widespread use of a group of new protocols (e.g., Napster and Gnutella) that are designed to allow distributed sharing of files among Internet users, frequently for the purpose of exchanging music encoded in the MP3 format. (The challenges to intellectual property protection presented

by these protocols have, of course, given rise to controversy about the implications of their use and led some to attempt to block their use.)

As has been noted by a number of observers of the Internet, transparency often falls short of the ideal described above. Pragmatic measures taken in response to operational considerations (e.g., making address management more tractable or coping with a shortage of available addresses) are one factor that clouds transparency. Another factor is technical measures taken by both users and ISPs aimed at protecting networked computers from attack or enhancing the performance of a network by controlling the use of applications that place particular demands on network resources. And the business and marketing strategies of some Internet players involve offering services that are not fully transparent. In examining transparency issues, it is important to distinguish between transparency violations that users choose to adopt and those violations that are imposed on them.

Addressing Issues

One transparency challenge concerns the means by which computers are assigned Internet addresses. It is common practice today to assign Internet addresses in a dynamic rather than static fashion. Dynamic assignment provides an address on request from a networked computer, generally via the Dynamic Host Configuration Protocol (DHCP), from a pool of globally unique Internet addresses. This makes configuration and management easier and also reduces the number of IP addresses required to support a group of computers. When a device is turned on or reset (in the case of a permanently connected computer) or makes a connection to a network (in the case of a dial-up connection), it uses the DHCP protocol to send a message to a DHCP server to have an address assigned to it. The server responds with a message containing an IP address, and the software running on the device configures the device to adopt that address. When addresses are assigned in this fashion, the relationship between device and address is not constant over time; the address is fixed only until the device is disconnected from the network, reset, or powered down.

21 For example, transparency has been a topic of interest to the Internet Architecture Board (IAB). A recent draft report issued through the IETF echoes a number of these issues (Brian Carpenter. 1999. Internet Transparency. Internet Engineering Task Force Internet Draft (work in progress), December. Available online from <http://www.ietf.org/internet-drafts/draft-carpenter-transparency-05.txt>). The IAB also held a workshop on the subject (M. Kaat. 1999. Overview of 1999 IAB Network Layer Workshop. IAB Internet Draft (work in progress), October. Available online from <http://search.ietf.org/internet-drafts/draft-iab-ntwlyrws-over-02.txt>).

As a result, an application cannot rely on the IP address to reach a device directly to complete a call; a dynamically assigned IP address does not uniquely identify a particular device over time. This situation is quite unlike that of other sorts of addresses such as phone numbers, where a person's phone number is statically mapped to a telephone or a location (though there are calling features, such as call forwarding, that allow a limited form of dynamic rerouting to occur by making use of databases within the telephone network). Thus if one were to implement an IP-based telephony service, one could not use a dynamically assigned address directly. Dynamic assignment is not an insurmountable problem, however. Solutions must make use of indirection, in which a directory service is established to provide a mapping between some sort of identifying name and the current IP address that should be associated with that name. Keeping the directory up to date requires that each device send a message to the server on start-up notifying it of the current IP address that should be associated with its name. Maintaining an up-to-date directory with accurate data and operating the directory with sufficient integrity that its information can be trusted is a difficult technical and social problem. Work on a protocol that provides such a capability is now a proposed standard from the IETF. Provided that a suitably robust service can be implemented, dynamic addresses are as suitable as static addresses for any sort of application, and dynamic address assignment can be thought of as a situation that requires additional technology development and deployment rather than a fundamental obstacle to transparency.
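A deliberately simplified sketch of the indirection just described appears below; the device name, addresses, and in-memory table are hypothetical, and a real directory service (including the protocol work mentioned above) would also have to handle authentication, expiration, and replication.

    # Hypothetical directory: a device registers its current, dynamically
    # assigned address under a stable name; callers look the name up just
    # before connecting instead of remembering an address.
    directory = {}

    def register(name, current_address):
        """Called by a device whenever DHCP hands it a (possibly new) address."""
        directory[name] = current_address

    def resolve(name):
        """Called by an application that wants to reach the named device."""
        return directory.get(name)   # None if the device has not registered

    register("alice-laptop", "192.0.2.17")    # today's DHCP-assigned address
    print(resolve("alice-laptop"))            # 192.0.2.17
    register("alice-laptop", "192.0.2.42")    # a later session, a new address
    print(resolve("alice-laptop"))            # 192.0.2.42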

Another addressing-related challenge to transparency is posed by network address translation (NAT), a technology introduced in Chapter 2 in connection with addressing and routing issues. NAT provides a work-around that permits multiple computers attached to a network to share a smaller number of globally assigned Internet addresses. NATs and firewalls including NAT functions are employed by users and ISPs for a variety of reasons. These include providing a larger number of computers with Internet access using a limited pool of Internet addresses, providing local control over the addresses assigned to individual computers, and providing the limited degree of security that is obtained by hiding internal addresses from the Internet.

Network address translation involves the mapping of a set of local addresses, which are not visible to the outside world (i.e., not visible on the Internet), to a global address (i.e., visible on the Internet). A crucial distinction between NAT and dynamic addressing is that the mapping takes place without any explicit communication between the device and the NAT about the address assignment that has been made. The device continues to use its local address without regard to the action of the NAT; the NAT takes care of translating the addresses on packets flowing in and out of the network between the two sets of addresses.

A transparency problem arises because this translation is performed only on the portion of the packet that labels the destination addresses (analogous to the address on an envelope), not on any addresses that are contained within the packet (analogous to the addresses contained in the text of a letter inside the envelope). The reason that translation cannot in general be done on the addresses within the packet lies at the heart of the transparency question: because the Internet architecture permits any application to run over the Internet, the NAT cannot in general know where and in what form the addresses are placed within the packets.

To make such an application (one that carries addresses within its packets) work, one of two things must happen. One option is for the NAT to include an application layer gateway that has knowledge of the application's protocol, thereby allowing it to identify and translate the address as it is transmitted. Many NATs provide this gateway function for commonly used applications such as the File Transfer Protocol (FTP). This need for NATs to be application-aware violates a basic attribute provided by the hourglass architecture: that one is free to employ new applications running over the network without having to make any changes whatsoever within the network. There are also costs associated with deploying computers with sufficient computing power to carry out the application-level translations. The other option would be for the application to discover that the network is making use of NAT and then make the necessary translations itself; requiring an application to learn about the details of the network is an undesirable violation of the basic Internet architecture.22

Significant problems arise if one wishes to initiate communications between two computers, each of which is sitting behind a NAT, since neither has a way of knowing the internal address of the other. Consider an application like IP telephony. With NAT, one must resort to using a third computer outside either network to act as a telephony server that bridges between the other two.

22 One other option is to avoid passing addresses. This solution works in some cases where a protocol does not inherently require the exchange of global identifiers but was implemented that way prior to the advent of NAT. However, the applicability of this solution is limited because some types of applications require that globally unique identifiers be transmitted from one computer to another.
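The core of the difficulty can be seen in the deliberately simplified sketch below (a toy illustration, not an actual NAT implementation; all addresses come from reserved documentation ranges): the translator rewrites the address and port fields it knows about in the packet header, while any address an application has copied into the payload passes through untouched unless the NAT also includes a gateway that understands that particular application's protocol.

    # Hypothetical port-mapping NAT: internal hosts use private addresses,
    # and the NAT rewrites outbound headers to use its one global address.
    GLOBAL_ADDRESS = "203.0.113.5"
    next_port = 40000
    mappings = {}    # (internal address, internal port) -> assigned global port

    def translate_outbound(packet):
        global next_port
        key = (packet["src_ip"], packet["src_port"])
        if key not in mappings:
            mappings[key] = next_port
            next_port += 1
        # Only the header is rewritten; the payload is opaque to the NAT, so an
        # internal address embedded there (as IP telephony signaling might do)
        # leaves the network unchanged and is meaningless to outside hosts.
        packet["src_ip"], packet["src_port"] = GLOBAL_ADDRESS, mappings[key]
        return packet

    packet = {"src_ip": "10.0.0.8", "src_port": 5060,
              "dst_ip": "198.51.100.20", "dst_port": 5060,
              "payload": "please call me back at 10.0.0.8, port 5060"}
    print(translate_outbound(packet))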

A particular problem is that the only way for a computer behind the NAT to discover that it is receiving an incoming call is for it to repeatedly ask, or poll, the telephony server whether there is a call. Such a work-around places increased demands on both network capacity and the telephony server.

Another set of situations where NAT raises difficulties involves simultaneous communications among devices that sit behind a NAT (i.e., local) and devices that sit outside a NAT (i.e., remote). Examples of such situations include multiparty conferencing (telephony or video) and games; both are situations where there can be a mix of local and remote participants. Signaling becomes more complicated because an application cannot provide the same address information to applications running on local and remote machines. It is not impossible to handle these situations, but they make the software more complicated to implement correctly and more difficult for users to configure properly. Similar problems arise if people start installing appliances, such as security devices, that need to be accessed from both the inside and the outside of the house (i.e., behind the home gateway or outside of it).

NAT also interferes with security protocols such as IPSec,23 though not with higher-layer security protocols such as SSL or S/MIME. The basic problem is that if the packet payload is encrypted, addresses within it cannot be translated by a NAT. Because IPSec is a more broadly applicable protocol, used notably for standard Internet-layer virtual private networks, the incompatibility is a significant concern for some users.

Nonuniform Treatment of Bits

Internet transparency also implies the uniform treatment of all traffic in terms of the application, protocol, and format and in terms of the content of the communications being carried across the Internet. In its idealized form, the hourglass architecture treats all bits uniformly, with their transmission through the network a function of one thing only: available capacity (and whatever controls the end points place on the communications, such as the TCP pacing algorithms). The situation is slightly different when quality-of-service technologies are built into the network (discussed in detail in the section on quality of service in Chapter 2) in order to provide for special treatment of particular classes of traffic, in accordance with a customer's contract with an ISP; in this context, "uniform" means uniform within a particular class.

Transparency is limited by the blocking of particular types of Internet communications, pursuant to choices reflecting ISP policy, the preferences of individual customers, or, in the case of larger organizations that operate their own network infrastructure, organizational policy.

23 S. Kent and R. Atkinson. 1998. Security Architecture for the Internet Protocol, RFC 2401. Available online at <http://www.ietf.org/rfc/rfc2401.txt>.

These restrictions fall into two broad categories: restrictions placed at the edges in order to meet the objectives of end users and restrictions placed within the network by Internet service providers.

The classic example of a restriction placed on transparency at the edge of the network is the firewall, which is a blocking device placed at the entry point to a subnetwork and operated by either the customer or the ISP on behalf of the customer. It can be configured to exclude those types of communications that are not desired or, more stringently, to block all content not explicitly designated as acceptable. Typically, these restrictions are used to block traffic that could be used to exploit vulnerabilities of the computers within the network. Communications may also be blocked on the basis of the application being run (e.g., when a business seeks to enforce a prohibition on the use of streaming media applications by its employees to reduce bandwidth use or increase worker productivity) or content (e.g., filters that block objectionable content).

How is undesired traffic filtered? Internet applications are generally associated with particular "ports," which are a set of numerical identifiers, each of which is associated with a particular type of service or application. These are somewhat standardized; for instance, an HTTP server is frequently associated with port 80. To protect computers against certain types of attack, a firewall can block packets associated with particular ports (and thus applications) that are known to pose a risk. Firewalls will frequently block packets associated with unknown ports as well, in order to keep rogue applications from carrying on unauthorized communications. An application not identified to the firewall as permissible can attempt to circumvent the firewall by making use of another port, perhaps one that is dynamically adjusted (so-called port-agile applications). For example, Real Networks software is both port- and protocol-agile, able to switch from the default UDP protocol to TCP or even HTTP running over a standard port for HTTP traffic when firewalls block the preferred protocol. From the perspective of the application developer, this is done for legitimate (i.e., nonmalicious) reasons, to increase access to end users. From the perspective of the operator of a particular network, however, it may be viewed as subverting a policy decision that may have also been made for legitimate reasons (e.g., to reduce the traffic on a network or prevent those connected to that network from running applications that an organization has decided to prohibit). Port numbers are perhaps the easiest method of filtering, but filtering can also be performed using other information contained in packet headers or the contents of the data packets themselves.
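A port-based filter of the kind described above amounts to a small set of rules applied to each packet. The sketch below is hypothetical (the port lists are illustrative, not a recommended policy); note that a port-agile application defeats exactly this sort of rule by moving its traffic to a port the policy still permits.

    # Hypothetical firewall policy: block ports associated with known risks,
    # allow a short list of expected services, and deny everything unknown.
    ALLOWED_PORTS = {25, 53, 80, 443}          # e.g., mail, DNS, and Web traffic
    BLOCKED_PORTS = {135, 137, 138, 139, 445}  # e.g., ports often used in attacks

    def permit(packet):
        port = packet["dst_port"]
        if port in BLOCKED_PORTS:
            return False
        return port in ALLOWED_PORTS           # default-deny for unknown ports

    print(permit({"dst_port": 80}))     # True: looks like Web traffic
    print(permit({"dst_port": 139}))    # False: explicitly blocked
    print(permit({"dst_port": 6970}))   # False: unknown port, denied by default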

In response to the difficulties of providing large quantities of data or a high quality of service to end users, the Internet is being overlaid by application-specific delivery networks and caching services. Content or service providers may, for example, enter into an agreement with a company that delivers specialized content services located throughout the Internet so as to improve the quality of the connection seen by their end users. Local caching or overlay distribution networks do not provide end-to-end connectivity between the original content or service provider and the end user. Also, depending on the particular technical and business model, such networks may only be available to those providers who are willing and able to pay for specialized services.

Such service elements in the Internet provide optimizations that make the network more usable for particular applications. If they work properly, they maintain the illusion of end-to-end behavior. But if they fail to work properly, the illusion of transparency can be broken (see the section "Robustness and Auxiliary Servers" in Chapter 2). Importantly (from a transparency perspective), these are not inherent services that end systems can depend upon. This has several implications. First, where these service elements are not implemented in the network, the end user can still employ the full range of services and applications, though the performance may be degraded relative to what would be possible if the enhancements offered by network service elements were available. Second, applications cannot depend on these enhancements being present in all networks. Third, a new application can be deployed without necessitating changes within the network, although its performance may not be optimal in the absence of supporting elements within the network. In short, the introduction of supporting elements does not necessarily violate the end-to-end architecture but at some point makes it effectively impossible to use a nonsupported service.

A related issue has to do with ISP interpositioning, in which an ISP adds facilities to the network to intercept particular requests for Web pages or elements of Web pages, such as graphics, and replace them with ISP-selected content. For example, an ISP might select information or advertisements that are locally relevant, in much the same way as local advertisements are inserted into network programming by local broadcast stations or cable system operators, to rewrite Web pages or some portions of Web pages. Such a practice may, of course, be seen as a value-added service, but it diverges from the end-to-end content delivery model that has characterized the Internet thus far. It has the potential to deprive both end users and publishers of full control over how content is delivered, particularly where it occurs without control by the end user.

The port-agile tactic described above illustrates the broader point that there are limits to the extent to which content or applications can be blocked. Since the Internet's architecture allows application writers to layer traffic of their choosing over the basic Internet protocols, it is, in

general, difficult to recognize all instances of an application. Application writers can also modify their application protocols to stay one step ahead of attempts to block them. A likely result of persistent attempts at blocking would be an escalating battle in which firewall software authors and ISPs attempt to identify and block applications while application developers work to find ways to slip past these filters. The long-term result of such a struggle might well be a situation where much of the traffic is hard to identify, making it difficult to implement blocking policies. Another technical development could also fundamentally limit the ability of ISPs to filter traffic: widespread adoption of encryption at the IP layer (e.g., deployment of IPSec) would preclude ISPs from examining the information being transmitted or deducing the application being run using information contained in packet headers above the IP layer. If it wished to continue to impose controls under such conditions, an ISP might be forced to adopt a policy that blocks everything that is not identifiable and expressly permitted.

Market and Business Influences on Openness

Economic pressures as well as technical developments are having an impact on transparency and the end-to-end principle. In the consumer ISP market, there are many consumers who are more than willing to subscribe to networks that do not follow the classic Internet provider model in all respects, selecting from among a small number of ISPs that provide a somewhat sheltered environment or at least preferential offering of selected services and content. For example, if one looks at mass-market consumer behavior today, with thousands of ISPs to pick from, most consumers select AOL, an offering that emphasizes access to its custom content over access to the full Internet. (Of course, AOL ended up responding to consumer pressure by adding access to the full range of Internet content.) The 2000 AOL-Time Warner merger is another sign that Internet players believe there is a business case for combining access and content offerings. Such vertical integration, where a network provider attempts to provide a partial or complete solution, from the transmission cable to the applications and contents, could, if successful, cause a change in the Internet market, with innovation and creativity becoming more the province of vertically integrated corporations. Microsoft's "everyday Internet" MSN offering further supports the notion that businesses see a market for controlled, preferred content offerings as a complement to the free-for-all of the public Internet.

Vertical integration has several obvious economic motivations. Open interfaces can make it harder either to coordinate changes at more than one level, which might be needed for some forms of innovation, or to

capture as much of the benefits of innovation as integration might allow. On the other hand, the telecommunications industry, once highly integrated vertically, provides evidence that there are limits to vertical integration. Today, pressures are confronting the makers of telecommunications equipment as multiple vendors whose products are organized horizontally collectively challenge the vertically integrated circuit switch. An important enabler of this trend has been the transparency and openness of IP technologies. IP-capable hardware and software can be purchased from many vendors. And it is no longer necessary to own the facilities of a network to offer voice services over it.

Many businesses are customers of a fairly small number of very large networks, which have substantial incentives to hold on to their customers. To do so, they can use such means as leveraging the billing relationship with the customer or their ability to deliver service to the customer premises. Also, providers that have a large market share have incentives to try to own or co-own the most popular services for their customers so as to keep them inside their network.

These economic and business forces can act as disincentives to the continued free sharing of the best infrastructure ideas. Large scale creates additional incentives for providers to build their network with internal proprietary protocols that optimize the performance of both applications and the network in such areas as reliability, security, or control over bandwidth and latency. A leading example of where such optimizations might be deployed is telephony, but other possibilities include e-mail and chat, caching, video, and routing. Hotmail's valuation as an e-mail service or ICQ's as a service for instant messaging reflects the number of customers they are able to serve. For example, Hotmail does not use the Internet-standard mail protocols internally, nor does it use standard POP or IMAP for external access by users.24 Such internal deployments start to have implications for applications running at the edges of the Internet. Today Hotmail is at sufficient scale that special code to support its proprietary protocol is written into e-mail clients; the result is that a frequently used Internet service is no longer running the standard Internet protocols. Equipment suppliers are similarly willing to accommodate such customer demands; their routers are more programmable than ever to support custom protocols. There is a tension here between immediate improvements and long-term benefits: today's optimization may be tomorrow's roadblock, and design choices made to optimize a particular application may or may not turn out to be beneficial when a new application emerges.

24 The proprietary protocols are intended to allow it to scale better. Hotmail does, however, allow users to read standard POP e-mail accounts through Hotmail.

Also, the extent to which optimization will occur in a decentralized network such as the Internet is limited by difficulties in reaching agreement to deploy optimizations networkwide. Another pressure for nonstandard protocols, illustrated by the recent flap over instant messaging protocols, is the desire to differentiate oneself from competitors and capture value above the basic IP bit-transport service.

Thus, market pressures, combined with technical pressures relating to optimization, raise the prospect that we might end up with several "separate" Internets differentiated by the use of proprietary protocols or customized content. One scenario would be that some dozen or half-dozen tier 1 service providers would operate somewhat separately, although still using IP and other standard Internet protocols to enable some degree of interoperability among them. If a situation develops where several large providers start using proprietary protocols inside their networks, the incentives for new content and application development could shift. Content and application developers will target the networks of these large companies rather than an abstract Internet, and at the same time the large providers will have a huge incentive to make it difficult for customers to switch to another provider. As a result, tying applications to their proprietary protocols becomes good business; early on they might even pay application developers to do this. A base of, say, many millions of customers might justify the cost of the extra coding and maintenance that supporting multiple protocols would require. Some of this can be seen today, for example, in AOL's system, where providers of content and services, most of whom also do business on the Internet, register AOL keywords and develop AOL-specific content to allow AOL users to access their content and services. The potential viability of applications being developed for environments that are less inclusive than the Internet is also illustrated by the content being developed for the Wireless Application Protocol (WAP, a standard aimed at mobile phones and similar wireless platforms) and Palm's Web clippings.

However, there are forces arrayed against the possibility of this more closed model supplanting the more open Internet model. One is that anonymous rendezvous and the ability to support transitory relationships appear to be important capabilities. E-commerce, which is an important Internet application, depends on the ability to establish connections between two previously noncorresponding companies; multivendor value chains have become critically important in today's networked economy. In fact, many customers explicitly need to work across multiple organizational overlays without having to agree to use a particular network. This point was demonstrated by past attempts to develop standards for electronic data interchange (EDI): interoperable protocols are

Suppose three ISPs develop different protocols to deliver a particular application over the Internet. To reach customers within closed networks, they would need to make their protocol work over each of the closed networks' proprietary protocols and might also need the closed networks to configure their networks to enable the applications to work. From the perspective of the would-be application provider, the ISPs become a roadblock to innovation. If, on the other hand, we assume that there are proprietary ISP protocols but that ISPs also support IP end-to-end in some fashion, application providers can choose to make their protocol run over IP and bypass the constraints of the closed network provider (at least until the provider notices a large fraction of its IP traffic in this new protocol). It is this sort of marketplace dynamic that is valuable.

Another drawback of the closed solution is that it may end up imposing undesirable costs on all parties. For example, for both consumers and application developers, closed solutions represent a lock-in to a single solution (where the lock-in reflects the cost of switching). For the customer, it may mean investing in new hardware and software; for the developer, it may mean retooling a product. From the perspective of a provider, there is the risk that deviation from standards means that it will miss out on some new "killer app" developed elsewhere that offers dramatic new business opportunities (e.g., an increased customer base or demand for enhanced services).

There have been examples of proprietary solutions that found it difficult to gain widespread acceptance on the Internet. For instance, the past decade saw a debate on whether to adopt the ISO X.400 standard, the Internet standard SMTP, or one of a number of proprietary systems for e-mail. The market settled on SMTP, and the other proposals have become largely moot. More generally, there are obstacles to proprietary approaches being adopted Internet-wide. Not all of the Internet's users will sign up with a single provider at once, and few users need only interact with other users connected to the same provider. For example, in an e-mail exchange, neither the sender nor the receiver is likely to know (or care) which ISP the other is using or which e-mail standard their respective ISPs are using. They simply want to exchange an e-mail message. The success of a proprietary solution therefore depends on the Internet provider developing and offering working gateways to all the other services, which will entail additional cost to the provider.
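The e-mail point can be made concrete with a minimal sketch using Python's standard smtplib: because both ends speak SMTP, the sender's code needs no knowledge of which ISP or mail system the recipient uses. The relay host and addresses below are placeholders chosen for illustration.

```python
import smtplib
from email.message import EmailMessage

# Minimal sketch: the sender only needs an SMTP relay; which provider or
# mail system the recipient uses is irrelevant because SMTP is the shared
# standard. "mail.example.net" and the addresses are placeholders.
message = EmailMessage()
message["From"] = "alice@example.net"
message["To"] = "bob@example.org"
message["Subject"] = "Interoperability test"
message.set_content("Delivered over the standard protocol, not through a gateway.")

with smtplib.SMTP("mail.example.net") as smtp:
    smtp.send_message(message)
```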

From the perspective of customers, open standards help maximize the benefits they realize from the sheer quantity of people and services they can interact with. If all belong to a single Internet, the benefits of adding a new user accrue to everyone located anywhere on the Internet, whereas in a partitioned Internet, the benefits of an additional user would be limited to the customers of that user's ISP. The recent past has also seen pressures placed on online providers that relied on proprietary technologies. Non-Internet-based online providers have had to respond to the Internet phenomenon by supplementing their more closed content and services with access to the full Internet or by reinventing themselves as Internet-based services.

Today, both fully open and sheltered models are being pursued with vigor in the Internet marketplace, reflecting different business models and different assumptions about the desires of consumers. Consumer ISPs cover a broad spectrum, with more closed services at one end that emphasize their custom content and services and more open services at the other end that emphasize Internet connectivity but may also provide some preferred content or services. To date, the more closed providers have also continued to offer some degree of access to the wider Internet through the connectivity afforded by the basic Internet protocol (with a few having added this support in response to market demands). Which path the Internet market takes from here will affect the shape of future innovation.

Keeping the Internet Open

Provision of open IP service ensures that whichever service provider a consumer or business picks, the consumer or business can reach all the parties it wishes to communicate with. Much as the PSTN dial tone offers customers the ability to connect to everyone else connected to the global PSTN, open IP service offers access to all the services and content available on the public Internet. In the absence of an open IP service, whom you can communicate with is a function of the service provider you pick. (Note that the quality of service is still a function of the service provider you pick.) Open IP service is also an enabler of continued innovation and competition. Anyone who creates a better service that runs over IP can distribute software supporting it to thousands or millions of users, who can then use it regardless of who their service provider is.
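A minimal sketch of why this matters to innovators: the toy service below is written against nothing but the socket interface to IP and TCP, so any user with open IP connectivity and a route to the server can reach it, whatever their provider. The port number is an arbitrary choice for illustration.

```python
import socket

# Minimal sketch of a new service built directly on IP/TCP sockets.
# Nothing here depends on any particular provider's network; any host
# with open IP connectivity and a route to the server can use it.
HOST, PORT = "0.0.0.0", 8765   # arbitrary port chosen for illustration

def serve_once() -> None:
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as server:
        server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        server.bind((HOST, PORT))
        server.listen(1)
        conn, _addr = server.accept()          # any client, any ISP
        with conn:
            data = conn.recv(1024)
            conn.sendall(b"echo: " + data)     # trivial "service" logic

if __name__ == "__main__":
    serve_once()
```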

Open IP service requires support of the Internet Protocol (IP), globally unique Internet addresses, and full interconnection with the rest of the networks that make up the Internet. (Some additional capabilities may be required; some perspectives on what else should be included as a core service were presented above.) Open IP service is content independent; that is, the service provider does not filter the customer's traffic based on content, except with the consent and knowledge of the customer. However, because the Internet's default service is best effort, this definition can make no promises about the quality of the access. The quality of connectivity will depend on the agreement a customer has with its service provider and the agreements that this provider has with other Internet service providers. Indeed, in a free market, it is reasonable to have differentiation of services to satisfy customers who want to pay more for a service they deem better. It is important to point out that one possible outcome of the tension between open service and ISP service differentiation is that the current best-effort service will continue to be provided as a transparent, end-to-end service but that end-to-end transparency across multiple providers will not be provided for the more demanding (and potentially more interesting from a commercial standpoint) applications, such as telephony and audio and video streaming, that may depend on QOS mechanisms.

Because IP connectivity affords users the potential to misbehave or to impose unacceptable demands on the network, this definition of open IP service is not intended to prevent service providers from restricting how their network is used in order to ensure safe, effective operation or to meet the desire of customers to block certain types of IP traffic. An ISP may, for example, block particular traffic to prevent its customers from launching attacks on other customers of the ISP (or on users elsewhere on the Internet). It may also filter particular types of traffic to protect its customers' computers from attacks. And an ISP may restrict traffic volumes where bandwidth resources are limited to ensure that all users have fair access. Of course, ISPs and their customers may differ over whether a particular filter enhances the operation of the ISP's network or unnecessarily restricts the behavior of a customer; full disclosure of filtering practices provides consumers with the means to make informed choices when selecting an ISP.
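A hedged sketch of the kind of disclosed filtering policy described above, written as a small Python rule check rather than a real router or firewall configuration; the field names and rules are hypothetical, chosen only to show how narrowly targeted, publicly stated filters might be expressed.

```python
from dataclasses import dataclass

# Hypothetical, simplified model of an ISP filtering policy. Field names
# and rules are illustrative only; real filtering is performed in routers
# and firewalls, not in application code.
@dataclass
class Packet:
    protocol: str       # e.g., "tcp" or "udp"
    dst_port: int
    spoofed_source: bool

# Each rule is (description, predicate). Publishing such a list is the
# "full disclosure" that lets customers make informed choices.
DISCLOSED_FILTERS = [
    ("drop packets with forged source addresses",
     lambda p: p.spoofed_source),
    ("drop direct outbound mail delivery to limit abuse",
     lambda p: p.protocol == "tcp" and p.dst_port == 25),
]

def should_drop(packet: Packet) -> bool:
    return any(predicate(packet) for _description, predicate in DISCLOSED_FILTERS)

print(should_drop(Packet("tcp", 25, False)))   # True: matches a disclosed filter
print(should_drop(Packet("tcp", 80, False)))   # False: passed through unfiltered
```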

