3 Keeping the Internet the Internet: Interconnection, Openness, and Transparency

What is referred to as "the Internet" is actually a set of independent networks interlinked to provide the appearance of a single, uniform network. Interlinking these independent networks requires interconnection rules, open interfaces, and mechanisms for common naming and addressing. (The issues associated with interlinking the Internet with the Public Switched Telephone Network are considered separately in Chapter 4.) The architecture of the Internet is also designed to be neutral with respect to applications and content, a property referred to here as transparency. This chapter examines the current and expected future state of these interconnections and interfaces.

INTERCONNECTION: MAINTAINING END-TO-END SERVICE THROUGH MULTIPLE PROVIDERS

The Internet is designed to permit any end user ready access to any and all other connected devices and users. In the Internet, this design translates into a minimum requirement that there be a public address space to label all of the devices attached to all of the constituent networks and that data packets originating at devices located at each point throughout the networks can be transmitted to a device located at any other point. Indeed, as viewed by the Internet's technical community in a document that articulates the basic architectural principles of the Internet,1 the basic goal of the Internet is connectivity.

Internet users expect that their Internet service provider will make the arrangements necessary for them to access any desired user or service. And those providing services or content over the Internet expect that their Internet service providers will similarly allow any customer to reach them and allow them to reach any potential customer. (Subject, of course, to whatever controls are imposed at the behest of the subscriber for security purposes.)

To support these customer expectations, an Internet service provider must have access to the rest of the Internet. Because these independent networks are organized and administered separately, they have to enter into interconnection agreements with one or more other Internet service providers. The number and type of arrangements are determined by many factors, including the scope and scale of the provider and the value it places on access for its customers. Without suitable interconnection, an Internet service provider cannot claim to be such a provider; being part of the Internet is understood to mean having access to the full global Internet.

In 1995, interconnection relied on public network access points where multiple providers could exchange traffic.2 Today, there is a much larger set of players and a much greater reliance on private interconnects, that is, direct point-to-point links between major network providers. Indeed, there are multiple arrangements for interconnecting Internet service providers, encompassing both public and private (bilateral) mechanisms, connections between commercial networks and public network facilities, and even arrangements for connecting networks defined by ownership or policy as "national" to the international Internet complex. Some of these international connections are constrained by concerns raised by national governments about specific kinds of content being carried over the Internet.

Connections among Internet service providers are driven primarily by economics (in essence, who may have access to whom, with what quality of access, and at what price), but all kinds of considerations are translated into policies, frequently privately negotiated, that are implemented in the approaches to interconnection and routing. A significant feature of today's competitive Internet service marketplace is that direct competitors must reach interconnection agreements with each other in order to provide the overall Internet service that their customers desire. These business agreements cover the technical form of interconnection, the means and methods for compensation for interconnection based on the services provided, the grades and levels of service to be provided, and the processing and support of higher level protocols.

1B. Carpenter, ed. 1997. Architectural Principles of the Internet, RFC 1958. Network Working Group, Internet Engineering Task Force, June.

2Private interconnections existed then as well, but since everyone was also connected via the government-funded NSFNet backbone, they were viewed as backdoor connections to handle instances of high traffic volume.

Interconnection also requires that parties to an agreement establish safeguards, chiefly in the form of rules and procedures, to ensure that one provider's network is not adversely affected by hostile behavior of customers of the other provider.

While, as evidenced by the Internet's continued growth as an interconnected network of networks, the existing interconnection mechanisms have proven adequate thus far, concerns have been expressed about interconnection. Interprovider, public-private, and international connections all raise questions of public policy, or Internet governance. This section focuses on interprovider connections because it is these connections that drive the shape and structure of the Internet.

Structure of the Internet Service Provider Industry

There are several thousand Internet service providers in the United States.3 These providers cover a range of sizes, types of services they provide, and types of interconnections they have with other service providers. The Internet service provider business has grown substantially, with entry by many new players, following the phasing out in the mid-1990s of the government-supported NSFNet backbone. Changes in the nature of these players are as significant as changes in the number. As the mix has evolved, so have business strategies. One sees ISPs chasing particular segments of the market (e.g., they specialize in consumers or businesses or they run Web server farms), trends toward consolidation through mergers and acquisitions, and moves to vertically integrate a full range of services, from Internet access to entertainment, news, and e-commerce. The interlinked networks that are the Internet form a complex web with many layers and levels; the discussion that follows should not be taken to suggest simplicity.4

3One source of information on Internet service providers is Boardwatch magazine's Directory of Internet Service Providers. Golden, Colo.: Penton Media, June 1999. Available online; it lists 5078 ISPs in North America, a figure that covers a wide range of sizes and business models.

4See, for example, the results of Bell Labs' Internet Mapping Project, which provides a visualization of data gathered in mid-1999 indicating the complexity of the Internet. A number of maps are available online.

A straightforward and useful way to categorize ISPs is in terms of the interconnection arrangements they have in place with other providers. The backbone service providers, which include commercial companies as well as several government-sponsored networks like DOE's ESnet, use trunk capacities that are measured in gigabits, or billions of bits, per second. Roughly a dozen of the ISP companies provide the backbone services that carry a majority of Internet traffic. These providers, termed "tier 1," are (recursively) defined as those providers that have full peering with at least the other tier 1 backbone providers. Tier 1 backbones by definition must keep track of global routing information that allows them to route data to all possible destinations on the Internet (which packets go to which peers). They also must ensure that their own routing information is distributed such that data from anywhere else in the Internet will properly be routed back to their networks.

Tier 1 status is a coveted position for any ISP, primarily because there are so few of them and because they enjoy low-cost interconnection agreements with other networks. They do not pay for exchanging traffic with other tier 1 providers; the peering relationship is accompanied by an expectation that traffic flows between tier 1 networks, and any costs associated with accepting the other provider's traffic, are symmetrical. Tier 1 status also means, by definition, that an ISP does not have to pay for transit service.

Much of the Internet's backbone capacity is concentrated in the hands of a small number of tier 1 providers, and there is some question as to whether it is likely to become even more concentrated, in part through mergers and acquisitions. Concerns about market share in this segment have already emerged in the context of the 1998 merger between MCI and WorldCom, at that time the largest and second largest Internet backbone providers. In that instance, European Union regulators expressed concerns about the dominant market share that would have resulted from such a combination. In the end, to get approval for the merger, some of MCI's Internet infrastructure as well as MCI's residential and business customer base was sold off to Cable & Wireless and the merger went forward.5

Some of the advantage held by the very large players lies in their ability, owing to their large, global networks, to provide customers willing to pay for it an assured level and quality of service. These very large companies provide customers with solutions intended to allow those customers, in turn, to connect with higher levels of performance to other users in the same network, using such technologies as virtual private networks, and they also offer widely dispersed customers the convenience of one-stop shopping.

5See, for example, Mike Mills. 1998. "Cable & Wireless, MCI Reach Deal; British Firm to Buy Entire Internet Assets." Washington Post, July 14, p. C1.

Such large players also allow customers to interconnect to the public Internet, but generally without making the service guarantees. Part of their dominant position also stems from their tier 1 status, which assures their customers (including tier 2 and tier 3 ISPs) of their ability to provide a high quality of access to the public Internet. In addition, tier 1 providers, by determining how and with whom they interconnect, affect the position of would-be competitors.

Below tier 1 sit a number of so-called tier 2 and tier 3 service providers, which connect corporate and individual clients (which, in turn, connect users) to the Internet backbone and offer them varying types of service according to the needs of the target marketplaces. This group spans a wide range of sizes and types of providers, including both a small set of very large providers aimed at individual/household customers (e.g., America Online) and a large number of smaller providers. These include providers of national or regional scale as well as many small providers offering dial-up service in only a limited set of area codes.6 A recent trend has been the emergence of so-called free ISPs, which provide residential Internet service at no charge, typically in exchange for a demographic profile of the customer and an agreement by the customer to view advertising material delivered along with the Internet service. This class also includes the networks operated by large organizations, including those of large corporations, educational institutions, and some parts of government. These ISPs cannot generally rely on peering alone and must enter into transit agreements and pay for delivery of at least some of their traffic. Some of these providers have not invested significantly in building their own facilities; instead they act as resellers of both access facilities (e.g., dial-up modem banks) and connectivity to the Internet backbone.

While industry analysts have long predicted increased consolidation and the demise of the smaller providers, recent trends indicate that the business remains open to a large number of players.7 However, optimism here is tempered by two considerations. First, many of the very small players are only active in small markets or geographical regions. Second, subscriber data show that a single player, America Online, with more than 20 million subscribers, has a significant share of the consumer market.8

6Matt Richtel. 1999. "Small Internet Providers Survive Among the Giants." New York Times, August 16, p. D1.

7Boardwatch magazine's directory of Internet service providers in North America showed continual growth in the number of ISPs from February 1996 to July 1999. See Boardwatch magazine's Directory of Internet Service Providers. Golden, Colo.: Penton Media, June 1999. Available online.

Another area of interest is the emerging broadband market. The recent flap over open access illustrates the concerns that some have about the market share and the behavior of the providers of the communications links themselves (i.e., the facilities' owners), the Internet service providers, and the content providers, with which both facilities and service providers may have business arrangements.

Another recent trend has been the establishment of a new form of ISP, the hosting provider. This type of ISP operates both single-customer (dedicated) and shared-application servers, typically providing Web services on behalf of companies who would rather outsource the management of machine rooms and Internet connectivity. They offer customers a certain level of service (as seen by those throughout the Internet that make use of the customer's service) by arranging for (purchasing) transit services with a sufficient set of backbone connections.

Interconnection Mechanisms and Agreements

Internet interconnection arrangements in some ways echo those of telephony, since the public telephone network is also a collection of distinct networks linked together to provide a uniform service. However, telephony, unlike the Internet, leverages and reflects decades of state, federal, and international regulation and standards-setting that have shaped the terms and conditions of interconnection, including financial settlements. Internet interconnection, by comparison, is relatively new, and the technology, market structure, and arrangements are evolving.

Providing Internet-wide interconnectivity requires that the parties who own and operate the constituent networks reach agreement on how they will interconnect their networks. The discussion in this section looks at interconnection at three levels: the physical means of interconnection, the different patterns of traffic exchanged by providers (transit and peer), and the financial arrangements that underlie and support the physical means and different traffic patterns. The focus here is on teasing out the essential elements of interconnection, but this should not be taken to mean that interconnection is a simple matter. There are many players at many levels, and in each case there is more than one choice of physical interconnection, logical interconnection, and financial arrangement, and implementation of each choice depends on a complex set of negotiated agreements.

8Data from Telecommunications Reports' online census, January 2000, reported in David Lake. 2000. "No Deposit, No Return: Hard Numbers on Free ISPs." The Industry Standard, March 27.

Physical Interconnection

Public exchanges are a way of making the interconnections between a number of providers more cost-effective. If n providers were individually to establish pairwise interconnections, they would require n(n-1)/2 direct circuits. A public exchange, where all n providers can connect at a common location, permits this to be done much more inexpensively, with n circuits and a single exchange point.
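To make the circuit-count arithmetic concrete, here is a minimal sketch (the provider counts are arbitrary, illustrative numbers, not market figures) comparing full pairwise interconnection with a single shared exchange point.

```python
def pairwise_circuits(n):
    """Direct circuits needed if n providers interconnect in pairs: n(n-1)/2."""
    return n * (n - 1) // 2


def exchange_circuits(n):
    """Circuits needed if all n providers connect to a single public exchange."""
    return n


for n in (4, 12, 50):  # illustrative provider counts
    print(f"{n} providers: {pairwise_circuits(n)} pairwise circuits "
          f"vs {exchange_circuits(n)} via one exchange point")
```

With a dozen providers a full mesh already requires 66 circuits versus 12 through an exchange, which is one reason exchanges are especially attractive to smaller players.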

A provider interconnects to an exchange point either physically, by installing its own equipment and circuit into a specific location (e.g., the MAE-West facility at NASA Ames Research Center or the Sprint NAP in Pennsauken, New Jersey), or logically, by using a leased network connection to an interconnect provider through an ATM or Ethernet network (e.g., the MAE-East ATM NAP in northern Virginia or the Ameritech ATM NAP in Chicago). These interconnect networks are usually operated by large access providers, who hope to derive considerable revenue by selling access lines to ISPs wishing to attach to each other through the access provider's facilities.9

In recent years, the public interconnects have acquired a relatively poor reputation for quality, in part owing to congested access lines from the exchanges to tier 1 providers, which results in packet loss, and in part owing to exchange point technology that cannot operate at speeds comparable to major backbone trunks. This trend is likely to accelerate as large backbones move to extremely high-speed wavelength division multiplexing (WDM)-based trunking, which exceeds the data rates that can be handled by today's exchange point technology.

Another option is to use a direct, point-to-point connection. One motivation for point-to-point connections is to bypass the bottleneck posed by a public exchange point when traffic volumes are large. Between large providers, connections are usually based on high-performance private interconnects, for example, point-to-point links at high speeds (DS-3 or higher). Direct connection can also provide for better management of traffic flows. The very large volume of traffic that would be associated with a major public access point can be disaggregated into smaller, more easily implemented connections (e.g., a provider manages 10 OC-3 connections to 10 different peers in different locations rather than a single OC-48 connection to a single exchange point that then connects to multiple providers). Another reason for entering into private connections is the desire to provide support for the particular service level agreements and quality-of-service provisions that two networks agree to in their peering or transit agreement.

Logical (Routing) Interconnection

When two or more ISPs establish an interconnection, they exchange route advertisements to specify which data packets are to be exchanged between them. Route advertisements describe the destination Internet addresses for which each provider chooses to accept packets from the other. These advertised routes are loaded, generally through automated mechanisms, into each other's routing tables and are used to determine where (including to which providers) packets should be routed based on their destination address.

There are two common options for how providers accept each other's traffic: transit and peer. In the transit model, the transit provider agrees to accept and deliver all traffic destined for any part of the Internet from another provider that is the transit customer. It is possible that two providers in a transit arrangement will exchange explicit routing information, but more typically the transit provider provides the transit customer with a default route to the transit network while the transit customer provides the transit provider with an explicit set of routes to the customer's network. The transit customer then simply delivers to the transit provider all packets destined for IP addresses outside its own network. Each transit provider establishes rules as to how another network will be served and at what cost. The transit provider will then distribute routing information from the transit customer to other backbones and network providers and will guarantee that full connectivity is provided. Address space for the customer provider may come from its transit provider or from its own independent address space should that provider have qualified for such allocation. (The issues surrounding address allocation and assignment are discussed in Chapter 2.)10

9If they provide direct connections to multiple provider networks, public exchanges can also turn out to be very efficient places to locate other services such as caches, DNS servers, and Web hosting services. And because public exchanges bring together connections to various providers, they are also useful places to conduct private bilateral connection through separate facilities.

10Some providers or customers engage in the practice of multihoming, whereby they establish transit connections with multiple ISPs, generally to provide redundancy. This can introduce both technical and management issues, including how to allocate traffic among the multiple paths, that will not be discussed in detail here.
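The routing asymmetry of a transit arrangement can be sketched in a few lines. The example below is illustrative only: the prefixes and labels are invented, and real providers exchange this information automatically via routing protocols rather than a hand-built table. The transit customer carries explicit routes only for its own address space plus a default route pointing at its transit provider, and forwarding uses longest-prefix match.

```python
import ipaddress

# Invented example: the customer originates 203.0.113.0/24; everything else
# follows the default route toward the transit provider.
customer_table = {
    ipaddress.ip_network("203.0.113.0/24"): "deliver locally",
    ipaddress.ip_network("0.0.0.0/0"): "hand off to transit provider",
}


def next_hop(destination, table):
    """Pick the most specific (longest-prefix) route containing the destination."""
    addr = ipaddress.ip_address(destination)
    matches = [net for net in table if addr in net]
    return table[max(matches, key=lambda net: net.prefixlen)]


print(next_hop("203.0.113.7", customer_table))   # deliver locally
print(next_hop("198.51.100.9", customer_table))  # hand off to transit provider
```

The transit provider, by contrast, installs the customer's explicit routes and redistributes them to its own peers and customers so that return traffic can find its way back.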

The preferred way for large providers today to interconnect is through peer arrangements. In contrast to transit arrangements, where one provider agrees to accept from the other traffic destined for any part of the Internet, in a peering relationship, each provider only accepts traffic destined for the part of the Internet it provides. Peers exchange explicit routing information about all of their own addresses along with all of the addresses of their transit customers. Based on that routing information, each peer only receives traffic destined for itself and its transit clients. This exchange of routing information takes the form of automated exchanges among routers. Because the propagation of incorrect routing information can adversely affect network operations, each provider needs to validate the routing information that is exchanged.

For smaller providers the only option (if any) for physical interconnection is typically at a public exchange point. Location at a peering point implies that the peering relationship may still suffer from poor (or at least uncontrolled) service quality, since the exchange point or the connections to it may be congested; they may, however, be very cost-effective, especially for smaller providers. Once interconnectivity is established through a public exchange, providers may attempt to enter into a bilateral peering agreement with other providers located at the same interconnect. This can be a cost-effective means of bilateral peering, because connectivity to many other providers can be aggregated onto a single connection to the exchange.

Financial Arrangements for Interconnection

The issue of compensation for interconnection is a complex one. The essence of interconnection is the handing over of packets, according to the routing information that has been exchanged, to be routed onward toward their destination. Compensation reflects the costs associated with provisioning and operating sufficient network capacity between and within ISP networks. As a basic unit of interconnection, packets are somewhat akin to call minutes in voice telecommunications. However, architectural differences between the Internet and PSTN make accounting in terms of packets much more complicated than call-minute-based accounting. Even if an infrastructure were to be put in place to count and charge on a packet-by-packet basis, the characteristics of packet routing would make it difficult to know what the cost associated with transmitting a given packet would be.11 As a result, interconnection schemes that are used in other contexts, such as the bilateral settlements employed in international telephony, are not used in the Internet, and interconnection has generally been established on the basis of more aggregated information about the traffic exchanged between providers.

11Several of these characteristics are noted in a paper by Geoff Huston. 1999. Interconnection, Peering, and Settlements, Technical Report. Canberra, Australia: Telstra Corporation, Ltd., January. They include the following: packets may be dropped in the course of their transmission across the Internet; the paths that packets follow are not predetermined and can be manipulated by the end user; and complete routing information is not available at all points, so that the undeliverability of a packet may not be known until it approaches its destination.

Some of these issues have to do with the cost of the interconnection, traffic imbalances (e.g., one provider originates more traffic than it terminates), and relative size (one provider offers greater access to users, services, and locations than the other).

Two financial models predominate; one is linked to the transit model and the other to the peer provider model discussed above. In the transit model, a transit customer buys transit service from a transit provider and pays for an access line to that larger provider's network. These arrangements take the form of bilateral agreements that specify compensation (if any) and the terms of interconnection, including service guarantees (level and quality of service) that each party makes. In the early days of the commercial Internet, providers did not pay for transit services. Before ISPs insisted on payment for transit, nonbackbone ISPs could become free riders in the so-called hot potato scenario, whereby a network would dump traffic for destinations beyond those advertised by a particular provider, thereby forcing the backbone ISP to carry traffic it had not agreed to carry. Private interconnects help prevent free riding, because it is more straightforward to identify this condition given a direct mapping between the link and a single provider.

In the peer model, two ISPs agree to a peer relationship based on a perception of comparable value. These agreements are generally barter agreements between peers that assume an exchange of a roughly comparable level of traffic or, on some other basis, that the costs and benefits of a peer relationship will be mutually beneficial. Peer barter arrangements echo what is called in telephony "sender keeps all" or "bill and keep": the network to which a customer connects keeps the fees paid by that customer for traffic carried on both its and another provider's network. Peering among the tier 1 providers is perhaps the most visible, but peering is also conducted among smaller players and at the regional or local level.

Logical peering and financial peer relationships generally coincide, but there are exceptions. In some instances a customer will pay for a nontransit service that, logically though not financially, looks like peering. For example, ISP A may pay ISP B for access to B's customers but not B's peers.

The value attached to either transit or peer relationships is not based only on the number of bits exchanged, nor is it based solely on the origin, destination, or distance; it also reflects the value attached to particular content. Consider, for example, a large, consumer-focused ISP ("ISP A") and a major, popular content provider that is connected to the Internet through another provider ("ISP B"). ISP A will be judged by its customers based on the quality of service that it provides.

To the extent that A's customers value content directly available from ISP B, customer judgment of ISP A will depend on the quality of the interconnect established between A and B. Thus ISP A may be willing to pay extra for higher capacity links to ISP B in order to ensure better performance for customers accessing the content provider. The complementary argument may also hold true: the content provider may well derive revenue from advertising that in turn depends on the return rate of viewers, so it (and, consequently, its ISP) will be willing to pay extra for interconnection relationships that ensure that customers of ISP A receive a good quality of service. (This is a major business consideration for the Internet hosting providers described above.) Accordingly, the performance that a consumer experiences with a particular piece of content depends in part on the capacity of the interconnects between the consumer's and content provider's computers, which in turn depends in part on the willingness and ability of the consumer and content provider (and their ISPs) to pay for those interconnections.

Chapter 2 discusses a number of issues surrounding quality of service (QOS) mechanisms, including the dim prospects for deployment of interprovider QOS; here we discuss some issues related to interconnection. If the stresses associated with the development and evolution of today's peering and transit agreements, which have generally only addressed much broader service level agreements, are any guide, establishing agreements that enable interprovider quality of service would prove difficult. Providing guarantees of better service to a subset of users means that resources are set aside that become unavailable for other users. This can only develop if higher grades of quality of service are sold at a premium price and if there are mechanisms to adequately compensate ISPs. If the necessary business agreements would take years to develop, then interprovider QOS would take years to deploy.

Also, congested interconnections exacerbate quality-of-service differences between connections across a given provider's network, as compared with connections across multiple provider networks. They often result in companies connecting all their sites through a single provider's network rather than through a variety of providers and depending on this interprovider connectivity. They also result in large content-hosting providers almost always attaching to each of the major backbone networks (usually as a transit customer rather than a peer) to bypass interprovider interconnects and improve overall robustness of access for their customers.

Specific mechanisms for quality of service are starting to show up in parts of the Internet, but not as generally deployed, end-to-end services that any application can take advantage of to reach users Internet-wide. They are being offered only inside specific ISPs as product differentiators

only until the device is disconnected from the network, reset, or powered down. As a result, an application cannot rely on the IP address to reach a device directly to complete a call; a dynamically assigned IP address does not uniquely identify a particular device over time. This situation is quite unlike that of other sorts of addresses such as phone numbers, where a person's phone number is statically mapped to a telephone or a location (though there are calling features, such as call forwarding, that allow a limited form of dynamic rerouting to occur by making use of databases within the telephone network). Thus if one were to implement an IP-based telephony service, one could not use a dynamically assigned address directly.

Dynamic assignment is not an insurmountable problem, however. Solutions must make use of indirection, in which a directory service is established to provide a mapping between some sort of identifying name and the current IP address that should be associated with that name. Keeping the directory up to date requires that each device send a message to the server on start-up notifying it of the current IP address that should be associated with its name. Maintaining an up-to-date directory with accurate data and operating the directory with sufficient integrity that its information can be trusted is a difficult technical and social problem. Work on a protocol that provides such a capability is now a proposed standard from the IETF. Provided that a suitably robust service can be implemented, dynamic addresses are as suitable as static addresses for any sort of application, and dynamic address assignment can be thought of as a situation that requires additional technology development and deployment rather than a fundamental obstacle to transparency.
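As a rough illustration of the indirection just described (this is not the IETF protocol itself; the names and update flow are hypothetical), a directory maps a stable name to whatever address the device most recently registered, so callers resolve the name instead of relying on a fixed address.

```python
# Hypothetical directory mapping stable names to current, dynamically assigned addresses.
directory = {}


def register(name, current_ip):
    """Called by a device at start-up, or whenever its assigned address changes."""
    directory[name] = current_ip


def resolve(name):
    """Called by an application that wants to reach the device by its stable name."""
    return directory.get(name)


register("alice-phone.example", "192.0.2.45")     # device comes up with one address
print(resolve("alice-phone.example"))             # 192.0.2.45

register("alice-phone.example", "192.0.2.201")    # after a reconnect it re-registers
print(resolve("alice-phone.example"))             # 192.0.2.201
```

A usable service would also have to authenticate registrations and purge stale entries, which is the integrity problem noted above.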

Another addressing-related challenge to transparency is posed by network address translation (NAT), a technology introduced in Chapter 2 in connection with addressing and routing issues. NAT provides a workaround that permits multiple computers attached to a network to share a smaller number of globally assigned Internet addresses. NATs and firewalls including NAT functions are employed by users and ISPs for a variety of reasons. These include providing a larger number of computers with Internet access using a limited pool of Internet addresses, providing local control over the addresses assigned to individual computers, and providing the limited degree of security that is obtained by hiding internal addresses from the Internet.

Network address translation involves the mapping of a set of local addresses, which are not visible to the outside world (i.e., not visible on the Internet), to a global address (i.e., visible on the Internet). A crucial distinction between NAT and dynamic addressing is that the mapping takes place without any explicit communication between the device and the NAT about the address assignment that has been made. The device continues to use its local address without regard to the action of the NAT; the NAT takes care of translating the addresses on packets flowing in and out of the network between the two sets of addresses.

A transparency problem arises because this translation is performed only on the portion of the packet that labels the destination addresses (analogous to the address on an envelope), not on any addresses that are contained within the packet (analogous to the addresses contained in the text of a letter inside the envelope). The reason that translation cannot in general be done on the addresses within the packet lies at the heart of the transparency question: because the Internet architecture permits any application to run over the Internet, the NAT cannot in general know where and in what form the addresses are placed within the packets.

To make such an application work, one of two things must happen. One option is for the NAT to include an application layer gateway that has knowledge of the application's protocol, thereby allowing it to identify and translate the address as it is transmitted. Many NATs provide this gateway function for commonly used applications such as File Transfer Protocol (FTP). This need for NATs to be application-aware violates a basic attribute provided by the hourglass architecture: that one is free to employ new applications running over the network without having to make any changes whatsoever within the network. There are also costs associated with deploying computers with sufficient computing power to carry out the application-level translations. The other option would be for the application to discover that the network is making use of NAT and then make the necessary translations itself; requiring an application to learn about the details of the network is an undesirable violation of the basic Internet architecture.22

Significant problems arise if one wishes to initiate communications between two computers, each of which is sitting behind a NAT, since neither has a way of knowing the internal address of the other. Consider an application like IP telephony. With NAT, one must resort to using a third computer outside either network to act as a telephony server that bridges between the other two. A particular problem is that the only way for a computer behind the NAT to discover that it is receiving an incoming call is for it to repeatedly ask, or poll, the telephony server if there is a call. Such a work-around places increased demands on both network capacity and the telephony server.

22One other option is to avoid passing addresses. This solution works in some cases where a protocol does not inherently require the exchange of global identifiers but was implemented that way prior to the advent of NAT. However, the applicability of this solution is limited because some types of applications require that globally unique identifiers be transmitted from one computer to another.
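A minimal sketch of the translation step may help fix ideas. The addresses, ports, and single-global-address setup below are invented for illustration; real NATs also track protocol state, timeouts, and inbound mappings. The point to notice is that only the header fields are rewritten, while an address embedded in the payload passes through unchanged, which is exactly the transparency problem described above.

```python
# Hypothetical NAT with one global address; internal hosts are multiplexed onto ports.
GLOBAL_ADDR = "198.51.100.1"
nat_table = {}          # (private source address, source port) -> public source port
next_public_port = 40000


def translate_outbound(packet):
    """Rewrite the source address and port in the header; the payload is untouched."""
    global next_public_port
    key = (packet["src"], packet["sport"])
    if key not in nat_table:
        nat_table[key] = next_public_port
        next_public_port += 1
    packet["src"] = GLOBAL_ADDR
    packet["sport"] = nat_table[key]
    return packet


pkt = {
    "src": "10.0.0.5", "sport": 5060,          # private address, meaningful only locally
    "dst": "203.0.113.9", "dport": 5060,
    "payload": "contact me at 10.0.0.5:5060",  # application data embedding the address
}
out = translate_outbound(pkt)
print(out["src"], out["sport"])  # 198.51.100.1 40000 -- header rewritten
print(out["payload"])            # still names 10.0.0.5, which is unreachable from outside
```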

Another set of situations where NAT raises difficulties are ones where simultaneous communications among devices that sit behind a NAT (i.e., local) and devices that sit outside a NAT (i.e., remote) are desired. Examples of such situations include multiparty conferencing (telephony or video) and games; both are situations where there can be a mix of local and remote participants. Signaling becomes more complicated because an application cannot provide the same address information to applications running on local and remote machines. It is not impossible to handle these situations, but they make the software more complicated to implement correctly and more difficult for users to configure properly. Similar problems arise if people start installing appliances, such as security devices, that need to be accessed from both the inside and the outside of the house (i.e., behind the home gateway or outside of it).

NAT also interferes with security protocols such as IPSec,23 though not with higher-layer security protocols such as SSL or S/MIME. The basic problem is that if the packet payload is encrypted, addresses within it cannot be translated by a NAT. Because IPSec is a more broadly applicable protocol, used notably for standard Internet-layer virtual private networks, the incompatibility is a significant concern for some users.

Nonuniform Treatment of Bits

Internet transparency also implies the uniform treatment of all traffic in terms of the application, protocol, and format and in terms of the content of the communications being carried across the Internet. In its idealized form, the hourglass architecture treats all bits uniformly, with their transmission through the network a function of one thing only: available capacity (and whatever controls the end points place on the communications, such as the TCP pacing algorithms). The situation is slightly different when quality-of-service technologies are built into the network (discussed in detail in the section on quality of service in Chapter 2) in order to provide for special treatment of particular classes of traffic, in accordance with a customer's contract with an ISP; in this context, "uniform" means uniform within a particular class.

Transparency is limited by the blocking of particular types of Internet communications, pursuant to choices reflecting ISP policy, the preferences of individual customers, or, in the case of larger organizations that operate their own network infrastructure, organizational policy.

23S. Kent and R. Atkinson. 1998. Security Architecture for the Internet Protocol, RFC 2401.

These restrictions fall into two broad categories: restrictions placed at the edges in order to meet the objectives of end users and restrictions placed within the network by Internet service providers.

The classic example of a restriction placed on transparency at the edge of the network is the firewall, which is a blocking device placed at the entry point to a subnetwork and operated by either the customer or the ISP on behalf of the customer. It can be configured to exclude those types of communications that are not desired or, more stringently, to block all content not explicitly designated as acceptable. Typically, these restrictions are used to block traffic that could be used to exploit vulnerabilities of the computers within the network. Communications may also be blocked on the basis of the application being run (e.g., when a business seeks to enforce a prohibition on the use of streaming media applications by its employees to reduce bandwidth use or increase worker productivity) or content (e.g., filters that block objectionable content).

How is undesired traffic filtered? Internet applications are generally associated with particular "ports," which are a set of numerical identifiers each of which is associated with a particular type of service or application. These are somewhat standardized; for instance, an HTTP server is frequently associated with port 80. To protect computers against certain types of attack, a firewall can block packets associated with particular ports (and thus applications) that are known to pose a risk. Firewalls will frequently block packets associated with unknown ports as well, in order to keep rogue applications from carrying on unauthorized communications. An application not identified to the firewall as permissible can attempt to circumvent the firewall by making use of another port, perhaps one that is dynamically adjusted (so-called port-agile applications). For example, Real Networks software is both port- and protocol-agile, able to switch from the default UDP protocol to TCP or even HTTP running over a standard port for HTTP traffic when firewalls block the preferred protocol. From the perspective of the application developer, this is done for legitimate (i.e., nonmalicious) reasons, to increase access to end users. From the perspective of the operator of a particular network, however, it may be viewed as subverting a policy decision that may have also been made for legitimate reasons (e.g., to reduce the traffic on a network or prevent those connected to that network from running applications that an organization has decided to prohibit). Port numbers are perhaps the easiest method of filtering, but filtering can also be performed using other information contained in packet headers or the contents of the data packets themselves.
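A port-based filter of the kind just described might look like the following sketch; the policy shown (permit a few Web and mail ports, drop everything else) is an invented example, not a recommendation.

```python
# Invented example policy: permit a few well-known service ports, drop the rest.
ALLOWED_PORTS = {25, 80, 443}  # SMTP, HTTP, HTTPS


def filter_packet(dest_port):
    """Return the firewall's decision for a packet based on its destination port."""
    return "permit" if dest_port in ALLOWED_PORTS else "drop"


for port in (80, 443, 6346, 31337):
    print(port, filter_packet(port))
# 80 and 443 are permitted; unknown or risky ports such as 6346 and 31337 are dropped.
```

A port-agile application defeats exactly this sort of rule by moving to a permitted port such as 80, which is why filtering on port numbers alone is a blunt instrument.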

In response to the difficulties of providing large quantities of data or a high quality of service to end users, the Internet is being overlaid by application-specific delivery networks and caching services. Content or service providers may, for example, enter into an agreement with a company that delivers specialized content services located throughout the Internet so as to improve the quality of the connection seen by their end users. Local caching or overlay distribution networks do not provide end-to-end connectivity between the original content or service provider and the end user. Also, depending on the particular technical and business model, such networks may only be available to those providers who are willing and able to pay for specialized services.

Such service elements in the Internet provide optimizations that make the network more usable for particular applications. If they work properly, they maintain the illusion of end-to-end behavior. But if they fail to work properly, the illusion of transparency can be broken (see the section "Robustness and Auxiliary Servers" in Chapter 2). Importantly (from a transparency perspective), these are not inherent services that end systems can depend upon. This has several implications. First, where these service elements are not implemented in the network, the end user can still employ the full range of services and applications, though the performance may be degraded relative to what would be possible if the enhancements offered by network service elements were available. Second, applications cannot depend on these enhancements being present in all networks. Third, a new application can be deployed without necessitating changes within the network, although its performance may not be optimal in the absence of supporting elements within the network. In short, the introduction of supporting elements does not necessarily violate the end-to-end architecture but at some point makes it effectively impossible to use a nonsupported service.

A related issue has to do with ISP interpositioning, in which an ISP adds facilities to the network to intercept particular requests for Web pages or elements of Web pages, such as graphics, and replace them with ISP-selected content. For example, an ISP might select information or advertisements that are locally relevant, in much the same way as local advertisements are inserted into network programming by local broadcast stations or cable system operators, to rewrite Web pages or some portions of Web pages. Such a practice may, of course, be seen as a value-added service, but it diverges from the end-to-end content delivery model that has characterized the Internet thus far. It has the potential to deprive both end users and publishers of full control over how content is delivered, particularly where it occurs without control by the end user.

The port-agile tactic described above illustrates the broader point that there are limits to the extent to which the content or applications can be blocked. Since the Internet's architecture allows application writers to layer traffic of their choosing over the basic Internet protocols, it is, in general, difficult to recognize all instances of an application.

Application writers can also modify their application protocols to stay one step ahead of attempts to block them. A likely result of persistent attempts at blocking would be an escalating battle in which firewall software authors and ISPs attempt to identify and block applications while application developers work to find ways to slip past these filters. The long-term result of such a struggle might well be a situation where much of the traffic is hard to identify, making it difficult to implement blocking policies. Another technical development could also fundamentally limit the ability of ISPs to filter traffic: widespread adoption of encryption at the IP layer (e.g., deployment of IPSec) would preclude ISPs from examining the information being transmitted or deducing the application being run using information contained in packet headers above the IP layer. If it wished to continue to impose controls under such conditions, an ISP might be forced to adopt a policy that blocks everything that is not identifiable and expressly permitted.

Market and Business Influences on Openness

Economic pressures as well as technical developments are having an impact on transparency and the end-to-end principle. In the consumer ISP market, there are many consumers who are more than willing to subscribe to networks that do not follow the classic Internet provider model in all respects, selecting from among a small number of ISPs that provide a somewhat sheltered environment or at least preferential offering of selected services and content. For example, if one looks at mass-market consumer behavior today, with thousands of ISPs to pick from, most consumers select AOL, an offering that emphasizes access to its custom content over access to the full Internet. (Of course, AOL ended up responding to consumer pressure by adding access to the full range of Internet content.) The 2000 AOL-Time Warner merger is another sign that Internet players believe there is a business case for combining access and content offerings. Such vertical integration, where a network provider attempts to provide a partial or complete solution, from the transmission cable to the applications and content, could, if successful, cause a change in the Internet market, with innovation and creativity becoming more the province of vertically integrated corporations. Microsoft's "everyday Internet" MSN offering further supports the notion that businesses see a market for controlled, preferred content offerings as a complement to the free-for-all of the public Internet.

Vertical integration has several obvious economic motivations. Open interfaces can make it harder either to coordinate changes at more than one level, which might be needed for some forms of innovation, or to capture as much of the benefits of innovation as integration might allow.

On the other hand, the telecommunications industry, once highly integrated vertically, provides evidence that there are limits to vertical integration. Today, pressures are confronting the makers of telecommunications equipment as multiple vendors whose products are organized horizontally collectively challenge the vertically integrated circuit switch. An important enabler of this trend has been the transparency and openness of IP technologies. IP-capable hardware and software can be purchased from many vendors. And it is no longer necessary to own the facilities of a network to offer voice services over it.

Many businesses are customers of a fairly small number of very large networks, which have substantial incentives to hold on to their customers. To do so, they can use such means as leveraging the billing relationship with the customer or their ability to deliver service to the customer premises. Also, providers that have a large market share have incentives to try to own or co-own the most popular services for their customers so as to keep them inside their network.

These economic and business forces can act as disincentives to the continued free sharing of the best infrastructure ideas. Large scale creates additional incentives for providers to build their network with internal proprietary protocols that optimize the performance of both applications and the network in such areas as reliability, security, or control over bandwidth and latency. A leading example of where such optimizations might be deployed is telephony, but other possibilities include e-mail and chat, caching, video, and routing. Hotmail's valuation as an e-mail service or ICQ's as a service for instant messaging reflects the number of customers they are able to serve. For example, Hotmail does not use the Internet-standard mail protocols internally, nor does it use standard POP or IMAP for external access by users.24 Such internal deployments start to have implications for applications running at the edges of the Internet. Today Hotmail is at sufficient scale that special code to support its proprietary protocol is written into e-mail clients; the result is that a frequently used Internet service is no longer running the standard Internet protocols. Equipment suppliers are similarly willing to accommodate such customer demands; their routers are more programmable than ever to support custom protocols. There is a tension here between immediate improvements and long-term benefits: today's optimization may be tomorrow's roadblock, and design choices made to optimize a particular application may or may not turn out to be beneficial when a new application emerges.

24The proprietary protocols are intended to allow it to scale better. Hotmail does, however, allow users to read standard POP e-mail accounts through Hotmail.

Also, the extent to which optimization will occur in a decentralized network such as the Internet is limited by difficulties in reaching agreement to deploy optimizations networkwide. Another pressure for nonstandard protocols, illustrated by the recent flap over instant messaging protocols, is the desire to differentiate oneself from competitors and capture value above the basic IP bit-transport service.

Thus, market pressures, combined with technical pressures relating to optimization, raise the prospect that we might end up with several "separate" Internets differentiated by the use of proprietary protocols or customized content. One scenario would be that some dozen or half-dozen tier 1 service providers would operate somewhat separately, although still using IP and other standard Internet protocols to enable some degree of interoperability among them. If a situation develops where several large providers start using proprietary protocols inside their networks, the incentives for new content and application development could shift. Content and application developers will target the networks of these large companies rather than an abstract Internet, and at the same time the large providers will have a huge incentive to make it difficult for customers to switch to another provider. As a result, tying applications to their proprietary protocols becomes good business; early on they might even pay application developers to do this. A base of, say, many millions of customers might justify the cost of the extra coding and maintenance that supporting multiple protocols would require. Some of this can be seen today, for example, in AOL's system, where providers of content and services, most of whom also do business on the Internet, register AOL keywords and develop AOL-specific content to allow AOL users to access their content and services. The potential viability of applications being developed for environments that are less inclusive than the Internet is also illustrated by the content being developed for the Wireless Application Protocol (WAP, a standard aimed at mobile phones and similar wireless platforms) and Palm's Web clippings.

However, there are forces arrayed against the possibility of this more closed model supplanting the more open Internet model. One is that anonymous rendezvous and the ability to support transitory relationships appear to be important capabilities. E-commerce, which is an important Internet application, depends on the ability to establish connections between two previously noncorresponding companies; multivendor value chains have become critically important in today's networked economy. In fact, many customers explicitly need to work across multiple organizational overlays without having to agree to use a particular network. This point was demonstrated by past attempts to develop standards for electronic data interchange (EDI): interoperable protocols are mandatory, and balkanization appears to be useful only in the short term.

Providers offering noninteroperable EDI solutions were profitable for a while, but the lack of interoperability among systems ultimately stifled the growth of EDI. On the Internet today, there is a good deal of investment in a new data description standard from the World Wide Web Consortium, XML, to be used for business-to-business e-commerce, and many industry groups are working to define standards for describing data in specific domains so as to enable interoperability on a large scale.

Suppose three ISPs develop different protocols to deliver a particular application over the Internet. To reach customers within closed networks, they would need to make their protocol work over each of the closed networks' proprietary protocols and might also need the closed networks to configure their networks to enable the applications to work. From the perspective of the would-be application provider, the ISPs become a roadblock to innovation. If, on the other hand, we assume that there are proprietary ISP protocols, but ISPs also support IP end-to-end in some fashion, application providers can choose to make their protocol run over IP and bypass the constraints of the closed network provider (at least until the provider notices a large fraction of its IP traffic in this new protocol). It is this sort of marketplace dynamic that is valuable.

Another drawback of the closed solution is that it may end up imposing undesirable costs on all parties. For example, for both consumers and application developers, closed solutions represent a lock-in to a single solution (where the lock-in reflects the cost of switching). For the customer, it may mean investing in new hardware and software; for the developer, it may mean retooling a product. From the perspective of a provider, there is the risk that deviation from standards means that they will miss out on some new "killer app" developed elsewhere that offers them dramatic new business opportunities (e.g., an increased customer base or demand for enhanced services).

There have been examples of proprietary solutions that found it difficult to gain widespread acceptance on the Internet. For instance, the past decade saw a debate on whether to adopt the ISO X.400 standard, the Internet standard SMTP, or one of a number of proprietary systems for e-mail. The market settled on SMTP, and the other proposals have become largely moot. More generally, there are obstacles to proprietary approaches being adopted Internet-wide. All of the users on the Internet will not sign up with a single provider all at once, and few users only need to be able to interact with other users connected to the same single provider. For example, in an e-mail exchange, neither the sender nor the receiver is likely to know (or care) which ISP the other is using or which e-mail standard their respective ISPs are using. They simply want to exchange an e-mail message. The success of a proprietary solution depends on the Internet provider developing and offering working gateways to all the other services, which will entail additional cost to the provider.

From the perspective of customers, open standards help maximize the benefits they realize from the sheer quantity of people and services they can interact with. If all belong to a single Internet, the benefits of adding a new user to it accrue to everyone located anywhere on the Internet, whereas in a partitioned Internet, the benefits of an additional user would be limited to the customers of that user's ISP. The recent past has also seen pressures placed on online providers that relied on proprietary technologies. Non-Internet-based online providers have had to respond to the Internet phenomenon by supplementing their more closed content and services with access to the full Internet or by reinventing themselves as Internet-based services.

Today, both fully open and sheltered models are being pursued with vigor in the Internet marketplace, reflecting different business models and different assumptions about the desires of consumers. Consumer ISPs cover a broad spectrum, with more closed services at one end that emphasize their custom content and services and more open services at the other end that emphasize Internet connectivity but may also provide some preferred content or services. To date, the more closed providers have also continued to offer some degree of access to the wider Internet through the connectivity afforded by the basic Internet protocol (with a few having added this support in response to market demands). Which path the Internet market takes from here will affect the shape of future innovation.

Keeping the Internet Open

Provision of open IP service ensures that whichever service provider a consumer or business picks, the consumer or business can reach all the parties it wishes to communicate with. Much as the PSTN dial tone offers customers the ability to connect to everyone else connected to the global PSTN network, open IP service offers access to all the services and content available on the public Internet. In the absence of an open IP service, who you can communicate with is a function of the service provider you pick. (Note that the quality of service is still a function of the service provider you pick.) Open IP service is also an enabler of continued innovation and competition. Anyone who creates a better service that runs over IP can distribute software supporting it to thousands or millions of users, who can then use it regardless of who their service provider is.

Open IP service requires support of the Internet Protocol (IP), globally unique Internet addresses, and full interconnection with the rest of the networks that make up the Internet. (Some additional capabilities may be required; some perspectives on what else should be included as a core service were presented above.)

Open IP service is content independent; that is, the service provider does not filter the customer's traffic based on content, except with the consent and knowledge of the customer. However, because the Internet's default service is best effort, this definition can make no promises about the quality of the access. The quality of connectivity will depend on the agreement a customer has with its service provider and the agreements that this provider has with other Internet service providers. Indeed, in a free market, it is reasonable to have differentiation of services to satisfy customers who want to pay more for a service they deem better. It is important to point out that one possible outcome of the tension between open service and ISP service differentiation is that the current best-effort service will continue to be provided as a transparent, end-to-end service but that end-to-end transparency across multiple providers will not be provided for the more demanding (and potentially more interesting from a commercial standpoint) applications such as telephony and audio and video streaming that may depend on QOS mechanisms.

Because IP connectivity affords users the potential to misbehave or pose unacceptable demands on the network, this definition of open IP service is not intended to preclude service providers that want to ensure safe, effective operation, or meet the desire of customers to block certain types of IP traffic, from restricting how their network is used. An ISP may, for example, block particular traffic to prevent its customers from launching attacks on other customers of the ISP (or users elsewhere on the Internet). It may also filter particular types of traffic to protect its customer computers from attacks. And an ISP may restrict traffic volumes where bandwidth resources are limited to ensure that all users have fair access. Of course, ISPs and their customers may differ over whether a particular filter enhances the operation of the ISP's network or unnecessarily restricts the behavior of a customer; full disclosure of filtering practices provides consumers with the means to make informed choices when selecting an ISP.