How the Internet Works
The Internet Compared to the Telephone Network
Although much of it runs over facilities provided by the telephone companies, the Internet operates on technical and business concepts very different from those of the telephone system. The telephone network operates in a connection-oriented mode in which an end-to-end path is established to support each telephone call. The facilities of the telephone network along the path are reserved for the duration of each specific call. Whenever a new call is attempted, an admission control process checks whether there are sufficient resources for the additional call at all points along the path between the call initiator and the recipient. If there are enough resources, the call goes through. If there are insufficient resources at any point on the path, the call is refused, and the user gets a busy signal.
By contrast, the Internet is not a connection-oriented network. It is a packet-based network built on point-to-point links between special-purpose computers known as routers. In the Internet, all data, including special types of data such as digitized voice sessions, is broken into small chunks called packets. Each packet is normally no larger than 1,500 bytes, so an individual data transmission can consist of many packets. The data packets in the Internet follow the format defined by the Internet Protocol (IP) specifications, the basic transmission protocol for the Internet.1
NOTE: In this appendix (unlike the rest of this report), IP does not mean "intellectual property." For a further discussion of the Internet Protocol, see TCP/IP Illustrated (Stevens, 1996).
All IP packets include IP addresses for the sender and the receiver of the packet. Packets travel through a series of routers as they progress from sender to receiver in IP networks. The destination IP address in each packet is used by the routers to determine what path each packet should take on its way toward the receiver. Because the forwarding decision is made separately for each packet, the individual packets that make up a single data transmission may travel different paths through the network. For this reason, someone monitoring the Internet at an arbitrary point, even a point located between a sender and receiver, might not be able to collect all of the packets that make up a complete message. As monitoring takes place closer to the end user's computer or the source of the transmission, the probability of collecting all of the packets of a given message increases. Thus, monitoring the Internet to steal content or to see what content is being transferred for rights enforcement purposes can be difficult.
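The packet format described above can be illustrated with a short sketch that builds and decodes the fixed 20-byte IPv4 header. The helper name and the sample addresses here are purely illustrative; the field layout follows the published IP specification.

```python
import socket
import struct

def parse_ipv4_header(packet: bytes) -> dict:
    """Extract a few fields from a raw 20-byte IPv4 header (options ignored)."""
    version_ihl, _tos, total_length = struct.unpack("!BBH", packet[:4])
    return {
        "version": version_ihl >> 4,
        "total_length": total_length,              # at most the link limit, ~1,500 bytes on Ethernet
        "src": socket.inet_ntoa(packet[12:16]),    # sender's 32-bit IP address, dotted-quad form
        "dst": socket.inet_ntoa(packet[16:20]),    # receiver's address: routers forward on this field
    }

# A hand-built header: version 4, total length 1,500 bytes, 10.0.0.1 -> 192.0.2.7
header = struct.pack("!BBHHHBBH4s4s",
                     (4 << 4) | 5, 0, 1500, 0, 0, 64, 6, 0,
                     socket.inet_aton("10.0.0.1"), socket.inet_aton("192.0.2.7"))
fields = parse_ipv4_header(header)
```

Because every packet carries both addresses, each router along the way can make its forwarding decision from the packet alone, with no memory of any "call."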
There is no equivalent of the telephone system's admission control process in the current Internet (i.e., there is no busy signal). If the computers attached to the Internet try to send more traffic than the network can handle, some of the packets are lost at the network's congestion points.2 The Transmission Control Protocol (TCP), which rides on top of IP and carries most of the Internet's data, uses lost packets as a feedback mechanism to determine the rate at which an individual data stream can be carried over the network. TCP slows down whenever a packet in a data stream is lost; it then speeds up again until packets start being lost again. If there is too much traffic, all the data transmissions through the congested parts of the network slow down. In the case of voice or video traffic, this slowdown produces lower-quality transmissions. Thus, network congestion causes all applications using the congested path to degrade roughly evenly.3
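The feedback loop just described can be caricatured in a few lines. This is a toy model, not the protocol's actual implementation (the function name and the one-segment granularity are illustrative), but it captures the slow-down-on-loss, speed-up-otherwise dynamic.

```python
def aimd(loss_events, cwnd=1.0):
    """Toy model of TCP's congestion feedback: grow the sending window by one
    segment per round trip, halve it whenever a packet is lost."""
    history = []
    for lost in loss_events:            # one entry per round trip: True if a packet was lost
        if lost:
            cwnd = max(1.0, cwnd / 2)   # multiplicative decrease: congestion, slow down
        else:
            cwnd += 1.0                 # additive increase: probe for spare capacity
        history.append(cwnd)
    return history

# Five uncongested round trips, then a loss at a congestion point
window_sizes = aimd([False] * 5 + [True])
```

The window climbs steadily while packets get through and drops sharply at the first loss, which is why every stream crossing a congested link slows down together.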
Another difference between the Internet and the telephone networks is the way one service provider exchanges traffic with another. In the case of the telephone networks, the individual long distance providers do not connect to each other; every long distance telephone network must connect to each local telephone office in which it wants to do business. Interconnections between providers are far more complex in the Internet, which has a rich set of interconnections between all types of providers (see below). A final difference between the way the current Internet and the telephone networks are operated involves provider-to-provider payments. In the telephone system, there are traffic-based monetary settlements whenever a telephone call involves two or more providers. In the Internet, provider interconnections fall into two broad categories, peering and customer relationships. Currently, Internet provider-to-provider peering arrangements are settlement free. Individual Internet service providers (ISPs) decide whether it is in their interest to peer with each other. Peering decisions and specific arrangements are bilateral in nature, and no general Internet peering policy exists. If a particular ISP cannot work out a peering agreement with another provider, then, in order to exchange traffic with that provider's customers, it must become a customer of that provider or of a third provider that does have a peering arrangement with it. Currently, all inter-ISP agreements are bilateral and follow no set model.
1Individual computers are identified on Internet Protocol (IP) networks using addresses that are 32 bits long. In the Internet these addresses, known as IP addresses, must be globally unique and can theoretically identify over four billion separate computers. (The actual limit is far less than four billion because of the inefficiencies inherent in the processes used to ensure that the assigned addresses are unique.)
2These packets are not lost forever but are retransmitted until they are received successfully.
3For further discussion of TCP, see Stevens (1996).
George Gilder (1993) summarized the differences between a telephone network and an Internet-like network as being the difference between a smart and a dumb network. Telephone networks include many computers that provide application support services to the users of the telephone network. This makes them "smart." These support computers are required because the user's access device, a telephone, is very simple and must rely on the network to provide all but the most basic functionality. A by-product of this design is that new applications must be installed within the network itself. This process can take quite a long time because the telephone service provider must first be convinced that the service is worthwhile and then must integrate the support software with the existing server software. A dumb network like the Internet assumes that the user will access the network through smart devices, for example, desktop computers. In this type of network, applications are loaded onto the user's computers rather than into the network. This means that new applications can be deployed whenever users decide they want to install a new application. The network itself is designed to merely transport data from one user to another, although some centralized support services, such as the domain name system, are required in the network. These support services are quite simple and are generally not application specific, so they can support a wide range of old and new applications without modification.
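The simplicity of a support service like the domain name system can be seen in how little an application needs to ask of it. The sketch below resolves a name to an IP address; "localhost" is used so the example does not depend on reaching the wider network, but the call is identical for any name and any application.

```python
import socket

# The domain name system maps human-readable names to IP addresses.
# The lookup is the same no matter which application asks, which is what
# lets one simple, application-neutral service support e-mail, the Web,
# and applications not yet invented.
infos = socket.getaddrinfo("localhost", None, family=socket.AF_INET)
address = infos[0][4][0]   # first IPv4 address returned for the name
```

Everything beyond this lookup, the mail formats, the Web pages, the file transfers, lives in the smart devices at the edge, not in the network.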
The Internet's architecture also means that there is no control over what applications can be run over the Internet.4 For example, there is no general technical way to prohibit users from downloading an application and running it themselves, even if there were a good reason to do so. In a technical sense, Internet users can run any software that they and their friends (if the software is interactive) want to, even if that software can be used to violate the rights of copyright holders or to perform some other illegal or unauthorized function (e.g., breaking into a computer).
4Controls can, of course, be implemented on end-user computers. For example, central information technology departments in large organizations can install software to prevent users from performing certain functions.
The trend in the telecommunications industry is to treat both voice and video as data with somewhat different transmission characteristics. Thus, telephony and, to a lesser extent, video are starting to migrate to the Internet. New features are being added to the Internet to support the more stringent timing requirements of these applications. To the user, Internet-based telephone services can be indistinguishable from those offered by a traditional telephone company, including being able to provide a "fast busy" signal if a new request would overload a network resource. However, a fast-busy-signal type of telephony service is only one of the options; another service provider could offer a differently priced service in which a call could still be placed in times of network congestion but would be of lower quality. This trend is likely to cause additional services to be added to the network infrastructure but does not necessarily mean that the Internet will suddenly become a "smart network." For example, there are two different approaches to Internet-based telephony. In one model, Internet connections are added to the existing large telephone switches: a user placing a call connects to the switch, which in turn connects to the target telephone. In the other model, Internet-enabled telephones connect to each other directly without going through a central switch. The second model fits the traditional Internet model and thus has no point of control or monitoring.
The Physical Topology of the Internet
The Internet started with the ARPANET, a research network established by the U.S. Department of Defense in the late 1960s and early 1970s. The ARPANET was a single backbone network that interconnected a number of university, federal, and industry research centers. Initially very simple, the topology grew somewhat more complex by the early 1980s, when a number of other data networks were established, including CSNET (Computer Science NETwork), BITNET (Because It's There NETwork), and Usenet (a dial-up UNIX network).
With the establishment of the National Science Foundation's NSFNET in the late 1980s and with the demise of the ARPANET in 1990, the Internet topology seemed to simplify, because the NSFNET was being used in the same way that the ARPANET had been, as a national backbone network. But the topology was actually not so simple. Instead of interconnecting individual campuses as the ARPANET did, the NSFNET interconnected regional data networks, which in turn connected the individual sites. In addition, the regional networks had their own private interconnections, and commercial public data networks began to appear (e.g., ALTERNET). The commercial networks interconnected with each other and with the regional networks. The resulting topology was quite complex, with multiple interconnected backbones interconnecting individual sites and regional networks.
The current Internet is even more complex. Individuals and corporations obtain Internet connectivity from ISPs. Thousands of ISPs exist in the United States, ranging in size from "mom and pop" providers of dial-up service for a few dozen customers each to providers that offer services in all parts of the world and have hundreds of thousands or millions of dial-up customers and thousands of directly connected customers. ISPs are no longer confined to delivering service over telephone wires and now include cable TV companies and satellite operators. The smallest ISPs purchase Internet connectivity from larger ISPs. Somewhat larger ISPs purchase connectivity from still larger ISPs, frequently from more than one so as to establish redundant connections for reliability. The largest ISPs generally peer with each other, but peering is not limited to the largest ISPs; ISPs of all sizes peer with each other. Peering is done at regional exchanges so that regional traffic can be kept local, and at national or international exchanges to ensure that all Internet users can reach all other Internet users. Peering takes place at public exchanges (for example, MAE-East in the Washington, D.C., area) or, increasingly, at private peering points between pairs of ISPs. The result is that the current Internet consists of thousands of ISPs with a complex web of connections between them. Some of these connections are purposeful and predictable; others are essentially random. Traffic between Internet users flows through a series of ISPs (there can be six or more along the way) but does not pass through any specific subset of large or backbone ISPs where someone could monitor the traffic to check for illegal use of content or for applications that could be used to violate copyright.
The Logical Architecture of the Internet
The logical architecture of the Internet is, in one way, quite simple: it is peer to peer, meaning that in theory any computer on the Internet can connect directly to any other computer on the Internet. In practice, situations exist in which Internet traffic does not travel end to end. For example, some e-mail systems are set up with a local e-mail repository at each location. A user interacts with his or her local repository to retrieve or send mail. The mail repository can then act on the user's behalf and
exchange messages with another e-mail repository or directly with another user's computer. But this configuration is not required and cannot be depended on for monitoring Internet e-mail traffic. Most Internet applications do operate in a peer-to-peer mode.
Pricing and Quality of Service
The Internet started out being "free" in the sense that it was subsidized by governments around the world. Governmental support for Internet connectivity still exists in some parts of the world, including the United States, where subsidies help public schools and libraries go "online." Now, however, virtually all the cost of the Internet in the United States is covered by the private sector, through the fees that users pay to Internet service providers or through advertising revenues that support "free" online access.5
Many fee structures are in place throughout the Internet, with each ISP deciding what types of pricing models it wishes to support. Most ISPs charge large corporate Internet users on a traffic basis: the more traffic they exchange with the Internet, the higher the bill. Many ISPs offer individual dial-up users simple pricing options, ranging from a flat monthly fee to a fee per hour of usage. Currently, few ISPs offer traffic-based pricing to individual dial-up users because of the costs associated with billing based on the level of use and the resulting increase in complexity for the customer.
Although the Internet Protocol was originally designed to support multiple levels of service quality in the network, these features have never been widely used. Internet traffic is delivered in a mode known as "best effort," in which all traffic, regardless of importance, gets equal treatment. However, this practice is changing. The Internet Engineering Task Force (see Box C.1) has been working on a new set of standards that will enable ISPs and private networks to provide different service qualities for different applications. For example, an Internet telephony application could request a low-latency service, whereas a student surfing the Web might be willing to accept longer response times in exchange for a lower-cost service.
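An application can already ask for such treatment through a standard socket option. The sketch below marks a socket's traffic using the long-standing type-of-service byte in the IP header, which the IETF's newer work reinterprets as a differentiated-services field; the specific value shown is one conventionally associated with low-latency traffic and is used here only as an illustration.

```python
import socket

# Differentiated-services code point 46 ("expedited forwarding," suited to
# low-latency traffic such as telephony) occupies the upper six bits of the
# old IP type-of-service byte.
EF_TOS = 46 << 2          # 0xB8

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, EF_TOS)

# Every packet this socket now sends carries the marking; routers that honor
# it can queue the traffic ahead of best-effort packets.
marked = sock.getsockopt(socket.IPPROTO_IP, socket.IP_TOS)
sock.close()
```

Note that the marking is only a request: whether it has any effect depends on the policies of the ISPs along the path, which is exactly why the policy questions discussed next matter.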
Many policy-related issues must be resolved before ISPs will be able to start offering the new features. Policies covering such issues as user authentication, service authorization, and preemptive authority must be developed and agreed to. In addition, a cost-effective infrastructure must
5For example, free e-mail is now available through Web browsers from sites such as <http://www.hotmail.com> and many others.
be available to do the accounting and billing that will be required. In spite of these requirements, however, some ISPs are already experimenting with the new quality of service (QoS) functions and expect to be offering QoS-based services soon.
The deployment of QoS-enabling technology into the Internet may significantly change current pricing models. This change will affect both ISP-to-ISP connections and the customers of these ISPs. The new QoS capabilities will include the ability for a user to identify some of his or her traffic as having a higher priority than other traffic. To prevent customers from marking all their traffic "high priority," a different fee will likely be charged for higher-priority traffic.6
6This objective can be accomplished by using a subscription-based model, by which the user contracts for a specific amount of high-priority traffic on a monthly basis, or by measuring the levels of traffic at each priority and charging for what was used. In either case, ISP-to-ISP pricing models will have to be changed to deal with traffic of different priority levels. This type of "class-based" quality of service technology will permit the Internet to support a wide array of new applications without having to deploy specific technology to support individual applications. This is a significant advantage to the Internet service providers and the application developers, but it means that, even in the area of quality of service, there will be no easy way to tell what applications are being used by individual Internet users.
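The two charging models outlined in the note can be sketched as simple functions; every rate, quota, and the per-megabyte granularity here is a hypothetical illustration, not an actual ISP tariff.

```python
def subscription_bill(high_priority_mb, quota_mb, base_fee, overage_per_mb):
    """Subscription model: a flat fee covers a contracted amount of
    high-priority traffic; use beyond the quota is billed per megabyte."""
    overage = max(0, high_priority_mb - quota_mb)
    return base_fee + overage * overage_per_mb

def metered_bill(high_priority_mb, best_effort_mb, high_rate, low_rate):
    """Usage model: measure the traffic sent at each priority level and
    charge a different rate for each class."""
    return high_priority_mb * high_rate + best_effort_mb * low_rate

# A customer sending 120 MB of high-priority and 500 MB of best-effort traffic
sub = subscription_bill(120, quota_mb=100, base_fee=20.0, overage_per_mb=0.10)
met = metered_bill(120, 500, high_rate=0.05, low_rate=0.01)
```

Either way, charging more for the high-priority class is what keeps customers from simply marking everything "high priority."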
The Future of the Internet
As noted above, the trend in the telecommunications industry is toward convergence, as voice, video, and data increasingly use the Internet Protocol and thus the Internet. Many of the next generation of communications devices, including cellular telephones and fax machines, are likely to be Internet enabled. Two factors are driving this general trend toward convergence: the development of "always-on" Internet services and the development of "Internet on a chip" technologies.
A number of the more recent types of Internet connectivity to the home do not use dial-up modems, which users must purposefully activate when they want to have Internet connectivity. Instead, they are always connected, allowing access to or from the home at any time. This constant connectivity means that devices such as an electric power meter inside the home can be reached by the power company over the Internet whenever the power company would like to read the meter. It also means that systems can be developed that could instantly reach Internet-based servers when a user asked for information. One example used in a recent demonstration was a microwave oven that could retrieve recipes over the Internet at the touch of a button. An additional feature of these new always-on types of connectivity is that they are very high speed and thus capable of enabling widespread deployment of new download-on-demand applications, such as music players that allow the user to select from an almost unlimited menu of selections. The player would then retrieve a file of the music for playing. The advent of constantly available high-speed connectivity will go a long way toward reducing and ultimately eliminating the technological barriers to the easy downloading of digital music and video files.
This always-on capability will be well matched to the Internet-on-a-chip technology for which a number of companies are starting to put Internet Protocol software in integrated circuits. These chips are for use not only in appliances and utility meters but also in alarm systems and small appliances such as air conditioners.
At this writing it seems inevitable that in the future the Internet will become the common communications sinew that will tie our world more tightly together than it has ever been.7 Although the rate of growth may slow in the United States, because of the relatively high penetration of households with some kind of Internet access, expansion in the worldwide use of the Internet seems likely to continue at a high rate.
7A CSTB report from the Committee on the Internet in the Evolving Information Infrastructure, currently in preparation, discusses the future of the Internet in detail.